Wednesday, 9 March 2022

Decision-makers don't dislike uncertain advice, but they do dislike advisors who are uncertain

One aspect of my research and consulting involves producing population projections, which local councils use for planning purposes. Population projections depend on a lot of demographic changes, each of which is not perfectly known beforehand, so projections inherently involve a lot of uncertainty (for more on that point, see here). [*] Understandably, since planners are trying to make plans for an uncertain future, anything that reduces the uncertainty of that future makes their jobs easier. In my experience, planners are usually looking for projections that give them a single number for the total population (for each year) to focus their planning on. [**] So, it seemed natural to me that decision-makers would be averse to uncertainty, and would prefer to receive predictions or projections that convey a greater degree of certainty.
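To illustrate how that uncertainty compounds, here is a minimal sketch of a stochastic projection in Python. The starting population, growth-rate distribution, and horizon are purely illustrative values, not taken from any actual council projection, but the general point holds: the further out you project, the wider the range of plausible outcomes becomes.

import numpy as np

rng = np.random.default_rng(42)

start_pop = 100_000      # illustrative starting population
years = 20               # projection horizon
n_sims = 10_000          # number of simulated futures

# Each year's growth rate is uncertain: assume roughly 1% per year on average,
# with a standard deviation of 0.5 percentage points (illustrative values only)
growth = rng.normal(loc=0.01, scale=0.005, size=(n_sims, years))

# Compound the uncertain growth rates to get a distribution of final populations
final_pop = start_pop * np.prod(1 + growth, axis=1)

lo, mid, hi = np.percentile(final_pop, [5, 50, 95])
print(f"Median projection: {mid:,.0f}")
print(f"90% interval: {lo:,.0f} to {hi:,.0f}")
# A planner asking for one single number would take the median,
# but the honest answer is the whole interval.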

It turns out that might not be the case. This 2018 article by Celia Gaertig and Joseph Simmons (both University of Pennsylvania), published in the journal Psychological Science (ungated version here), demonstrates that decision-makers don't have an aversion to uncertain advice at all. Gaertig and Simmons conducted a number of experimental studies in which they presented research participants with advice and asked them to make a decision. In the first six studies, using research participants recruited from Amazon Mechanical Turk:

...participants were asked to predict the outcomes of a series of sporting events on the day on which the games were played. Participants in Studies 1 and 2 predicted NBA games, and participants in Studies 3–6 predicted MLB games...

For each of the games that participants were asked to forecast, we told them that, “You will receive advice to help you make your predictions. For each question, the advice that you receive comes from a different person.” Importantly, participants always received objectively good advice, which was based on data from well-calibrated betting markets. For each game, we independently manipulated the certainty of the advice, and, in all but one study, we also manipulated the confidence of the advisor.

Gaertig and Simmons presented research participants with advice, some of which was certain and some of which was uncertain, with uncertainty expressed in a variety of ways, including probabilistically. Research participants were then asked about the quality of the advice they received, and they made an incentivised choice, where they were paid more if they correctly predicted the outcome of the sporting event (the specific outcomes to be predicted varied across the studies). Gaertig and Simmons found that:
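As an aside on where the "X% chance" numbers could come from: the authors say the advice was based on well-calibrated betting markets, and a market price maps onto a probability statement fairly directly. The sketch below is my own illustration of that conversion (the odds and the margin-removal step are assumptions, not the authors' actual procedure).

# Illustrative only: the paper says the advice came from well-calibrated
# betting markets; the conversion below is my own assumption about how a
# market price becomes a statement like "There is a 57% chance that..."

def implied_probability(decimal_odds_team, decimal_odds_opponent):
    """Convert a pair of decimal betting odds into an implied win probability,
    removing the bookmaker's margin (the 'overround')."""
    raw_team = 1 / decimal_odds_team
    raw_opponent = 1 / decimal_odds_opponent
    return raw_team / (raw_team + raw_opponent)

# Hypothetical odds for a Cubs game
p = implied_probability(1.70, 2.25)
print(f"There is a {p:.0%} chance that the Chicago Cubs will win the game.")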

As predicted, and consistent with past research, these analyses revealed a large and significant main effect of advisor confidence... Advisors who said “I am not sure but . . .” were evaluated more negatively than advisors who expressed themselves confidently.

More importantly, participants did not evaluate uncertain advice more negatively than certain advice...

Thus, these studies provide no evidence that people inherently dislike uncertain advice in the form of ranges.

Moving on to probabilistic statements of uncertainty, Gaertig and Simmons found that:

As in the previous analysis, there was a large and significant main effect of advisor confidence in all regressions... Advisors who said “I am not sure but . . .” were evaluated more negatively than advisors who expressed themselves confidently. We also found, in Study 6, that advisors who preceded their advice by saying, “I am very confident that . . .” were evaluated more positively than advisors who did not express themselves with such high confidence...

Participants evaluated exact-chance advice (e.g., “There is a 57% chance that the Chicago Cubs will win the game”) more positively than certain advice (e.g., “The Chicago Cubs will win the game”)...

Participants also evaluated approximate-chance advice (e.g., “There is about a 57% chance that the Chicago Cubs will win the game”) more positively than certain advice...

In Study 5, we introduced a percent-confident condition, in which participants received confident advice in the form of “I am X% confident that . . . ” We found that participants evaluated this advice the same as certain advice...

The results of the “probably” condition were different, as participants did evaluate advice of the form “The [predicted team] will probably win the game” more negatively than they evaluated certain advice...

Gaertig and Simmons then go on to show that the research participants were no less likely to follow the uncertain advice in their predictions than to follow the certain advice. They found similar results in a laboratory setting (in Study 7), which also showed that:

...people’s preference for uncertain versus certain advice was greater when the uncertain advice was associated with a larger probability.

In other words, people prefer advice that acknowledges uncertainty when the probability is very high (or very low), but not so much when the uncertainty says that the chances of an event are 50-50. That makes some sense. However, it doesn't fully accord with the finding of a preference for uncertain advice over certain advice - do decision-makers really prefer advice that says something is 95% certain over advice that says it is 100% certain? Perhaps they feel that predictions expressed with 100% certainty lack credibility?

Finally, in the last two studies Gaertig and Simmons asked research participants to choose between two advisors, one of whom provided certain advice while the other provided uncertain advice. They found:

...a large and significantly positive effect of the uncertain-advice condition, indicating that more participants preferred Advisor 2 when Advisor 2 provided uncertain advice than when Advisor 2 provided certain advice. This was true both when the uncertain advice came in the form of approximate-chance advice and in the form of “more-likely” advice. When one advisor provided certain advice and the other approximate-chance advice, 82.4% of participants chose Advisor 2 when Advisor 2 provided approximate-chance advice, but only 16.2% of participants chose Advisor 2 when Advisor 2 provided certain advice...

Gaertig and Simmons conclude that:

Taken together, our results challenge the belief that advisors need to provide false certainty for their advice to be heeded. Advisors do not have a realistic incentive to be overconfident, as people do not judge them more negatively when they provide realistically uncertain advice.

It seems that I may have misjudged decision-makers' preferences for certainty. They don't prefer certain advice; they prefer advisors who are certain about their uncertainty. 

*****

[*] Here I'm using uncertainty in its everyday broad sense. Financial economists distinguish between uncertainty that can be quantified (which they refer to as risk), and uncertainty that cannot be easily quantified.

[**] This is an exaggeration, of course, in two ways. First, planners recognise that there is uncertainty. However, explaining that uncertainty to elected decision-makers is difficult, so having a single number makes their job easier in that way as well. Second, planners don't only want a single number for the total population. They usually want to know a bit more detail about the age distribution, etc.
