One of my research areas is population economics. As part of that stream of my research, I generate projections of future population, which many local councils in the Waikato region use for their long-term planning. One of the things that I learned very early on was the sheer uncertainty associated with forecasting (indeed, Jacques Poot and I quantified this uncertainty in an article published in the Journal of Population Research in 2011). So, it is all very well to have a model that forecasts in an unbiased way (so that, on average, the forecast is correct). But you also need to take account of how much uncertainty there is in the forecast.
That is essentially the first lesson to be drawn from the 2021 book Noise, by Nobel Prize winner Daniel Kahneman, Olivier Sibony, and Cass Sunstein. They define bias as a systematic deviation from the target, while noise is random scatter. Both bias and noise are components of error in human judgment, but Kahneman et al. argue that while bias has attracted much attention, the role of noise is under-recognised. I tend to agree. When I talk to decision-makers or policy people about population projections, they want to know how 'accurate' the projections are. By 'accurate', they are talking about the bias in the projections. They are usually not at all interested in hearing about the uncertainty in the projections (how noisy they are).
This book is about drawing attention to noise. The first two parts of the book describe the difference between bias and noise, and look at how to measure them. Kahneman et al. make use of a clever analogy: looking at the cluster of bullet holes in a target at a shooting range. How close on average those holes are to the centre of the target is a measure of bias. How spread out the holes are provides a measure of noise. They also point out that you can know how noisy decisions are without knowing anything about how biased they are. If you turn the target over, you can see the bullet holes, but not the rings or the bullseye. So, you can still see the noise, even if the bias cannot be seen (I'll return to this point later).
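For readers who like to see the analogy in numbers, here is a minimal sketch in Python (the shot coordinates are invented purely for illustration). Bias is the distance from the average shot to the bullseye, noise is the scatter of the shots around their own average, and the two combine into overall error through the book's error equation, MSE = bias² + noise²:

```python
import numpy as np

# Hypothetical shots at a target centred at (0, 0), in arbitrary units.
# A real analysis would use actual judgments and a known true value.
shots = np.array([
    [1.2, 0.8],
    [1.5, 1.1],
    [0.9, 1.3],
    [1.4, 0.7],
])
target = np.array([0.0, 0.0])

mean_shot = shots.mean(axis=0)

# Bias: how far the average shot lands from the bullseye.
bias = np.linalg.norm(mean_shot - target)

# Noise: how scattered the shots are around their own average.
noise = np.sqrt(((shots - mean_shot) ** 2).sum(axis=1).mean())

# The error equation: MSE = bias^2 + noise^2.
mse = ((shots - target) ** 2).sum(axis=1).mean()
print(f"bias = {bias:.3f}, noise = {noise:.3f}")
print(f"MSE = {mse:.3f} vs bias^2 + noise^2 = {bias**2 + noise**2:.3f}")
```

Notice that the line computing noise never references the target's location at all. That is the 'back of the target' point: you can measure noise without ever knowing where the bullseye is.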
The third part of the book looks at a particular type of human judgment: predictive judgments. Kahneman et al. then discuss the causes of noise, drawing heavily on psychology. That's because the main source of noise in human decision-making is the human part. Even faced with the same alternatives and the same information on which to base a decision, two people (or even the same person on different occasions) may not choose the same alternative. Finally, the book looks at ways of reducing noise, and concludes by exploring some of the counterarguments and providing reasoned rebuttals.
This is a very thorough treatment of an important topic. And it was interesting to read this book after having just finished Gerd Gigerenzer's book Gut Feelings (which I reviewed here). There were some stark contrasts between the perspectives in the two books. For example, Kahneman et al. write that:
When they listen to their gut, decision makers hear the internal signal and feel the emotional reward it brings. This internal signal that a good judgment has been reached is the voice of confidence, of "knowing without knowing why." But an objective assessment of the evidence's true predictive power will rarely justify that level of confidence.
Shots fired! Gigerenzer's book was essentially an extended argument for how good decisions made using gut feelings generally are. While Gigerenzer argues that unconscious decision-making is often (but not always) successful, Kahneman et al. prefer that decision-makers make very conscious, slow decisions, appropriately weighing up the evidence. This almost mechanical approach to decision-making could be exemplified by algorithmic decisions. However, while Kahneman et al. acknowledge some clear benefits of an algorithmic approach, they also note that:
...an algorithm... could turn out to be as biased as human beings are. Indeed, in this regard, algorithms could be worse: since they eliminate noise, they could be more reliably biased than human judges.
Noise is clearly something that Kahneman et al. want to eliminate from decision-making, and they make a strong case for it. They conclude that:
The best amount of scatter is zero, even when the judgments are clearly biased.
On this point, I'm not sure that I fully agree. While it is undeniably good to have less bias in decision-making, the case is less clear for less noise, holding the amount of bias constant. Let's go back to the bullet holes in the target, viewed from behind (so that the target itself cannot be seen). The bias in the shooting cannot be seen, but the noise can. And the amount of noise gives some indication of how confident a decision-maker can be in what they are seeing. Seeing a tightly clustered set of bullet holes might increase the decision-maker's confidence in the 'true' location of the centre of the target, whereas a more spread-out set of bullet holes would give the decision-maker more pause. When there is unobserved (or unobservable) bias, it might be preferable to have more noise. There is a famous quote, often incorrectly attributed to John Maynard Keynes but actually from the late 19th and early 20th century English philosopher Carveth Read, that "it is better to be vaguely right than exactly wrong". I think that applies here.
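To see that intuition in action, here is a toy simulation (my own construction, not anything from the book). Two shooters have the same unobserved bias, but different amounts of noise. An observer behind the target, forming a simple interval of two standard deviations around the centre of each cluster, will find that the interval covers the true bullseye far more often for the noisier shooter:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy setup (my invention): both shooters share the same unobserved
# bias; only their noise differs. The true centre is at 0.
bias = 2.0
trials = 10_000
n_shots = 10

def coverage(noise_sd: float) -> float:
    """Fraction of trials where a 2-SD interval around the cluster
    mean contains the true centre (0)."""
    hits = 0
    for _ in range(trials):
        shots = rng.normal(loc=bias, scale=noise_sd, size=n_shots)
        centre, spread = shots.mean(), shots.std(ddof=1)
        if abs(centre) <= 2 * spread:
            hits += 1
    return hits / trials

print(f"tight shooter (noise=0.3): coverage = {coverage(0.3):.1%}")
print(f"noisy shooter (noise=3.0): coverage = {coverage(3.0):.1%}")
```

The tight cluster produces confident but wrong conclusions about where the bullseye is, while the noisy cluster is at least honest about its own uncertainty. Vaguely right versus exactly wrong, in other words.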
None of that is to say that reducing noise is a bad thing. And Kahneman et al. have identified a problem that is generally under-recognised. The book is probably longer than it needs to be to get its point across (I found this with Kahneman's earlier book Thinking, Fast and Slow, which I read before I started blogging, so there is no review). However, if you enjoyed that book, you will no doubt enjoy this one too. And there is a lot to learn from this book in spite of its length. People who make decisions (that is, everyone) should at least be aware of noise, and this book provides one way of raising that awareness.
