In his iconic 2011 book ‘Thinking, Fast and Slow’, the psychologist Daniel Kahneman presents an accessible introduction to the main results of the extended study of cognitive biases and heuristics that he carried out with Amos Tversky during the 1970s. Their results challenged the conventional economic paradigm of human rationality and helped found the field of behavioural economics. In his 2008 book chapter ‘Cognitive Biases Potentially Affecting Judgement of Global Risks’, Eliezer Yudkowsky considers how some of the biases outlined by Kahneman, as well as others, can affect our judgement when thinking about catastrophic risks. This post discusses a few of these biases and the impact they can have on perceptions of existential risk.
Availability Bias
One of the best-known cognitive biases, and one of the most pertinent to the study of existential risks, is the availability bias. It arises from the mental shortcut of judging the likelihood of an event by how easily memorable examples of it come to mind. While in many cases this is an acceptable estimation technique, it is clearly invalid when considering events for which there are no historical occurrences. The bias can also be observed when comparing the likelihoods of two related events. For example, when asked to consider different causes of death, subjects estimate that accidents cause roughly as many annual deaths as diseases, despite diseases causing approximately 16 times as many deaths as accidents. It has been hypothesised that the overwhelming visibility of accidental deaths in the news, compared with coverage of deaths from disease, leads people astray when they are asked to estimate the likelihood of each cause. Indeed, in 1979 Barbara Combs and Paul Slovic showed that misperceptions of the likelihood of different causes of death correlated strongly with the relative number of newspaper articles addressing each cause.1
Anchoring
Anchoring refers to the phenomenon by which quantitative judgements can be subconsciously influenced by previously considered (and potentially unrelated) values. In one experiment carried out by Tversky and Kahneman in 1974, subjects were asked whether they thought the percentage of African countries that were members of the UN was above or below a random number between 0 and 100 selected by a ‘Wheel of Fortune’ style spinner. Immediately afterwards, the subjects were asked to give their true estimate. Tversky and Kahneman found that these estimates were greatly affected by the number that had appeared on the wheel, despite the subjects knowing that it had been generated entirely arbitrarily. For example, the median estimates for subjects who had received random numbers of 10 and 65 were 25% and 45% respectively.2
Yudkowsky extends this concept to the logical fallacy of generalization from fictional evidence. He argues that, given how human judgements can be skewed by obviously irrelevant prior information, they are liable to be similarly affected by fictional narratives, despite our knowledge that fiction is not necessarily an accurate representation of reality. In relation to existential risks, Yudkowsky points out that both public and academic perception of the dangers of artificial intelligence may be susceptible to subconscious influence from sci-fi depictions such as The Terminator or HAL from 2001: A Space Odyssey. This is surely not helped by the approach taken by ‘journalists who insist on putting a picture of the Terminator on every single article they publish of this topic [AI safety]’.3 The situation may not be much better in academia, with Yudkowsky claiming that ‘[n]ot uncommonly in a discussion of existential risk, the categories, choices, consequences, and strategies derive from movies, books and television shows. There are subtler defeats, but this is outright surrender.’4
Scope Neglect
The final bias we will discuss here is scope neglect: an insensitivity to the scale of a problem. An experiment that clearly shows this effect was carried out by Desvousges et al. in 1993, in which subjects were split into three groups and asked how much money they would be willing to donate to save either 2,000, 20,000, or 200,000 birds from drowning in oil ponds.5 Despite the scope of the problem (i.e., the number of birds affected) varying by a factor of 100, the median responses were $80 for the group asked about 2,000 birds, $78 to save 20,000 birds, and $88 to save 200,000 birds. A number of theories have been proposed to explain this phenomenon, including the affect heuristic, whereby subjects picture one or a handful of unfortunate birds unable to escape the oil pond and base their response on this emotive image, and the purchase of moral satisfaction, whereby subjects pay however much they subconsciously regard as counting as having done a good deed, regardless of whether that deed saves 2,000 birds or 200,000.
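To see just how insensitive these responses are to scale, it helps to compute the implied valuation per bird. The short sketch below is purely illustrative arithmetic using the median figures quoted above (it is not part of the original study): the stated totals barely move, so the implied per-bird value collapses by roughly two orders of magnitude.

```python
# Illustrative arithmetic only: the median willingness-to-pay figures are
# taken from the Desvousges et al. (1993) experiment described above; the
# per-bird values are a derived quantity, not reported in the original study.

median_wtp = {2_000: 80.0, 20_000: 78.0, 200_000: 88.0}  # birds saved -> dollars offered

for birds, dollars in median_wtp.items():
    per_bird = dollars / birds
    print(f"{birds:>7,} birds: ${dollars:>5.2f} total -> ${per_bird:.5f} per bird")

# A scope-sensitive valuation would keep the per-bird figure roughly constant,
# so the totals would scale by ~100x; instead the implied per-bird value
# falls from about $0.04 to about $0.0004.
```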
Philosopher Toby Ord has applied the problem of scope neglect to existential risks, arguing that it may lead many people to understate the significance of such catastrophic events. To quote him at length:
‘For example, we tend to treat nuclear war as an utter disaster, so we fail to distinguish nuclear wars between nations with a handful of nuclear weapons (in which millions would die) from a nuclear confrontation with thousands of nuclear weapons (in which a thousand times as many people would die, and our entire future may be destroyed). Since existential risk derives its key moral importance from the size of what is at stake, scope neglect leads us to seriously underweight its importance.’ 6
The above discussion covers a mere corner of the literature on cognitive biases and heuristics, and we highly recommend Yudkowsky’s article to those who are interested. Despite the great difficulty of overcoming their effects, Yudkowsky concludes that knowledge of such biases is ‘knowledge [that is] needful to a student of existential risk’ and that the field is in need of ‘experimental literature specific to the psychology of existential risk.’ Existential risk research is still in its adolescence, and we have yet to develop the scientific and analytical tools that would go a long way towards ensuring the rigour of the subject as a scientific discipline. As it stands, many estimates of existential risk, and descriptions of the associated catastrophes, are largely based on subjective individual judgement, making them fertile ground for the propagation of cognitive biases, with potentially drastic effects.
- Combs, Barbara, and Paul Slovic. 1979. “Newspaper Coverage of Causes of Death.” Journalism Quarterly 56 (4): 837–849. doi:10.1177/107769907905600420
- Tversky, Amos, and Daniel Kahneman. 1974. “Judgment under Uncertainty: Heuristics and Biases.” Science 185 (4157): 1124–1131. www.jstor.org/stable/1738360
- https://intelligence.org/2018/02/28/sam-harris-and-eliezer-yudkowsky/
- Yudkowsky, Eliezer. 2008. “Cognitive Biases Potentially Affecting Judgment of Global Risks.” In Global Catastrophic Risks, edited by Nick Bostrom and Milan M. Ćirković, 91–119. New York: Oxford University Press.
- Desvousges, William H., F. Reed Johnson, Richard W. Dunford, Kevin J. Boyle, Sara P. Hudson, and K. Nicole Wilson. 1993. “Measuring Natural Resource Damages with Contingent Valuation: Tests of Validity and Reliability.” In Hausman 1993, 91–164.
- Ord, Toby. 2020. The Precipice: Existential Risk and the Future of Humanity. London: Bloomsbury, 61.