The study of reproducible errors of human reasoning, and what these errors reveal about underlying mental processes, is known as the heuristics and biases program in cognitive psychology. This program has made discoveries highly relevant to assessors of global catastrophic risks. Eliezer Yudkowsky, who writes about cognitive biases at Overcoming Bias, presented on cognitive biases in the assessment of risk at the Global Catastrophic Risks conference in Mountain View.
Cognitive Biases in the Assessment of Risk
Suppose you have a die with four green faces and two red faces. You are going to roll this die twenty times and record the series of green and red faces that come up. If your chosen sequence appears anywhere in that series, you'll win $25. Would you prefer to bet on (1) RGRRR, (2) GRGRRR, or (3) GRRRRR?
65% of 125 undergraduates playing with real cash chose to bet on sequence #2. The gotcha is that sequence #1 actually dominates sequence #2: RGRRR is contained within GRGRRR, so any series in which sequence #2 wins is necessarily a win for sequence #1 as well. That is called dominance.
The reason that people would tend to bet on sequence #2 is that it most strongly resembles the die—that is, it has the largest proportion of green in it. Sequence #1 is shorter. What this shows is that we tend to choose future outcomes that are strongly representative of our beliefs about the generating process. We don’t pay quite enough attention to which hypotheses are the simplest.
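To make the dominance concrete, here is a minimal Monte Carlo sketch (not part of the talk; the helper names and the 100,000-trial count are my own choices) that estimates how often each sequence shows up somewhere in a 20-roll series of a die that comes up green with probability 4/6:

```python
import random

def roll_series(length=20, p_green=4/6):
    """Simulate one series of rolls of a die with four green and two red faces."""
    return "".join("G" if random.random() < p_green else "R" for _ in range(length))

def win_rate(sequence, trials=100_000):
    """Estimate the probability that `sequence` appears somewhere in a series."""
    return sum(sequence in roll_series() for _ in range(trials)) / trials

for sequence in ("RGRRR", "GRGRRR", "GRRRRR"):
    print(sequence, win_rate(sequence))

# RGRRR is a substring of GRGRRR, so every series in which sequence #2 wins
# also contains sequence #1; the estimate for RGRRR is always at least as high.
```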
The analog of this for futurism: at the Second International Congress on Forecasting, two independent groups of professional analysts were asked to rate a probability. One group rated the probability of a complete suspension of diplomatic relations between the USA and the Soviet Union. The other group rated the probability of a Russian invasion of Poland, followed by a complete suspension of diplomatic relations between the USA and the Soviet Union (this was back when a Russian invasion of Poland was still a major possibility).
The group asked to rate version 2 responded with significantly higher probabilities. They thought that the more complex, more compound event was more probable. Why? Based on a number of experiments that have all been testing the same paradigm and nailing down exactly what is going on, in version #2 people are averaging the probability of the Russian invasion of Poland with the conditional probability of a suspension of diplomatic relations. The problem is that they look over the whole argument and, instead of asking how likely the entire thing is by multiplying the probabilities, with the joint probability diminishing at each step, they average them out together.
Adding detail to a story can make the story sound more plausible, even though the story necessarily becomes less plausible. Lawyers exploit this effect by burying the weak links in their chain of argument in the middle and finishing with a flurry of strong, incontestable statements. Again, the longer you make this thing, the more the joint probability ought to drop. That is because if any one of the events is invalidated, the whole story is invalidated. Human psychology just does not seem to do the multiplication correctly—we average it out together.
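A tiny numerical sketch of the error (the probabilities below are invented illustrative values, not figures from the forecasting study): multiplying makes the joint probability lower than either component, while the intuitive averaging pulls the answer up toward the more plausible-sounding detail.

```python
# Illustrative numbers only; not from the study.
p_invasion = 0.10          # P(Russian invasion of Poland)
p_suspension_given = 0.80  # P(suspension of diplomatic relations | invasion)

joint = p_invasion * p_suspension_given        # correct: 0.08, lower than either input
felt = (p_invasion + p_suspension_given) / 2   # the averaging error: 0.45, sounds plausible

print(f"multiplied (correct): {joint:.2f}")
print(f"averaged (intuitive): {felt:.2f}")
```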
Three groups of MBA students were asked how much they would pay for terrorism insurance covering the flight from Thailand to the US, the round-trip flight, or the entire trip, including the actual stay in Thailand. For the flight from Thailand to the US they were willing to pay $17.19, for the round-trip flight $13.90, and for the whole trip, $7.44.
Why is that? Well, because if you say “the flight from Thailand to the US,” you can very easily imagine something going wrong. The round-trip flight is more diffuse, and the whole trip is even more diffuse than that. They are responding to specific scenarios that create a lot of emotional arousal. That determines how much they are willing to pay for it.
Bruce Schneier, before Hurricane Katrina, suggested that the US government was getting itself into the wrong position by responding to movie plot scenarios of terrorism—”Let’s guard the Washington Monument”—and was doing this by taking away resources from emergency response capabilities that could respond to any catastrophe, including, as it turned out, Hurricane Katrina. Why? Because responding to any catastrophe is worth $7.44, but guarding the Washington Monument is worth $17.19.
This is an example of what is called "extensional neglect": thinking in terms of the sentences, rather than the actual events the sentences refer to. If you could literally imagine every moment of the flight and every moment of the trip, and had a powerful enough computer to process the extensions directly, assign a probability to everything that could go wrong everywhere along the way, and add it all up to find out how much you should be willing to pay for insurance, you would not be committing this kind of error. Instead, you are looking at the sentences, and some of these sentences scare you more than others.
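Here is a hedged sketch of what "processing the extension directly" would mean: sum probability times loss over every scenario the policy covers. The scenario names and numbers are invented for illustration; the only point is that the whole trip covers a superset of the flight's scenarios, so its fair premium can never be lower.

```python
# Hypothetical scenarios and numbers, purely for illustration.
flight_scenarios = {
    "terrorist attack on the flight": (1e-6, 1_000_000),    # (probability, loss in $)
}
trip_scenarios = {
    **flight_scenarios,                                      # the whole trip includes the flight...
    "terrorist attack during the stay": (2e-6, 1_000_000),  # ...plus additional scenarios
}

def fair_premium(scenarios):
    """Expected payout: sum of probability * loss over the covered extension."""
    return sum(p * loss for p, loss in scenarios.values())

print(fair_premium(flight_scenarios))  # covering fewer scenarios can never be worth more
print(fair_premium(trip_scenarios))
```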
Another example of extensional neglect is scope insensitivity, which you will find in the Global Catastrophic Risks book. This is where people would pay only slightly more to save all the wetlands in Oregon than to save one protected wetland in Oregon, or would pay the same amount to save two thousand, twenty thousand, or two hundred thousand oil-soaked birds from perishing in ponds. What is going on there is that when you ask, "How much would you donate to save 20,000 birds from perishing in oil ponds?", people visualize one bird trapped, struggling to get free. That creates some level of emotional arousal, and then the actual quantity gets thrown right out the window.
I think it was at one Singularity Summit or another when I was talking about a sufficiently powerful AI being able to solve all the problems of the world, or at least the ones such that we would want a sufficiently powerful AI to solve them, and someone said, "Should you really be spending money on that instead of on paralyzed children?" That's scope neglect. The quantities get tossed out the window and a single archetypal example determines your response.
Hindsight bias: subjects are presented with histories of unfamiliar incidents, such as the conflict between the Gurkhas and the British in 1814, and five groups of subjects are asked what probability they would have predicted for the British winning, the Gurkhas winning, a military stalemate without a peace settlement, and a stalemate with a peace settlement. Four of the groups are each told that one of these outcomes is what actually happened and are asked what probabilities they would have given; one group is not told the outcome.
In every single case, the group that is told the outcome says, "I would have predicted a 60% probability for that, on average," while the people who were not told the outcome actually predicted only a 34% probability, on average. What is the probability that the Gurkhas win? If you are not told that the Gurkhas won, you say, "I would have given a 20% probability of that." If you are told that the Gurkhas won, you say, "I would have given a 40% probability of that."
There was an actual legal case, and an experiment based on this legal case, in which two groups had to estimate the probability of flood damage from winter blockage of a certain drawbridge. The instructions stated that the city is legally negligent if the foreseeable probability of the damage is greater than 10%. Of the group told only the background information that the city actually knew, 76% decided that the flood was too unlikely to require precautions. Of the group given that same information plus the fact that the flooding actually occurred, 57% decided that the city was legally negligent because the foreseeable probability was high enough.
A third group, not shown here, was instructed to try to ignore hindsight bias; in other words, you are not to take into account that the flood actually occurred in deciding what the foreseeable probability was. 56% of that group decided the city was legally negligent. The jury instruction to ignore after-the-fact information has no effect on what the jury actually decides. You cannot compensate for hindsight bias by being told to compensate for hindsight bias; it is an unconscious process.

The effect of hindsight bias is that we vastly overestimate how well we understand history, because whenever something happens, that event is in a sense testing our model of history. If it is something we wouldn't have expected, that means our model of the world as a whole is wrong. If, on the other hand, we think we would have expected it, then we are vastly overestimating how well our model fits the facts. For example, after the real estate collapse you find whole floods of people talking about how obvious the real estate collapse was, and yet very few of them actually made any money betting against the real estate collapse in advance.
In 1986 the space shuttle Challenger exploded for reasons eventually traced to an O-ring losing flexibility at low temperature. The cost of preventing this particular disaster is not the cost of fixing the O-rings—it is the cost of fixing every other problem which seemed about equally severe at the time without benefit of hindsight, bearing in mind that you cannot compensate for hindsight just by trying not to be hindsightful. You are basically going to have to fix a lot of problems that currently seem to you less severe than the O-rings if you actually want to get the equivalent of the O-rings in whatever future disaster you are trying to prevent.
Richard Feynman, as we all know, went with some other technical people and produced a comprehensive indictment of NASA's management procedures. A committee of the House of Representatives said the underlying problem was not "poor communication or underlying procedures as applied by the scientists and engineers; rather, the problem was poor technical decision-making over a period of several years by top NASA and contractor personnel who failed to act decisively to solve the increasingly serious anomalies in the solid rocket booster joints." In other words, the problem wasn't NASA, it was just the O-rings.
Again, September 11th. You say they got warnings of Al-Qaeda activity, but they also got 4,203 other warnings. If you look up what they were actually worried about pre-September 11th, they were worried about terrorists getting together with international criminal syndicates and they were worried about biochemical terrorism, as opposed to this particular event of planes flying into buildings. After September 11th we had the famous response of “no one is allowed to take box cutters on an airplane anymore.”
Suppose I give you this amazing futuristic prediction that in the future the sky will be filled by billions of floating wax spheres, each sphere will be larger than all of the zeppelins that ever existed put together, and if you offer a sphere money it will lower a prostitute out of the sky on a bungee cord. This probably sounds like a pretty absurd prediction, right?
But now suppose it's 1900 and you have to decide between that prediction and "in the future there will be a super-connected global network with billions of adding machines, each one of which has more power than all pre-1900 adding machines put together, and one of the primary uses of this network will be to transport moving pictures of pornography by pretending they are made out of numbers."
It probably sounds about equally absurd. Because of hindsight we look back and see this steady trend of the future becoming less and less absurd as it approaches the present. For example, way back in the Medieval Dark Ages, a woman couldn’t vote, which is absurd. As we get toward modern times we find that women can vote, which is normal. We see the past becoming more and more normal as it approaches the present, and we assume that the future is going to go the same way, that it will be more normal than the present.
There is a quote to the effect that historians make a story out of history by telling you about all the forces that actually rose to prominence and won, while discarding all the events that didn’t actually happen, as it were. All the things that could have happened but didn’t, all the forces that lost out in history, are discarded. You pick up a history book and it has got this really strong narrative. This happened, therefore this happened, therefore this happened. Then you sit down and try to tell a story about the future, and the story comes out the same way. This is going to happen, therefore this is going to happen, therefore this is going to happen. Not only is this hindsight bias, it is also the conjunction fallacy.
A history book can contain this whole long list of events that actually happened because it is a history book. If you are telling a story about the future, you say “this is going to happen” and your probability drops. “Therefore, this is going to happen,” and your probability drops further. “Therefore, this is going to happen,” and your probability drops to the floor.
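As a minimal illustration of how fast that joint probability falls (the 80%-per-step figure below is an arbitrary assumption, chosen only to show the shape of the decay):

```python
# Each added "therefore" multiplies in another probability. Even generous
# 80%-likely steps drive the joint probability of the whole story toward the floor.
p_step = 0.8
joint = 1.0
for step in range(1, 11):
    joint *= p_step
    print(f"after {step} details: {joint:.3f}")  # after 10 details: 0.107
```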
We pay too much attention to how strongly representative things sound and not enough to the simplicity of the story or the impossibility of getting anything really complex right. Treating every additional detail of your story as an additional burden is what separates cautious futurism from incautious futurism. You can tell on sight whether a given futurist is spinning a long story, or whether they are trying to keep their story as short as possible because they understand that every additional detail they add is another chance to get the story wrong.
The last bias I am going to warn you against is the main bias you need to understand in order to study biases without becoming stupider. That is the notion of motivated skepticism. This is a study that examined the prior attitudes and attitude changes of students when exposed to political literature for and against controversial issues such as affirmative action and gun control. The study predicted a prior attitude effect. Even when urged to be objective, subjects would more favorably evaluate the strength of arguments that support their initial position. Right off the bat, I want to point out that if you had an ideal, rational Bayesian artificial intelligence, it would not be doing that. That may sound very human, but in fact it is completely crazy. We should never forget how crazy we all are.
Disconfirmation bias: subjects will spend more time and effort trying to refute contrary arguments than supportive arguments. Why is this a bad thing? Because it results in attitude polarization: exposing subjects to an apparently balanced set of pro and con arguments will exaggerate their initial polarization. When you gather more evidence, you are supposed to come closer to agreement. But if, in that stream of evidence, you accept the tidbits you agree with while scrutinizing the ones you disagree with closely for possible flaws, then two people who disagree can end up, at the end of all this evidence, further apart than where they started. That is a very disheartening result. The very dangerous part of disconfirmation bias is that when you are doing it you feel very virtuous; you're being skeptical. It's good to be skeptical, right? You don't want to just accept anything you are told.
Confirmation bias: when subjects can choose among information sources, they seek out supportive sources rather than contrary sources. "Do you read the New York Times or watch Fox News?" Watching Fox News may sound like a bad idea, and probably is a bad idea, but nonetheless you should be asking yourself, "Am I reading things that mostly agree with me, or am I reading things that interestingly disagree with me?" The really disheartening part is the sophistication effect: the more knowledge you have about politics, the more subject you are to the above biases, because you have more ammunition with which to argue away the facts you dislike. The more you know, the worse you do.
That is why I am concluding with this bias. It is the thing you need to comprehend in order to study other biases without merely acquiring a new set of biases to detect in all the arguments you dislike, which would make you that much stupider. Of course, the stronger you feel about an issue, the larger the effect gets.
The study confirmed everything it had predicted, which would be a lot more suspicious if these effects had not already been independently confirmed before; this was a neat experiment testing them all at the same time. If you say, "Oh, well, this study must just have found what they were looking for; that's a well-known bias," then you are throwing out that piece of evidence and have become that much stupider on all future occasions, because you now have a fully general counterargument: "you're just proving what you started out believing."
For more information there is a site called Overcoming Bias. Just go to "standard biases" in the tag cloud and click on it. There are also two nice books, Judgment Under Uncertainty and Rational Choice in an Uncertain World. Thank you, and I'll put the books back up.