Jun 8, 2011

Risk, Probability and How Brains Are Easily Misled

The World Science Festival’s panel on Probability and Risk started out in an unusual manner: MIT’s Josh Tenenbaum strode onto a stage and flipped a coin five times, claiming he was psychically broadcasting each result to the audience. The audience dutifully wrote down the results they thought he had seen on note cards and handed them in when the experiment was over. Towards the end of the program, he announced that the odds were low that even one person in the audience had guessed the full sequence of results. When he revealed that sequence, however, about a dozen people raised their hands, saying it was exactly what they had written down.

Is Tenenbaum psychic? Is the audience sprinkled with liars?

Neither, according to Tenenbaum. Instead, we’re the victims of our own tendency to expect that a series of coin tosses will produce results that look satisfyingly random to us. As a result, we’re unlikely to suggest a series of four heads followed by a tail; instead, we’re likely to end up choosing something like TTHTH. So likely, in fact, that if the coin flips do happen to produce one of these random-looking patterns, it’ll be overrepresented in whatever crowd we’re testing. Instant psychic ability, with built-in statistical significance.
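A quick simulation shows how this clustering manufactures “psychic” hits. The sketch below is a toy model: the audience size, the 60/40 split between popular and uniform guesses, and every sequence except TTHTH are assumptions made purely for illustration.

```python
import random

FLIPS = 5
AUDIENCE = 300   # assumed audience size
TRIALS = 2_000

# TTHTH is the example from the talk; the other "random-looking"
# favorites are invented for the sake of the demo.
popular = ["TTHTH", "HTHHT", "THHTH", "HTTHH"]

def uniform_guess():
    return "".join(random.choice("HT") for _ in range(FLIPS))

def biased_guess():
    # Assume 60% of people pick one of the popular sequences,
    # while the rest guess uniformly at random.
    if random.random() < 0.6:
        return random.choice(popular)
    return uniform_guess()

def expected_matches(target, guesser):
    """Average number of audience members matching the target sequence."""
    total = sum(sum(guesser() == target for _ in range(AUDIENCE))
                for _ in range(TRIALS))
    return total / TRIALS

target = "TTHTH"  # suppose the flips land on a random-looking pattern
print(f"uniform guessing:   {expected_matches(target, uniform_guess):.1f}")  # ~300/32, about 9
print(f"clustered guessing: {expected_matches(target, biased_guess):.1f}")   # roughly 5x higher
```

The same logic runs in reverse: if the flips happen to land on a pattern guessers avoid, like HHHHT, the match count drops well below the uniform baseline.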


The funny thing is that this isn’t the product of some mental weakness. Tenenbaum suggested that it’s the product of an excellent built-in sense of what makes for a random pattern. If you graph the frequency of the various possible guesses, you see peaks at the random-looking series and valleys at the ones that chance would seem to disfavor. When he compared the graph generated from this audience to one produced in the 1930s, the patterns were nearly identical: what we think of as random appears to be quite stable.

The one exception, he noted, came when he performed the experiment with a math-savvy audience. There, part of the audience recognized that any specific series is equally probable, so they were more likely to put down all heads or all tails.

Subverting wisdom

Although Tenenbaum clearly felt that our intuitive feel for randomness was a positive feature, other speakers on the panel noted that human decision-making can obviously get stuck or be manipulated. Mathematician Amir Aczel mentioned that many trained mathematicians can’t wrap their heads around the Monty Hall problem, in which shifting conditional probabilities dictate how you should act on a popular game show. It’s relatively easy to run through the probabilities showing that you should always switch doors (doing so wins two-thirds of the time), but the answer remains counterintuitive, even for those with an exceptional grasp of math.
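For the skeptical, the fastest cure is to play the game many times. Here is a minimal simulation (Python, standard library only) of the standard setup: three doors, one prize, and a host who always opens a losing door you didn’t pick.

```python
import random

def monty_hall(switch, trials=100_000):
    """Play the game `trials` times and return the win rate."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)      # prize placed behind one of three doors
        choice = random.randrange(3)   # contestant picks a door at random
        # Monty opens a door that hides no car and wasn't chosen.
        # (When he has two options, which one he opens doesn't affect the win rate.)
        opened = next(d for d in range(3) if d != choice and d != car)
        if switch:
            # Move to the one door that is neither the original pick
            # nor the door Monty opened.
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += choice == car
    return wins / trials

print(f"stay:   {monty_hall(switch=False):.3f}")   # ≈ 0.333
print(f"switch: {monty_hall(switch=True):.3f}")    # ≈ 0.667
```

The intuition pump hiding in the code: switching wins exactly when your first pick was wrong, and that happens two times out of three.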

And that’s assuming, as co-panelist Gerd Gigerenzer noted, that Monty isn’t being malicious. A crowd experiment run by physicist Leonard Mlodinow showed how easy it is to manipulate people’s answers to simple questions without doing anything overt. Mlodinow divided the audience in half and asked each half, separately, to estimate the number of countries in Africa. This is a standard “wisdom of the crowds” sort of question, where the mean should land somewhere close to the actual number. Instead, the two groups produced wildly divergent means, with one half of the audience answering well above the actual number and the other significantly below it.

How’d he manage this? Prior to asking for the actual number, Mlodinow had posed a question that subtly primed each group. One half of the audience was asked whether they thought there were more than 180 countries in Africa; this group ended up with a much higher mean. The other half was asked whether there were more than five, and their answers were, on average, too low. Although this was a case of conscious manipulation, it’s easy to see how a similar effect could be generated accidentally, simply based on (for example) the order of questions in a survey.
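There’s no algorithm in an anchoring effect, but a crude model makes the mechanics visible. Assume, purely for illustration, that each estimate gets dragged partway toward whatever number the priming question mentioned; the pull strength and noise below are invented, and only the two anchors (180 and five) and the true count of 54 countries are real.

```python
import random
import statistics

TRUE_COUNT = 54  # actual number of countries in Africa

def anchored_estimate(anchor, pull=0.5, noise=15):
    """Toy anchoring model: a noisy guess dragged `pull` of the
    way toward the anchor. Both parameters are made up."""
    guess = random.gauss(TRUE_COUNT, noise)
    return (1 - pull) * guess + pull * anchor

high_group = [anchored_estimate(anchor=180) for _ in range(200)]
low_group  = [anchored_estimate(anchor=5) for _ in range(200)]

print(f"asked about 180 first: mean ≈ {statistics.mean(high_group):.0f}")  # ~117, far too high
print(f"asked about 5 first:   mean ≈ {statistics.mean(low_group):.0f}")   # ~30, too low
```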

How do we fix this?

Does all this mean that humans will perpetually remain stuck when it comes to risk and probability? Possibly not, but we have to be careful. That was the message of Gerd Gigerenzer, who helps train decision makers to evaluate probabilities. Gigerenzer consistently noted that the language used to present probabilities matters a great deal.

The most compelling example he gave was one he used when working in medical education. He described the probabilities associated with a breast cancer test: one percent of women tested have the disease, and the test is 90 percent accurate, with a nine percent false positive rate. Given all that information, what do you tell a woman who tests positive about the likelihood that she actually has the disease? For a lot of people in medicine, the question leaves them stumped; a typical survey of doctors (and the World Science Festival audience) reveals no consensus about the probability that the test indicates a real case of cancer. Worked through with Bayes’ theorem, the answer is only about nine percent.
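The arithmetic follows directly from the three numbers given, first as a straight conditional-probability calculation and then in the natural-frequency framing Gigerenzer advocates:

```python
prevalence = 0.01       # 1% of women tested have the disease
sensitivity = 0.90      # the test catches 90% of real cases
false_positive = 0.09   # 9% of healthy women test positive anyway

# Bayes' theorem: P(disease | positive test)
p_positive = prevalence * sensitivity + (1 - prevalence) * false_positive
print(f"P(disease | positive) = {prevalence * sensitivity / p_positive:.1%}")  # 9.2%

# The same facts as natural frequencies, per 10,000 women tested:
sick = 10_000 * prevalence                     # 100 women have the disease
true_pos = sick * sensitivity                  # 90 of them test positive
false_pos = (10_000 - sick) * false_positive   # 891 healthy women also test positive
print(f"{true_pos:.0f} real cases among {true_pos + false_pos:.0f} positive tests")
```

Phrased as “90 real cases out of 981 positive tests,” the answer is hard to miss; delivered as conditional probabilities, the same facts stump even trained physicians.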

Read more at Wired Science
