Google Track
Tuesday, July 24, 2012
The Future of Decision Making: Less Intuition, More Evidence
A fantastic post by Andrew McAfee
Human intuition can be astonishingly good, especially after it's improved by experience. Savvy poker players are so good at reading their opponents' cards and bluffs that they seem to have x-ray vision. Firefighters can, under extreme duress, anticipate how flames will spread through a building. And nurses in neonatal ICUs can tell if a baby has a dangerous infection even before blood test results come back from the lab.
The lexicon to describe this phenomenon is mostly mystical in nature. Poker players have a sixth sense; firefighters feel the blaze's intentions; nurses just know when something seems like an infection. They can't even tell us what data and cues they use to make their excellent judgments; their intuition springs from a deep place that can't be easily examined. Examples like these give many people the impression that human intuition is generally reliable, and that we should rely more on the decisions and predictions that come to us in the blink of an eye.
This is deeply misguided advice. We should rely less, not more, on intuition.
A huge body of research has clarified much about how intuition works, and how it doesn't. Here's some of what we've learned:
• It takes a long time to build good intuition. Chess players, for example, need 10 years of dedicated study and competition to assemble a sufficient mental repertoire of board patterns.
• Intuition only works well in specific environments, ones that provide a person with good cues and rapid feedback. Cues are accurate indications about what's going to happen next. They exist in poker and firefighting, but not in, say, stock markets. Despite what chartists think, it's impossible to build good intuition about future market moves because no publicly available information provides good cues about later stock movements. Feedback from the environment is information about what worked and what didn't. It exists in neonatal ICUs because babies stay there for a while. It's hard, though, to build medical intuition about conditions that change after the patient has left the care environment, since there's no feedback loop.
• We apply intuition inconsistently. Even experts are inconsistent. One study determined what criteria clinical psychologists used to diagnose their patients, and then created simple models based on these criteria. Then, the researchers presented the doctors with new patients to diagnose and also diagnosed those new patients with their models. The models did a better job diagnosing the new cases than did the humans whose knowledge was used to build them. The best explanation for this is that people applied what they knew inconsistently — their intuition varied. Models, though, don't have intuition. (A small code sketch of this idea appears just after this list.)
• It's easy to make bad judgments quickly. We have many biases that lead us astray when making assessments. Here's just one example. If I ask a group of people "Is the average price of German cars more or less than $100,000?" and then ask them to estimate the average price of German cars, they'll "anchor" around BMWs and other high-end makes when estimating. If I ask a parallel group the same two questions but say "more or less than $30,000" instead, they'll anchor around VWs and give a much lower estimate. How much lower? About $35,000 lower on average, or roughly half the difference between the two anchor prices. How information is presented affects what we think.
• We can't tell where our ideas come from. There's no way for even an experienced person to know if a spontaneous idea is the result of legitimate expert intuition or of a pernicious bias. In other words, we have lousy intuition about our intuition.
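To make the "models beat their makers" point concrete, here is a minimal sketch in Python. The cue values and judgments are entirely made up (this is not the data from the study above): fit a simple linear model to an expert's own past judgments, then score every new case with the same weights.

    import numpy as np

    # Hypothetical cue values an expert might use (e.g., three symptom scores per patient)
    # and the expert's own past judgments for those patients.
    past_cues = np.array([
        [3.0, 1.0, 4.0],
        [1.0, 0.0, 2.0],
        [4.0, 2.0, 5.0],
        [2.0, 1.0, 1.0],
        [5.0, 3.0, 4.0],
    ])
    past_judgments = np.array([7.0, 2.0, 9.0, 3.0, 10.0])

    # Least-squares fit: the model captures how the expert weighted the cues on average.
    weights, *_ = np.linalg.lstsq(past_cues, past_judgments, rcond=None)

    # New cases get scored with the same weights every time: no fatigue, no mood, no inconsistency.
    new_cues = np.array([
        [2.0, 0.0, 3.0],
        [4.0, 1.0, 4.0],
    ])
    print(new_cues @ weights)

The point isn't that such a model is smart; it's that it applies the expert's own policy without the day-to-day noise.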
My conclusion from all of this research, and much more I've looked at, is that intuition is a lot like my opinion of Tom Cruise's acting ability: real, but vastly overrated and deployed far too often.
So can we do better? Do we have an alternative to relying on human intuition, especially in complicated situations where there are a lot of factors at play? Sure. We have a large toolkit of statistical techniques designed to find patterns in masses of data (even big masses of messy data), and to deliver best guesses about cause-and-effect relationships. No responsible statistician would say that these techniques are perfect or guaranteed to work, but they're pretty good.
The arsenal of statistical techniques can be applied to almost any setting, including wine evaluation. Princeton economist Orley Ashenfelter predicts Bordeaux wine quality (and hence eventual price) using a model he developed that takes into account winter and harvest rainfall and growing season temperature. Massively influential wine critic Robert Parker has called Ashenfelter an "absolute total sham" and his approach "so absurd as to be laughable." But as Ian Ayres recounts in his great book Super Crunchers, Ashenfelter was right and Parker wrong about the '86 vintage, and the way-out-on-a-limb predictions Ashenfelter made about the sublime quality of the '89 and '90 wines turned out to be spot on.
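For readers who want to see what a model in this spirit looks like in code, here is a minimal sketch. The rainfall, temperature, and price figures are illustrative placeholders, not Ashenfelter's actual data or coefficients.

    import numpy as np

    # Columns: intercept, winter rainfall (mm), harvest rainfall (mm), growing-season temperature (C).
    vintages = np.array([
        [1.0, 600.0,  80.0, 17.1],
        [1.0, 690.0, 120.0, 16.7],
        [1.0, 500.0,  40.0, 17.8],
        [1.0, 750.0, 180.0, 16.3],
        [1.0, 640.0,  60.0, 17.4],
        [1.0, 580.0, 150.0, 16.9],
    ])
    # Relative (log) auction prices for those vintages.
    log_prices = np.array([-0.9, -1.3, -0.2, -1.6, -0.6, -1.1])

    # Ordinary least squares: warm, dry vintages should come out with higher predicted prices.
    coef, *_ = np.linalg.lstsq(vintages, log_prices, rcond=None)

    # Score a hypothetical new vintage: wet winter, dry harvest, warm growing season.
    new_vintage = np.array([1.0, 720.0, 50.0, 17.6])
    print("predicted log price:", new_vintage @ coef)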
Those of us who aren't wine snobs or speculators probably don't care too much about the prices of first-growth Bordeaux, but most of us would benefit from accurate predictions about such things as academic performance in college; diagnoses of throat infections and gastrointestinal disorders; occupational choice; and whether or not someone is going to stay in a job, become a juvenile delinquent, or commit suicide.
I chose those seemingly random topics because they're ones where statistically-based algorithms have demonstrated at least a 17 percent advantage over the judgments of human experts.
But aren't there at least as many areas where the humans beat the algorithms? Apparently not. A 2000 paper surveyed 136 studies in which human judgment was compared to algorithmic prediction. Sixty-five of the studies found no real difference between the two, and 63 found that the equation performed significantly better than the person. Only eight of the studies found that people were significantly better predictors of the task at hand. If you're keeping score, that's just under a 6% win rate for the people and their intuition, and a 46% rate of clear losses.
So why do we continue to place so much stock in intuition and expert judgment? I ask this question in all seriousness. Overall, we get inferior decisions and outcomes in crucial situations when we rely on human judgment and intuition instead of on hard, cold, boring data and math. This may be an uncomfortable conclusion, especially for today's intuitive experts, but so what? I can't think of a good reason for putting their interests over the interests of patients, customers, shareholders, and others affected by their judgments.
So do we just dispense with the human experts altogether, or take away all their discretion and tell them to do whatever the computer says? In a few situations, this is exactly what's been done. For most of us, our credit scores are an excellent predictor of whether we'll pay back a loan, and banks have long relied on them to make automated yes/no decisions about offering credit. (The sub-prime mortgage meltdown stemmed in part from the fact that lenders started ignoring or downplaying credit scores in their desire to keep the money flowing. This wasn't intuition as much as rank greed, but it shows another important aspect of relying on algorithms: They're not greedy, either).
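As a toy illustration of how fully automated, score-based lending works (the cutoff values here are made up, not any bank's actual policy), the entire "judgment" can be reduced to a threshold rule:

    def credit_decision(score: int) -> str:
        """Automated yes/no decision based on a credit score alone (illustrative cutoffs)."""
        if score >= 700:
            return "approve"
        if score >= 620:
            return "approve at a higher rate"
        return "decline"

    for applicant_score in (760, 645, 580):
        print(applicant_score, "->", credit_decision(applicant_score))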
In most cases, though, it's not feasible or smart to take people out of the decision-making loop entirely. When this is the case, a wise move is to follow the trail being blazed by practitioners of evidence-based medicine, and to place human decision makers in the middle of a computer-mediated process. Such a process:
• presents an initial answer or decision generated from the best available data and knowledge, which in many cases will be computer generated and statistically based;
• gives the expert involved the opportunity to override the default decision;
• monitors how often overrides occur, and why;
• feeds back data on override frequency to both the experts and their bosses;
• monitors outcomes/results of the decision (if possible) so that both algorithms and intuition can be improved.
A small code sketch of this kind of process appears below.
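Here is a minimal sketch of what such a computer-mediated process might look like in code. The names and structure are my own, not any particular system's: the model proposes a default, the expert can override it, and every override (and, when observable, the eventual outcome) is logged so that both the algorithm and the experts can be audited and improved.

    from dataclasses import dataclass, field

    @dataclass
    class MediatedDecision:
        case_id: str
        model_recommendation: str
        final_decision: str
        overridden: bool
        override_reason: str = ""
        outcome: str = ""  # filled in later, if the result can be observed

    @dataclass
    class DecisionLog:
        records: list = field(default_factory=list)

        def decide(self, case_id, model_recommendation, expert_decision, reason=""):
            # The expert sees the model's default and may accept or override it.
            overridden = expert_decision != model_recommendation
            record = MediatedDecision(case_id, model_recommendation,
                                      expert_decision, overridden, reason)
            self.records.append(record)
            return record

        def override_rate(self) -> float:
            # Fed back to the experts and their bosses.
            if not self.records:
                return 0.0
            return sum(r.overridden for r in self.records) / len(self.records)

    log = DecisionLog()
    log.decide("case-001", "approve", "approve")
    log.decide("case-002", "decline", "approve", reason="expert saw mitigating evidence")
    print(f"override rate: {log.override_rate():.0%}")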
Over time, we'll get more data, more powerful computers, and better predictive algorithms. We'll also do better at helping group-level (as opposed to individual) decision making, since many organizations require consensus for important decisions. This means that the 'market share' of computer automated or mediated decisions should go up, and intuition's market share should go down. We can feel sorry for the human experts whose roles will be diminished as this happens. I'm more inclined, however, to feel sorry for the people on the receiving end of today's intuitive decisions and judgments.
What do you think? Am I being too hard on intuitive decision making, or not hard enough? Can experts and algorithms learn to get along? Have you seen cases where they're doing so? Leave a comment, please, and let us know.
Labels:
algorithm,
analytics,
data,
data analysis,
decision,
decision making,
fact,
facts,
intuition,
mining,
prediction,
Predictive,
random