Saturday, March 29, 2014

There’s No Substitute for Good Judgment

By Robert Seawright

So says Commander Lyle Tiberius Rourke in the Disney film Atlantis: The Lost Empire, referring to the famous expression attributed to the great American showman P.T. Barnum: “There’s a sucker born every minute.” Even though Barnum didn’t say it, we get it. In talking about the scientific method in his famous 1974 Caltech commencement address, Nobel laureate Richard Feynman emphasized the point: “The first principle is that you must not fool yourself – and you are the easiest person to fool.”

Accordingly, we’re right to be skeptical about our decision-making abilities in general because our beliefs, judgments and choices are so frequently wrong. That is to say that they are mathematically in error, logically flawed, inconsistent with objective reality, or some combination thereof, largely on account of our behavioral and cognitive biases. Our intuition is simply not to be trusted.

Part of the problem is (as it so often is) explained by Nobel laureate Daniel Kahneman: “A remarkable aspect of your mental life is that you are rarely stumped. … you often have [supposed] answers to questions that you do not completely understand, relying on evidence that you can neither explain nor defend.” We thus jump to conclusions quickly – far too quickly – and without a proper basis.

We aren’t stupid, of course (or at least not entirely stupid). Yet even the smartest, most sophisticated and most perceptive among us make such mistakes and make them repeatedly and predictably. That predictability, together with our innate intelligence, offers at least some hope that we can do something meaningful to counteract the problems.

One appropriate response to our difficulties in this area is to create a carefully designed and data-driven investment process with fewer embedded decisions. When decision-making is risky business, it makes sense to limit the number of decisions that need to be made. For example, we can use a variety of screens to sort prospective investments and make sure those investments meet certain criteria before we put our money to work.
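
To make that concrete, here is a minimal sketch of what a rules-based screen can look like when the criteria are written down in advance, so the pass/fail call is made by the process rather than by gut feel in the moment. The field names and thresholds are invented purely for illustration; they are not recommendations.

```python
# A minimal sketch of a rules-based screen. The field names and thresholds
# below are invented for illustration; they are not recommendations.

CRITERIA = {
    "pe_ratio":       lambda x: x is not None and x < 20,    # valuation screen
    "debt_to_equity": lambda x: x is not None and x < 1.0,   # leverage screen
    "dividend_yield": lambda x: x is not None and x > 0.02,  # income screen
}

def passes_screens(candidate):
    """True only if the candidate satisfies every predefined criterion."""
    return all(test(candidate.get(field)) for field, test in CRITERIA.items())

candidates = [
    {"ticker": "AAA", "pe_ratio": 14.2, "debt_to_equity": 0.6, "dividend_yield": 0.031},
    {"ticker": "BBB", "pe_ratio": 35.0, "debt_to_equity": 0.4, "dividend_yield": 0.010},
]

shortlist = [c["ticker"] for c in candidates if passes_screens(c)]
print(shortlist)  # -> ['AAA']
```

The point is not the particular thresholds; it is that the judgment gets exercised once, up front and in writing, instead of anew with every tempting idea.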

It’s even tempting to try to create a fully “automated” system. However, the idea that we can (or should) weed out human judgment entirely is silly. Choices about how to create one’s investment process must be made and somebody (or, better yet, a group of somebodies*) will have to make them. Moreover, a process built to be devoid of human judgment runs grave risks of its own.

Take the case of Adrionna Harris, a sixth grader in Virginia Beach, Virginia, for example.

Last week, Adrionna saw a classmate cutting himself with a razor. She took the razor away, immediately threw it out, and set out to convince him to stop hurting himself. By all accounts, she did what we’d all want our own kids to do. The next day she told school administrators what had happened. The school wouldn’t have known about the incident (and the boy’s situation) if Adrionna hadn’t come forward.

For her troubles, Adrionna didn’t get a parade. She didn’t get congratulated or even thanked. Instead, she received a 10-day suspension with a recommendation for expulsion from school on account of the district’s “zero tolerance” policy. She had handled a dangerous weapon after all, even if just to protect a boy from harming himself. Only after a local television station got involved and started asking pesky questions did common sense prevail – school officials then (finally) agreed to talk with Adrionna’s parents and, in light of the bad publicity, lifted the suspension. When and where discretion is removed entirely, absurd – even dangerous – results can occur despite the best of intentions.

As noted, because our intuition isn’t trustworthy, we need to be sure that our investment process is data-driven at every point. We need to be able to check our work regularly. Generally speaking, it seems to me that the key is a carefully developed, consistent process that limits the number of decisions to be made and avoids “gut-level” choices unsupported by evidence, yet remains flexible enough to adjust when and as necessary.

No good process is static. Markets are adaptive and a good investment process needs to be adaptive. Approaches work for a while, sometimes even a long while, and then don’t. Markets change. People change. Trends change. Stuff happens. As Nobel laureate Robert Shiller recently told Institutional Investor magazine, big mistakes come from being “too formulaic and bureaucratic. People who belong to a group that makes decisions have a tendency to self-censor and not express ideas that don’t conform to the perceived professional standard. They’re too professional. They are not creative and imaginative in their approach.” The challenge then is to find a good balance so as to avoid having to make too many decisions while remaining flexible.

Several years ago, the Intelligence Advanced Research Projects Activity, a research agency within the U.S. intelligence community, launched the Good Judgment Project, headed by Philip Tetlock, the University of Pennsylvania professor and author of the landmark book Expert Political Judgment, which systematically describes the consistent errors of alleged experts and their lack of accountability for their forecasting failures. The idea is to use forecasting competitions to test the factors that lead analysts to make good decisions and to use what is learned to try to improve decision-making at every level.

The Project uses modern social science methods ranging from harnessing the wisdom of crowds to prediction markets to putting together teams of forecasters. The GJP research team attributes its success to a blend of getting the right people (i.e., the best individual forecasters), offering basic tutorials on inferential traps to avoid and best practices to embrace, concentrating the most talented forecasters onto the same teams, and constantly fine-tuning the aggregation algorithms it uses to combine individual forecasts into a collective prediction on each forecasting question.
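
The Project’s actual aggregation methods aren’t spelled out here, but the basic idea – pooling many individual probability estimates into one collective number – can be shown with a deliberately simple sketch. The equal weights and the “extremizing” exponent below are illustrative placeholders, not the GJP’s algorithm.

```python
# A deliberately simple sketch of pooling individual probability forecasts
# into one collective estimate. The equal weights and the "extremizing"
# exponent are illustrative placeholders, not the GJP's actual algorithm.

def aggregate(forecasts, weights=None, extremize_a=1.0):
    """Weighted-average probability forecasts, optionally pushed toward 0 or 1."""
    if weights is None:
        weights = [1.0] * len(forecasts)
    pooled = sum(w * p for w, p in zip(weights, forecasts)) / sum(weights)
    # Extremizing raises the odds to a power a >= 1; a = 1 leaves the
    # plain weighted average unchanged.
    odds = (pooled / (1.0 - pooled)) ** extremize_a
    return odds / (1.0 + odds)

team = [0.65, 0.70, 0.80, 0.60]   # four forecasters' probabilities for one question
print(round(aggregate(team), 3))                    # -> 0.688 (simple average)
print(round(aggregate(team, extremize_a=2.5), 3))   # pushed further toward 1
```

Even a toy version like this makes the trade-off visible: how much weight to give whom, and how aggressively to sharpen the crowd’s answer, are themselves judgment calls that have to be made somewhere.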

Significantly, Tetlock has discovered that experts and so-called experts can be divided roughly into two overlapping yet statistically distinguishable groups. One group fails to make better forecasts than random chance, and its predictions are much worse than those of extrapolation algorithms built from the aggregated forecasts of various groups. The other group does meaningfully better than chance; some of its members can even beat the extrapolation algorithms at times, although not by a wide margin. Interestingly, what distinguishes the good forecasters from the poor ones is a style of thinking.

Poor forecasters tend to see things through one analytical (often ideological) lens. That’s why pundits, who typically see the world through a specific ideological prism, have such lousy track records. Good forecasters use a wide assortment of analytical tools, seek out information from diverse sources (using “outside” sources is especially important), are comfortable with complexity and uncertainty, and are decidedly less sure of themselves. Sadly, it turns out that experts with the most inflated views of their own forecasting successes tended to attract the most media attention.

“Given the impressive power of this simple technique [drawing on the experience of others], we should expect people to go out of their way to use it. But they don’t,” says Harvard psychologist Daniel Gilbert. In a phrase coined by Kahneman and his late research partner, Amos Tversky, we often suffer from “theory-induced blindness.” Per Michael Mauboussin, the reason is clear: most of us think of ourselves as different from, and better than, those around us. Moreover, we are prone to see our situation as unique and special, or at least different. But in almost all cases, it isn’t.

“My counsel is greater modesty,” Tetlock says. “People should expect less from experts and experts should promise less.” The better forecasters are foxes – who know lots of little things – rather than hedgehogs – who “know” one big thing and who consistently see the world through that lens. For example, reading the first paragraph of a Frank Rich op-ed makes it possible to predict nearly everything the column will contain without having to read another word of it. In systems thinking terms, foxes have many models of the world while hedgehogs have one overarching model of the world. Foxes are skeptical about all grand theories, diffident in their forecasts, and always ready to adjust their ideas based upon what actually happens.

The very best performers are great teams* of people who create careful, data-driven statistical models based upon excellent analysis of the best evidence available in order to establish a rules-driven investment process. Yet, even at this point, the models are not of the be-all/end-all variety. Judgment still matters because all models are approximations at best and only work until they (inevitably) don’t anymore — think Long-Term Capital Management, for example.

Everyone who lives and works in the markets learns to deal with the inevitable – failure, uncertainty, and surprise. Some handle it better than others. But we can all still improve our decision-making skills, and we can do so with proper training.

According to Tetlock, the best way to become a better forecaster and decision-maker is to get in the habit of making quantitative probability estimates that can be objectively scored for accuracy over long stretches of time. Explicit quantification enables explicit accuracy feedback, which enables learning. We need to be able to check our work quickly and comprehensively. If we can find a basis to justify our poor decisions – if we can find an “out” – we will. Those “outs” need to be prevented before they can be latched onto.
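
In practice, “objectively scored” typically means a proper scoring rule; Tetlock’s forecasting tournaments used Brier scores, which for a yes/no question reduce (under one common convention) to the squared distance between the stated probability and what actually happened, so 0 is perfect and lower is better. A minimal sketch, with made-up forecasts and outcomes:

```python
# A minimal sketch of Brier scoring for yes/no forecasts. The forecasts and
# outcomes are made up. Convention used here: score = (probability - outcome)^2,
# so 0 is perfect, always saying 50% earns 0.25, and lower is better.

def brier(probability, outcome):
    """Squared error between a probability forecast and a 0/1 outcome."""
    return (probability - outcome) ** 2

def track_record_score(record):
    """Average Brier score over a list of (probability, outcome) pairs."""
    return sum(brier(p, o) for p, o in record) / len(record)

record = [
    (0.80, 1),   # said 80%, event happened
    (0.30, 0),   # said 30%, event did not happen
    (0.90, 0),   # overconfident miss
    (0.55, 1),
]

print(round(track_record_score(record), 3))  # -> 0.286
```

Kept over many questions and long stretches of time, a running score like this is exactly the kind of explicit accuracy feedback that forecloses the “outs” we would otherwise latch onto.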

Going through the effort consistently and comprehensively to check our work requires extraordinary organizational patience, but the stakes are high enough to merit such a long-term investment. In the investment world, long-term performance measures provide this sort of accuracy feedback, much to the annoyance of money managers. But it’s hardly enough. Astonishingly, Berkeley’s Terry Odean examined 10,000 individual brokerage accounts to see if stocks bought outperformed stocks sold and found that the reverse was true. So there is obviously a lot of room for improvement. As in every field, those who make poor decisions propose all sorts of justifications and offer all kinds of excuses. They insist that they were right but early, right but gob-smacked by the highly improbable or unforeseeable, almost right, mostly right or wrong for the right reasons. As always, such nonsense should be interpreted unequivocally as just-plain-wrong.

A quick summary of some of the (often overlapping) ways we can improve our judgment follows.

  • Make sure every decision-maker has positive and negative skin in the game.
  • Focus more on what goes wrong and why than upon what works (what Harvard Medical School’s Atul Gawande calls “the power of negative thinking”).
  • Make sure your investment process is data-driven at every point.
  • Keep the investment process as decentralized as possible.
  • Encourage a proliferation of small-scale experiments; whenever possible, test the way forward, gingerly, one cautious step at a time.
  • Move and read outside your own circles and interests.
  • Focus on process more than results.
  • Collaborate – especially with people who have very different ideas (what Kahneman calls “adversarial collaboration”).
  • Build in robust accountability mechanisms for yourself and your overall process.
  • Slow down and go through every aspect of the decision again (and again).
  • Establish a talented and empowered team charged with systematically showing you where and how you are wrong. In essence, we all need an empowered devil’s advocate.
  • Before making a big decision, conduct a “pre-mortem” in order to legitimize doubt and empower the doubters. Gather a group of people knowledgeable about the decision and provide a brief assignment: “Imagine that we are a year into the future. We implemented the plan as it now exists. The outcome has been a disaster. Take 10 minutes to write a brief history of that disaster.” Discuss.

Per Kahneman, organizations are more likely to succeed at overcoming bias than individuals. That’s partly on account of resources, and partly because self-criticism is so difficult. As described above, perhaps the best check on bad decision-making we have is when someone (or, when possible, an empowered team) we respect sets out to show us where and how we are wrong. Within an organization that means making sure that everyone can be challenged without fear of reprisal and that everyone (and especially anyone in charge) is accountable.

But that doesn’t happen very often. Kahneman routinely asks groups how committed they are to better decision-making and if they are willing to spend even one percent of their budgets on doing so. Sadly, he hasn’t had any takers yet. Smart companies and individuals will take him up on that challenge. Those that are smarter will do even more because there’s no substitute for good judgment.

