Read Part 1 here
Mistake #4: Not imposing self-accountability.
Solution: Maintain a decision-making journal.
The practice of maintaining a journal is valuable because it mitigates some common cognitive traps.
The first of these is hindsight bias: the sense that you knew what was going to happen before the event occurred with greater confidence than you actually did.
Creeping determinism is a related trap. This is the name for the sense that what happened was inevitable.
In both cases, your mind takes the facts surrounding an event and weaves a narrative to explain the result. You do this unconsciously and effortlessly. Knowledge of the outcome and the facts behind it bleeds into your memory, and you start to believe that you knew more than you did.
Specific probabilities in your journal also allow you to keep score. This takes even more discipline but can provide essential feedback. The Brier score is a classic way to measure the accuracy of probabilistic forecasts. In its simplest form, a Brier score is the square of the forecast error, where the forecast and the outcome are expressed as probabilities between zero and one. For example, if you predict that it will rain tomorrow with 100 percent probability and it does, then the Brier score is zero: (1.00 - 1.00)^2 = 0. A Brier score of zero is a perfect forecast. If you predict rain tomorrow with 100 percent probability and it doesn't rain, then the Brier score is one: (1.00 - 0)^2 = 1. A Brier score of one is the worst possible score.
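To make the arithmetic concrete, here is a minimal Python sketch of scoring such journal entries. The entries, field names, and helper function are illustrative assumptions, not part of the article:

```python
# A hypothetical decision journal: each entry records a stated probability
# and whether the event actually occurred.
journal = [
    {"forecast": 1.00, "occurred": True},   # predicted rain with certainty; it rained  -> 0.00
    {"forecast": 1.00, "occurred": False},  # predicted rain with certainty; it didn't  -> 1.00
    {"forecast": 0.70, "occurred": True},   # a 70 percent call that came true          -> 0.09
]

def brier_score(forecast: float, occurred: bool) -> float:
    """Squared error between the stated probability and the outcome (1 if it happened, else 0)."""
    outcome = 1.0 if occurred else 0.0
    return (forecast - outcome) ** 2

# Average across entries: 0 is a perfect record, 1 is the worst possible single score.
scores = [brier_score(e["forecast"], e["occurred"]) for e in journal]
print([round(s, 2) for s in scores], "average:", round(sum(scores) / len(scores), 4))
```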
In forecasting, there are two key measures of accuracy. The first is calibration, which captures how well your subjective probabilities match the objective probabilities over time. To illustrate, if it rains 70 percent of the days when you predict a 70 percent chance of rain, you are well calibrated.
The second measure is discrimination, which asks whether over the long haul you assign higher probabilities to things that actually occur. Lots of forecasts of 100 percent probability before rainy days and zero percent probability before sunny days would demonstrate good discrimination.
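Both measures can be checked directly against a journal. The sketch below uses made-up forecasts chosen only for illustration: it groups forecasts by stated probability to check calibration, then compares the average probability assigned before events that occurred with the average assigned before events that did not, as a rough check on discrimination.

```python
from collections import defaultdict

# Made-up (stated probability, event occurred?) pairs, e.g. daily rain forecasts.
forecasts = [
    (0.7, True), (0.7, True), (0.7, False), (0.7, True),   # 70 percent calls
    (0.9, True), (0.9, True),                               # 90 percent calls
    (0.1, False), (0.1, False), (0.1, True),                # 10 percent calls
]

# Calibration: within each bucket, does the observed frequency match the stated probability?
buckets = defaultdict(list)
for p, occurred in forecasts:
    buckets[p].append(occurred)
for p in sorted(buckets):
    observed = sum(buckets[p]) / len(buckets[p])
    print(f"stated {p:.0%} -> observed {observed:.0%} over {len(buckets[p])} forecasts")

# Discrimination: were higher probabilities assigned, on average, to events that occurred?
hits = [p for p, occurred in forecasts if occurred]
misses = [p for p, occurred in forecasts if not occurred]
print(f"average probability before events that occurred: {sum(hits) / len(hits):.0%}")
print(f"average probability before events that did not:  {sum(misses) / len(misses):.0%}")
```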
Besides their work on heuristics and biases, Kahneman and Tversky are also known for Prospect Theory.
This theory describes how the choices people make depart from normative economic theory when decisions are made in probabilistic settings and involve risk. For example, most people are loss averse, which means that they suffer a loss roughly 2.0 to 2.5 times as much as they enjoy a comparable gain.
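As a rough illustration, assume a loss-aversion coefficient of 2.25, in the middle of that 2.0 to 2.5 range, and a simple linear value function (the full prospect-theory value function is curved; this sketch ignores that):

```python
LOSS_AVERSION = 2.25  # assumed coefficient, mid-point of the 2.0-2.5 range above

def felt_value(outcome: float) -> float:
    """Psychological value of a dollar outcome, with losses weighted more heavily than gains."""
    return outcome if outcome >= 0 else LOSS_AVERSION * outcome

# A 50/50 coin flip to win or lose $100 is neutral in expected dollars...
expected_dollars = 0.5 * 100 + 0.5 * (-100)                      # 0.0
# ...but feels like a losing proposition once losses loom larger than gains.
expected_feel = 0.5 * felt_value(100) + 0.5 * felt_value(-100)   # -62.5
print(expected_dollars, expected_feel)
```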
After publishing on prospect theory, Kahneman and Tversky sought to quantify the psychological weights, called “decision weights”, which people placed on different types of financial propositions.
In general, subjects offer accurate weights at the far extremes but have trouble in between. For instance, they tend to overweight low-probability events. Imagine you have a 1 percent chance of winning 1 million dollars, and you’ll know the outcome tomorrow. You have some hope, but it is slim. Subjects place a decision weight of 5.5 percent on a 1 percent objective probability. Kahneman calls this the “possibility effect”.
Subjects also tend to underweight high-probability events. Now suppose you have a 99 percent chance of winning the 1 million dollars, which means only a 1 percent chance of not winning it. Subjects assign a decision weight of 91.2 percent on a 99 percent objective probability.
Anxiety over the possibility of losing is more salient than the hope of winning. Kahneman calls this the “certainty effect”.
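A small worked example using only the two weights cited above (5.5 percent felt for a 1 percent chance, 91.2 percent felt for a 99 percent chance) shows how far the felt value of the 1 million dollar gamble departs from its expected value; the code layout itself is just an illustration:

```python
# Decision weights for the two probabilities discussed above; other probabilities are omitted.
decision_weights = {0.01: 0.055, 0.99: 0.912}
prize = 1_000_000

for p, w in decision_weights.items():
    expected = p * prize   # what the objective odds are worth
    felt = w * prize       # what the over- or underweighted odds feel like
    print(f"objective {p:.0%}: expected value ${expected:,.0f}, felt value ${felt:,.0f}")

# The 1 percent chance feels bigger than it is (possibility effect);
# the 99 percent chance feels smaller than it is (certainty effect).
```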
Mistake #5: Creating an environment that is not conducive to good decisions.
Solution: Be mindful of your surroundings and work to improve them.
One idea that is well established in social psychology is the fundamental attribution error, or correspondence bias. This idea says that when we observe the behavior of others, we attribute that behavior to the individual's disposition rather than to the situation. Just as important, there is substantial evidence that the situation exerts a very powerful influence on the decisions people make.
Zimbardo provides a day-by-day account of the events in his book, The Lucifer Effect. While his Stanford Prison Experiment is an extreme example of this phenomenon, Zimbardo notes that the same conditions were in place in other cases of bad behavior, including the abuse of prisoners at Abu Ghraib, Iraq, in 2003 and 2004.
Most organisations don't find themselves in situations as extreme as the Stanford Prison Experiment, but the mistake of creating an environment that is less than ideal for quality decision making is prevalent nonetheless. One of the essential lessons of the fundamental attribution error is that social context plays a major role in shaping decisions, and we tend to underestimate that role.
If you lead an investment team, you may want to evaluate the environment you have created across a few dimensions. Ask these questions:
1. Does the analytical team have access to, and avail itself of, base-rate data so that it can properly use the outside view?
2. As an organisation, are we open to new ideas that may challenge our mind-sets? Do we need to do a red-team exercise to confront our beliefs?
3. Are we always explicit about distinguishing between facts and opinions? Are we properly weighting the two?
4. Are we structured so that we can keep track of the quality of our decisions (our processes) as well as our outcomes? Are we communicating using probabilities instead of statements? How good are we at providing feedback?
5. Do we have the correct amount of stress in our organisation? Have we had episodes where we've veered toward too much stress, thereby affecting our decisions?