Eight to Late

Sensemaking and Analytics for Organizations

Cognitive biases as project meta-risks – part 2

Introduction

Risk management is fundamentally about making decisions in the face of uncertainty. These decisions are based on perceptions of future events, supplemented by analyses of data relating to those events. As such, these decisions are subject to cognitive biases – human tendencies to base judgements on flawed perceptions of events and/or data. In an earlier post, I argued that cognitive biases are meta-risks, i.e. risks of risk analysis. An awareness of how these biases operate can pave the way towards reducing their effects on risk-related decisions. In this post I therefore look into the nature of cognitive biases. In particular:

  1. The role of intuition and rational thought in the expression of cognitive biases.
  2. The psychological process of attribute substitution, which underlies judgement-related cognitive biases.

I then take a brief look at ways in which the effect of bias in decision-making can be reduced.

The role of intuition and rational thought in the expression of cognitive biases

Research in psychology has established that human cognition works through two distinct processes: System 1, which corresponds to intuitive thought, and System 2, which corresponds to rational thought. In his Nobel Prize lecture, Daniel Kahneman had this to say about the two systems:

The operations of System 1 are fast, automatic, effortless, associative, and often emotionally charged; they are also governed by habit, and are therefore difficult to control or modify. The operations of System 2 are slower, serial, effortful, and deliberately controlled; they are also relatively flexible and potentially rule-governed.

The surprise is that judgements always involve System 2 processes. In Kahneman’s words:

 …the perceptual system and the intuitive operations of System 1 generate impressions of the attributes of objects of perception and thought. These impressions are not voluntary and need not be verbally explicit. In contrast, judgments are always explicit and intentional, whether or not they are overtly expressed. Thus, System 2 is involved in all judgments, whether they originate in impressions or in deliberate reasoning.

So, all judgements, whether intuitive or rational, are monitored by System 2. Kahneman suggests that this monitoring can be very cursory, thus allowing System 1 impressions to be expressed directly, whether they are right or not. Seen in this light, cognitive biases are unedited (or at best lightly edited) expressions of often incorrect impressions.

Attribute substitution: a common mechanism for judgement-related biases

In a paper entitled Representativeness Revisited, Kahneman and Frederick suggest that the psychological process of attribute substitution is the mechanism that underlies many cognitive biases. Attribute substitution is the tendency of people to answer a difficult decision-making question by interpreting it as a simpler (but related) one. In their paper, Kahneman and Frederick describe attribute substitution as occurring when:

 …an individual assesses a specified target attribute of a judgment object by substituting a related heuristic attribute that comes more readily to mind…

An example might help decode this somewhat academic description. I pick one from Kahneman’s Edge master class, where he related the following:

 When I was living in Canada, we asked people how much money they would be willing to pay to clean lakes from acid rain in the Halliburton region of Ontario, which is a small region of Ontario. We asked other people how much they would be willing to pay to clean lakes in all of Ontario.

People are willing to pay the same amount for the two quantities because they are paying to participate in the activity of cleaning a lake, or of cleaning lakes. How many lakes there are to clean is not their problem. This is a mechanism I think people should be familiar with. The idea that when you’re asked a question, you don’t answer that question, you answer another question that comes more readily to mind. That question is typically simpler; it’s associated, it’s not random; and then you map the answer to that other question onto whatever scale there is—it could be a scale of centimeters, or it could be a scale of pain, or it could be a scale of dollars, but you can recognize what is going on by looking at the variation in these variables. I could give you a lot of examples because one of the major tricks of the trade is understanding this attribute substitution business. How people answer questions.

Attribute substitution boils down to making judgements based on specific, known instances of the events or issues under consideration. For example, people often overrate their own abilities because they base their self-assessments on specific instances where they did well, ignoring situations in which their performance was below par. To take another example from the Edge class:

 COMMENT: So for example in the Save the Children—types of programs, they focus you on the individual.

KAHNEMAN: Absolutely. There is even research showing that when you show pictures of ten children, it is less effective than when you show the picture of a single child. When you describe their stories, the single instance is more emotional than the several instances and it translates into the size of contributions.  People are almost completely insensitive to amount in system one. Once you involve system two and systematic thinking, then they’ll act differently. But emotionally we are geared to respond to images and to instances…

Kahneman sums it up in a line from his Nobel lecture: “The essence of attribute substitution is that respondents offer a reasonable answer to a question that they have not been asked.”

Several decision-making biases in risk analysis operate via attribute substitution – these include availability, representativeness, overconfidence and selective perception (see this post for specific examples drawn from high-profile failed projects). Armed with this understanding of how these meta-risks operate, let’s look at how their effect can be minimised.

System 2 to the rescue, but…

The discussion of the previous section suggests that people often base judgements on specific instances that come to mind, ignoring the range of all possible instances. They do this because specific instances – usually concrete instances that have been experienced – come to mind more easily than the abstract “universe of possibilities.”

Those who make erroneous judgements will correct them only if they become aware of factors they did not take into account when making the judgement, or when they realise that their conclusions are not logical. This can happen only through rational analysis, which in turn requires a deliberate invocation of System 2 thinking.

Some of the ways in which System 2 can be helped along are:

  1. By reframing the question or issue in terms that force analysts to consider the range of possible instances rather than specific ones. A common manifestation of the latter is when risk managers base their plans on the assumption that average conditions will occur – an assumption that Professor Sam Savage calls the flaw of averages (see Dr. Savage’s very entertaining and informative book for more on the flaw of averages and related statistical fallacies). A small simulation illustrating the point follows this list.
  2. By requiring analysts to come up with pros and cons for any decision they make. This forces them to consider possibilities they may not have taken into account when making the original decision.
  3. By basing decisions on relevant empirical or historical data instead of relying on intuitive impressions.
  4. By making analysts aware of their propensity to be overconfident (or under-confident) through an evaluation of their probability calibration. One way to do this is to ask them to answer a series of trivia questions with confidence estimates for each of their answers (i.e. their self-estimated probability of being right). Their confidence estimates are then compared to the fraction of questions correctly answered; a well calibrated individual’s confidence estimates should be close to the percentage of correct answers (a short sketch of such a check follows this list). There is some evidence to suggest that analysts can be trained to improve their calibration through cycles of testing and feedback. Calibration training is discussed in Douglas Hubbard’s book, The Failure of Risk Management. However, as discussed here, improved calibration through feedback and repeated tests may not carry over to judgements in real-life situations.
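To make the flaw of averages (item 1) concrete, here is a minimal sketch in Python; the task durations and distributions are made up purely for illustration. A project finishes when the slower of two parallel tasks finishes, so a plan based on the average duration of each task suggests an earlier finish than the uncertainty actually implies:

```python
# Flaw of averages: planning on average task durations vs simulating the uncertainty.
# The durations below are hypothetical, chosen only to illustrate the effect.
import random

random.seed(1)

N = 100_000
finish_times = []
for _ in range(N):
    task_a = random.uniform(5, 15)   # duration of task A, average 10 days
    task_b = random.uniform(5, 15)   # duration of task B, average 10 days
    finish_times.append(max(task_a, task_b))  # project ends when the slower task ends

plan_on_averages = max(10, 10)            # the "average conditions" plan: 10 days
expected_finish = sum(finish_times) / N   # what the uncertainty actually implies

print(f"Plan based on average durations: {plan_on_averages:.1f} days")
print(f"Simulated expected finish time : {expected_finish:.1f} days")  # roughly 11.7 days
```

The gap arises because the maximum of the averages understates the average of the maximum, which is exactly the kind of error that reframing the question around the full range of possibilities is meant to expose.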
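Similarly, the calibration check described in item 4 can be sketched in a few lines of Python. The responses below are hypothetical; the point is simply to compare stated confidence with observed accuracy, bucketed by confidence level:

```python
# Compare an analyst's stated confidence with the fraction of answers actually correct.
# The responses are made-up illustrative data, not results from a real test.
from collections import defaultdict

# (stated confidence, answered correctly?)
responses = [
    (0.9, True), (0.9, True), (0.9, False), (0.9, False),   # claims 90%, scores 50%
    (0.7, True), (0.7, True), (0.7, False),                  # claims 70%, scores ~67%
    (0.5, True), (0.5, False),                               # claims 50%, scores 50%
]

by_confidence = defaultdict(list)
for confidence, correct in responses:
    by_confidence[confidence].append(correct)

for confidence in sorted(by_confidence, reverse=True):
    outcomes = by_confidence[confidence]
    accuracy = sum(outcomes) / len(outcomes)   # True counts as 1, False as 0
    gap = confidence - accuracy                # a positive gap suggests overconfidence
    print(f"stated {confidence:.0%}  actual {accuracy:.0%}  gap {gap:+.0%}  (n={len(outcomes)})")
```

A persistent positive gap points to overconfidence; feeding such results back to analysts over repeated rounds is the testing-and-feedback cycle mentioned in item 4.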

Each of the above options forces analysts to consider instances other than the ones that readily come to mind. That said, they aren’t a sure cure for the problem: System 2 thinking does not guarantee correctness. Kahneman discusses several reasons why this is so. First, it has been found that education and training in decision-related disciplines (such as statistics) do not eliminate incorrect intuitions; they only reduce them in favourable circumstances (such as when the question is reframed to make statistical cues obvious). Second, he notes that System 2 thinking is easily derailed: research has shown that the efficiency of System 2 is impaired by time pressure and multi-tasking. (Managers who put their teams under time and multi-tasking pressures should take note!) Third, highly accessible values, which form the basis for initial intuitive judgements, serve as anchors for subsequent System 2-based corrections; these corrections are generally insufficient – i.e. too small. And finally, System 2 thinking is of no use if it is based on incorrect assumptions: as a colleague once said, “Logic doesn’t get you anywhere if your premise is wrong.”

Conclusion

Cognitive biases are meta-risks that are responsible for many incorrect judgements in project (or any other) risk analysis. An apposite example is the financial crisis of 2008, which can be traced back to several biases such as groupthink, selective perception and over-optimism (among many others). An understanding of how these meta-risks operate suggests ways in which their effects can be reduced, though not eliminated altogether. In the end, the message is simple and obvious: for judgements that matter, there’s no substitute for due diligence – careful observation and thought, seasoned with an awareness of one’s own fallibility.

Written by K

September 3, 2009 at 11:10 pm
