Eight to Late

Sensemaking and Analytics for Organizations

Cognitive biases as project meta-risks

with 16 comments

Introduction and background

A comment by John Rusk on this post got me thinking about the effects of cognitive biases on the perception and analysis of project risks. A cognitive bias is a human tendency to base a judgement or decision on a flawed perception or understanding of data or events. A recent paper suggests that cognitive biases may have played a role in some high-profile project failures. The author of the paper, Barry Shore, contends that the failures were caused by poor decisions that could be traced back to specific biases. A direct implication is that cognitive biases can have a significant negative effect on how project risks are perceived and acted upon. If true, this has consequences for the practice of risk management in projects (and other areas, for that matter). This essay discusses the role of cognitive biases in risk analysis, with a focus on project environments.

Following the pioneering work of Daniel Kahneman and Amos Tversky, there has been a lot of applied research on the role of cognitive biases in various areas of social sciences (see Kahneman’s Nobel Prize lecture for a very readable account of his work on cognitive biases).  A lot of this research highlights the fallibility of intuitive decision making.  But even judgements ostensibly based on data are subject to cognitive biases.  An example of this is when data is misinterpreted to suit the decision-maker’s preconceptions (the so-called confirmation bias). Project risk management is largely about making decisions regarding uncertain events that might impact a project. It involves, among other things, estimating the likelihood of these events occurring and the resulting impact on the project. These estimates and the decisions based on them can be erroneous for a host of reasons.  Cognitive biases are an often overlooked, yet universal,  cause of error.

Cognitive biases as project meta-risks

So, what role do cognitive biases play in project risk analysis? Many researchers have considered specific cognitive biases as project risks: for example, in this paper, Flyvbjerg describes how the risks posed by optimism bias can be addressed using reference class forecasting (see my post on improving project forecasts for more on this). However, as suggested in the introduction, one can go further. The first point to note is that biases are part and parcel of the mental make-up of humans, so any aspect of risk management that involves human judgement is subject to bias. Cognitive biases may therefore be thought of as meta-risks: risks that affect risk analyses. Second, because they are part of the mental baggage of all humans, overcoming them involves an understanding of the thought processes that govern decision-making, rather than externally-directed analyses (as in the case of risks). The analyst has to understand how his or her perception of risks may be affected by these meta-risks.

The publicly available research and professional literature on meta-risks in business and organisational contexts is sparse. One relevant reference is a paper by Jack Gray on meta-risks in financial portfolio management.  The first few lines of the paper state,

“Meta-risks are qualitative, implicit risks that pass beyond the scope of explicit risks. Most are born out of the complex interaction between the behaviour patterns of individuals and those of organizational structures” (italics mine).

Although he doesn’t use the phrase, Gray seems to be referring to cognitive biases – at least in part. This is confirmed by a reading of the paper. It describes, among other things, hubris (which roughly corresponds to the  illusion of control) and discounting evidence that conflicts with one’s views (which corresponds to confirmation bias) as meta-risks. From this (admittedly small) sampling of the literature, it seems that the notion of cognitive biases as meta-risks has some precedent.

Next, let’s look at how biases can manifest themselves as meta-risks in a project environment. To keep the discussion manageable, I’ll focus on a small set of biases:

Anchoring: This refers to the tendency of humans to rely on a single piece of information when making a decision. I have seen this manifest itself in task duration estimation – where “estimates plucked out of thin air” by management serve as an anchor for subsequent estimation by the project team. See this post for more on anchoring in project situations. Anchoring is a meta-risk because the over-reliance on a single piece of information about a risk can have an adverse effect on decisions relating to that risk.

Availability: This refers to the tendency of people to base decisions on information that can be easily recalled, neglecting potentially more important information. As an example, a project manager might give undue weight to his or her most recent professional experiences when analysing project risks. Here availability is a meta-risk because it is a barrier to an objective consideration of risks that are not immediately apparent to the analyst.

Representativeness: This refers to the tendency to make judgements based on seemingly representative, known samples. For example, a project team member might base a task estimate on another (seemingly) similar task, ignoring important differences between the two. Another manifestation of representativeness is when probabilities of events are estimated from those of comparable, known events; the gambler’s fallacy is a classic example (see the short simulation at the end of this list). This is clearly a meta-risk, especially where “expert judgement” is used as a technique to assess risk (Why? Because such judgements are invariably based on comparable tasks that the expert has encountered before.).

Selective perception: This refers to the tendency of individuals to give undue importance to data that supports their own views. Selective perception is a bias that we’re all subject to; we hear what we want to hear, see what we choose to see, and remain deaf  and blind to the rest. This is a meta-risk because it results in a skewed (or incomplete) perception of risks.

Loss Aversion: This refers to the tendency of people to give preference to avoiding losses (even small losses) over making gains. In risk analysis this might manifest itself as overcautiousness. Loss aversion is a meta-risk because it might, for instance, result in the assignment of an unreasonably large probability of occurrence to a risk.

A particularly common manifestation of loss aversion in project environments is the sunk cost bias. In situations where significant investments have been made in projects, risk analysts might be biased towards downplaying risks.

Information bias: This is the tendency of some analysts to seek as much data as they can lay their hands on prior to making a decision. The danger here is of being swamped by too much irrelevant information. Data by itself does not improve the quality of decisions (see this post by Tim van Gelder for more on the dangers of data-centrism). Over-reliance on data – especially when there is no way to determine the quality and relevance of the data, as is often the case – can hinder risk analyses. Information bias is a meta-risk for two reasons already alluded to above: first, the data may not capture important qualitative factors and, second, the data may not be relevant to the actual risk.
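
To see why the gambler’s fallacy (mentioned under representativeness above) really is a fallacy, here is a small illustrative simulation – a sketch added for concreteness, not part of the original argument. For a fair coin, the proportion of heads immediately following a run of three tails is still about 0.5; a streak carries no information about the next independent outcome, however unrepresentative of “randomness” it may look.

    import random

    # Illustrative simulation of the gambler's fallacy: for a fair coin,
    # the chance of heads after a run of three tails is still ~0.5, so a
    # streak tells us nothing about the next (independent) outcome.
    random.seed(42)
    flips = [random.choice("HT") for _ in range(500_000)]

    outcomes_after_streak = [
        flips[i] for i in range(3, len(flips))
        if flips[i - 3:i] == ["T", "T", "T"]
    ]

    p_heads = outcomes_after_streak.count("H") / len(outcomes_after_streak)
    print(f"P(heads | preceded by T, T, T) ~ {p_heads:.3f}")  # prints ~0.500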

I could work my way through a few more of the biases listed here, but I think I’ve already made my point: projects encompass a spectrum of organisational and technical situations, so just about any cognitive bias is a potential meta-risk.

Conclusion

Cognitive biases are meta-risks because they can affect decisions pertaining to risks – i.e. they are risks of risk analysis. Shore’s research suggests that the risks posed by these meta-risks are very real: they can cause project failure. So, at a practical level, project managers need to understand how cognitive biases could affect their own risk-related judgements (or any other judgements, for that matter). The previous section provides illustrations of how selected cognitive biases can affect risk analyses; there are, of course, many more. Listing examples is illustrative and helps make the point that cognitive biases are meta-risks. However, it is more useful and interesting to understand how biases operate and what we can do to overcome them.

As I have mentioned above, overcoming biases requires an understanding of the thought processes through which humans make decisions in the face of uncertainty. Of particular interest is the role of intuition and rational thought in forming judgements, and the common mechanisms that underlie judgement-related cognitive biases. A knowledge and awareness of these mechanisms might help project managers consciously counter the operation of cognitive biases in their own decision making. I’m currently making some notes on these topics, with the intent of publishing them in a forthcoming essay – please stay tuned.

Note

Part II of this post is published here.

Written by K

August 9, 2009 at 9:59 pm

16 Responses

  1. I’ve been reading a Douglas Hubbard book on risk management and he writes extensively about the biases that you talk of here and why, as a result, “low”, “medium”, “high” type ratings of risks in terms of likelihood and severity are not worth much.

    He talks of calibrating people in their estimating, and although the book doesn’t go into the full detail of the workshops where he does this, some of the methods are written about.

    For example, one test I use on people quite a lot now is to ask them:

    “What is the width of a 747 from wingtip to wingtip in metres? Give me an upper and lower range you are 90% confident with”

    After getting the answer, I tell them that if their answer is right they win $10k. But they can also pull from a bag of marbles where 9 are red and one is black. Pull a red marble and you win $10k.

    When offered the choice, most people choose the marble option, which suggests overconfidence in their original estimate, since statistically speaking you should be completely indifferent.

    Then, when you ask people to adjust their range until they would not care whether it is the marbles or their answer, you start to get a more realistic estimate.

    regards

    Paul

    Paul Culmsee

    August 10, 2009 at 1:12 pm

  2. Paul,

    Thanks for your comments. I’ve ordered Hubbard’s books, and I’m really looking forward to reading them.

    The technique you’ve described is an excellent way to prompt folks to think carefully about their estimates. Once they do, they invariably realise that they’ve been answering the wrong question or a grossly simplified one. A brief explanation: there is a difference between 90% confidence in an estimate relating to an unfamiliar task or object and 90% confidence of pulling a red ball when one knows for certain that 9 of the 10 balls are red. In the first case, folks reduce (or simplify) the question to something they know the answer to – and then answer the simplified question (which generally does not correspond to the question asked). In the second case, no such simplification is necessary.
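
    To make the arithmetic behind the “equivalent bet” concrete, here is a minimal sketch (the payoff and probabilities are illustrative numbers chosen for this example, not figures from Hubbard’s book). A genuinely calibrated 90% interval and a draw from a bag with 9 red marbles in 10 both pay off with probability 0.9, so the two bets have identical expected value and a calibrated estimator should be indifferent between them:

        # A minimal sketch of the "equivalent bet" reasoning described above.
        # The payoff and probabilities are illustrative assumptions only.
        payoff = 10_000  # prize for a correct interval or a red marble

        # A genuinely calibrated 90% confidence interval contains the true
        # value 90% of the time.
        p_interval_correct = 0.90

        # The bag holds 9 red marbles and 1 black, so a red draw has p = 0.9.
        p_red_marble = 9 / 10

        ev_interval = p_interval_correct * payoff  # 9000.0
        ev_marble = p_red_marble * payoff          # 9000.0
        print(f"EV of betting on the interval: ${ev_interval:,.0f}")
        print(f"EV of drawing a marble:        ${ev_marble:,.0f}")

        # Equal expected values: preferring the marble implies the estimator's
        # real confidence in the stated interval is below 90%, i.e. the
        # interval is overconfident (too narrow).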

    I’ll be expanding on this in a future post.

    Regards,

    Kailash.

    K

    August 10, 2009 at 9:59 pm

  3. Kailash,

    I think for the first time I’ve got a case of blog envy. I love your essays. They are refreshingly well written, well thought out and well connected to current literature. You have inspired me to read more (and I thought I was already reading a lot!).

    The idea of the meta-risk as a risk to judgement makes me think about the idea of the observer affecting the observed in quantum mechanics. I guess I’m wondering if our bias is completely unjustified or whether it can have an effect on the “facts” of the project.

    Have you ever seen a project get turned around by someone with super high confidence, when all the facts were stacked against it? I wonder if a PM whose assessment clearly exhibits availability bias (confidence from having turned around several projects) can influence the project facts such that the facts change through the observation of that PM?

    Am I going off on a tangent? All this to say, again, thanks for your essays, you’ve got the cogs in my brain turning.

    Michiko

    Michiko Diby

    August 11, 2009 at 11:10 pm

  4. Michiko,

    Thanks for your very kind remarks on my essays – I truly appreciate the feedback.

    You’ve brought up a really good point. Can a project manager turn a project around through a positive attitude that’s based on his or her past record? Maybe so – but I think there’s a real danger of overconfidence in these situations. I’d argue that those who succeed in turning things around do so because they have an awareness of their biases, and hence approach each new project with an open mind and the willingness to see facts as they are.

    Regards,

    Kailash.

    K

    August 12, 2009 at 6:51 am

  5. Kailash,

    Thank you for your widely researched, introspective and well-written posts.

    Re: your essay on meta-risks due to cognitive bias, I will divide my comments/queries into two sections:

    1. Selection of projects (one from alternatives as a go/no go pattern)

    2. Working through risk analysis after selection of a particular project, both before the project gets off the ground and while it is being constructed (I’m a construction manager)

    1. Selection:

    The messy part of selecting a project is ensuring that the Board agrees on a choice of project, based on feasibility analysis and on real and perceived risks in similar projects reported up the ranks. So I guess we’re looking at biases at at least two levels here: one on the ground and another based on what senior management proposes. Would you consider this a compounding of meta-risks? Is there such a factor?

    Risk and reward being subjective, leadership is really tested in ensuring that an organisation moves towards a common goal.

    In the real world, selection of work is rarely based on prudent examination of capabilities; often, it involves choosing projects that can impact competition adversely rather than the company positively. I’ve sat in on projects that are now folding due to lack of demand and hindsight reveals a lot of the biases you’ve mentioned, with very few advantages to the successful bidder. I guess this would qualify as ‘hubris’.

    I feel a lot of professionals blindly accept risks offered by the client organisation because ‘the client will always have to be managed’; also, the issue of differing work cultures between the public and private sectors, particularly in India, might well be a bigger problem than is openly acknowledged.

    2. RM post selection:

    A risk practitioner I met with in the UK suggested the creation of a common risk ‘dictionary’ within the organisation he worked in, to ensure everyone was on the same page. This would at least remove linguistic ambiguity and ensure a certain analysis before declaring a specific incident a full-blown disaster. How do you react to this? Is it doable? Remember that most project teams are temporary creations and the team members might soon be on opposite sides of the table; that’s the rule in construction and I guess it’s true for most fields. And then, the larger the organisation, the greater the costs.

    Another specialist emphatically declared that if the risk management for selection of a project was improperly done, anything else subsequently would qualify as fire fighting, not risk management.

    While proposing a protocol for a company, how would you allow for a subjective bias without being prescriptive in your risk management techniques?

    I look forward to hearing from you,

    Narendra

    Narendra Khanna

    August 18, 2009 at 6:32 pm

  6. Narendra,

    Thanks for your thoughtful comments. I’ll answer your questions as best I can.

    In an ideal world, projects should be selected according to criteria that the business considers important (this is the domain of portfolio management). The criteria should include a proper mix of “ground level” and “strategic” elements (including capacity, capability etc.). Typically these criteria have numerical scores attached to them – ratings from 0 to 5 or 10, for example. Then, once all criteria are scored, a final number is calculated via a predetermined formula (a weighted average – see the short sketch below). Some problems with such a “portfolio analysis” include:

    1. The use of vague criteria, such as “alignment with business objectives”, leaves people free to assign any number they feel like (giving free rein to biases).

    2. Scores assigned are rarely validated – even after the fact.

    3. The scoring methods suffer from serious deficiencies.

    Douglas Hubbard’s book on risk management (mentioned by Paul Culmsee in the first comment) contains an excellent discussion of these points. I should admit that I’ve written a post on portfolio management which glosses over the aforementioned problems. I’d do it differently now, in light of what I’ve learnt.
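
    As an aside, here is a minimal sketch of the kind of weighted-average scoring described above (the criteria, weights and ratings are purely hypothetical, chosen only to illustrate the mechanics and its weaknesses):

        # A sketch of weighted-average project scoring as described above.
        # Criteria, weights and ratings below are purely hypothetical.
        criteria_weights = {
            "alignment with business objectives": 0.4,
            "capability to deliver": 0.3,
            "capacity (resource availability)": 0.2,
            "expected financial return": 0.1,
        }

        # Ratings on a 0-5 scale, assigned by reviewers.
        project_ratings = {
            "alignment with business objectives": 4,
            "capability to deliver": 3,
            "capacity (resource availability)": 2,
            "expected financial return": 5,
        }

        score = sum(w * project_ratings[c] for c, w in criteria_weights.items())
        print(f"Weighted project score: {score:.2f} out of 5")  # 3.40

        # The problems noted above apply: vague criteria such as "alignment
        # with business objectives" invite biased ratings, the scores are
        # rarely validated, and the single weighted number hides how
        # arbitrary the inputs are.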

    Differences in culture between organisations are always an issue – possibly more so in India than in some other countries. In my experience, one way of reducing risks is to use blended teams. When folks work together every day, there’s more opportunity for open communication and understanding the other point of view. However, this approach is not always practical.

    As far as a risk dictionary is concerned – anything that helps folks achieve a common understanding of terminology is a good thing. Not sure it would help in analysis, though.

    There are ways to reduce bias in judgements – I’ll cover some of these briefly in a forthcoming post. Essentially, they amount to slowing down and thinking things through before coming to a decision. There are also techniques such as calibration (which Paul describes in his comment), which help to reduce bias due to overconfidence. Many of the techniques are discussed in Hubbard’s book.

    Finally, in order to be complete, risk management should really cover the entire spectrum of risks – across the organisation and its environment. If risks are analysed in isolation, there’s a danger that fixing things in one place may make them worse elsewhere. Over-committing resources to one project (to the detriment of others) is a very common example of this error.

    Hope I’ve answered some of your questions – at least partially. Thanks again for your comments. Do stop by again.

    Regards,

    Kailash

    K

    August 18, 2009 at 10:55 pm

  7. […] Cognitive biases as project meta-risks « Eight to Late (eight2late.wordpress.com/2009/08/09/cognitive-biases-as-project-meta-risks) […]

  8. […] human tendencies to base judgements on flawed perceptions of events and/or data. In an earlier post,  I argued that cognitive biases are meta-risks,  i.e.  risks of  risk analysis.   An […]

  9. […] or modelling tool that you use, when you do this, you will always still find that you have your own cognitive biases that will not necessarily deliver the shared understanding that you think you are delivering. […]

  10. […] written  about the effect of cognitive biases in project risk management -see this post and this one, for […]

  11. […] correct” (note that some of them aren’t correct, see my  posts on cognitive biases as project meta-risks and the limitations of scoring methods in risk analysis for more). Anyway, regardless of the […]

  12. […] my posts limitations of scoring methods in risk analysis and cognitive biases as project meta-risks for more on the above […]

  13. […] on the individual rather than the group) and economics fall into this category – as examples see this post for an example drawn from psychology and this one for one drawn from economics.  Unfortunately […]

  14. […] Such estimates are prone to overconfidence and a range of other cognitive biases. See my post on cognitive biases as project meta-risks for a detailed discussion of how these biases manifest themselves in project […]

  15. […] who’ve read my articles on cognitive biases in project management (see this post and this one)  may be wondering how these fit in to the above argument. According to Goldratt, […]

  16. […] Kailash writes about the risk that cognitive bias can play in project failure, particularly in the perception of risks. overcoming biases requires an understanding of the thought processes through which humans make decisions in the face of uncertainty.  Of particular interest is  the role of  intuition and rational thought in forming judgements, and the common mechanisms that underlie judgement-related cognitive biases.   A knowledge and awareness of these mechanisms  might help project managers in consciously countering the operation of cognitive biases in their own decision making. […]

