Eight to Late

Sensemaking and Analytics for Organizations

The role of cognitive biases in project failure


Introduction

There are two distinct views of project management practice: the rational view, which focuses on management tools and techniques such as those espoused by frameworks and methodologies, and the social/behavioural view, which looks at the social aspect of projects – i.e. how people behave and interact in the context of a project and the wider organisation. The difference between the two is significant: one looks at how projects should be managed and prescribes tools, techniques and practices; the other looks at what actually happens on projects – how people interact and how managers make decisions. The gap between the two can sometimes spell the difference between project success and failure. In many failed projects, the failure can be traced back to poor decisions, and the decisions themselves to cognitive biases – that is, systematic errors in judgement rooted in perception. A paper entitled Systematic Biases and Culture in Project Failure, by Barry Shore, looks at the role played by selected cognitive biases in the failure of some high profile projects. The paper also draws some general conclusions on the relationship between organisational culture and cognitive bias. This post presents a summary and review of the paper.

The paper begins with a brief discussion of the difference between the rational and social/behavioural views of project management. The rational view is prescriptive – it describes management procedures and techniques which claim to increase the chances of success if followed. Further, it emphasises causal effects (if you follow procedure X then Y happens). The social/behavioural view is less well developed because it looks at human behaviour, which is hard to study in controlled conditions, let alone in projects. Yet, developments in behavioural economics – mostly based on the pioneering work of Kahneman and Tversky – can be directly applied to project management (see my post on biases in project estimation, for instance). In the paper, Shore looks at eight case studies of failed projects and attempts to attribute their failure to selected cognitive biases. He also looks into the relationship between (project and organisational) culture and the prevalence of the selected biases. Following Hofstede, he defines organisational culture as shared perceptions of organisational work practices and, analogously, project culture as shared perceptions of project work practices. Since projects take place within organisations, project culture is obviously influenced by organisational culture.

Scope and Methodology

In this section I present a brief discussion of the biases that the paper focuses on and the study methodology.

There are a large number of cognitive biases in the literature. The author selects the following for his study:

Available data:  Restricting oneself to using data that is readily or conveniently available. Note that “Available data” is a non-standard term: it is normally referred to as a sampling bias, which in turn is a type of selection bias.

Conservatism (Semmelweis reflex): Failing to consider new information or negative feedback.

Escalation of commitment:  Allocating additional resources to a project that is unlikely to succeed.

Groupthink: Members of a project group under pressure to think alike, ignoring evidence that may threaten their views.

Illusion of control: Management believing they have more control over a situation than an objective evaluation would suggest.

Overconfidence:  Having a level of confidence that is unsupported by evidence or performance.

Recency (serial position effect): Undue emphasis being placed on the most recent data, ignoring older data.

Selective perception: Viewing a situation subjectively; perceiving only certain (convenient) aspects of a situation.

Sunk cost: Not accepting that costs already incurred cannot be recovered and should not be considered as criteria for future decisions. This bias is closely related to loss aversion (a small worked example follows below).

The author acknowledges that there is significant overlap between some of these effects: for example, illusion of control has much in common with overconfidence. This implies a certain degree of subjectivity in assigning these biases as causes of project failure.
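To make the sunk cost idea concrete, here is a minimal sketch in Python (the figures are made up purely for illustration): a rational go/no-go decision compares only future costs and benefits – the money already spent does not enter the calculation at all.

# Hypothetical figures, purely for illustration.
sunk_cost = 8.0          # millions already spent -- unrecoverable, so irrelevant
cost_to_complete = 5.0   # future spend needed to finish the project
expected_benefit = 4.0   # expected future return if the project is finished

# A rational decision looks only at what lies ahead.
net_future_value = expected_benefit - cost_to_complete

decision = "continue" if net_future_value > 0 else "abandon"
print(f"Net future value: {net_future_value:+.1f}M -> {decision}")

# The sunk cost bias creeps in when the 8M already spent is treated as a
# reason to continue ("we've come too far to stop now") even though the
# project is expected to lose a further 1M from here on.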

The failed projects studied in the paper are high profile efforts that failed in one or more ways. The author obtained data for the projects from public and government sources. He then presented the data and case studies to five independent groups of business professionals (constituted from a class he was teaching) and asked them to reach a consensus on which biases could have played a role in causing the failures. The groups presented their results to the entire class and then, through discussion, reached agreement on which of the biases may have led to the failures.

The case studies

This section describes the failed projects studied and the biases that the group identified as being relevant to each.

Airbus 380: Airbus was founded as a consortium of independent aerospace companies. The A380 project, which was started in 2000, was aimed at creating a superjumbo jet with a capacity of 800 passengers. The project involved coordination between many sites. Six years into the project, when the aircraft was being assembled in Toulouse, it was found that a wiring harness produced in Hamburg failed to fit the airframe.

The group identified the following biases as being relevant to the failure of the Airbus project:

Selective perception: Managers acted to guard their own interests and constituencies.

Groupthink:  Each participating organisation  worked in isolation from the others, creating an environment in which groupthink would thrive.

Illusion of control:  Corporate management assumed they had control over participating organisations.

Availability bias: Management in each of the facilities did not have access to data in other facilities, and thus made decisions based on limited data.

Coast Guard Maritime Domain Awareness Project: This project, initiated in 2001, was aimed at creating the maritime equivalent of an air traffic control system. It was to use a range of technologies, and involved coordination between many US government agencies. The goal of the first phase of the project was to create a surveillance system that would be able to track boats as small as jet skis. The surveillance data was to be run through a software system that would flag potential threats. In 2006 – during the testing phase – the surveillance system failed to meet quality criteria. Further, the analysis software was not ready for testing.

The group identified the following biases as being relevant to the failure of the  Maritime Awareness project:

Illusion of control: Coordinating several federal agencies is a complex task. This suggests that project managers may have thought they had more control than they actually did.

Selective perception: Separate agencies worked only on their portions of the project,  failing to see the larger picture. This suggests that project groups may have unwittingly been victims of selective perception.

Columbia Shuttle: The Columbia Shuttle disaster was caused by a piece of foam insulation breaking off the propellant tank and damaging the wing. The problem with the foam sections was known, but management had assumed that it posed no risk.

In their analysis, the group found the following biases to be relevant to the failure of this project:

Conservatism: Management failed to take into account negative data.

Overconfidence:  Management was confident there were no safety issues.

Recency: Foam insulation had broken off on previous flights without causing serious damage, so management placed undue weight on those recent problem-free flights and discounted the risk.

Denver Airport Baggage Handling System: The Denver airport project, which was scheduled for completion in 1993, was to feature a completely automated baggage handling system. The technical challenges were enormous because the proposed system was an order of magnitude more complex than those that existed at the time. The system was completed in 1995, but was riddled with problems. After almost a decade of struggling to fix the problems, not to mention being billions over-budget, the project was abandoned in 2005.

The group identified the following biases as playing a role in the failure of this project:

Overconfidence: Although the project was technically very ambitious, the contractor (BAE Systems) assumed that all technical obstacles could be overcome within the project timeframe.

Sunk cost: The customer (United Airlines) did not pull out of the project even when other airlines did, suggesting a reluctance to write off costs already incurred.

Illusion of control: Despite evidence to the contrary, management assumed that problems could be solved and that the project remained  under control.

Mars Climate Orbiter and Mars Polar Lander: Telemetry signals from the Mars Climate Orbiter ceased as the spacecraft approached its destination. The root cause of the problem was found to be a failure to convert between metric and British units: apparently the contractor, Lockheed, had used British units in the engine design, but the NASA scientists responsible for operations and flight assumed the data was in metric units. A few months after the Climate Orbiter disaster, another spacecraft, the Mars Polar Lander, fell silent just short of landing on the surface of Mars. That failure was attributed to a software problem that caused the engines to shut down prematurely, causing the spacecraft to crash.

The group attributed the above project failures to the following biases:

Conservatism: Project engineers failed to take action when they noticed that the spacecraft was off-trajectory early in the flight.

Sunk cost: Managers were under pressure to launch the spacecraft on time – waiting until the next launch window would have entailed a wait of many months thus “wasting” the effort up to that point. (Note: In my opinion this is an incorrect interpretation of sunk cost)

Selective perception: The spacecraft modules  were constructed by several different teams. It is very likely that teams worked with a very limited view of the project (one which was relevant to their module).
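An aside on the unit mix-up described above: the root cause is widely reported as thruster impulse data being produced in pound-force seconds but read downstream as newton-seconds. The following Python sketch (illustrative numbers, not actual mission data) shows how silently reinterpreting the unit understates the effect of every thruster firing by a factor of roughly 4.45.

# Widely reported root cause: impulse data supplied in pound-force seconds
# (lbf·s) but interpreted downstream as newton-seconds (N·s).
LBF_S_TO_N_S = 4.44822    # 1 lbf·s = 4.44822 N·s

firing_lbf_s = 10.0                                # a hypothetical thruster firing
true_impulse_n_s = firing_lbf_s * LBF_S_TO_N_S     # what the trajectory model needed
assumed_impulse_n_s = firing_lbf_s                 # what it got: same number, wrong unit

print(f"Each firing understated by a factor of "
      f"{true_impulse_n_s / assumed_impulse_n_s:.2f}")    # ~4.45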

Merck Vioxx: Vioxx was a very successful anti-inflammatory medication developed and marketed by Merck. An article published in 2000 suggested that Merck misrepresented clinical trial data, and another paper published in 2001 suggested that those who took Vioxx were subject to a significantly increased risk of assorted cardiac events. Under pressure, Merck put a warning label on the product in 2002. Finally, the drug was withdrawn from the market in 2004 after over 80 million people had taken it.

The group found the following biases to be relevant to the failure of this project:

Conservatism:  The company ignored early warning signs about the toxicity of the drug.

Sunk cost: By the time concerns were raised, the company had already spent a large amount of money in developing the drug. It is therefore likely that there was a reluctance to write off the costs incurred to that point.

Microsoft Xbox 360: The Microsoft Xbox 360 console was released to market in 2005, a year before comparable offerings from its competitors. The product was plagued with problems from the start, including internet connectivity issues, damage caused to game discs, faulty power cords and assorted operational issues. The volume of problems and complaints prompted Microsoft to extend the product warranty from one to three years at an expected cost of $1 billion.

The group thought that the following biases were significant in this case:

Conservatism: Despite the early negative feedback (complaints and product returns), the development group seemed reluctant to acknowledge that there were problems with the product.

Groupthink:  It is possible that the project team ignored data that threatened their views on the product. The group reached this conclusion because Microsoft seemed reluctant to comment publicly on the causes of problems.

Sunk cost: By the time problems were identified, Microsoft had invested a considerable sum of money in product development. This suggests that the sunk cost trap may have played a role in this project failure.

NYC Police Communications System: (Note: I couldn’t find any pertinent links to this project). In brief: the project was aimed at developing a communications system that would enable officers working in the subway system to communicate with those on the streets. The project was initiated in 1999 and scheduled for completion in 2004 with a budgeted cost of $115 million. A potential interference problem was identified in 2001 but the contractors ignored it. The project was completed in 2007, but during trials it became apparent that interference was indeed a problem. Fixing the issue was expected to increase the cost by $95 million.

The group thought that the following biases may have contributed to the failure of this project:

Conservatism: Project managers failed to take early data on interference into account.

Illusion of control: The project team believed – until very late in the project – that the interference issue could be fixed.

Overconfidence:  Project managers believed that the design was sound, despite evidence to the contrary.

Analysis and discussion

The following four biases appeared more often than others: Conservatism, illusion of control, selective perception and sunk cost.

The following biases appeared less often: groupthink and overconfidence.

Recency and availability were mentioned only once.
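For readers who want to cross-check these frequencies against the case studies summarised above, here is a minimal Python sketch that tallies the bias attributions as listed in this post (the paper's own tabulation may differ slightly):

from collections import Counter

# Bias attributions for each case study, as summarised in this post.
attributions = {
    "Airbus A380": ["selective perception", "groupthink", "illusion of control", "availability"],
    "Coast Guard MDA": ["illusion of control", "selective perception"],
    "Columbia Shuttle": ["conservatism", "overconfidence", "recency"],
    "Denver baggage system": ["overconfidence", "sunk cost", "illusion of control"],
    "Mars Orbiter / Polar Lander": ["conservatism", "sunk cost", "selective perception"],
    "Merck Vioxx": ["conservatism", "sunk cost"],
    "Microsoft Xbox 360": ["conservatism", "groupthink", "sunk cost"],
    "NYC police communications": ["conservatism", "illusion of control", "overconfidence"],
}

tally = Counter(bias for biases in attributions.values() for bias in biases)
for bias, count in tally.most_common():
    print(f"{bias}: {count}")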

The author concludes, from this small sample and a somewhat informal analysis, that the first four biases may be dominant in project management. In my opinion this conclusion is shaky because the study has a few shortcomings, which I list below:

  • The sample size is small.
  • The sample covers a range of domains, so domain-specific factors may be at play.
  • No checks were done to verify the group members’ understanding of all the biases.
  • The data on which the conclusions are based is incomplete – it consists only of publicly available information. (Is this perhaps an example of the available data bias at work?)
  • A limited set of biases is used – there could be other biases at work.
  • The conclusions themselves are subject to group-level biases such as groupthink. This is a particular concern because the group was specifically instructed to look at the case studies through the lens of the selected cognitive biases.
  • The analysis is far from exhaustive or objective; it was done as part of a classroom exercise.

For the above reasons, the analysis is at best suggestive:  it indicates that biases may play a role in the decisions  that lead to project failures.

The author also draws a link between organisational culture and environments in which biases might thrive. To do this, he maps the biases on to the competing values framework of organisational culture, which views organisations along two dimensions:

  • The focus of the organisation – internal or external.
  • The level of management control in the organisation  – controlling (stable) or discretionary (flexible).

According to the author, all nine biases are more likely in a stability (or control) focused environment than a flexible one, and all barring sunk cost are more likely to thrive in an internally focused organisation than an externally focused one. This conclusion makes sense: project teams are more likely to avoid biases when they are empowered to make decisions, free from management and organisational pressures. Furthermore, biases are also less likely to play a role when external input – such as customer feedback – is taken seriously.
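Purely as a reading aid, the paper's claim (as summarised in the previous paragraph) can be written out as a small lookup table; the labels below are mine, not the paper's:

# All nine biases are claimed to be more likely in a control-oriented (stable)
# culture than a flexible one; all except sunk cost are claimed to be more
# likely under an internal focus than an external one.
BIASES = [
    "available data", "conservatism", "escalation of commitment",
    "groupthink", "illusion of control", "overconfidence",
    "recency", "selective perception", "sunk cost",
]

more_likely_under = {
    bias: {"control_oriented_culture": True, "internal_focus": bias != "sunk cost"}
    for bias in BIASES
}

print(more_likely_under["groupthink"])   # {'control_oriented_culture': True, 'internal_focus': True}
print(more_likely_under["sunk cost"])    # {'control_oriented_culture': True, 'internal_focus': False}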

That said, the negative effects of internally focused, high control organisations can be countered. The author quotes two examples:

  1. When designing the 777 aircraft, Boeing introduced a new approach to project management wherein teams were required to include representatives from all stakeholder groups. Teams were encouraged to air differences of opinion and to deal with them openly. This approach has been partly credited for the success of the 777 project.
  2. Since the Vioxx debacle, Merck rewards research scientists who terminate projects that do not look promising.

Conclusions

Despite my misgivings about the research sample and methodology, the study does suggest that standard project management practices could benefit from incorporating insights from behavioural studies. Further, the analysis indicates that cognitive biases may indeed have played a role in the failure of some high profile projects. My biggest concern here, as stated earlier, is that the groups were required to associate the decisions with specific biases – i.e. there was an assumption that one or more of the biases from the (arbitrarily chosen) list was responsible for the failure. In reality, however, there may have been other, more important factors at work.

The connections with organisational culture are interesting too, but hardly surprising: people are more likely to do the right thing when management  empowers them with responsibility and authority.

In closing: I found the paper interesting because it deals with an area that isn’t very well represented in the project management literature. Further, I  believe these biases play a significant role in project decision making, especially in internally focussed / controlled organisations (project managers are human, and hence not immune…).  However, although the paper supports this view, it doesn’t make a wholly convincing case for it.

Further Reading

For more on cognitive biases in organisations, see Chapter 2 of my book, The Heretic’s Guide to Best Practices.

Written by K

May 8, 2009 at 5:47 am

37 Responses


  1. Great post, I’ll reference it

    Just a few comments …

    http://www.pmimilehisym.org/presentations2009/Stephen%20Garfein%20Boeing%20787%20vs%20Airbus%20A380%2002-23-09.pdf is a nice presentation at our PMI Symposium.

    I use the DIA baggage system several times a month. What is not well documented is that when Continental pulled out of Terminal A, United absorbed Terminal B and rerouted the baggage system. The trouble was the concrete was already poured and United upped the speed from the “known to work” Munich II system which BAE had installed. So United is the primary cause of the problem. Changing performance and route AFTER the concrete dried is expensive.

    Thanks for the Tversky link I had lost my copy.

    Glen B. Alleman
    VP, Program Planning and Controls
    Aerospace and Defense
    Denver, Colorado USA


    Glen B. Alleman

    May 12, 2009 at 11:11 am

  2. Glen,

    Thanks for your feedback – much appreciated!

    Thanks too for your comment on the DIA baggage system project – from what I’ve read, it’s truly amazing how that project got the go-ahead at all…

    Regards,

    Kailash.


    K

    May 12, 2009 at 6:41 pm

  3. Kailash,
    The baggage system change was made AFTER the construction had started. When Continental pulled out of Terminal A, they had already completed most of the construction and fit-out of the concourse, then they canceled their agreement with the city. United switched to Denver as a hub – replacing Continental – and demanded rework of already dried concrete and a 2X change in the baggage system performance. BAE protested but took the contract to change the Munich 2 system to run at twice the speed.

    It went downhill from there. The system is used for some inter-flight exchanges, but the promise of fast delivery to the exiting passenger at DIA has never been fulfilled. I still wait the “normal” time for my bag. Transit passengers have a better response.


    Glen B. Alleman

    May 12, 2009 at 9:31 pm

  4. Glen,

    A salutary lesson in the consequences of not thinking through proposed changes.

    Thanks for the clarification.

    Regards,

    Kailash.


    K

    May 13, 2009 at 7:09 am

  5. Hi Kailash,
    This is a brilliant post. Waiting for your next post…


    Prakash

    May 13, 2009 at 8:58 am

  6. Thanks Prakash.

    Regards,

    Kailash.


    K

    May 13, 2009 at 6:28 pm

  7. Great post!!!
    Do you have a take on Initial Goals Bias…

    Paul Ritchie does a nice summary on this one at http://crossderry.wordpress.com/2008/04/14/initial-goal-bias-the-experience-trap/

    It is a comment on the HBR article The Experience Trap.

    Similar threads lead me to other conclusions – see my post at http://blogs.oracle.com/asparks/2008/12/unstuck_by_ooda.html


    Andrew Sparks

    May 13, 2009 at 11:11 pm

  8. Andrew,

    Thanks for your comments and the links to your and Paul Ritchie’s articles. Very interesting.

    Here’s my take: initial goal bias has a lot to do with the culture of the organisation. My guess – and I have no proof here – is that it is less prevalent in flexible organisations, where it is OK to “call it as one sees it.” Problem is, a large majority of organisations are at the other end of the spectrum – i.e. control oriented. In such organisations people are reluctant to make hard calls – like suggesting that a project’s objective needs to be reassessed – it could well be a career limiting move.

    Now, that’s not to say that people’s tendency (bias) to be fixated on externally set goals plays no role at all. My point is simply that the culture of the organisation must encourage behaviours that counter this bias.

    BTW – I’ve reviewed the HBR paper here. I’d value your comments if you get a chance.

    Thanks again for your comments.

    Regards,

    Kailash.


    K

    May 14, 2009 at 6:04 am

  9. Nice post!

    Have you looked into the effect of Framing [ http://en.wikipedia.org/wiki/Framing_(economics) ] on project success or failure?

    I wonder if that particular effect may call into question the usual approach of specifying a relatively “big” goal, and then listing the risks that may cause it not to be reached. (E.g. the typical “risk register”). This frames the uncertainty as losses, which has been shown to trigger risk-seeking decisions. So, ironically, does the act of creating a risk register actually _create_ _additional_ _risk_?

    Would it be better instead, to frame the uncertainty as possible gains, over and above some “bare minimum viable scope”? Framing uncertainty as gains has been shown to trigger more conservative decision making.

    I.e. aim low and list opportunities to do better; instead of aiming high and listing risks of falling short?


    John Rusk

    May 16, 2009 at 9:00 am

  10. John,

    Thanks for your very interesting comment. You’re absolutely right that big goals (usually) imply big risks and hence potentially big losses. This may bring loss aversion into play – i.e. the tendency to go to great lengths to avoid losses (in this case, project failure) over gains (project success). This increases the chance that a project that might well succeed gets canned because of over-cautiousness.

    I think framing plays a big role in early stages of projects: particularly when stakeholders debate whether or not a project should be given the go-ahead. Those making a case for the project frame their arguments in a way that make the proposition attractive (by accentuating the positives) whereas those against it do the opposite.

    Regards,

    Kailash.


    K

    May 16, 2009 at 3:50 pm

    • I absolutely agree with your framing comments. I guess that what I had in mind was the combination of
      (a) framing to make the proposition attractive, often by describing a (near) maximal scope rather than a minimal one
      (b) then framing the uncertainties as things that may “take away” some of that lovely big scope that everyone had their heart set on.

      The particular effect I was referring to was the one where people will take a relatively large risk to avoid a loss [such as (b) above] but they act much more conservatively when faced with a possible gain of the same magnitude. The classic example is the so-called “Asian Disease Experiment” http://en.wikipedia.org/wiki/Asian_disease .

      I’m a software engineer (rather than a psychologist 😉) so my perspective on this comes from a sense that people take risks to try to make things turn out in the way the project was originally envisaged.


      John Rusk

      May 18, 2009 at 6:00 pm

  11. Ah, got it. What you say makes sense, but leads to the counter-intuitive conclusion that risk analysis increases risk!

    Here’s one possible resolution of this apparent paradox (with the disclaimer that I’m no psychologist either…)

    Kahneman and Tversky recognised that there are two systems of cognition: intuitive and reflective (see this paper by Kahneman and Fredrickson for more on the two systems). As I understand it, the first system operates by gut feeling whereas the latter involves deliberation and analysis. Biases typically operate in the first system but not the second. Risk analysis involves reflection rather than intuition, hence biases would be considerably less likely to play a role.

    Does this make sense?

    Thanks for your comment and clarification (I’m a bit slow – especially after a beer on a Saturday afternoon!).


    K

    May 18, 2009 at 6:56 pm

    • >the counter-intuitive conclusion that risk analysis increases risk

      I’m suggesting that might be true 😉

      Certainly, everyone who does risk analysis _intends_ that it will reduce risk, but intention does not necessarily make it so.

      I am indeed suggesting that this thing which we all do, thinking we are reducing risk, is in fact increasing it! I can’t prove this idea, but I do think it would explain a lot.

      It would be interesting to see a project that still lists the uncertainties, but which phrases them in positive terms. An “opportunity register”, so to speak, consisting of opportunities to do better than a “worst case” baseline scenario. E.g. the opportunity register would contain things like “Integration with third party components may go more smoothly than planned [relative to our pessimistic baseline plan]”. Hopefully, such a project would still be analyzing the uncertainties, but without introducing risk-seeking cognitive biases.


      John Rusk

      May 19, 2009 at 4:31 pm

  12. John,

    If risk analysis is done right, biases such as framing should not play a role because such analyses are done through reflective (not intuitive) cognition. What you’re suggesting is the opposite: that in reality folks tend to use intuition rather than reflection; they do an “informal” risk analysis based on their perception of reality, rather than any objective criteria. Looking back at some of the failures discussed in my post, I think you may well be on to something here.

    Regards from this side of the ditch.

    Kailash.


    K

    May 19, 2009 at 7:05 pm


