Eight to Late

Sensemaking and Analytics for Organizations

Archive for the ‘Bias’ Category

Six common pitfalls in project risk analysis


The discussion of risk presented in most textbooks and project management courses follows the well-trodden path of risk identification, analysis, response planning and monitoring (see the PMBOK Guide, for example). All good stuff, no doubt. However, much of the guidance offered is at a very high level. Among other things, there is little practical advice on what not to do. In this post I address this gap by outlining some common pitfalls in project risk analysis.

1. Reliance on subjective judgement: People see things differently: one person’s risk may even be another person’s opportunity. For example, using a new technology in a project can be seen as a risk (when focusing on the increased chance of failure) or an opportunity (when focusing on the advantages afforded by being an early adopter). This is a somewhat extreme example, but the fact remains that individual perceptions influence the way risks are evaluated. Another problem is that subjective judgement is prone to cognitive biases – errors in perception. Many high-profile project failures can be attributed to such biases: see my post on cognitive bias and project failure for more on this. Given these points, potential risks should be discussed from different perspectives, with the aim of reaching a common understanding of what they are and how they should be dealt with.

2. Using inappropriate historical data: Purveyors of risk analysis tools and methodologies exhort project managers to determine probabilities using relevant historical data. The word relevant is important: it emphasises that the data used to calculate probabilities (or distributions) should come from situations similar to the one at hand. Consider, for example, the probability of a particular risk – say, the risk that a developer will not be able to deliver a module by a specified date. One might have historical data for the developer, but the question remains as to which data points should be used. Clearly, only data points from projects similar to the one at hand should be used. But how is similarity defined? Although this is not an easy question to answer, it is critical as far as the relevance of the estimate is concerned. See my post on the reference class problem for more on this point.

3. Focusing exclusively on numerical measures: There is a widespread perception that quantitative measures of risk are better than qualitative ones. However, even where reliable and relevant data is available, the measures still need to be based on sound methodologies. Unfortunately, ad-hoc techniques abound in risk analysis: see my posts on Cox’s risk matrix theorem and the limitations of risk scoring methods for more on these. Risk metrics based on such techniques can be misleading. As Glen Alleman points out in this comment, in many situations qualitative measures may be more appropriate and accurate than quantitative ones.

4. Ignoring known risks: It is surprising how often known risks are ignored.  The reasons for this have to do with politics and mismanagement. I won’t dwell on this as I have dealt with it at length in an earlier post.

5. Overlooking the fact that risks are distributions, not point values: Risks are inherently uncertain, and an uncertain quantity is represented by a range of values (each with an associated probability) rather than a single number (see this post for more on this point). Because of the scarcity or unreliability of historical data, distributions are often assumed a priori: that is, analysts assume that the risk distribution has a particular form (say, normal or lognormal) and then evaluate the distribution parameters using historical data. Further, analysts often choose simple distributions that are easy to work with mathematically. Such distributions often do not reflect reality. For example, they may be vulnerable to “black swan” occurrences because they do not account for outliers (a short illustration of this point appears after the list).

6. Failing to update risks in real time: Risks are rarely static – they evolve over time, influenced by circumstances and events both within and outside the project. For example, the acquisition of a key vendor by a mega-corporation is likely to affect the delivery of that module you are waiting on – and quite likely in an adverse way. Such a change in risk is obvious; there may be many that aren’t. Consequently, project managers need to reevaluate and update risks periodically. To be fair, this is a point that most textbooks make – but it is advice that is not followed as often as it should be.
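To illustrate point 5, here is a minimal sketch in Python (my own illustration, with made-up numbers that are not from the post) contrasting two impact distributions with nearly the same mean: a thin-tailed normal and a fat-tailed lognormal. The averages are almost identical, but the probability of an extreme outcome differs by roughly two orders of magnitude, which is precisely the kind of difference that is lost when one assumes a convenient distribution or quotes a single number.

```python
# A minimal sketch (made-up numbers): two risk impact models with the same mean
# but very different tails. Thin-tailed models understate extreme outcomes.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical cost impact of a risk (in $000s), modelled two ways:
normal_impacts = rng.normal(loc=100, scale=30, size=n)
lognormal_impacts = rng.lognormal(mean=np.log(100) - 0.5**2 / 2, sigma=0.5, size=n)

for label, impacts in (("normal", normal_impacts), ("lognormal", lognormal_impacts)):
    print(f"{label:9s}  mean = {impacts.mean():6.1f}   "
          f"95th percentile = {np.percentile(impacts, 95):6.1f}   "
          f"P(impact > 200) = {np.mean(impacts > 200):.4f}")
```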

This brings me to the end of my (subjective) list of risk analysis pitfalls. Regular readers of this blog will have noticed that some of the points made in this post are similar to the ones I made in my post on estimation errors. This is no surprise: risk analysis and project estimation are activities that deal with an uncertain future, so it is to be expected that they have common problems and pitfalls. One could generalize this point:  any activity that involves gazing into a murky crystal ball will be plagued by similar problems.

Written by K

June 2, 2011 at 10:21 pm

Why deliberation trumps standard decision-making methods


Wikipedia defines decision analysis as the discipline comprising the philosophy, theory, methodology, and professional practice necessary to address important decisions in a formal manner.  Standard decision-making techniques generally involve the following steps:

  1. Identify available options.
  2. Develop criteria for rating options.
  3. Rate options according to criteria developed.
  4. Select the top-ranked option (a minimal sketch of these steps appears below).
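Here is that sketch in Python; the options, criteria, weights and scores are invented purely for illustration and are not from van Gelder’s article.

```python
# A minimal weighted-scoring sketch of the standard four-step approach.
# All options, criteria, weights and scores are invented for illustration.

options = ["Job offer A", "Job offer B", "Stay put"]

# Steps 2 and 3: criteria with weights, and a 1-5 score for each option.
criteria = {
    "Salary":           (0.4, {"Job offer A": 5, "Job offer B": 3, "Stay put": 2}),
    "Commute":          (0.2, {"Job offer A": 2, "Job offer B": 4, "Stay put": 5}),
    "Career prospects": (0.4, {"Job offer A": 4, "Job offer B": 4, "Stay put": 2}),
}

# Weighted total for each option.
totals = {
    option: round(sum(weight * scores[option] for weight, scores in criteria.values()), 2)
    for option in options
}

# Step 4: select the top-ranked option.
best = max(totals, key=totals.get)
print(totals)                       # {'Job offer A': 4.0, 'Job offer B': 3.6, 'Stay put': 2.6}
print("Top-ranked option:", best)   # Job offer A
```

The sketch assumes, of course, that meaningful weights and scores can be assigned in the first place.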

This sounds great in theory, but as Tim van Gelder points out in an article entitled The Wise Delinquency of Decision Makers, formal methods of decision analysis are not used as often as textbooks and decision theorists would have us believe. This, he argues, isn’t due to ignorance: even those trained in such methods often do not use them for decisions that really matter. Instead they resort to deliberation – weighing up options in light of the arguments and evidence for and against them. He discusses why this is so, and also points out some problems with deliberative methods and what can be done to fix them. This post is a summary of the main points he makes in the article.

To begin with, formal methods aren’t suited to many decision-making problems encountered in the real world. For instance:

  1. Real-world options often cannot be quantified or rated in a meaningful way. Many of life’s dilemmas fall into this category. For example, a decision to accept or decline a job offer is rarely made on the basis of material gain alone.
  2. Even where ratings are possible, they can be highly subjective. For example, when considering a job offer, one candidate may give more importance to financial matters whereas another might consider lifestyle-related matters (flexi-hours, commuting distance etc.) to be paramount. Another complication here is that there may not be enough information to settle the matter conclusively. As an example, investment decisions are often made on the basis of quantitative information that is based on questionable assumptions.
  3. Finally, the problem may be wicked – i.e. complex, multi-faceted and difficult to analyse using formal decision making methods. Classic examples of wicked problems are climate change (so much so, that some say it is not even a problem) and city / town planning. Such problems cannot be forced into formal decision analysis frameworks in any meaningful way.

Rather than rating options and assigning scores, deliberation involves making arguments for and against each option and weighing them up in some consistent (but qualitative) way. In contrast to textbook methods of decision analysis, this is essentially an informal process; there is no prescribed method that one must follow. One could work through the arguments oneself or in conversation with others. Because of the points listed above, deliberation is often better suited to many of the decisions we are confronted with in our work and personal lives (see this post for a real-life example of deliberative decision making).

However, as Van Gelder points out,

The trouble is that deliberative decision making is still a very problematic business. Decisions go wrong all the time. Textbook decision methods were developed, in part, because it was widely recognized that our default or habitual decision making methods are very unreliable.

He lists four problems with deliberative methods:

  1. Biases – Many poor decisions can be traced back to cognitive biases – errors of judgement based on misperceptions of situations, data or evidence. A common example of such a bias is overconfidence in one’s own judgement. See this post for a discussion of how failures of high-profile projects may have been due to cognitive biases.
  2. Emotions – It is difficult, if not impossible, to be completely rational when making a decision – even a simple one.  However, emotions can cloud judgement and lead to decisions being made on the basis of pride, anger or envy rather than a clear-headed consideration of known options and their pros and cons.
  3. Tyranny of the group – Important decisions are often made by committees. Such decisions are subject to collective biases such as groupthink – the tendency of group members to think alike and ignore external inputs so as to avoid internal conflicts. See this post for a discussion of groupthink in project environments.
  4. Lack of training – People end up making poor decisions because they lack knowledge of informal logic and argumentation, skills that can be taught and then honed through practice.

Improvements in our ability to deliberate can be brought about by addressing the above. Clearly, it is difficult to be completely objective when confronted with tough decisions, just as it is impossible to rid ourselves of our (individual and collective) biases. That said, any technique that lays out all the options and the arguments for and against them in an easy-to-understand way can help make our biases and emotions (and those of others) obvious. Visual notations such as IBIS (Issue-Based Information Systems) and Argument Mapping do just that. See this post for more on why it is better to represent reasoning visually than in prose.

The use of techniques such as the ones listed in the previous paragraph can lead to immediate improvements in corporate decision making. Firstly, because gaps in logic and weaknesses in supporting evidence are made obvious, those responsible for formulating, say, a business case can focus on improving their arguments prior to presenting them to senior managers. Secondly, decision makers can see the logic, supporting materials and the connections between them at a glance. In short: those formulating an argument and those making decisions based on it can focus on the essential points of the matter without having to wade through reams of documentation or tedious presentations.

To summarise: formal decision-making techniques are unsuited to complex problems and to those whose options cannot be quantified in a meaningful way. For such issues, deliberation – supplemented by visual notations such as IBIS or Argument Mapping – offers a better alternative.

Written by K

May 13, 2011 at 5:32 am

Pathways to folly: a brief foray into non-knowledge


One of the assumptions of managerial practice is that organisational knowledge is based on valid data. Of course, knowledge is more than just data. The steps from data to knowledge and beyond are described in the much used (and misused)  data-information-knowledge-wisdom (DIKW) hierarchy. The model organises the aforementioned elements in a “knowledge pyramid” as shown in Figure 1.  The basic idea is that data, when organised in a way that makes contextual sense, equates to information which, when understood and assimilated, leads to knowledge which then, finally, after much cogitation and reflection, may lead to wisdom.

Figure 1: Data-Information-Knowledge-Wisdom (DIKW) Pyramid

In this post, I explore “evil twins” of the DIKW framework:  hierarchical models of non-knowledge. My discussion is based on a paper by Jay Bernstein, with some  extrapolations of my own. My aim is to illustrate  (in  a not-so-serious way)  that there are many more managerial pathways to ignorance and folly than there are to knowledge and wisdom.

I’ll start with a quote from the paper.  Bernstein states that:

Looking at the way DIKW decomposes a sequence of levels surrounding knowledge invites us to wonder if an analogous sequence of stages surrounds ignorance, and where associated phenomena like credulity and misinformation fit.

Accordingly he starts his argument by noting opposites for each term in the DIKW hierarchy. These are listed in the table below:

DIKW term   | Opposite
Data        | Incorrect data, Falsehood, Missing data
Information | Misinformation, Disinformation, Guesswork
Knowledge   | Delusion, Unawareness, Ignorance
Wisdom      | Folly

This is not an exhaustive list of antonyms – only a few terms that make sense in the context of an “evil twin” of DIKW are listed. It should also be noted that I have added some antonyms that Bernstein does not mention. In the remainder of this post, I discuss the possible relationships between these opposites of the terms that appear in the DIKW model.

The first thing to note is that there is generally more than one antonym for each element of the DIKW hierarchy. Further, each antonym has a different meaning from the others. For example, the absence of data is different from incorrect data, which in turn is different from a deliberate falsehood. This is no surprise – it is simply a manifestation of the principle that there are many more ways to get things wrong than there are to get them right.

An implication of the above is that there can be more than one road to folly depending on how one gets things wrong. Before we discuss these, it is best to nail down the meanings of some of the words listed above (in the sense in which they are used in this article):

Misinformation – information that is incorrect or inaccurate.

Disinformation – information that is deliberately manipulated to mislead.

Delusion – false belief.

Unawareness – the state of not being fully cognisant of the facts.

Ignorance – a lack of knowledge.

Folly – foolishness, lack of understanding or sense.

The meanings of the other words in the table are clear enough and need no elaboration.

Meanings clarified, we can now look at some of the “pyramids of folly” that can be constructed from the opposites listed in the table.

Let’s start with incorrect data. Data that is incorrect will mislead, hence resulting in misinformation. Misinformed people end up with false beliefs – i.e. they are deluded. These beliefs can cause them to make foolish decisions that betray a lack of understanding or sense. This gives us the pyramid of delusion shown in Figure 2.

Figure 2: A pyramid of delusion

Similarly, Figure 3 shows a pyramid of unawareness that arises from falsehoods and Figure 4, a pyramid of ignorance that results from missing data.

Figure 3: A pyramid of unawareness

Figure 4: A pyramid of ignorance

Figures 2 through 4 are distinct pathways to folly. I reckon many of my readers would have seen examples of these in real-life situations. (Tragically, many managers who traverse these pathways are unaware that they are doing so. This may be a manifestation of the Dunning-Kruger effect.)

There’s more though – one can get things wrong at a higher level independently of whether or not the lower levels are done right. For example, one can draw the wrong conclusions from (correct) data. This would result in the pyramid shown in Figure 5.

Figure 5: Yet another pyramid of delusion

Finally, I should mention that it’s even worse: since we are talking about non-knowledge, anything goes. Folly needs no effort whatsoever; it can be achieved without any data, information or knowledge (or their opposites). Indeed, one can play endlessly with antonyms and near-antonyms of the DIKW terms (including those not listed here) and come up with a plethora of pyramids, each denoting a possible pathway to folly.

Written by K

March 3, 2011 at 11:13 pm

Six ways in which project estimates go wrong


Despite the increasing focus on project estimation, the activity remains more guesswork than art or science. In his book on the fallacies of software engineering, Robert Glass has this to say about it:

Estimation, as you might imagine, is the process by which we determine how long a project will take and how much it will cost. We do estimation very badly in the software field. Most of our estimates are more like wishes than realistic targets. To make matters worse, we seem to have no idea how to improve on those very bad practices. And the result is, as everyone tries to meet an impossible estimation target, shortcuts are taken, good practices are skipped, and the inevitable schedule runaway becomes a technology runaway as well…

Moreover, he suggests that poor estimation is one of the top two causes of project failure.

Now, there are a number of reasons why project estimates go wrong, but in my experience there are a half-dozen standouts. Here they are, in no particular order:

1.  False analogies: Project estimates based on historical data are generally considered to be more reliable than those developed using other methods such as expert judgement (see this article from the MS Project support site, for example). This is all well and good as long as one uses data from historical projects that are identical to the one at hand in relevant ways. The problem is, one rarely knows what is relevant and what isn’t. It is all too easy to select a project that is superficially similar to the one at hand but actually differs in critical ways. See my posts on false analogies and the reference class problem for more on this point.

2.  False precision: Project estimates are often quoted as single numbers rather than ranges. Such estimates are misleading because they ignore the fact that an uncertain quantity should be characterised by a range of numbers (or, more accurately, a distribution) rather than a point value. As Dr. Sam Savage emphasises in his book, The Flaw of Averages: an uncertain quantity is a shape, not a number (see my review of the book for more on this point). In short, an estimate quoted as a single number is almost guaranteed to be incorrect; a short illustration appears after this list.

3.  Estimation by decree: It should be obvious that estimation must be done by those who will do the work. Unfortunately this principle is one of the first to be sacrificed on Death March Projects. In such projects, schedules  are shoe-horned into  predetermined timelines, with estimates cooked up by those who have little or no idea of the actual effort involved in doing the work.

4.   Subjectivity: This is where estimates are plucked out of thin air and “justified” based on gut-feel and other subjective notions. Such estimates are prone to overconfidence and a range of other cognitive biases. See my post on cognitive biases as project meta-risks for a detailed discussion of how these biases manifest themselves in project estimates.

5.  Coordination neglect: Projects consist of diverse tasks that need to be coordinated and integrated carefully. Unfortunately, the time and effort needed for coordination and integration is often underestimated (or even totally overlooked) by project decision makers. This is referred to as coordination neglect. Coordination neglect is a problem in projects of all sizes, but is generally more significant for projects involving large teams (see this paper for an empirical study of the effect of team size on coordination neglect). As one might imagine, coordination neglect also becomes a significant problem in projects that consist of a large number of dependent tasks or have a large number of external dependencies; the second sketch after this list illustrates how quickly coordination overhead grows with team size.

6.  Too coarse-grained: Large tasks are made up of smaller tasks strung together in specific ways. Consequently, estimates for large tasks should be built up from estimates for the smaller sub-tasks. Teams often short-circuit this process by attempting to estimate the large task directly. Such estimates usually turn out to be incorrect because sub-tasks are overlooked. Another problem is coordination neglect between sub-tasks, as discussed in the previous point. It is true – the devil is always in the details.
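To illustrate points 2 and 5, here are two minimal sketches in Python. The numbers in both are invented purely for illustration; neither sketch is taken from the books or papers referenced in this post.

First, the “flaw of averages”: three parallel tasks that each average 10 days give a project that averages well over 10 days and has far less than an even chance of finishing by day 10, because the project finishes only when the slowest task does.

```python
# A minimal "flaw of averages" sketch with invented numbers: planning to the
# average duration of each task badly overstates the chance of finishing on time.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 100_000

# Three parallel tasks, each with a triangular duration (min 5, mode 9, max 16 days).
# The mean of this triangular distribution is (5 + 9 + 16) / 3 = 10 days.
durations = rng.triangular(left=5, mode=9, right=16, size=(n_trials, 3))

# The project finishes when the slowest task finishes.
project_duration = durations.max(axis=1)

print("Average task duration:   ", round(durations.mean(), 2))                 # ~10.0
print("Average project duration:", round(project_duration.mean(), 2))          # noticeably > 10
print("P(project <= 10 days):   ", round(np.mean(project_duration <= 10), 2))  # well below 0.5
```

Second, coordination effort grows much faster than team size. The sketch below uses the standard pairwise communication-channels formula; it is a commonly cited illustration, not something drawn from the empirical paper mentioned above.

```python
# Potential pairwise communication channels in a team of n people: n * (n - 1) / 2.
# The quadratic growth is one reason coordination effort is routinely underestimated.

def communication_channels(team_size: int) -> int:
    """Number of distinct pairwise communication channels in a team."""
    return team_size * (team_size - 1) // 2

for n in (3, 5, 10, 20, 50):
    print(f"team of {n:2d}: {communication_channels(n):4d} potential channels")

# team of  3:    3 potential channels
# team of  5:   10 potential channels
# team of 10:   45 potential channels
# team of 20:  190 potential channels
# team of 50: 1225 potential channels
```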

I should emphasise that the above list is based on personal experience, not on any systematic study.

I’ll conclude this piece with another fragment from Glass, who  is not very optimistic about improvements in the area of project estimation. As he states in his book:

The bottom line is that, here in the first decade of the twenty-first century, we don’t know what constitutes a good estimation approach, one that can yield decent estimates with good confidence that they will really predict when a project will be completed and how much it will cost. That is a discouraging bottom line. Amidst all the clamor to avoid crunch mode and end death marches, it suggests that so long as faulty schedule and cost estimates are the chief management control factors on software projects, we will not see much improvement.

True enough,  but being aware of the ways in which estimates can go wrong is the first step towards improving them.

Written by K

October 1, 2010 at 4:55 am

Politics and counter-politics in project evaluation


Introduction

Ideally, project evaluation decisions should be made on the basis of objective criteria (cost/benefit, strategic value etc.).   In  reality, however, there is often a political dimension to the process: personal agendas, power games etc. play a significant role in determining how key stakeholders perceive particular projects. In a paper entitled Seven Ways to get Your Favoured IT Project Accepted – Politics in IT Evaluation, Egon Berghout, Menno Nijland and Kevin Grant discuss seven political ploys that managers use to influence IT project selection.  This post presents a discussion of these tactics and some strategies that can be used to counter them.

The seven tactics

Before outlining the tactics it is worth mentioning some of the differences between political and rational justifications for a project. In general, the former are characterised by a lot of rhetoric and platitudes  whereas the latter dwell on seemingly objective measures (ROI, cost vs. benefit etc.). Further, political justifications tend to take a “big picture” view as opposed to the detail-oriented focus of the rational ones.   Finally, it is worth mentioning that despite their negative connotation, political ploys aren’t always bad –   there are situations in which they can lead to greater buy-in and commitment than would be possible with purely rational decision-making methods.

With that as background, let’s look at seven political tactics commonly used to influence IT project decisions. Although the moves described are somewhat stereotypical and rather obvious, I do believe they are used quite often on real-world projects.

Here they are:

1.   Designate the project as being strategic: In this classic ploy, the person advocating the project designates it as necessary for achieving the organisation’s strategic goals. To do this, one may need to show only a tenuous connection between the project objectives and the organisation’s strategy. Once a project is deemed strategic, it will attract support from the upper echelons of management – no questions asked.

2.  The “lights on” ploy: This strategy involves dubbing the project an operational necessity. Here the idea is to indulge in scare-mongering by saying things like, “if we don’t do this, we run an 80% chance of system failure within the next year.” Such arguments are often used to justify expensive upgrades to legacy systems.

3.   The “phase” tactic: Here the idea is to slice up a project into several smaller sub-projects and pursue them one at a time. This strategy keeps things under the organisation’s financial radar until the project is already well under way, a  technique  often used by budget-challenged IT managers.

4.   Creative analysis: Most organisations have a standard process by which IT projects are evaluated. Typically such processes involve producing metrics to support the position that the project is worth pursuing. The idea here is to manipulate the analysis to support the preferred decision. Some classic ways of doing this include ignoring negative aspects (certain risks, say) and/or overstating the benefits of the desired option.

5.   Find a problem for your solution: This strategy is often used to justify introducing a cool new technology into an organisation. The idea here is to create the perception that the organisation has a problem (where there isn’t necessarily one) and that the favoured technology is the only way out of it. See my post on solutions in search of problems for a light-hearted look at warning signs that this strategy is in use.

6.   No time for a proposal: Here the idea is to claim that it would take too much time to do a proper evaluation (the  implication being that the person charged with doing the evaluation is too busy with other important matters). If successful, one can get away with doing a bare-bones evaluation which leaves out all inconvenient facts and details.

7.   Old wine in a new bottle: This strategy is employed with unsuccessful project proposals. The idea here is to resubmit the proposal with cosmetic changes in the hope that it gets past the evaluation committee. Sometimes a change in the title and focus of the project is all that’s needed to sneak it past an unwary bunch of evaluators.

Of course, as mentioned earlier, there’s a degree of exaggeration in the above: those who sanction projects are not so naïve as to be taken in by such obvious strategies. Nevertheless, I would agree with Berghout et al. that more subtle variants of these strategies are sometimes used to push projects that would otherwise be given the chop.

Countering politics

The first step in countering political ploys such as the ones listed above is to understand when they are being used. The differences between political and rational behaviour were outlined by Richard Daft in his book on organisational theory and design. These are summarised in the table below (adapted from the paper):

Organisational feature relating to decision making | Rational response or behaviour | Political response or behaviour
Goals | Similar for all participants, aligned with organisational goals | Diversity of goals, depending on preferences and personal agendas
Processes | Rational, orderly | Haphazard, determined by dominant group
Rules/Norms | Optimisation, to make the “best” decision based on objective criteria | Free for all, characterised by conflict
Information | Unambiguous and freely available to everyone | Ambiguous, can be withheld for strategic reasons
Beliefs about cause-effect | Known, even if only incompletely | Disagreement about cause-effect relationships
Basis of decisions | Maximisation of utility | Bargaining, interplay of interests
Ideology | Organisational efficiency and effectiveness | Individual/group interest

Although Daft’s criteria can help identify  politically influenced decision-making processes,  it is usually pretty obvious when politics takes over. The question then is: what can one do to counter such tactics?

The authors suggest the following:

1.   Go on the offensive: This tactic hinges on finding holes in the opponents’ arguments and proposals.  Another popular way is to attack the credibility of the individuals involved.

2.   Develop a support base: Here the tactic is to get a significant number of people to buy into your idea. It is important to focus efforts on getting support from people who are influential in the organisation.

3.   Hire a consultant: This is a frequently used tactic, where one hires an “independent” consultant to research and support one’s favoured viewpoint.

4.   Quid pro quo: This is the horse-trading scenario where you support the opposing group’s proposal with the understanding that they’ll back you  on other matters in return.

Clearly these are not tactics one would admit to using, and indeed, the authors’ language is somewhat tongue-in-cheek when describing them. That said, it is true that such tactics – or subtle variants thereof – are often used to counter politically motivated decisions regarding the fate of projects.

Finally, it is important to realise that those involved in decision making may not even be aware that they are engaging in political behaviour. They may think they are being perfectly rational, but may in reality be subverting the process to suit their own ends.

Conclusion

The paper presents a practical view on how politics can manifest itself in project evaluation. The authors’ focus on specific tactics and counter-tactics makes the paper particularly relevant for project professionals. Awareness of these tactics will help project managers recognise the ways in which politics can be used to influence decisions as to whether or not projects should be given the go-ahead.

In closing it is worth noting the role of politics in collective decision-making of any kind. A group of people charged with making a decision will basically argue it out. Individuals (or sub-groups) will favour certain positions regarding the issue at hand and the group must collectively debate the pros and cons of each position. In such a process there is no restriction on the positions taken and the arguments presented for and against them. The ideas and arguments needn’t be logical or rational – it is enough that someone in the group supports them. In view of this it seems irrational to believe that collective decision making – in IT project evaluation or any other domain – can ever be an entirely rational process.

Written by K

September 23, 2010 at 10:13 pm

Groupthink in project environments


Introduction

Groupthink refers to the tendency of members of a group to think alike because of peer pressure and insulation from external opinions. The term was coined by the psychologist Irving Janis in 1972. In a recent paper entitled Groupthink in Temporary Organizations, Markus Hallgren looks at how groupthink manifests itself in temporary organisations and what can be done to minimize it. This post, which is based on Hallgren’s paper and some of the references therein, discusses the following aspects of groupthink:

  1. Characteristics of groups prone to groupthink.
  2. Symptoms of groupthink.
  3. Ways to address it.

As we’ll see, Hallgren’s discussion of groupthink is particularly relevant for those who work in project environments.

Background

Hallgren uses a fascinating case study to illustrate how groupthink contributes to poor decision-making in temporary organisations: he analyses events that occurred in the ill-fated 1996 Everest Expedition. The expedition has been extensively analysed by a number of authors and, as Hallgren puts it:

Together, the survivors’ descriptions and the academic analysis have provided a unique setting for studying a temporary organization. Examining expeditions is useful to our understanding of temporary organizations because it represents the outer boundary of what is possible. Among the features claimed to be a part of the 1996 tragedy’s explanation are the group dynamics and organizational structure of the expeditions. These have been examined across various parameters including leadership, goal setting, and learning. They all seem to point back to the group processes and the fact that no one interfered with the soon-to-be fatal process which can result from groupthink.

Mountaineering expeditions are temporary organisations: they are time-bound activities which are directed towards achieving a well-defined objective using pre-specified resources. As such, they are planned as projects are, and although the tools used in “executing” the work of climbing are different from those used in most projects, essential similarities remain. For example, both require effective teamwork and communication for successful execution.  One aspect of this is the need for team members to be able to speak up about potential problems or take unpopular positions without fear of being ostracized by the group.

Some characteristics of groups that are prone to groupthink are:

  1. A tightly knit group.
  2. Insulation from external input.
  3. Leaders who promote their own preferred solutions (what’s sometimes called promotional leadership).
  4. Lack of a clear decision-making process.
  5. Homogeneous composition of the group.

Additional, external factors that can contribute to groupthink are:

  1. Presence of an external threat.
  2. Members (and particularly, influential members) have low self-esteem because of previous failures in similar situations.

Next we’ll take a brief look at how groups involved in the expedition displayed the above characteristics and how these are also relevant to project teams.

Groupthink in the 1996 Everest Expedition and its relevance to project teams

Much has been written about the ill-fated expedition, the most well-known account  being Jon Krakauer’s best-selling book, Into Thin Air.  As Hallgren points out, the downside of having a popular exposition is that analyses tend to focus on the account presented in it, to the exclusion of others. Most of these accounts, however, focus on the events themselves rather than the context and organizational structure in which they occur.  In contrast, Hallgren’s interest is in the latter – the context, hierarchy and the role played by these in supporting groupthink. Below I outline the connections he makes between organizational features and groupthink characteristics as they manifested themselves on the expedition. Following Hallgren, I also point out how these are relevant to project environments.

Highly cohesive group

The members of the expedition were keen on getting to the summit because of the time and money they had individually invested in it. This shared goal led to a fair degree of cohesion within the group, and possibly caused warning signs to be ignored and assumptions to be rationalized. Similarly, project team members have a fair degree of cohesion because of their shared (project) goals.

Insulation from external input

The climbing teams were isolated from each other. As a result there was little communication between them. This was exacerbated by the fact that only team leaders were equipped with communication devices. A similar situation occurs on projects where there is little input from people external to the project, from other teams working on similar projects, or even from “lessons learned” documents from prior projects. Often the project manager takes on the responsibility for communication, further insulating team members from external input.

Promotional leadership

Group leaders on the expedition had a commercial interest in getting as many clients as possible to the summit. This may have caused them to downplay risks and push clients harder than they should have. This is similar to situations in which a project is seen as the “making” of its project manager. The pressure to succeed can cause project managers to display promotional leadership.

Lack of clear decision making process

All decisions on the expedition were made by group leaders. Although this may have been necessary because group members lacked mountaineering expertise, decisions were not communicated in a timely manner (this is related to the point about insulation of groups) and there was no clear advice to groups about when they should turn back. This situation is similar to projects in which decisions are made on an ad-hoc basis, without adequate consultation or discussion with those who have the relevant expertise. Ideally, decision-making should be a collaborative process, involving all those who have a stake in its outcome.

Homogeneous composition of group

Expedition members came from similar backgrounds – folks who had the wherewithal to pay for an opportunity to get to the summit. Consequently, they were all highly motivated to succeed (related to the point about cohesion). Similarly, project teams are often composed of highly motivated individuals (albeit, drawn from different disciplines). The  shared motivation to succeed can lead to  difficulties being glossed over and risks ignored.

External threat

The expedition was one of many commercial expeditions on the mountain at that time. This caused an “us versus them” mentality, which led to risky decisions being made. In much the same way, pressure from competitors (or even project sponsors) can cloud a project manager’s judgement, leading to poor decisions regarding project scope and timing.

Low self esteem

Expedition leaders were keen to prove themselves because of previous failures in getting clients to the summit. This may have led to a single-minded pursuit of success this time around. A similar situation can occur in projects where managers use the project as a means to build their credibility and self-esteem.

Symptoms and solutions

The above illustrates how project teams can exhibit characteristics of groups prone to groupthink. Hallgren’s case study highlights that temporary organisations – be they mountaineering expeditions or projects – can unwittingly encourage groupthink because of their time-bound, goal-focused nature.

Given this, it is useful for those involved in projects to be aware of some of the warning signs to watch for.  Janis identified the following symptoms of groupthink:

  1. Group members feel that they are invulnerable.
  2. Warnings that challenge the group’s assumptions are rationalized or ignored.
  3. Unquestioned belief in the group’s mission.
  4. Negative stereotyping of those outside the group.
  5. Pressure on group members to conform.
  6. Group members self-censor thoughts that contradict the group’s core beliefs.
  7. There is an illusion of unanimity because no dissenting opinions are articulated.
  8. Group leaders take on the role of “mind-guards” – i.e. they “shield” the group from dissenting ideas and opinions.

Regardless of the different contexts in which groupthink can occur, there are some stock-standard ways of avoiding it. These are:

  1. Brainstorm all alternatives.
  2. Play the devil’s advocate – consider scenarios contrary to those popular within the group.
  3. Avoid prejudicing team members’ opinions. For example, do not let managers express their opinions first.
  4. Bring in external experts.
  5. Discuss ideas independently with people outside the group.

Though this advice (also due to Janis) has been around for a while and is well known, groupthink remains alive and well in project environments; see my post on the role of cognitive biases in project failure for examples of high-profile projects that fell victim to it.

Conclusion

Hallgren’s case study is an excellent account of the genesis and consequences of groupthink in a temporary organisation. Although his example is extreme, the generalizations he makes from it hold lessons for all project managers and leaders. Like the Everest expedition, projects are invariably run under tight time and budgetary constraints. This can give rise to conditions that breed groupthink. The best way to avoid groupthink is to keep an open mind and encourage dissenting opinions –  easier said than done, but the consequences of not doing so can be extreme.

Written by K

September 14, 2010 at 5:27 am

On the interpretation of probabilities in project management


Introduction

Managers have to make decisions based on an imperfect and incomplete knowledge of future events.  One approach to improving managerial decision-making is to quantify uncertainties using probability.  But what does it mean to assign a numerical probability to an event?  For example, what do we mean when we say that the probability of finishing a particular task in 5 days is 0.75?   How is this number to be interpreted? As it turns out there are several ways of interpreting probabilities.  In this post I’ll look at three of these via an example drawn from project estimation.

Although the question raised above may seem somewhat philosophical, it is actually of great practical importance because of the increasing use of probabilistic techniques (such as Monte Carlo methods) in decision making. Those who advocate the use of these methods generally assume that probabilities are magically “given” and that their interpretation is unambiguous. Of course, neither is true – and hence the importance of clarifying what a numerical probability really means.

The example

Assume there’s a task that needs doing – this may be a project task or some other job that a manager is overseeing. Let’s further assume that we know the task can take anywhere between 2 and 8 days to finish, and that we (magically!) have numerical probabilities associated with completion on each of those days (as shown in the table below). I’ll say a teeny bit more about how these probabilities might be estimated shortly.

Task finishes on | Probability
Day 2 | 0.05
Day 3 | 0.15
Day 4 | 0.30
Day 5 | 0.25
Day 6 | 0.15
Day 7 | 0.075
Day 8 | 0.025

This table is a simple example of what’s technically called a probability distribution. Distributions express probabilities as a function of some variable. In our case the variable is time.

How are these probabilities obtained? There is no set method to do this but commonly used techniques are:

  1. By using historical data for similar tasks.
  2. By asking experts in the field.

Estimating probabilities is a hard problem. However, my aim in this article is to discuss what probabilities mean, not how they are obtained. So I’ll take the probabilities mentioned above as given and move on.

The rules of probability

Before we discuss the possible interpretations of probability, it is necessary to mention some of the mathematical properties we expect probabilities to possess. Rather than present these in a formal way, I’ll discuss them in the context of our example.

Here they are:

  1. All probabilities listed are numbers that lie between 0 (impossible) and 1 (absolute certainty).
  2. It is absolutely certain that the task will finish on one of the listed days. That is, the sum of all probabilities equals 1.
  3. It is impossible for the task not to finish on one of the listed days. In other words, the probability of the task finishing on a day not listed in the table is 0.
  4. The probability of finishing on any one of several days is given by the sum of the probabilities for all those days. For example, the probability of finishing on day 2 or day 3 is 0.20 (i.e., 0.05 + 0.15). This holds because the two events are mutually exclusive – that is, the occurrence of one event precludes the occurrence of the other. Specifically, if we finish on day 2 we cannot finish on day 3 (or any other day) and vice-versa.

These statements illustrate the mathematical assumptions (or axioms) of probability. I won’t write them out in their full mathematical splendour; those interested should head off to the Wikipedia article on the axioms of probability.

Another useful concept is that of cumulative probability which, in our example, is the probability that the task will be completed by a particular day. For example, the probability that the task will be completed by day 5 is 0.75 (the sum of the probabilities for days 2 through 5). In general, the cumulative probability of finishing by any particular day is the sum of the probabilities of completion for all days up to and including that day.
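The sketch below, in Python, spells this out using the probabilities from the table above (the code itself is my own illustration, not from the post): it checks the properties just listed and computes the cumulative completion probabilities.

```python
# A minimal sketch: verify the probability rules for the example distribution
# and compute cumulative completion probabilities.

completion_prob = {2: 0.05, 3: 0.15, 4: 0.30, 5: 0.25, 6: 0.15, 7: 0.075, 8: 0.025}

# Rule 1: every probability lies between 0 (impossible) and 1 (certain).
assert all(0 <= p <= 1 for p in completion_prob.values())

# Rules 2 and 3: the task is certain to finish on one of the listed days.
assert abs(sum(completion_prob.values()) - 1.0) < 1e-9

# Rule 4: probabilities of mutually exclusive events add.
p_day2_or_day3 = completion_prob[2] + completion_prob[3]
print("P(day 2 or day 3) =", round(p_day2_or_day3, 2))   # 0.2

# Cumulative probability of finishing by each day.
cumulative, running_total = {}, 0.0
for day in sorted(completion_prob):
    running_total += completion_prob[day]
    cumulative[day] = round(running_total, 3)

print(cumulative)   # {2: 0.05, 3: 0.2, 4: 0.5, 5: 0.75, 6: 0.9, 7: 0.975, 8: 1.0}
```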

Interpretations of probability

With that background out of the way, we can get to the main point of this article which is:

What do these probabilities mean?

We’ll explore this question using the cumulative probability example mentioned above, and by drawing on a paper by Glen Shafer entitled What is Probability?

OK, so what is meant by the statement, “There is a 75% chance that the task will finish in 5 days”?

It could mean that:

  1. If this task is done many times over, it will be completed within 5 days in 75% of the cases. Following Shafer, we’ll call this the frequency interpretation (a small simulation illustrating this reading appears after the list).
  2. It is believed that there is a 75% chance of finishing this task in 5 days. Note that belief can be tested by seeing if the person who holds the belief is willing to place a bet on task completion with odds that are equivalent to the believed probability. Shafer calls this the belief interpretation.
  3. Based on a comparison to similar tasks this particular task has a 75% chance of finishing in 5 days.  Shafer refers to this as the support interpretation.
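Here is that simulation, a minimal sketch in Python (my own illustration, not from Shafer’s paper) of what “done many times over” means: it simulates a large number of hypothetical repetitions of the task and shows the observed frequency settling towards the stated probability.

```python
# A minimal sketch of the frequency reading: if the task could be repeated many
# times with a true completion probability of 0.75, the observed frequency of
# finishing within 5 days converges towards 0.75 as the number of repetitions grows.
import random

random.seed(1)
true_probability = 0.75

for n_repetitions in (10, 100, 10_000, 1_000_000):
    finished_in_5_days = sum(random.random() < true_probability for _ in range(n_repetitions))
    print(f"{n_repetitions:>9,} repetitions: observed frequency = {finished_in_5_days / n_repetitions:.3f}")
```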

(Aside: The belief and support interpretations involve subjective and objective states of knowledge about the events of interest respectively. These are often referred to as subjective and objective Bayesian interpretations because knowledge about these events can be refined using Bayes Theorem, providing one has relevant data regarding the occurrence of events.)

The interesting thing is that all of the above interpretations can be shown to satisfy the axioms of probability discussed earlier (see Shafer’s paper for details). However, it is clear that each interpretation has a very different meaning. We’ll take a closer look at this next.

More about the interpretations and their limitations

The frequency interpretation appears to be the most rational one because it interprets probabilities in terms of the results of experiments – i.e. it treats probabilities as experimental facts, not beliefs. In Shafer’s words:

According to the frequency interpretation, the probability of an event is the long-run frequency with which the event occurs in a certain experimental setup or in a certain population. This frequency is a fact about the experimental setup or the population, a fact independent of any person’s beliefs.

However, there is a big problem here: it assumes that such an experiment can actually be carried out. This definitely isn’t possible in our example: tasks cannot be repeated in exactly the same way – there will always be differences, however small.

There are other problems with the frequency interpretation. Some of these include:

  1. There are questions about whether a sequence of trials will converge to a well-defined probability.
  2. What if the event cannot be repeated?
  3. How does one decide what makes up the population of all events? This is sometimes called the reference class problem.

See Shafer’s article for more on these.

The belief interpretation treats probabilities as betting odds. In this interpretation, a 75% probability of finishing in 5 days means that we’re willing to put up 75 cents to win a dollar if the task finishes in 5 days (or, equivalently, 25 cents to win a dollar if it doesn’t). Note that this says nothing about how the bettor arrives at his or her odds. These are subjective (personal) beliefs. However, they are experimentally determinable – one can determine people’s subjective odds by finding out how they actually place bets.
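A minimal sketch in Python (again my own illustration, not from Shafer’s paper) of the link between a believed probability and fair betting odds:

```python
# At a believed probability of 0.75, risking 75 cents for a total return of $1
# (odds of 3:1 on) is a fair bet: its expected value is zero.

believed_probability = 0.75
stake = 0.75    # amount put up
payout = 1.00   # total returned (stake plus winnings) if the task finishes in 5 days

expected_value = believed_probability * (payout - stake) - (1 - believed_probability) * stake
print("Expected value of the bet:", expected_value)   # 0.0, i.e. a fair bet

# Conversely, the probability implied by a willingness to accept these odds:
implied_probability = stake / payout
print("Implied probability:", implied_probability)    # 0.75
```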

There is a good deal of debate about whether the belief interpretation is normative or descriptive: that is, do the rules of probability tell us what people’s beliefs should be, or do they describe what those beliefs actually are? Most people trained in statistics would claim the former – that the rules impose conditions that beliefs should satisfy. In contrast, in management and behavioural science, probabilities based on subjective beliefs are often assumed to describe how the world actually is. However, the wealth of literature on cognitive biases suggests that people’s actual beliefs, as reflected in their decisions, do not conform to the rules of probability. The latter observation seems to favour the normative option, but arguments can be made in support (or refutation) of either position.

The problem mentioned in the previous paragraph is a perfect segue into the support interpretation, according to which the probability of an event is the degree to which we should believe that it will occur (based on the available evidence). This seems fine until we realize that evidence can come in many “shapes and sizes.” For example, compare the statements “the last time we did something similar we finished in 5 days, based on which we reckon there’s a 70-80% chance we’ll finish in 5 days” and “based on historical data gathered for 50 projects, we believe we have a 75% chance of finishing in 5 days.” The two pieces of evidence offer very different levels of support. Therefore, although the support interpretation appears to be more objective than the belief interpretation, it isn’t actually so, because it is difficult to determine which evidence one should use. So, unlike the case of subjective beliefs (where one only has to ask people about their personal odds), it is not straightforward to determine these probabilities empirically.

So we’re left with a situation in which we have three interpretations, each of which addresses specific aspects of probability but also has major shortcomings.

Is there any way to break the impasse?

A resolution?

Shafer suggests that the three interpretations of probability are best viewed as highlighting different aspects of a single situation: that of an idealized case where we have a sequence of experiments with known probabilities.  Let’s see how this statement (which is essentially the frequency interpretation) can be related to the other two interpretations.

Consider my belief that the task has a 75% chance of finishing in 5 days. This is analogous to saying that if the task were done several times over, I believe it would finish in 5 days in 75% of the cases. My belief can be objectively confirmed by testing my willingness to put up 75 cents to win a dollar if the task finishes in five days. Now, when I place this bet I have my (personal) reasons for doing so. However, these reasons ought to relate to knowledge of the fair odds involved in the said bet. Such fair odds can only be derived from knowledge of what would happen in a (possibly hypothetical) sequence of experiments.

The key assumption in the above argument is that my personal odds aren’t arbitrary – I should be able to justify them to another (rational) person.

Let’s look at the support interpretation. In this case I have hard evidence for stating that there’s a 75% chance of finishing in 5 days. I can take this hard evidence as my personal degree of belief (remember, as stated in the previous paragraph, any personal degree of belief should have some such rationale behind it.) However, since it is based on hard evidence, it should be rationally justifiable and hence can be associated with a sequence of experiments.

So what?

The main point from the above is the following: probabilities may be interpreted in different ways, but they have an underlying unity. That is, when we state that there is a 75% probability of finishing a task in 5 days, we are implying all the following statements (with no preference for any particular one):

  1. If we were to do the task several times over, it would finish within five days in three-fourths of the cases. Of course, this holds only if the task is done a sufficiently large number of times (which may not be practical in most cases).
  2. We are willing to place a bet given 3:1 odds of completion within five days.
  3. We have some hard evidence to back up statement (1) and our betting belief (2).

In reality, however, we tend to latch on to one particular interpretation depending on the situation. One is unlikely to think in terms of hard evidence when buying a lottery ticket, but hard evidence is a must when estimating a project. When tossing a coin one might instinctively use the frequency interpretation, but when estimating a task that hasn’t been done before one might use personal belief. Nevertheless, it is worth remembering that regardless of the interpretation we choose, all three are implied. So the next time someone gives you a probabilistic estimate, by all means ask them if they have the evidence to back it up, but don’t forget to ask whether they’d be willing to accept a bet based on their own stated odds. 🙂

Written by K

July 1, 2010 at 10:09 pm
