Eight to Late

Sensemaking and Analytics for Organizations


Six common pitfalls in project risk analysis


The discussion of risk presented in most textbooks and project management courses follows the well-trodden path of risk identification, analysis, response planning and monitoring (see the PMBOK guide, for example). All good stuff, no doubt. However, much of the guidance offered is at a very high level. Among other things, there is little practical advice on what not to do. In this post I address this issue by outlining some of the common pitfalls in project risk analysis.

1. Reliance on subjective judgement: People see things differently – one person’s risk may even be another person’s opportunity. For example, using a new technology in a project can be seen as a risk (when focusing on the increased chance of failure) or an opportunity (when focusing on the advantages of being an early adopter). This is a somewhat extreme example, but the fact remains that individual perceptions influence the way risks are evaluated. Another problem with subjective judgement is that it is subject to cognitive biases – errors in perception. Many high-profile project failures can be attributed to such biases: see my post on cognitive bias and project failure for more on this. Given these points, potential risks should be discussed from different perspectives with the aim of reaching a common understanding of what they are and how they should be dealt with.

2. Using inappropriate historical data: Purveyors of risk analysis tools and methodologies exhort project managers to determine probabilities using relevant historical data. The word relevant is important: it emphasises that the data used to calculate probabilities (or distributions) should come from situations similar to the one at hand. Consider, for example, the probability of a particular risk – say, that a given developer will not be able to deliver a module by a specified date. One might have historical data for the developer, but the question remains as to which data points should be used. Clearly, only data points from projects similar to the one at hand should be used. But how is similarity defined? Although this is not an easy question to answer, it is critical as far as the relevance of the estimate is concerned. See my post on the reference class problem for more on this point.
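As an aside, here is a minimal sketch of what a reference-class estimate might look like in code (Python, with invented data and a deliberately crude similarity rule – the field names and threshold are assumptions for illustration, not a recommendation):

    # Toy reference-class estimate: the probability that a task is delivered
    # late, computed only from past tasks deemed "similar" to the current one.
    # All data and the similarity rule are invented for illustration.

    past_tasks = [
        {"team_size": 3, "new_tech": True,  "late": True},
        {"team_size": 2, "new_tech": True,  "late": True},
        {"team_size": 4, "new_tech": True,  "late": False},
        {"team_size": 3, "new_tech": False, "late": False},
        {"team_size": 8, "new_tech": True,  "late": True},
    ]

    def is_similar(task, current):
        # Crude similarity rule: same technology profile and a team within
        # one person of the current task's. Choosing this rule IS the
        # reference class problem - change it and the estimate changes.
        return (task["new_tech"] == current["new_tech"]
                and abs(task["team_size"] - current["team_size"]) <= 1)

    current = {"team_size": 3, "new_tech": True}
    reference_class = [t for t in past_tasks if is_similar(t, current)]
    p_late = sum(t["late"] for t in reference_class) / len(reference_class)
    print(f"{len(reference_class)} similar past tasks, P(late) = {p_late:.2f}")

Widen or narrow is_similar and the estimate changes – which is the reference class problem in miniature.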

3. Focusing on numerical measures exclusively: There is a widespread perception that quantitative measures of risk are better than qualitative ones. However, even where reliable and relevant data is available, the measures still need to be based on sound methodologies. Unfortunately, ad-hoc techniques abound in risk analysis: see my posts on Cox’s risk matrix theorem and limitations of risk scoring methods for more on these. Risk metrics based on such techniques can be misleading. As Glen Alleman points out in this comment, in many situations qualitative measures may be more appropriate and accurate than quantitative ones.
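To illustrate one way scores can mislead, here is a toy example of my own (not from Cox’s paper – the bands and numbers are invented): two risks land in the same cell of a simple Low/Medium/High matrix even though their expected losses differ by nearly two orders of magnitude.

    # Toy illustration of how an ordinal risk matrix can mislead: two risks
    # fall into the same Low/Low cell even though their expected losses
    # differ by nearly two orders of magnitude. All numbers are invented.

    def band(value, cutoffs=(0.33, 0.66)):
        """Map a 0-1 scale onto Low / Medium / High."""
        if value < cutoffs[0]:
            return "Low"
        if value < cutoffs[1]:
            return "Medium"
        return "High"

    WORST_CASE_IMPACT = 1_000_000  # assumed $1M worst case, for normalisation

    risks = {
        # name: (probability, impact in dollars)
        "Risk A": (0.05, 20_000),
        "Risk B": (0.30, 300_000),
    }

    for name, (p, impact) in risks.items():
        cell = (band(p), band(impact / WORST_CASE_IMPACT))
        print(f"{name}: cell = {cell}, expected loss = ${p * impact:,.0f}")

Both risks print as ("Low", "Low"), yet one carries an expected loss of $1,000 and the other $90,000 – the matrix erases a distinction the numbers make plain.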

4. Ignoring known risks: It is surprising how often known risks are ignored.  The reasons for this have to do with politics and mismanagement. I won’t dwell on this as I have dealt with it at length in an earlier post.

5. Overlooking the fact that risks are distributions, not point values: Risks are inherently uncertain, and any uncertain quantity is represented by a range of values (each with an associated probability) rather than a single number (see this post for more on this point). Because of the scarcity or unreliability of historical data, distributions are often assumed a priori: that is, analysts will assume that the risk distribution has a particular form (say, normal or lognormal) and then evaluate distribution parameters using historical data. Further, analysts often choose simple distributions that are easy to work with mathematically. These distributions often do not reflect reality. For example, they may be vulnerable to “black swan” occurrences because they do not account for outliers.
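The sketch below (parameters invented, chosen so the two models look alike near the middle) makes this concrete: a thin-tailed and a fatter-tailed model of a cost-overrun factor agree closely at the median but diverge sharply in the extreme percentiles – the territory where black swans live.

    # Toy Monte Carlo comparison of thin-tailed vs fatter-tailed models of a
    # cost-overrun factor. Both look similar at the median; their extreme
    # percentiles diverge. All parameters are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000

    thin = rng.normal(loc=1.2, scale=0.3, size=n)        # normal model
    fat = rng.lognormal(mean=0.15, sigma=0.35, size=n)   # lognormal model

    for label, sample in [("normal", thin), ("lognormal", fat)]:
        p50, p99, p999 = np.percentile(sample, [50, 99, 99.9])
        print(f"{label:>9}: median x{p50:.2f}, "
              f"99th pct x{p99:.2f}, 99.9th pct x{p999:.2f}")

An analyst who fitted the thin-tailed model to the same middle-of-the-road data would badly underestimate the worst cases.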

6. Failing to update risks in real time: Risks are rarely static – they evolve over time, influenced by circumstances and events both inside and outside the project. For example, the acquisition of a key vendor by a mega-corporation is likely to affect the delivery of that module you are waiting on – and quite likely in an adverse way. Such a change in risk is obvious; there may be many that aren’t. Consequently, project managers need to reevaluate and update risks periodically. To be fair, this is a point that most textbooks make – but it is advice that is not followed as often as it should be.
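One disciplined way to fold news into a risk estimate is Bayes’ rule. A minimal sketch, with a made-up prior and likelihoods (the mechanics, not the numbers, are the point):

    # Toy Bayesian update of a delivery risk when news of a vendor
    # acquisition arrives. Prior and likelihoods are invented; the point
    # is the mechanics of revising the estimate, not the numbers.

    p_delay = 0.20               # prior: P(module is delivered late)
    p_news_if_delay = 0.70       # assumed: P(acquisition | delay)
    p_news_if_on_time = 0.10     # assumed: P(acquisition | on time)

    # Bayes' rule: P(delay | news) = P(news | delay) * P(delay) / P(news)
    p_news = (p_news_if_delay * p_delay
              + p_news_if_on_time * (1 - p_delay))
    p_delay_after = p_news_if_delay * p_delay / p_news

    print(f"P(delay) before the news: {p_delay:.2f}")
    print(f"P(delay) after the news:  {p_delay_after:.2f}")  # ~0.64

A one-in-five risk becomes roughly two-in-three on this evidence – the kind of revision that should trigger a response, not sit unnoticed in a stale risk register.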

This brings me to the end of my (subjective) list of risk analysis pitfalls. Regular readers of this blog will have noticed that some of the points made in this post are similar to the ones I made in my post on estimation errors. This is no surprise: risk analysis and project estimation are activities that deal with an uncertain future, so it is to be expected that they have common problems and pitfalls. One could generalize this point:  any activity that involves gazing into a murky crystal ball will be plagued by similar problems.

Written by K

June 2, 2011 at 10:21 pm

Why deliberation trumps standard decision-making methods


Wikipedia defines decision analysis as the discipline comprising the philosophy, theory, methodology, and professional practice necessary to address important decisions in a formal manner.  Standard decision-making techniques generally involve the following steps:

  1. Identify available options.
  2. Develop criteria for rating options.
  3. Rate options according to criteria developed.
  4. Select the top-ranked option.

This sounds great in theory, but as Tim van Gelder points out in an article entitled The Wise Delinquency of Decision Makers, formal methods of decision analysis are not used as often as textbooks and decision theorists would have us believe. This, he argues, isn’t due to ignorance: even those trained in such methods often do not use them for decisions that really matter. Instead they resort to deliberation – weighing up options in light of the arguments and evidence for and against them. He discusses why this is so, and also points out some problems with deliberative methods and what can be done to fix them. This post is a summary of the main points he makes in the article.

To begin with, formal methods aren’t suited to many decision-making problems encountered in the real world. For instance:

  1. Real-world options often cannot be quantified or rated in a meaningful way. Many of life’s dilemmas fall into this category. For example, a decision to accept or decline a job offer is rarely made on the basis of material gain alone.
  2. Even where ratings are possible, they can be highly subjective. For example, when considering a job offer, one candidate may give more importance to financial matters whereas another might consider lifestyle-related matters (flexi-hours, commuting distance etc.) to be paramount – the sketch after this list shows how such differences in weighting can flip the outcome. Another complication is that there may not be enough information to settle the matter conclusively. As an example, investment decisions are often made on the basis of quantitative information that rests on questionable assumptions.
  3. Finally, the problem may be wicked – i.e. complex, multi-faceted and difficult to analyse using formal decision making methods. Classic examples of wicked problems are climate change (so much so, that some say it is not even a problem) and city / town planning. Such problems cannot be forced into formal decision analysis frameworks in any meaningful way.
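Here is that sketch: the same ratings, scored under two candidates’ (invented) weightings, produce different “best” offers. The method is formally tidy, but the answer is driven entirely by the subjective weights.

    # Toy illustration of point 2 above: identical ratings, two subjective
    # weightings, two different "best" offers. All numbers are invented.

    ratings = {
        "Offer A": {"salary": 9, "lifestyle": 3},
        "Offer B": {"salary": 5, "lifestyle": 8},
    }

    weightings = {
        "finance-focused candidate":   {"salary": 0.8, "lifestyle": 0.2},
        "lifestyle-focused candidate": {"salary": 0.3, "lifestyle": 0.7},
    }

    for candidate, w in weightings.items():
        scores = {o: sum(w[c] * r[c] for c in w) for o, r in ratings.items()}
        best = max(scores, key=scores.get)
        print(f"{candidate}: prefers {best} (scores: {scores})")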

Rather than rating options and assigning scores, deliberation involves making arguments for and against each option and weighing them up in some consistent (but qualitative) way. In contrast to textbook methods of decision analysis, this is essentially an informal process; there is no prescribed method that one must follow. One could work through the arguments oneself or in conversation with others. Because of the points listed above, deliberation is often better suited to many of the decisions we are confronted with in our work and personal lives (see this post for a real-life example of deliberative decision making).

However, as Van Gelder points out,

The trouble is that deliberative decision making is still a very problematic business. Decisions go wrong all the time. Textbook decision methods were developed, in part, because it was widely recognized that our default or habitual decision making methods are very unreliable.

He  lists four problems with deliberative methods:

  1. Biases – Many poor decisions can be traced back to cognitive biases – errors of judgement based on misperceptions of situations, data or evidence. A common example of such a bias is overconfidence in one’s own judgement. See this post for a discussion of how failures of high-profile projects may have been due to cognitive biases.
  2. Emotions – It is difficult, if not impossible, to be completely rational when making a decision – even a simple one.  However, emotions can cloud judgement and lead to decisions being made on the basis of pride, anger or envy rather than a clear-headed consideration of known options and their pros and cons.
  3. Tyranny of the group – Important decisions are often made by committees. Such decisions are subjected to collective biases such as groupthink – the tendency of group members to think alike and ignore external inputs so as to avoid internal conflicts. See this post for a discussion of groupthink in project environments. 
  4. Lack of training – People end up making poor decisions because they lack knowledge of informal logic and argumentation, skills that can be taught and then honed through practice.

Improvements in our ability to deliberate can be brought about by addressing the above. Clearly, it is difficult to be completely objective when confronted with tough decisions, just as it is impossible to rid ourselves of our (individual and collective) biases. That said, any technique that lays out all the options and the arguments for and against them in an easy-to-understand way may help in making our biases and emotions (and those of others) obvious. Visual notations such as IBIS (Issue-Based Information Systems) and Argument Mapping do just that. See this post for more on why it is better to represent reasoning visually than in prose.
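To give a flavour of the notation, here is a toy sketch of an IBIS-style map as a data structure – an issue, candidate positions, and pro/con arguments – printed as an indented tree. (This is my own illustrative structure, not the data format of any particular tool.)

    # Toy sketch of an IBIS-style map: an issue (question), candidate
    # positions (responses) and pro/con arguments. Illustrative only -
    # not the format of any particular argument-mapping tool.

    ibis_map = {
        "issue": "Should we rewrite the legacy billing module?",
        "positions": [
            {"idea": "Rewrite from scratch",
             "pros": ["Clears accumulated technical debt"],
             "cons": ["High cost", "Risk of repeating old mistakes"]},
            {"idea": "Refactor incrementally",
             "pros": ["Lower risk", "Delivers value continuously"],
             "cons": ["Old architecture constrains design choices"]},
        ],
    }

    def show(ibis):
        print(f"? {ibis['issue']}")
        for position in ibis["positions"]:
            print(f"  * {position['idea']}")
            for pro in position["pros"]:
                print(f"      + {pro}")
            for con in position["cons"]:
                print(f"      - {con}")

    show(ibis_map)

Even in this toy form, gaps are easy to spot – a position with no supporting arguments, or a con that nobody has answered.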

The use of such techniques can lead to immediate improvements in corporate decision making. Firstly, because gaps in logic and weaknesses in supporting evidence are made obvious, those responsible for formulating, say, a business case can focus on improving their arguments prior to presenting them to senior managers. Secondly, decision makers can see the logic, supporting materials and the connections between them at a glance. In short: those formulating an argument and those making decisions based on it can focus on the essential points of the matter without having to wade through reams of documentation or tedious presentations.

To summarise: formal decision-making techniques are unsuited to complex problems, or to those whose options cannot be quantified in a meaningful way. For such issues, deliberation – supplemented by visual notations such as IBIS or Argument Mapping – offers a better alternative.

Written by K

May 13, 2011 at 5:32 am

Pathways to folly: a brief foray into non-knowledge


One of the assumptions of managerial practice is that organisational knowledge is based on valid data. Of course, knowledge is more than just data. The steps from data to knowledge and beyond are described in the much-used (and misused) data-information-knowledge-wisdom (DIKW) hierarchy. The model organises the aforementioned elements in a “knowledge pyramid” as shown in Figure 1. The basic idea is that data, when organised in a way that makes contextual sense, equates to information which, when understood and assimilated, leads to knowledge which then, finally, after much cogitation and reflection, may lead to wisdom.

Figure 1: Data-Information-Knowledge-Wisdom (DIKW) Pyramid

In this post, I explore “evil twins” of the DIKW framework: hierarchical models of non-knowledge. My discussion is based on a paper by Jay Bernstein, with some extrapolations of my own. My aim is to illustrate (in a not-so-serious way) that there are many more managerial pathways to ignorance and folly than there are to knowledge and wisdom.

I’ll start with a quote from the paper.  Bernstein states that:

Looking at the way DIKW decomposes a sequence of levels surrounding knowledge invites us to wonder if an analogous sequence of stages surrounds ignorance, and where associated phenomena like credulity and misinformation fit.

Accordingly, he starts his argument by noting opposites for each term in the DIKW hierarchy. These are listed in the table below:

DIKW term      Opposite(s)
Data           Incorrect data, Falsehood, Missing data
Information    Misinformation, Disinformation, Guesswork
Knowledge      Delusion, Unawareness, Ignorance
Wisdom         Folly

This is not an exhaustive list of antonyms – only a few terms that make sense in the context of an “evil twin” of DIKW are listed. It should also be noted that I have added some antonyms that Bernstein does not mention. In the remainder of this post, I will focus on the possible relationships between these terms – the opposites of those that appear in the DIKW model.

The first thing to note is that there is generally more than one antonym for each element of the DIKW hierarchy. Further, every antonym has a different meaning from the others. For example, the absence of data is different from incorrect data, which in turn is different from a deliberate falsehood. This is no surprise – it is simply a manifestation of the principle that there are many more ways to get things wrong than there are to get them right.

An implication of the above is that there can be more than one road to folly depending on how one gets things wrong. Before we discuss these, it is best to nail down the meanings of some of the words listed above (in the sense in which they are used in this article):

Misinformation – information that is incorrect or inaccurate.

Disinformation – information that is deliberately manipulated to mislead.

Delusion – false belief.

Unawareness – the state of not being fully cognisant of the facts.

Ignorance – a lack of knowledge.

Folly – foolishness, lack of understanding or sense.

The meanings of the other words in the table are clear enough and need no elaboration.

Meanings clarified, we can now look at some of the “pyramids of folly” that can be constructed from the opposites listed in the table.

Let’s start with incorrect data. Data that is incorrect will mislead, resulting in misinformation. Misinformed people end up with false beliefs – that is, they are deluded. These beliefs can cause them to make foolish decisions that betray a lack of understanding or sense. This gives us the pyramid of delusion shown in Figure 2.

Figure 2: A pyramid of delusion

Similarly, Figure 3 shows a pyramid of unawareness that arises from falsehoods and Figure 4, a pyramid of ignorance that results from missing data.

Figure 3: A pyramid of unawareness

Figure 4: A pyramid of ignorance

Figures 2 through 4 depict distinct pathways to folly. I reckon many of my readers will have seen examples of these in real-life situations. (Tragically, many managers who traverse these pathways are unaware that they are doing so. This may be a manifestation of the Dunning-Kruger effect.)

There’s more though – one can get things wrong at a higher level, independent of whether or not the lower levels are done right. For example, one can draw the wrong conclusions from (correct) data. This would result in the pyramid shown in Figure 5.

Figure 5: Yet another pyramid of delusion

Finally, I should mention that it’s even worse: since we are talking about non-knowledge, anything goes. Folly needs no effort whatsoever: it can be achieved without any data, information or knowledge (or their opposites). Indeed, one can play endlessly with antonyms and near-antonyms of the DIKW terms (including those not listed here) and come up with a plethora of pyramids, each denoting a possible pathway to folly.
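For fun, the combinatorics can be made explicit. A throwaway sketch that enumerates pyramids from just the antonyms in the table above (one pick per level):

    # Throwaway sketch: enumerate "pyramids of folly" by picking one
    # antonym per DIKW level from the table above.
    from itertools import product

    antonyms = {
        "data":        ["incorrect data", "falsehood", "missing data"],
        "information": ["misinformation", "disinformation", "guesswork"],
        "knowledge":   ["delusion", "unawareness", "ignorance"],
        "wisdom":      ["folly"],
    }

    pyramids = list(product(*antonyms.values()))
    print(f"{len(pyramids)} distinct pathways to folly, for example:")
    for pathway in pyramids[:3]:
        print("  " + " -> ".join(pathway))

Twenty-seven pathways to folly from this short table alone, against the DIKW model’s single path to wisdom.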

Written by K

March 3, 2011 at 11:13 pm
