Eight to Late

Sensemaking and Analytics for Organizations

Six common pitfalls in project risk analysis


The discussion of risk presented in most textbooks and project management courses follows the well-trodden path of risk identification, analysis, response planning and monitoring (see the PMBOK Guide, for example). All good stuff, no doubt. However, much of the guidance offered is at a very high level. Among other things, there is little practical advice on what not to do. In this post I address this gap by outlining some common pitfalls in project risk analysis.

1. Reliance on subjective judgement: People see things differently: one person’s risk may even be another person’s opportunity. For example, using a new technology in a project can be seen as a risk (when focusing on the increased chance of failure) or as an opportunity (when focusing on the benefits afforded by being an early adopter). This is a somewhat extreme example, but the fact remains that individual perceptions influence the way risks are evaluated. Another problem with subjective judgement is that it is prone to cognitive biases – systematic errors in perception. Many high-profile project failures can be attributed to such biases: see my post on cognitive bias and project failure for more on this. Given these points, potential risks should be discussed from different perspectives, with the aim of reaching a shared understanding of what they are and how they should be dealt with.

2. Using inappropriate historical data: Purveyors of risk analysis tools and methodologies exhort project managers to determine probabilities using relevant historical data. The word relevant is important: it emphasises that the data used to calculate probabilities (or distributions) should come from situations similar to the one at hand. Consider, for example, the probability of a particular risk – say, that a developer will not be able to deliver a module by a specified date. One might have historical data for the developer, but the question remains as to which data points should be used. Clearly, only those from projects similar to the one at hand should count. But how is similarity defined? Although this is not an easy question to answer, it is critical to the relevance of the estimate – as the sketch below illustrates. See my post on the reference class problem for more on this point.
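
To make the point concrete, here is a minimal Python sketch. The task history and the similarity criteria (module size, technology) are entirely made up – the aim is only to show that the estimate changes with the choice of reference class:

```python
# Illustrative sketch only: the historical data and the similarity
# criteria (module size, technology) are hypothetical.

past_tasks = [
    # (module_size, technology, delivered_on_time)
    ("large", "new", False),
    ("large", "new", False),
    ("small", "familiar", True),
    ("large", "familiar", True),
    ("small", "new", True),
    ("large", "new", True),
]

def on_time_probability(tasks, size=None, technology=None):
    """Estimate P(on-time delivery) from tasks matching the given criteria."""
    matches = [t for t in tasks
               if (size is None or t[0] == size)
               and (technology is None or t[1] == technology)]
    if not matches:
        return None  # no relevant history, so no estimate
    return sum(t[2] for t in matches) / len(matches)

print(on_time_probability(past_tasks))                # all history: ~0.67
print(on_time_probability(past_tasks, size="large",
                          technology="new"))          # narrow class: ~0.33
```

The two estimates differ by a factor of two; which one is “right” depends entirely on how the reference class is drawn.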

3. Focusing exclusively on numerical measures: There is a widespread perception that quantitative measures of risk are better than qualitative ones. However, even where reliable and relevant data is available, the measures still need to be based on sound methodologies. Unfortunately, ad-hoc techniques abound in risk analysis: see my posts on Cox’s risk matrix theorem and limitations of risk scoring methods for more on these. Risk metrics based on such techniques can be misleading (the toy example below shows how). As Glen Alleman points out in this comment, in many situations qualitative measures may be more appropriate and accurate than quantitative ones.
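
As a toy illustration – all the numbers are invented – consider a typical matrix-style score formed by multiplying ordinal probability and impact ranks. Two risks can receive identical scores while having very different expected losses:

```python
# Toy example (hypothetical numbers): multiplying ordinal ranks can rank
# risks differently from their actual expected losses.

risks = {
    # name: (prob_rank, impact_rank, actual_probability, actual_impact_$)
    "A": (5, 1, 0.90,    10_000),
    "B": (1, 5, 0.05, 1_000_000),
}

for name, (p_rank, i_rank, p, impact) in risks.items():
    score = p_rank * i_rank      # matrix-style risk score
    expected_loss = p * impact   # what we actually care about
    print(f"{name}: score = {score}, expected loss = ${expected_loss:,.0f}")

# Both risks score 5, yet B's expected loss ($50,000) is more than
# five times A's ($9,000) – the score hides the difference entirely.
```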

4. Ignoring known risks: It is surprising how often known risks are ignored.  The reasons for this have to do with politics and mismanagement. I won’t dwell on this as I have dealt with it at length in an earlier post.

5. Overlooking the fact that risks are distributions, not point values: Risks are inherently uncertain, and any uncertain quantity is represented by a range of values (each with an associated probability) rather than a single number (see this post for more on this point). Because of the scarcity or unreliability of historical data, distributions are often assumed a priori: that is, analysts will assume that the risk distribution has a particular form (say, normal or lognormal) and then evaluate the distribution parameters using historical data. Further, analysts often choose simple distributions that are easy to work with mathematically. These distributions often do not reflect reality. For example, they may be vulnerable to “black swan” occurrences because they do not account for outliers – the sketch below shows how badly a convenient assumption can understate the tail.
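
Here is a minimal sketch using simulated (entirely made-up) cost-overrun data, assuming numpy and scipy are available. The data is heavy-tailed, but the analyst fits a normal distribution to it – and, for these parameters, thereby understates the probability of an extreme overrun severalfold:

```python
# Sketch with simulated data: fitting a thin-tailed (normal) distribution
# to heavy-tailed data understates the probability of extreme overruns.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical cost overruns (%), drawn from a heavy-tailed lognormal
overruns = rng.lognormal(mean=3.0, sigma=0.8, size=500)

# The analyst assumes normality and fits mean and standard deviation
mu, sigma = overruns.mean(), overruns.std()

threshold = 100  # an extreme overrun of more than 100%
p_normal = 1 - stats.norm.cdf(threshold, loc=mu, scale=sigma)
p_empirical = (overruns > threshold).mean()

print(f"P(overrun > {threshold}%), normal fit: {p_normal:.4f}")
print(f"P(overrun > {threshold}%), empirical:  {p_empirical:.4f}")
# The normal fit typically reports a tail probability several times
# smaller than the empirical one.
```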

6. Failing to update risks in real time: Risks are rarely static – they evolve over time, influenced by circumstances and events both within and outside the project. For example, the acquisition of a key vendor by a mega-corporation is likely to affect the delivery of that module you are waiting on – and quite likely in an adverse way. Such a change in risk is obvious; there may be many that aren’t. Consequently, project managers need to reevaluate and update risks periodically (the sketch below shows one simple quantitative way of doing so). To be fair, this is a point that most textbooks make – but it is advice that is not followed as often as it should be.
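
For those who like to see the updating done quantitatively, here is a minimal Bayes’ rule sketch. All the probabilities are invented for illustration – the point is simply that new information (the acquisition) should change the risk estimate:

```python
# Minimal Bayes' rule sketch (hypothetical numbers): revising the
# probability that a vendor module will be late, given the acquisition.

p_late = 0.20               # prior: P(module is late)
p_acq_given_late = 0.50     # P(this kind of disruption | module late)
p_acq_given_on_time = 0.10  # P(this kind of disruption | module on time)

# P(acquisition-style disruption), via the law of total probability
p_acq = p_acq_given_late * p_late + p_acq_given_on_time * (1 - p_late)

# Bayes' rule: P(late | acquisition)
p_late_updated = p_acq_given_late * p_late / p_acq

print(f"P(late) before the news: {p_late:.2f}")          # 0.20
print(f"P(late) after the news:  {p_late_updated:.2f}")  # 0.56
```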

This brings me to the end of my (subjective) list of risk analysis pitfalls. Regular readers of this blog will have noticed that some of the points made in this post are similar to the ones I made in my post on estimation errors. This is no surprise: risk analysis and project estimation are activities that deal with an uncertain future, so it is to be expected that they have common problems and pitfalls. One could generalize this point:  any activity that involves gazing into a murky crystal ball will be plagued by similar problems.

Written by K

June 2, 2011 at 10:21 pm

3 Responses



  1. >>4. Ignoring known risks
    Hello Kailash,
    Have you checked out Slavoj Žižek? This Lacanian added a fourth class to Donald Rumsfeld’s triad of risks (known knowns, known unknowns and unknown unknowns): the unknown knowns, those things we KNOW about but that we won’t map.
    It is much like the idealisation process, where you subtract some undesirable/unbearable characteristics from a subject. Say, you think of a girl in terms of body shape, eye colour and hair, but you don’t map her digestive processes or skin fat, for instance.
    Idealisation is great, however not OK for managerial purposes.
    I mean, maybe this very item, “ignoring known risks”, is harder than it seems at first sight…
    Tum chaló!


    Ricardo Guido Lavalle

    June 7, 2018 at 4:28 am

