Eight to Late

Sensemaking and Analytics for Organizations

The failure of risk management: a book review


Introduction

Any future-directed activity has a degree of uncertainty, and uncertainty implies risk. Bad stuff happens – anticipated events don’t unfold as planned and unanticipated events occur. The main function of risk management is to deal with this negative aspect of uncertainty. The events of the last few years suggest that risk management as practiced in many organisations isn’t working. Douglas Hubbard’s book, The Failure of Risk Management – Why it’s Broken and How to Fix It, discusses why many commonly used risk management practices are flawed and what needs to be done to fix them. This post is a summary and review of the book.

Interestingly, Hubbard began writing the book well before the financial crisis of 2008 began to unfold.  So although he discusses matters pertaining to risk management in finance, the book has a much broader scope. For instance, it will be of interest to project and  program/portfolio management professionals because many of the flawed risk management practices that Hubbard mentions are often used in project risk management.

The book is divided into three parts: the first part introduces the crisis in risk management; the second deals with why some popular risk management practices are flawed; the third discusses what needs to be done to fix these.  My review covers the main points of each section in roughly the same order as they appear in the book.

The crisis in risk management

There are several risk management methodologies and techniques in use; a quick search will reveal some of them. Hubbard begins his book by asking the following simple questions about these:

  1. Do these risk management methods work?
  2. Would any organisation that uses these techniques know if they didn’t work?
  3. What would be the consequences if they didn’t work?

His contention is that for most organisations the answers to the first two questions are negative. To answer the third question, he gives the example of the crash of United Flight 232 in 1989. The crash was attributed to the simultaneous failure of three independent (and redundant) hydraulic systems. This happened because the systems were located at the rear of the plane and debris from a damaged turbine cut lines to all of them. This is an example of common mode failure – a single event causing multiple systems to fail. The probability of such an event occurring was estimated to be less than one in a billion. However, the reason the turbine broke up was that it hadn’t been inspected properly (i.e. human error). The probability estimate hadn’t considered the possibility of human error, which is far more likely than one in a billion. Hubbard uses this example to make the point that a weak risk management methodology can have huge consequences.
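
To get a feel for the numbers, here is a small back-of-the-envelope calculation in Python. The probabilities are purely illustrative (they are not figures from the accident investigation or from the book); the point is simply that a rare common cause can swamp an astronomically small independent-failure estimate.

    # Illustrative only: how a common-mode failure dominates a risk estimate.
    # Assume each of three redundant systems fails independently with
    # probability p per flight, and a single common cause (e.g. debris from an
    # uninspected turbine) disables all three with probability q.
    p = 1e-3   # assumed per-flight failure probability of one system
    q = 1e-5   # assumed probability of a common cause disabling all three

    p_independent = p ** 3            # all three fail independently: 1e-9
    p_total = q + (1 - q) * p ** 3    # the common cause dominates: ~1e-5

    print(f"Independent-failure estimate: {p_independent:.1e}")
    print(f"Estimate including common cause: {p_total:.1e}")

Even a one-in-a-hundred-thousand common cause makes the naive one-in-a-billion estimate wrong by four orders of magnitude.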

Following a very brief history of risk management from historical times to the present, Hubbard presents a list of common methods of risk management. These are:

  1. Expert intuition – essentially based on “gut feeling”.
  2. Expert audit – based on the expert intuition of independent consultants. Typically involves the development of checklists and also uses stratification methods (see next point).
  3. Simple stratification methods – risk matrices are the canonical example.
  4. Weighted scores – scores are assigned for different criteria (usually by expert intuition) and then weighted according to the perceived importance of each criterion.
  5. Non-probabilistic financial analysis – techniques such as computing the financial consequences of best and worst case scenarios.
  6. Calculus of preferences – structured decision analysis techniques such as multi-attribute utility theory and the analytic hierarchy process. These techniques are based on expert judgements. However, in cases where multiple judgements are involved, these techniques ensure that the judgements are logically consistent (i.e. do not contradict the principles of logic).
  7. Probabilistic models – involves building probabilistic models of risk events. Probabilities can be based on historical data, empirical observation or even intuition. The book essentially builds a case for evaluating risks using probabilistic models, and provides advice on how these should be built.

The book also discusses the state of risk management practice (at the end of 2008) as assessed by surveys carried out by The Economist, Protiviti and Aon Corporation. Hubbard notes that the surveys are based largely on self-assessments of risk management effectiveness. One cannot place much confidence in these because self-assessments of risk are subject to well-known psychological effects such as cognitive biases (tendencies to base judgements on flawed perceptions) and the Dunning-Kruger effect (overconfidence in one’s abilities). The acid test for any assessment is whether or not it uses sound quantitative measures. Many of the firms surveyed fail on this count: they do not quantify risks as well as they claim they do. Assigning weighted scores to qualitative judgements does not count as a sound quantitative technique – more on this later.

So, what are some good ways of measuring the effectiveness of risk management? Hubbard lists the following:

  1. Statistics based on large samples – the use of this depends on the availability of historical or other data that is similar to the situation at hand.
  2. Direct evidence – this is where the risk management technique actually finds some problem that would not have been found otherwise. For example, an audit that unearths dubious financial practices.
  3. Component testing – even if one isn’t able to test the method end-to-end, it may be possible to test specific components that make up the method. For example, if the method uses computer simulations, it may be possible to validate the simulations by applying them to known situations.
  4. Check of completeness – organisations need to ensure that their risk management methods cover the entire spectrum of risks, else there’s a danger that mitigating one risk may increase the probability of another.  Further, as Hubbard states, “A risk that’s not even on the radar cannot be managed at all.” As far as completeness is concerned, there are four perspectives that need to be taken into account. These are:
    1. Internal completeness – covering all parts of the organisation
    2. External completeness – covering all external entities that the organisation interacts with.
    3. Historical completeness – this involves covering worst case scenarios and historical data.
    4. Combinatorial completeness – this involves considering combinations of events that may occur together, for example those that may lead to the kind of common-mode failure discussed earlier.

Finally, Hubbard closes the first section with the observation that it is better not to use any formal methodology than to use one that is flawed. Why? Because a flawed methodology can lead to an incorrect decision being made  with high confidence.

Why it’s broken

Hubbard begins this section by identifying the four major players in the risk management game. These are:

  1. Actuaries:  These are perhaps the first modern professional risk managers. They use quantitative methods to manage risks in the insurance and pension industry. Although the methods actuaries use are generally sound, the profession is slow to pick up new techniques. Further, many of the investment decisions that insurance companies make do not come under the purview of actuaries. So, actuaries typically do not cover the entire spectrum of organisational risks.
  2. Physicists and mathematicians: Many rigorous risk management techniques came out of statistical research done during the Second World War. Hubbard therefore calls this group War Quants. One of the notable techniques to come out of this effort is the Monte Carlo Method – originally proposed by Nick Metropolis, John von Neumann and Stanislaw Ulam as a technique to calculate the averaged trajectories of neutrons in fissile material (see this article by Nick Metropolis for a first-person account of how the method was developed). Hubbard believes that Monte Carlo simulations offer a sound, general technique for quantitative risk analysis. Consequently he spends a fair few pages discussing these methods, albeit at a very basic level. More about this later.
  3. Economists:  Risk analysts in investment firms often use quantitative techniques from economics.  Popular techniques include modern portfolio theory and models from options theory (such as the Black-Scholes model) . The problem is that these models are often based on questionable assumptions. For example, the Black-Scholes model assumes that the rate of return on a stock is normally distributed (i.e.  its value is lognormally distributed) – an assumption that’s demonstrably incorrect as witnessed by the events of the last few years .  Another way in which economics plays a role in risk management is through behavioural studies,  in particular the recognition that decisions regarding future events (be they risks or stock prices) are subject to cognitive biases. Hubbard suggests that the role of cognitive biases in risk management has been consistently overlooked. See my post entitled Cognitive biases as meta-risks and its follow-up for more on this point.
  4. Management consultants: In Hubbard’s view, management consultants and standards institutes are largely responsible for many of the ad-hoc approaches to risk management. A particular favourite of these folks is the ad-hoc scoring method, in which risks are ordered on the basis of subjective criteria. The scores assigned to risks are thus subject to cognitive bias. Even worse, some of the tools used in scoring can end up ordering risks incorrectly. Bottom line: many of the risk analysis techniques used by consultants and standards institutes have no justification.

Following the discussion of the main players in the risk arena, Hubbard discusses the confusion associated with the definition of risk. There are a plethora of definitions of risk, most of which originated in academia. Hubbard shows how some of these contradict each other while others are downright non-intuitive and incorrect. In doing so, he clarifies some of the academic and professional terminology around risk. As an example, he takes exception to the notion of risk as a “good thing” – as in the PMI definition, which views risk as  “an uncertain event or condition that, if it occurs, has a positive or negative effect on a project objective.”  This definition contradicts common (dictionary) usage of the term risk (which generally includes only bad stuff).  Hubbard’s opinion on this may raise a few eyebrows (and hackles!) in project management circles, but I reckon he has a point.

In my opinion, the most important sections of the book are chapters 6 and 7, where Hubbard discusses why “expert knowledge and opinions” (favoured by standards and methodologies) are flawed and why a very popular scoring technique (the risk matrix) is “worse than useless.”  See my posts on the limitations of scoring techniques and Cox’s risk matrix theorem for detailed discussions of these points.
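
To see how a risk matrix can get the ordering wrong, consider the following toy example (the bin boundaries, scoring rule and risk figures below are my own assumptions, not taken from the book): a coarse probability-impact matrix can rank a risk with a smaller expected loss above one with a larger expected loss, purely because of where the two happen to fall relative to the bin boundaries.

    # Illustrative only: a 3x3 risk matrix ranking risks in the opposite
    # order to their expected loss. Bins and scoring rule are assumptions.
    def bin_index(value, thresholds):
        """Return 1, 2 or 3 depending on which bin the value falls into."""
        low, high = thresholds
        return 1 if value < low else 2 if value < high else 3

    PROB_BINS = (0.33, 0.66)          # low / medium / high probability
    IMPACT_BINS = (100_000, 500_000)  # low / medium / high impact ($)

    risks = {
        "A": {"prob": 0.30, "impact": 900_000},  # expected loss $270,000
        "B": {"prob": 0.40, "impact": 150_000},  # expected loss  $60,000
    }

    for name, r in risks.items():
        score = bin_index(r["prob"], PROB_BINS) * bin_index(r["impact"], IMPACT_BINS)
        loss = r["prob"] * r["impact"]
        print(f"Risk {name}: matrix score {score}, expected loss ${loss:,.0f}")

Risk B gets the higher matrix score (4 versus 3) even though its expected loss is less than a quarter of Risk A’s – exactly the kind of ordering error that Cox’s risk matrix theorem formalises.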

A major problem with expert estimates is overconfidence. To overcome this, Hubbard advocates using calibrated probability assessments to quantify analysts’ abilities to make estimates. Calibration assessments involve getting analysts to answer trivia questions and eliciting confidence intervals for each answer. The confidence intervals are then checked against the proportion of correct answers. Essentially, this assesses experts’ abilities to estimate by tracking how often they are right. It has been found that people can improve their ability to make subjective estimates through calibration training – i.e. repeated calibration testing followed by feedback. See this site for more on probability calibration.
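
As a minimal sketch of what such an assessment measures (the analyst’s intervals below are invented for illustration; the trivia answers themselves are real), one simply compares the stated confidence level with the observed hit rate of the intervals:

    # Minimal sketch (assumed data): scoring a calibration exercise in which
    # an analyst gives a 90% confidence interval for each trivia question.
    # A well-calibrated analyst's intervals should contain the true answer
    # roughly 90% of the time; overconfident analysts score much lower.
    answers = [
        # (lower bound, upper bound, true value)
        (1900, 1920, 1912),   # e.g. "In what year did the Titanic sink?"
        (300, 600, 828),      # e.g. "How tall is the Burj Khalifa, in metres?"
        (5, 15, 9),
        (1000, 3000, 6371),   # e.g. "What is the Earth's mean radius, in km?"
        (50, 90, 81),
    ]

    hits = sum(lo <= truth <= hi for lo, hi, truth in answers)
    print(f"Stated confidence: 90%, observed hit rate: {hits / len(answers):.0%}")

A persistent gap between the stated confidence and the observed hit rate is evidence of overconfidence, and a signal that further calibration training is needed.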

Next Hubbard tackles several “red herring” arguments that are commonly offered as reasons not to manage risks using rigorous quantitative methods.  Among these are arguments that quantitative risk analysis is impossible because:

  1. Unexpected events cannot be predicted.
  2. Risks cannot be measured accurately.

Hubbard states that the first objection is invalid: although some events (such as spectacular stockmarket crashes) may have been overlooked by models, this doesn’t prove that quantitative risk analysis as a whole is flawed. As he discusses later in the book, many models go wrong by assuming Gaussian probability distributions where fat-tailed ones would be more appropriate. Of course, given limited data it is difficult to figure out which distribution is the right one. So, although Hubbard’s argument is correct, it offers little comfort to the analyst who has to model events before they occur.
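
The practical difference between the two assumptions is easy to demonstrate with a quick calculation. The following sketch (using SciPy; the choice of a Student-t distribution with three degrees of freedom is illustrative, not a recommendation) compares the probability that a normal model and a fat-tailed model assign to a “4-sigma” event:

    # Illustrative only: a Gaussian model can grossly understate tail risk
    # relative to a fat-tailed alternative.
    from scipy.stats import norm, t

    threshold = 4.0  # a "4-sigma" loss, in units of standard deviations

    p_normal = norm.sf(threshold)   # P(loss > 4 sigma) under a normal model
    p_fat = t.sf(threshold, df=3)   # same tail under a Student-t, 3 d.o.f.

    print(f"Normal model:     {p_normal:.2e} (about 1 day in {1 / p_normal:,.0f})")
    print(f"Fat-tailed model: {p_fat:.2e} (about 1 day in {1 / p_fat:,.0f})")

The normal model treats a 4-sigma loss as roughly a one-in-thirty-thousand-day event; the fat-tailed model puts it closer to one day in seventy. Which of the two is appropriate can only be settled by looking at the data – which is precisely the difficulty when data is limited.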

As far as the second is concerned, Hubbard has written another book on how just about any business variable (even intangible ones) can be measured. The book makes a persuasive case that most quantities of interest can be measured, but there are difficulties. First, figuring out the factors that affect a variable is not a straightforward task. It depends, among other things, on the availability of reliable data and the analyst’s experience. Second, much depends on the judgement of the analyst, and such judgements are subject to bias. Although calibration may help reduce certain biases such as overconfidence, it is by no means a panacea for all biases. Third, risk-related measurements generally involve events that are yet to occur. Consequently, such measurements are based on incomplete information. To make progress one often has to make additional assumptions which may not be justifiable a priori.

Hubbard is a strong advocate for quantitative techniques such as Monte Carlo simulations in managing risks. However,  he believes that they are often used incorrectly.  Specifically:

  1. They are often used without empirical data or validation – i.e. their inputs and results are not tested through observation.
  2. They are generally used piecemeal – i.e. used in some parts of an organisation only, and often to manage low-level, operational risks.
  3. They frequently focus on variables that are not important (because these are easier to measure) rather than those that are important. Hubbard calls this perverse occurrence measurement inversion. He contends that analysts often exclude the most important variables because these are considered to be “too uncertain.”
  4. They use inappropriate probability distributions. The Normal distribution (or bell curve) is not always appropriate. For example, see my posts on the inherent uncertainty of project task estimates for an intuitive discussion of the form of the probability distribution for project task durations.
  5. They do not account for correlations between variables. Hubbard contends that many analysts simply ignore correlations between risk variables (i.e. they treat variables as independent when they actually aren’t). This almost always leads to an underestimation of risk because correlations can cause feedback effects and common mode failures.

Hubbard dismisses the argument that rigorous quantitative methods such as Monte Carlo are “too hard.” I agree: the principles behind Monte Carlo techniques aren’t hard to follow – and I take the opportunity to plug my article entitled An introduction to Monte Carlo simulations of project tasks 🙂. As far as practice is concerned, there are several commercially available tools that automate much of the mathematical heavy-lifting. I won’t recommend any, but a search using the key phrase monte carlo simulation tool will reveal many.
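
To illustrate both points – that the principle is simple, and that ignoring correlations understates risk (point 5 above) – here is a minimal sketch of a two-variable Monte Carlo simulation. All parameters are assumed for illustration:

    # Minimal sketch (assumed parameters): the effect of ignoring correlation
    # between two uncertain cost items in a Monte Carlo simulation.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000
    mu, sigma = np.log(100), 0.3   # each cost item lognormal, median $100k

    def simulate_total(rho):
        """Total of two lognormal cost items with correlation rho on the log scale."""
        cov = [[sigma**2, rho * sigma**2], [rho * sigma**2, sigma**2]]
        logs = rng.multivariate_normal([mu, mu], cov, size=n)
        return np.exp(logs).sum(axis=1)

    budget = 280  # $k
    for rho in (0.0, 0.8):
        total = simulate_total(rho)
        print(f"rho = {rho}: P(total cost > ${budget}k) = {np.mean(total > budget):.1%}")

In this toy example, treating the two items as independent makes the chance of blowing the budget look roughly half of what it is when the positive correlation is modelled – a concrete instance of the underestimation Hubbard warns about.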

How to Fix it

The last part of the book outlines Hubbard’s recommendations for improving the practice of risk management. Most of the material presented here draws on the previous section of the book. His main suggestions are to:

  1. Adopt the language, tools and philosophy of uncertain systems. To do this he recommends:
    • Using calibrated probabilities to express uncertainties. Hubbard believes that any person who makes estimates that will be used in models should be calibrated. He offers some suggestions on how people can improve their ability to estimate through calibration – discussed earlier and on this web site.
    • Employing quantitative modelling techniques to model risks. In particular, he advocates the use of Monte Carlo methods to model risks. He also provides a list of commercially available PC-based Monte Carlo tools. Hubbard makes the point that modelling forces analysts to decompose the systems  of interest and understand the relationships between their components (see point 2 below).
    • Developing an understanding of the basic rules of probability, including independent events, conditional probabilities and Bayes’ Theorem. He gives examples of situations in which these rules can help analysts extrapolate from the information they already have (a small worked example follows this list).

    To this, I would also add that it is important to understand the idea that an estimate isn’t a number, but a  probability distribution – i.e. a range of numbers, each with a probability attached to it.

  2. Build, validate and test models using reality as the ultimate arbiter. Models should be built iteratively, testing each assumption against observation. Further, models need to incorporate mechanisms (i.e. how and why the observations are what they are), not just raw observations. This is often hard to do, but at the very least models should incorporate correlations between variables.  Note that correlations are often (but not always!) indicative of an underlying mechanism. See this post for an introductory example of Monte Carlo simulation involving correlated variables.
  3. Lobby for risk management to be given appropriate visibility in organisations.
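
As promised above, here is a small worked example of the Bayes’ Theorem point under item 1 – updating the probability of a risk event when a warning indicator is observed. All the numbers are assumed for illustration:

    # Worked example (assumed numbers): Bayes' Theorem applied to a risk indicator.
    # A vendor delay (the risk) has a prior probability of 10%. A missed interim
    # milestone is observed 80% of the time when a delay is coming, but also
    # 20% of the time when it isn't (a false alarm).
    p_delay = 0.10               # prior P(delay)
    p_signal_given_delay = 0.80  # P(missed milestone | delay)
    p_signal_given_ok = 0.20     # P(missed milestone | no delay)

    p_signal = (p_signal_given_delay * p_delay
                + p_signal_given_ok * (1 - p_delay))
    p_delay_given_signal = p_signal_given_delay * p_delay / p_signal

    print(f"P(delay | missed milestone) = {p_delay_given_signal:.0%}")

The posterior works out to about 31%: the warning roughly triples the assessed probability of a delay, yet the delay is still far from certain – the sort of result that unaided intuition often gets badly wrong.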

In the penultimate chapter of the book, Hubbard fleshes out the characteristics or traits of good risk analysts. As he mentions several times in the book, risk analysis is an empirical science – it arises from experience. So, although the analytical and mathematical (modelling) aspects of risk are important, a good analyst must, above all, be an empiricist – i.e. believe that knowledge about risks can only come from observation of reality. In particular, testing models by seeing how well they match historical data, and tracking model predictions, are absolutely critical aspects of a risk analyst’s job. Unfortunately, many analysts do not measure the performance of their risk models. Hubbard offers some excellent suggestions on how analysts can refine and improve their models via observation.

Finally, Hubbard emphasises the importance of creating an organisation-wide approach to managing risks. This ensures that organisations will tackle the most important risks first, and that their risk management budgets will be spent in the most effective way. Many of the tools and approaches that he suggests in the book are most effective if they are used in a consistent way across the entire organisation. In reality, though, risk management languishes way down in the priorities of senior executives. Even those who profess to understand the importance of managing risks in a rigorous way rarely offer risk managers the organisational visibility and support they need to do their jobs.

Conclusion

Whew, that was quite a bit to go through, but for me it was worth it. Hubbard’s views impelled me to take a closer look at the foundations of project risk management and I learnt a great deal from doing so. Regular readers of this blog will have noticed that I have referenced the book (and some of the references therein) in a few of my articles on risk analysis.

I should add that I’ve never felt entirely comfortable with the risk management approaches advocated by project management methodologies. Hubbard’s book articulates the shortcomings of these approaches and offers solutions to fix them. Moreover, he does so in a way that is entertaining and accessible. If there is a gap, it is that he does not delve into the details of model building, but then his other book deals with this in some detail.

To summarise:  the book is a must read for anyone interested in risk management. It is  especially recommended for project professionals who manage risks using methods that  are advocated by project management standards and methodologies.

Written by K

February 11, 2010 at 10:11 pm

Responses


  1. Thanks for the detailed review of my book. (Google Alerts have been very handy in spotting new blog entries about my books.)

    I only have two comments about your review. Yes, I know that my proposed definition would raise eyebrows (and hackles) with many PM’s. But I assure you the definition used by PM’s raises eyebrows and hackles with actuaries and quants who do probabilistic risk analysis for nuclear power or oil exploration. My point was that they are not speaking the same language and that is a detriment to the broader application of risk management across disciplines. Also, there is already a widely used word for what PM’s want to call risk – it’s uncertainty. I think PM’s want to, for some reason, redefine risk analysis to mean what is now called decision analysis. Risk is certainly part of decision analysis. So is uncertain reward. There is no need to lump them into the same word.

    Second, where you said my recommendation to use historical data is “little comfort to the analyst who has to model events before they occur” misses my point. In the case of the financial crisis, better analysis of history would have helped analysts see the financial crisis before it occurred. A more complete historical analysis (not just going back 5 years) would have shown that this type of event was much more probable than their models indicated. I also was rejecting the idea that models are flawed merely because they didn’t predict things exactly. Remember, even intuition is a model and intuition didn’t predict the financial crisis exactly, either. The question is, over the long run, which model performs better because none of the models (intuition or quantitative simulations) predicted every event perfectly.

    Thanks again for the detailed review.
    Douglas W. Hubbard


    Douglas W. Hubbard

    February 12, 2010 at 12:25 am

  2. Thanks for taking the time to read my review. Before addressing your comments, I should reiterate that I really enjoyed the book and think it should be required reading for anyone involved in modelling and/or analysing risks.

    First, I agree with you about redefining risk and have said so in the review. Redefining terms that already have established meanings is generally not a good idea: it only causes confusion when communicating across (or even within) disciplines.

    Second, perhaps my choice of words where I say “little comfort…” wasn’t the best. What I’m getting at is that the analyst can never be certain that his or her model includes all relevant variables. Good (though incomplete) quantitative models are better than anything else we have, but one has to be very aware of the limits of one’s models.

    Thanks again for your comments.

    Regards,

    Kailash.


    K

    February 12, 2010 at 7:31 am

  3. And I do appreciate the review. An author always likes to see reviews that show evidence that the reviewer both read and understood the book. Clearly, you did.

    I consider my comments quite minor points about a review that really captures all of the ideas quite well. So, again, thanks for taking the time to both read the book and write such a detailed review.

    Doug Hubbard


    Douglas W. Hubbard

    February 16, 2010 at 2:27 am

  4. […] to risk. An uncertain event is a risk only if there is a potential loss or gain involved. See my review of Douglas Hubbard’s book on the failure of risk management for more on risk vs. […]


