Last week, my friend Paul Culmsee conducted an internal workshop in my organisation on the theme of collaborative problem solving. Dialogue mapping is one of the tools he introduced during the workshop. This piece, primarily intended as a follow-up for attendees, is an introduction to dialogue mapping via a vignette that illustrates its practice (see this post for another one). I’m publishing it here as I thought it might be useful for those who wish to understand what the technique is about.
Dialogue mapping uses a notation called Issue Based Information System (IBIS), which I have discussed at length in this post. For completeness, I’ll begin with a short introduction to the notation and then move on to the vignette.
A crash course in IBIS
The IBIS notation consists of the following three elements:
- Issues (or questions): these are the issues being debated. Typically, issues are framed as questions along the lines of “What should we do about X?”, where X is the issue of interest to a group. For example, in the case of a group of executives, X might be a rapidly changing market condition, whereas in the case of a group of IT people, X could be an ageing system that is hard to replace.
- Ideas (or positions): these are responses to questions. For example, one of the ideas offered by the IT group above might be to replace the said system with a newer one. Typically, the whole set of ideas that respond to an issue in a discussion represents the spectrum of participant perspectives on the issue.
- Arguments: these can be pros (arguments for) or cons (arguments against) an idea. The complete set of arguments that respond to an idea represents the multiplicity of viewpoints on it.
In Compendium, IBIS elements are represented as nodes as shown in Figure 1: issues are represented by blue-green question marks; positions by yellow light bulbs; pros by green + signs and cons by red – signs. Compendium supports a few other node types, but these are not part of the core IBIS notation. Nodes can be linked only in ways specified by the IBIS grammar as I discuss next.
The IBIS grammar can be summarized in three simple rules:
- Issues can be raised anew or can arise from other issues, positions or arguments. In other words, any IBIS element can be questioned. In Compendium notation: a question node can connect to any other IBIS node.
- Ideas can only respond to questions – i.e. in Compendium, “light bulb” nodes can only link to question nodes. The arrow pointing from the idea to the question depicts the “responds to” relationship.
- Arguments can only be associated with ideas – i.e. in Compendium, “+” and “–” nodes can only link to “light bulb” nodes (with arrows pointing to the latter).
The legal links are summarized in Figure 2 below.
…and that’s pretty much all there is to it.
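For readers who think in code, the grammar can be captured in a few lines. The Python sketch below is my own rendering of the three rules, not part of Compendium; the node type names are made up for illustration:

```python
# Legal IBIS links, keyed by the type of the node the arrow starts from.
# Rule 1: an issue (question) can question any element, including other issues.
# Rule 2: an idea can only respond to an issue.
# Rule 3: pros and cons can only attach to ideas.
LEGAL_LINKS = {
    "issue": {"issue", "idea", "pro", "con"},
    "idea": {"issue"},
    "pro": {"idea"},
    "con": {"idea"},
}

def link_is_legal(source_type: str, target_type: str) -> bool:
    """Return True if a node of source_type may point at a node of target_type."""
    return target_type in LEGAL_LINKS.get(source_type, set())
```

So `link_is_legal("idea", "issue")` holds (an idea responds to a question), while `link_is_legal("pro", "issue")` does not (arguments cannot attach to questions).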
The interesting (and powerful) aspect of IBIS is that the essence of any debate or discussion can be captured using these three elements. Let me try to convince you of this claim via a vignette from a discussion on risk.
The Risk – a Dialogue Mapping vignette
“Morning all,” said Rick, “I know you’re all busy people so I’d like to thank you for taking the time to attend this risk identification session for Project X. The objective is to list the risks that we might encounter on the project and see if we can identify possible mitigation strategies.”
He then asked if there were any questions. The head waggles around the room indicated there were none.
“Good. So here’s what we’ll do,” he continued. “I’d like you all to work in pairs and spend 10 minutes thinking of all possible risks and then another 5 minutes prioritising. Work with the person on your left. You can use the flipcharts in the breakout area at the back if you wish to.”
Twenty minutes later, most people were done and back in their seats.
“OK, it looks as though most people are done…Ah, Joe, Mike have you guys finished?” The two were still working on their flip-chart at the back.
“Yeah, be there in a sec,” replied Mike, as he tore off the flip-chart page.
“Alright,” continued Rick, after everyone had settled in. “What I’m going to do now is ask you all to list your top three risks. I’d also like you tell me why they are significant and your mitigation strategies for them.” He paused for a second and asked, “Everyone OK with that?”
Everyone nodded, except Helen, who asked, “isn’t it important that we document the discussion?”
“I’m glad you brought that up. I’ll make notes as we go along, and I’ll do it in a way that everyone can see what I’m writing. I’d like you all to correct me if you feel I haven’t understood what you’re saying. It is important that my notes capture your issues, ideas and arguments accurately.”
Rick turned on the data projector, fired up Compendium and started a new map. “Our aim today is to identify the most significant risks on the project – this is our root question” he said, as he created a question node. “OK, so who would like to start?”
“Sure, we’ll start,” said Joe easily. “Our top risk is that the schedule is too tight. We’ll hit the deadline only if everything goes well, and everyone knows that they never do.”
“OK,” said Rick, as he entered Joe and Mike’s risk as an idea connecting to the root question. “You’ve also mentioned a point that supports your contention that this is a significant risk – there is absolutely no buffer.” Rick typed this in as a pro connecting to the risk. He then looked up at Joe and asked, “have I understood you correctly?”
“Yes,” confirmed Joe.
“That’s pretty cool,” said Helen from the other end of the table, “I like the notation, it makes reasoning explicit. Oh, and I have another point in support of Joe and Mike’s risk – the deadline was imposed by management before the project was planned.”
Rick began to enter the point…
“Oooh, I’m not sure we should put that down,” interjected Rob from compliance. “I mean, there’s not much we can do about that can we?”
…Rick finished the point as Rob was speaking.
“I hear you Rob, but I think it is important we capture everything that is said,” said Helen.
“I disagree,” said Rob. “It will only annoy management.”
“Slow down guys,” said Rick. “I’m going to capture Rob’s objection as ‘this is a management-imposed constraint rather than a risk’. Are you OK with that, Rob?”
Rob nodded his assent.
“I think it is important we articulate what we really think, even if we can’t do anything about it,” continued Rick. “There’s no point going through this exercise if we don’t say what we really think. I want to stress this point, so I’m going to add honesty and openness as ground rules for the discussion. Since ground rules apply to the entire discussion, they connect directly to the primary issue being discussed.”
“OK, so any other points that anyone would like to add to the ones made so far?” queried Rick as he finished typing.
He looked up. Most of the people seated round the table shook their heads, indicating there weren’t any.
“We haven’t spoken about mitigation strategies. Any ideas?” asked Rick, as he created a question node marked “Mitigation?” connecting to the risk.
“Yeah well, we came up with one,” said Mike. “We think the only way to reduce the time pressure is to cut scope.”
“OK,” said Rick, entering the point as an idea connecting to the “Mitigation?” question. “Did you think about how you are going to do this?” He entered the question “How?” connecting to Mike’s point.
“That’s the problem,” said Joe, “I don’t know how we can convince management to cut scope.”
“Hmmm…I have an idea,” said Helen slowly…
“We’re all ears,” said Rick.
“…Well…you see a large chunk of time has been allocated for building real-time interfaces to assorted systems – HR, ERP etc. I don’t think these need to be real-time – they could be done monthly…and if that’s the case, we could schedule a simple job or even do them manually for the first few months. We can push those interfaces to phase 2 of the project, well into next year.”
There was a silence in the room as everyone pondered this point.
“You know, I think that might actually work, and would give us an extra month…may be even six weeks for the more important upstream stuff,” said Mike. “Great idea, Helen!”
“Can I summarise this point as – identify interfaces that can be delayed to phase 2?” asked Rick, as he began to type it in as a mitigation strategy. “…and if you and Mike are OK with it, I’m going to combine it with the ‘Cut Scope’ idea to save space.”
“Yep, that’s fine,” said Helen. Mike nodded OK.
Rick deleted the “How?” node connecting to the “Cut scope” idea, and edited the latter to capture Helen’s point.
“That’s great in theory, but who is going to talk to the affected departments? They will not be happy,” asserted Rob. One could always count on compliance to throw in a reality check.
“Good point,” said Rick as he typed that in as a con, “and I’ll take the responsibility of speaking to the department heads about this,” he continued, entering the idea into the map and marking it as an action point for himself. “Is there anything else that Joe, Mike…or anyone else would like to add here?” he asked, as he finished.
“Nope,” said Mike, “I’m good with that.”
“Yeah me too,” said Helen.
“I don’t have anything else to say about this point,” said Rob, “but it would be great if you could give us a tutorial on this technique. I think it could be useful to summarise the rationale behind our compliance regulations. Folks have been complaining that they don’t understand the reasoning behind some of our rules and regulations.”
“I’d be interested in that too,” said Helen, “I could use it to clarify user requirements.”
“I’d be happy to do a session on the IBIS notation and dialogue mapping next week. I’ll check your availability and send an invite out… but for now, let’s focus on the task at hand.”
The discussion continued…but the fly on the wall was no longer there to record it.
I hope this little vignette illustrates how IBIS and dialogue mapping can aid collaborative decision-making / problem solving by making diverse viewpoints explicit. That said, this is a story, and the problem with stories is that things go the way the author wants them to. In real life, conversations can go off on unexpected tangents, making them really hard to map. So, although it is important to gain expertise in using the software, it is far more important to practice mapping live conversations. The latter is an art that requires considerable practice. I recommend reading Paul Culmsee’s series on the practice of dialogue mapping or <advertisement> Chapter 14 of The Heretic’s Guide to Best Practices</advertisement> for more on this point.
That said, there are many other ways in which IBIS can be used that do not require as much skill. Some of these include mapping the central points in written arguments (what’s sometimes called issue mapping) and even decisions on personal matters.
To sum up: IBIS is a powerful means to clarify options and lay them out in an easy-to-follow visual format. Often this is all that is required to catalyse a group decision.
Introduction – uncertainty and decision-making
Managing uncertainty – deciding what to do in the absence of reliable information – is a significant part of project management and many other managerial roles. When put this way, it is clear that managing uncertainty is primarily a decision-making problem. Indeed, as I will discuss shortly, the main difficulties associated with decision-making are related to specific types of uncertainties that we tend to overlook.
Let’s begin by looking at the standard approach to decision-making, which goes as follows:
- Define the decision problem.
- Identify options.
- Develop criteria for rating options.
- Evaluate options against criteria.
- Select the top rated option.
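For what it’s worth, steps 3 to 5 of this process are easy enough to mechanise. The Python sketch below does exactly that; the criteria, weights and ratings are made-up numbers for a hypothetical job-switch decision, not a recommendation:

```python
def rate_options(options, criteria, scores):
    """Rate each option as the weighted sum of its scores against the criteria,
    then select the top-rated one (steps 3-5 of the standard process)."""
    totals = {
        opt: sum(weight * scores[(opt, crit)] for crit, weight in criteria.items())
        for opt in options
    }
    best = max(totals, key=totals.get)
    return best, totals

# Hypothetical criteria (with weights) and ratings on a 1-10 scale.
criteria = {"salary": 0.5, "growth": 0.3, "stability": 0.2}
scores = {
    ("stay", "salary"): 6, ("stay", "growth"): 5, ("stay", "stability"): 9,
    ("switch", "salary"): 8, ("switch", "growth"): 8, ("switch", "stability"): 5,
}
best, totals = rate_options(["stay", "switch"], criteria, scores)  # best == "switch"
```

Of course, as the rest of this section argues, the hard part is everything the code takes as given: the options, the criteria and the numbers themselves.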
As I have pointed out in this post, the above process is too simplistic for some of the complex, multifaceted decisions that we face in life and at work (switching jobs, buying a house or starting a business venture, for example). In such cases:
- It may be difficult to identify all options.
- It is often impossible to rate options meaningfully because of information asymmetry – we know more about some options than others. For example, when choosing whether or not to switch jobs, we know more about our current situation than the new one.
- Even when ratings are possible, different people will rate options differently – i.e. different people invariably have different preferences for a given outcome. This makes it difficult to reach a consensus.
Regular readers of this blog will know that the points listed above are characteristics of wicked problems. It is fair to say that in recent years, a general awareness of the ubiquity of wicked problems has led to an appreciation of the limits of classical decision theory. (That said, it should be noted that academics have been aware of this for a long time: Horst Rittel’s classic paper on the dilemmas of planning, written in 1973, is a good example. And there are many others that predate it.)
In this post I look into some hard-to-tackle aspects of uncertainty by focusing on the aforementioned shortcomings of classical decision theory. My discussion draws on a paper by Richard Bradley and Mareile Drechsler.
This article is organised as follows: I first present an overview of the standard approach to dealing with uncertainty and discuss its limitations. Following this, I elaborate on three types of uncertainty that are discussed in the paper.
Background – the standard view of uncertainty
The standard approach to dealing with uncertainty prescribes the following steps:
- Figure out all possible states (outcomes).
- Enumerate the actions that are possible.
- Figure out the consequences of actions for all possible states.
- Attach a value (aka preference) to each consequence.
- Select the course of action that maximizes value (based on an appropriately defined measure, making sure to factor in the likelihood of achieving the desired consequence).
(Note the close parallels between this process and the standard approach to decision-making outlined earlier.)
To keep things concrete it is useful to see how this process would work in a simple real-life example. Bradley and Drechsler quote the following example from Savage’s book that does just that:
…[consider] someone who is cooking an omelet and has already broken five good eggs into a bowl, but is uncertain whether the sixth egg is good or rotten. In deciding whether to break the sixth egg into the bowl containing the first five eggs, to break it into a separate saucer, or to throw it away, the only question this agent has to grapple with is whether the last egg is good or rotten, for she knows both what the consequence of breaking the egg is in each eventuality and how desirable each consequence is. And in general it would seem that for Savage once the agent has settled the question of how probable each state of the world is, she can determine what to do simply by averaging the utilities (Note: utility is basically a mathematical expression of preference or value) of each action’s consequences by the probabilities of the states of the world in which they are realised…
In this example there are two states (egg is good, egg is rotten), three actions (break egg into bowl, break egg into a separate saucer to check if it is rotten, throw egg away without checking) and three consequences (spoil all eggs; save eggs in bowl and save all eggs if the last egg is not rotten; save eggs in bowl and potentially waste the last egg). The problem then boils down to figuring out our preferences for the options (in some quantitative way) and the probability of the two states. At first sight, Savage’s approach seems like a reasonable way to deal with uncertainty. However, a closer look reveals major problems.
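The averaging Savage prescribes is simple enough to spell out in code. In this Python sketch the utilities and the probability of a rotten egg are illustrative numbers I have made up; Savage’s argument does not depend on the particular values:

```python
def expected_utility(action, p_states, utilities):
    """Average the utility of an action's consequences over the states of the world."""
    return sum(p * utilities[(action, state)] for state, p in p_states.items())

p_states = {"good": 0.9, "rotten": 0.1}  # assumed probabilities for the sixth egg
utilities = {
    ("break_into_bowl", "good"): 6,    # a six-egg omelet
    ("break_into_bowl", "rotten"): 0,  # all six eggs spoiled
    ("use_saucer", "good"): 5,         # six-egg omelet, plus a saucer to wash
    ("use_saucer", "rotten"): 4,       # the five good eggs saved
    ("throw_away", "good"): 3,         # five-egg omelet, one good egg wasted
    ("throw_away", "rotten"): 4,       # five-egg omelet, nothing lost
}
actions = ["break_into_bowl", "use_saucer", "throw_away"]
best = max(actions, key=lambda a: expected_utility(a, p_states, utilities))
# With these numbers, breaking the egg straight into the bowl wins (EU 5.4 vs 4.9 and 3.1).
```

Change the probability of a rotten egg or the utilities, and a different action wins – which is precisely why everything hinges on being able to pin those numbers down.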
Problems with the standard approach
Unlike the omelet example, in real life situations it is often difficult to enumerate all possible states or foresee all consequences of an action. Further, even if states and consequences are known, we may not know what value to attach to them – that is, we may not be able to determine our preferences for those consequences unambiguously. Even in situations where we can, our preferences may be subject to change – witness the not uncommon situation where lottery winners end up wishing they’d never won. The standard prescription therefore works only in situations where all states, actions and consequences are known – i.e. tame situations, as opposed to wicked ones.
Before going any further, I should mention that Savage was cognisant of the limitations of his approach. He pointed out that it works only in what he called small world situations – i.e. situations in which it is possible to enumerate and evaluate all options. As Bradley and Drechsler put it,
Savage was well aware that not all decision problems could be represented in a small world decision matrix. In Savage’s words, you are in a small world if you can “look before you leap”; that is, it is feasible to enumerate all contingencies and you know what the consequences of actions are. You are in a grand world when you must “cross the bridge when you come to it”, either because you are not sure what the possible states of the world, actions and/or consequences are…
In the following three sections I elaborate on the complications mentioned above emphasizing, once again, that many real life situations are prone to such complications.
State space uncertainty
The standard view of uncertainty assumes that all possible states are given as a part of the problem definition – as in the omelet example discussed earlier. In real life, however, this is often not the case.
Bradley and Drechsler identify two distinct cases of state space uncertainty. The first one is when we are unaware that we’re missing states and/or consequences. For example, organisations that embark on a restructuring program may be so focused on the cost-related consequences that they overlook factors such as loss of morale and/or loss of talent (and the consequent loss of productivity). The second, somewhat rarer, case is when we are aware that we might be missing something but don’t quite know what it is. All one can do here is make appropriate contingency plans based on guesses regarding possible consequences.
Figuring out possible states and consequences is largely a matter of scenario envisioning based on knowledge and practical experience. It stands to reason that this is best done by leveraging the collective experience and wisdom of people from diverse backgrounds. This is pretty much the rationale behind collective decision-making techniques such as Dialogue Mapping.
Option uncertainty
The standard approach to tackling uncertainty assumes that the connection between actions and consequences is well defined. This is often not the case, particularly for wicked problems. For example, as I have discussed in this post, enterprise transformation programs with well-defined and articulated objectives often end up having a host of unintended consequences. At an even more basic level, in some situations it can be difficult to identify sensible options.
Option uncertainty is a fairly common feature in real-life decisions. As Bradley and Drechsler put it:
Option uncertainty is an endemic feature of decision making, for it is rarely the case that we can predict consequences of our actions in every detail (alternatively, be sure what our options are). And although in many decision situations, it won’t matter too much what the precise consequence of each action is, in some the details will matter very much.
…and unfortunately, the cases in which the details matter are precisely those problems in which they are the hardest to figure out – i.e. in wicked problems.
Preference (or ethical) uncertainty
An implicit assumption in the standard approach is that once states and consequences are known, people will be able to figure out their relative preferences for them unambiguously. This assumption is incorrect, as there are at least two situations in which people will not be able to determine their preferences. Firstly, there may be a lack of factual information about one or more of the states. Secondly, even when one is able to get the required facts, it can be hard to figure out how one would value the consequences.
A common example of the aforementioned situation is the job switch dilemma. In many (most?) cases in which one is debating whether or not to switch jobs, one lacks enough factual information about the new job – for example, the new boss’ temperament, the work environment etc. Further, even if one is able to get the required information, it is impossible to know how it would be to actually work there. Most people would have struggled with this kind of uncertainty at some point in their lives. Bradley and Drechsler term this ethical uncertainty. I prefer the term preference uncertainty, as it has more to do with preferences than ethics.
Some general remarks
The first point to note is that the three types of uncertainty noted above map exactly on to the three shortcomings of classical decision theory discussed in the introduction. This suggests a connection between the types of uncertainty and wicked problems. Indeed, most wicked problems are exemplars of one or more of the above uncertainty types. For example, the paradigm-defining super-wicked problem of climate change displays all three types of uncertainty.
The three types of uncertainty discussed above are overlooked by the standard approach to managing uncertainty. This happens in a number of ways. Here are two common ones:
- The standard approach assumes that all uncertainties can somehow be incorporated into a single probability function describing all possible states and/or consequences. This is clearly false for state space and option uncertainty: it is impossible to define a sensible probability function when one is uncertain about the possible states and/or outcomes.
- The standard approach assumes that preferences for different consequences are known. This is clearly not true in the case of preference uncertainty…and even for state space and option uncertainty for that matter.
In their paper, Bradley and Drechsler arrive at these three types of uncertainty from considerations different from the ones I have used above. Their approach, while more general, is considerably more involved. Nevertheless, I would recommend that interested readers take a look at it, because the authors cover a lot of things that I have glossed over or ignored altogether.
Just as an example, they show how the aforementioned uncertainties can be reduced. There is a price to be paid, however: any reduction in uncertainty results in an increase in its severity. An example might help illustrate how this comes about. Consider a situation of state space uncertainty. One can reduce – or even remove – this by defining a catch-all state (labelled, say, “all other outcomes”). It is easy to see that although one has formally reduced state space uncertainty to zero, one has increased the severity of the uncertainty, because the catch-all state is but a reflection of our ignorance and our refusal to do anything about it!
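A few lines of code make the sleight of hand explicit. The states and probabilities below are invented purely for illustration:

```python
# Probabilities over the states we can actually envision (illustrative numbers).
known_states = {"on_time": 0.5, "minor_delay": 0.3}

# Formally "complete" the state space by sweeping the unassigned probability
# mass into a catch-all state.
completed = dict(known_states)
completed["all_other_outcomes"] = 1.0 - sum(known_states.values())

# The distribution now sums to 1, so state space uncertainty is formally zero -
# but a fifth of the probability mass sits in a state that says nothing about
# what might actually happen. The catch-all is a label for our ignorance.
```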
There are many more implications of the above. However, I’ll point out just one more that serves to illustrate the very practical implications of these uncertainties. In a post on the shortcomings of enterprise risk management, I pointed out that the notion of an organisation-wide risk appetite is problematic because it is impossible to capture the diversity of viewpoints through such a construct. Moreover, rule or process based approaches to risk management tend to focus only on those uncertainties that can be quantified, or conversely they assume that all uncertainties can somehow be clumped into a single probability distribution as prescribed by the standard approach to managing uncertainty. The three types of uncertainty discussed above highlight the limitations of such an approach to enterprise risk.
The standard approach to managing uncertainty assumes that all possible states, actions and consequences are known or can be determined. In this post I have discussed why this is not always so. In particular, it often happens that we do not know all possible outcomes (state space uncertainty), consequences (option uncertainty) and/or our preferences for consequences (preference or ethical uncertainty).
As I was reading the paper, I felt the authors were articulating issues that I had often felt uneasy about but chose to overlook (suppress?). Generalising from one’s own experience is always a fraught affair, but I reckon we tend to deny these uncertainties because they are inconvenient – that is, they are difficult if not impossible to deal with within the procrustean framework of the standard approach. What is needed as a corrective is a recognition that the pseudo-quantitative approach commonly used to manage uncertainty may not be the panacea it is claimed to be. The first step towards doing this is to acknowledge the existence of the uncertainties that we (probably) overlook.
In their book, Waltzing with Bears, Tom DeMarco and Timothy Lister coined the phrase, “risk management is project management for adults”. Twenty years on, it appears that their words have been taken seriously: risk management now occupies a prominent place in BOKs, and has also become a key element of project management practice.
On the other hand, if the evidence is to be believed (as per the oft-quoted Chaos Report, for example), IT projects continue to fail at an alarming rate. This is curious, because one would have expected a greater focus on risk management to have resulted in better outcomes. So, is it possible that risk management (as it is currently preached and practiced in IT project management) cannot address certain risks…or, worse, that certain risks are simply not recognized as risks at all?
Some time ago, I came across a paper by Richard Barber that sheds some light on this very issue. This post elaborates on the nature and importance of such “hidden” risks by drawing on Barber’s work as well as my experiences and those of my colleagues with whom I have discussed the paper.
What are internally generated risks?
The standard approach to risk is based on the occurrence of events. Specifically, risk management is concerned with identifying potential adverse events and taking steps to reduce either their probability of occurrence or their impact. However, as Barber points out, this is a limited view of risk because it overlooks adverse conditions that are built into the project environment. A good example of this is an organizational norm that centralizes decision making at the corporate or managerial level. Such a norm would discourage a project manager from taking appropriate action when confronted with an event that demands an on-the-spot decision. Clearly, it is wrong-headed to attribute the risk to the event because the risk actually has its origins in the norm. In other words, it is an internally generated risk.
(Note: the notion of an internally generated risk is akin to the risk as a pathogen concept that I discussed in this post many years ago.)
Barber defines an internally generated risk as one that has its origin within the project organisation or its host, and arises from [preexisting] rules, policies, processes, structures, actions, decisions, behaviours or cultures. Some other examples of such risks include:
- An overly bureaucratic PMO.
- An organizational culture that discourages collaboration between teams.
- An organizational structure that has multiple reporting lines – this is what I like to call a pseudo-matrix organization 🙂
These factors are similar to those that I described in my post on the systemic causes of project failure. Indeed, I am tempted to call these systemic risks because they are related to the entire system (project + organization). However, that term has already been appropriated by the financial risk community.
Since the term is relatively new, it is important to draw a distinction between internally generated risks and other types. This is easy to do for externally generated risks, which (by definition) have their origins outside the hosting organization. A good example of the latter is the risk of a vendor not delivering a module on time or, worse, going into receivership prior to delivering the code.
Finally, there are certain risks that are neither internally generated nor external. For example, using a new technology is inherently risky simply because it is new. Such a risk is inherent rather than internally generated or external.
Understanding the danger within
The author of the paper surveyed nine large projects with the intent of getting some insight into the nature of internally generated risks. The questions he attempted to address are the following:
- How common are these risks?
- How significant are they?
- How well are they managed?
- What is the relationship between the ability of an organization to manage such risks and the organisation’s project management maturity level (i.e. the maturity of its project management processes)?
Data was gathered through group workshops and one-on-one interviews in which the author asked a number of questions that were aimed at gaining insight into:
- The key difficulties that project managers encountered on the projects.
- What they perceived to be the main barriers to project success.
The aim of the one-on-one interviews was to allow for a more private setting in which sensitive issues (politics, dysfunctional PMOs and brain-dead rules / norms) could be freely discussed.
The data gathered was studied in detail, with the intent of identifying internally generated risks. The author describes the techniques he used to minimize subjectivity and to ensure that only significant risks were considered. I will omit these details here, and instead focus on his findings as they relate to the questions listed above.
Commonality of internally generated risks
Since organizational rules and norms are often flawed, one might expect that internally generated risks would be fairly common in projects. The author found that this was indeed the case with the projects he surveyed: in his words, the smallest number of non-trivial internally generated risks identified in any of the nine projects was 15, and the highest was 30! Note: the identification of non-trivial risks was done by eliminating those risks that a wide range of stakeholders agreed as being unimportant.
Unfortunately, he does not explicitly list the most common internally-generated risks that he found. However, there are a few that he names later in the article. These are:
- Resource allocation (see my article on the resource allocation syndrome for much more on this)
- Inadequate sponsorship (see my post on the systemic roots of project failure for more on this)
I suspect that experienced project managers would be able to name many more.
Significance of internally generated risks
Determining the significance of these risks is tricky because one has to figure out their probability of occurrence. The impact is much easier to get a handle on, as one has a pretty good idea of the consequences of such risks should they eventuate. (Question: What happens if there is inadequate sponsorship? Answer: the project is highly likely to fail!). The author attempted to get a qualitative handle on the probability of occurrence by asking relevant stakeholders to estimate the likelihood of occurrence. Based on the responses received, he found that a large fraction of the internally-generated risks are significant (high probability of occurrence and high impact).
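To make the classification concrete, here is a minimal probability-impact screen of the kind the author’s analysis implies. The example risks and the high/medium ratings are my own assumptions, not data from Barber’s paper:

```python
# Qualitative rating levels for probability and impact.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def is_significant(probability: str, impact: str) -> bool:
    """Flag a risk as significant when both probability and impact are rated high."""
    return LEVELS[probability] == LEVELS["high"] and LEVELS[impact] == LEVELS["high"]

# Hypothetical qualitative ratings: (probability, impact).
risks = {
    "inadequate sponsorship": ("high", "high"),
    "overly bureaucratic PMO": ("high", "medium"),
    "vendor slips a delivery": ("medium", "high"),
}
significant = [name for name, (p, i) in risks.items() if is_significant(p, i)]
# significant == ["inadequate sponsorship"]
```

The code is trivial; the hard part, as the paragraph above notes, is eliciting honest probability estimates for risks that originate in the organisation’s own rules and behaviours.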
Management of internally generated risks
To identify whether internally generated risks are well managed, the author asked the relevant project teams to look at all the significant internal risks on their projects and classify them according to whether or not they had been identified by the project team prior to the research. He found that in over half the cases, less than 50% of the risks had been identified. Worse, most of the risks that were identified were not managed!
The relationship between project management maturity and susceptibility to internally generated risk
Project management maturity refers to the level of adoption of standard good practices within an organization. Conventional wisdom tells us that there should be an inverse correlation between maturity levels and susceptibility to internally generated risk – the higher the maturity level, the lower the susceptibility.
The author assessed maturity levels by interviewing various stakeholders within the organization and also by comparing the processes used within the organization to well-known standards. The results indicated a weak negative correlation – that is, organisations with a higher level of maturity tended to have a smaller number of internally generated risks. However, as the author admits, one cannot read much into this finding as the correlation was weak.
The study suggests that internally generated risks are common and significant on projects. However, the small sample size also suggests that more comprehensive surveys are needed. Nevertheless, anecdotal evidence from colleagues who I spoke with suggests that the findings are reasonably robust. Moreover, it is also clear (both from the study and my conversations) that these risks are not very well managed. There is a good reason for this: internally generated risks originate in human behavior and / or dysfunctional structures. These are difficult topics to address in an organizational setting because people are unlikely to tell those above them in the hierarchy that they (the higher ups) are the cause of a problem. A classic example of such a risk is estimation by decree – where a project team is told to just get it done by a certain date. Although most project managers are aware of such risks, they are reluctant to name them for obvious reasons.
I suspect most project managers who work in corporate environments will have had to grapple with internally generated risks in one form or another. Although traditional risk management does not recognize these risks as risks, seasoned project managers know from experience that people, politics or even processes can pose major problems to the smooth working of projects. However, even when recognised for what they are, these risks can be hard to tackle because they lie outside a project manager’s sphere of influence. They therefore tend to become those proverbial pachyderms in the room – known to all but never discussed, let alone documented… and therein lies the danger.
Enterprise risk management (ERM) refers to the process by which uncertainties are identified, analysed and managed from an organization-wide perspective. In principle such a perspective enables organisations to deal with risks in a holistic manner, avoiding the silo mentality that plagues much of risk management practice. This is the claim made of ERM at any rate, and most practitioners accept it as such. However, whether the claim really holds is another matter altogether. Unfortunately, most of the available critiques of ERM are written for academics or risk management experts. In this post I summarise a critique of ERM presented in a paper by Michael Power entitled, The Risk Management of Nothing.
I’ll begin with a brief overview of ERM frameworks and then summarise the main points of the paper along with some of my comments and annotations.
ERM Frameworks and Definitions
What is ERM?
The best way to answer this question is to look at a couple of well-known ERM frameworks, one from the Casualty Actuarial Society (CAS) and the other from the Committee of Sponsoring Organisations of the Treadway Commission (COSO).
CAS defines ERM as:
… the discipline by which an organization in any industry assesses, controls, exploits, finances, and monitors risks from all sources for the purpose of increasing the organization’s short- and long-term value to its stakeholders.
See this article for an overview of ERM from an actuarial perspective.
COSO defines ERM as:
…a process, effected by an entity’s board of directors, management and other personnel, applied in strategy setting and across the enterprise, designed to identify potential events that may affect the entity, and manage risk to be within its risk appetite, to provide reasonable assurance regarding the achievement of entity objectives.
The term risk appetite in the above definition refers to the risk an organisation is willing to bear. See the first article in the June 2003 issue of Internal Auditor for more on the COSO perspective on ERM.
In both frameworks, the focus is very much on quantifying risks through (primarily) financial measures and on establishing accountability for managing these risks in a systematic way.
All this sounds very sensible and uncontroversial. So, where’s the problem?
The problems with ERM
The author of the paper begins with the observation that the basic aim of ERM is to identify risks that can affect an organisation’s objectives and then design controls and mitigation strategies that reduce these risks (collectively) to below a predetermined value that is specified by the organisation’s risk appetite. Operationally, identified risks are monitored and corrective action is taken when they go beyond limits specified by the controls, much like the operation of a thermostat.
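The thermostat analogy can be made concrete with a minimal sketch. The function name, thresholds and readings below are illustrative assumptions of mine, not part of any ERM framework:

```python
def monitor_risk(exposure: float, lower: float, upper: float) -> str:
    """Return the action implied by a simple control-limit check."""
    if exposure > upper:
        return "escalate: exposure above risk appetite"
    if exposure < lower:
        return "relax controls: exposure below target band"
    return "no action: exposure within limits"

# Example: periodic risk-exposure readings checked against a fixed band
for reading in [0.4, 0.55, 0.9, 0.3]:
    print(reading, "->", monitor_risk(reading, lower=0.35, upper=0.8))
```

The sketch makes the mechanistic character of the approach plain: everything hinges on risk being reducible to a single monitored number with fixed limits, which is exactly the assumption the critique below takes issue with.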
In this view, risk management is a mechanistic process. Failures of risk management are seen more as being due to “not doing it right” (implementation failure) or politics getting in the way (organizational friction), rather than a problem with the framework itself. The basic design of the framework is rarely questioned.
Contrary to common wisdom, the author of the paper believes that the design of ERM is flawed in the following three ways:
- The idea of a single, organisation-wide risk appetite is simplistic.
- The assumption that risk can be dealt with by detailed, process-based rules (suitable for audit and control) is questionable.
- The undue focus on developing financial metrics and controls blinds it to “bigger picture”, interconnected risks because these cannot be quantified or controlled by such mechanisms.
We’ll now take a look at each of the above in some detail.
Appetite vs. appetisation
As mentioned earlier, risk appetite is defined as the risk the organisation is willing to bear. Although ERM frameworks allow for qualitative measures of risk appetite, most organisations implementing ERM tend to prefer quantitative ones. This is a problem because the definition of risk appetite can vary significantly across an organization. For example, the sales and audit functions within an organisation could (will!) have different appetites for risk. As another example, familiar to anyone who reads the news, there is usually a significant gap between the risk appetites of financial institutions and regulatory authorities.
The difference in risk appetites of different stakeholder groups is a manifestation of the fact that risk is a social construct – different stakeholder groups view a given risk in different ways, and some may not even see certain risks as risks (witness the behaviour of certain financial “masters of the universe”).
Since a single, organisation-wide risk appetite is difficult to come up with, the author suggests a different approach – one which takes into account the multiplicity of viewpoints in an organisation; a process he calls “risk appetising”. This involves getting diverse stakeholders to reach agreement on what constitutes risk appetite. Power argues that this process of reconciling different viewpoints of risk would lead to a more realistic view of the risk the organization is willing to bear. Quoting from the paper:
Conceptualising risk appetising as a process might better direct risk management attention to where it has likely been lacking, namely to the multiplicity of interactions which shape operational and ethical boundaries at the level of organizational practice. COSO-style ERM principles effectively limit the concept of risk appetite within a capital measurement discourse. Framing risk appetite as the process through which ethics and incentives are formed and reformed would not exclude this technical conception, but would bring it closer to the insights of several decades of organization theory.
Explicitly acknowledging the diversity of viewpoints on risk is likely to be closer to reality because:
…a conflictual and pluralistic model is more descriptive of how organizations actually work, and makes lower demands on organizational and political rationality to produce a single ‘appetite’ by explicitly recognising and institutionalising processes by which different appetites and values can be mediated.
Such a process is difficult because it involves getting people who have different viewpoints to agree on what constitutes a sensible definition of risk appetite.
A process bias
A bigger problem, in Power’s view, is that the ERM frameworks overemphasise financial / accounting measures and processes as a means of quantifying and controlling risk. As he puts it, ERM:
… is fundamentally an accounting-driven blueprint which emphasises a controls-based approach to risk management. This design emphasis means that efforts at implementation will have an inherent tendency to elaborate detailed controls with corresponding document trails.
This is a problem because it leads to a “rule-based compliance” mentality wherein risks are managed in a mechanical manner, using bureaucratic processes as a substitute for real thought about risks and how they should be managed. Such a process may work in a make-believe world where all risks are known, but is unlikely to work in one in which there is a great deal of ambiguity.
Power makes the important point that rule-based compliance chews up organizational resources. The tangible effort expended on compliance serves to reassure organizations that they are doing something to manage risks. This is dangerous because it lulls them into a false sense of security:
Rule-based compliance lays down regulations to be met, and requires extensive evidence, audit trails and box ‘checking’. All this demands considerable work and there is daily pressure on operational staff to process regulatory requirements. Yet, despite the workload volume pressure, this is also a cognitively comfortable world which focuses inwards on routine systems and controls. The auditability of this controls architecture can be theorized as a defence against anxiety and enables organizational agents to feel that their work conforms to legitimised principles.
In this comfortable, prescriptive world of process-based risk management, there is little time to imagine and explore what (else) could go wrong. Further, the latter is often avoided because it is a difficult and often uncomfortable process:
…the imagination of alternative futures is likely to involve the production of discomfort, as compared with formal ‘comfort’ of auditing. The approach can take the form of scenario analysis in which participants from different disciplines in an organization can collectively track the trajectory of potential decisions and events. The process begins as an ‘encounter’ with risk and leads to the confrontation of limitation and ambiguity.
Such a process necessarily involves debate and dialogue – it is essentially a deliberative process. And as Power puts it:
The challenge is to expand processes which support interaction and dialogue and de-emphasise due process – both within risk management practice and between regulator and regulated.
This is right of course, but that’s not all: a lot of other process-focused disciplines such as project management would also benefit by acknowledging and responding to this challenge.
A limited view of embeddedness
One of the imperatives of ERM is to “embed” risk management within organisations. Among other things, this entails incorporating risk management explicitly into job descriptions, and making senior managers responsible for managing risks. Although this is a step in the right direction, Power argues that the concept of embeddedness as articulated in ERM remains limited because it focuses on specific business entities, ignoring the wider environment and context in which they exist. The essential (but not always obvious) connections between entities are not necessarily accounted for. As Power puts it:
ERM systems cannot represent embeddedness in the sense of interconnectedness; its proponents seem only to demand an intensification of embedding at the individual entity level. Yet, this latter kind of embedding of a compliance driven risk management, epitomised by the Sarbanes-Oxley legislation, is arguably a disaster in itself, by tying up resources and, much worse, cognition and attention in ‘auditized’ representations of business processes.
In short: the focus on following a process-oriented approach to risk management – as mandated by frameworks – has the potential to de-focus attention from risks that are less obvious, but are potentially more significant.
Addressing the limitations
Power believes the flaws in ERM can be addressed by looking to the practice of business continuity management (BCM). BCM addresses the issue of disaster management – i.e. how to keep an organisation functioning in the event of a disaster. Consequently, there is a significant overlap between the aims of BCM and ERM. However, unlike ERM, BCM draws specialists from different fields and emphasizes collective action. Such an approach is therefore more likely to take a holistic view of risk, and that is the real point.
Regardless of the approach one takes, the point is to involve diverse stakeholders and work towards a shared (enterprise-wide) understanding of risks. Only then will it be possible to develop a risk management plan that incorporates the varying, even contradictory, perspectives that exist within an organisation. There are many techniques to work towards a shared understanding of risks, or any other issues for that matter. Some of these are discussed at length in my book.
Power suggests that ERM, as articulated by bodies such as CAS and COSO, is flawed because:
- It attempts to quantify risk appetite at the organizational level – an essentially impossible task because different organizational stakeholders will have different views of risk. Risk is a social construct.
- It advocates a controls and rule-based approach to managing risks. Such a prescriptive “best” practice approach discourages debate and dialogue about risks. Consequently, many viewpoints are missed and quite possibly, so are many risks.
- Despite the rhetoric of ERM, implemented risk management controls and processes often overlook connections and dependencies between entities within organisations. So, although risk management appears to be embedded within the organisation, in reality it may not be so.
Power suggests that ERM practice could learn a few lessons from Business Continuity Management (BCM), in particular about the interconnected nature of business risks and the collective action needed to tackle them. Indeed, any approach that attempts to reconcile diverse risk viewpoints will be a huge improvement on current practice. Until then ERM will continue to be an illusion, offering false comfort to those who are responsible for managing risk.
Project estimates are generally based on assumptions about future events and their outcomes. As the future is uncertain, the concept of probability is sometimes invoked in the estimation process. There’s enough been written about how probabilities can be used in developing estimates; indeed there are a good number of articles on this blog – see this post or this one, for example. However, most of these writings focus on the practical applications of probability rather than on the concept itself – what it means and how it should be interpreted. In this article I address the latter point in a way that will (hopefully!) be of interest to those working in project management and related areas.
Uncertainty is a shape, not a number
Since the future can unfold in a number of different ways one can describe it only in terms of a range of possible outcomes. A good way to explore the implications of this statement is through a simple estimation-related example:
Assume you’ve been asked to do a particular task relating to your area of expertise. From experience you know that this task usually takes 4 days to complete. If things go right, however, it could take as little as 2 days. On the other hand, if things go wrong it could take as long as 8 days. Therefore, your range of possible finish times (outcomes) is anywhere between 2 to 8 days.
Clearly, these outcomes are not all equally likely. The most likely outcome is that you will finish the task in 4 days. Moreover, the likelihood of finishing in less than 2 days or more than 8 days is zero. If we plot the likelihood of completion against completion time, it would look something like Figure 1.
Figure 1 raises a couple of questions:
- What are the relative likelihoods of completion for all intermediate times – i.e. those between 2 to 4 days and 4 to 8 days?
- How can one quantify the likelihood of intermediate times? In other words, how can one get a numerical value of the likelihood for all times between 2 to 8 days? Note that we know from the earlier discussion that this must be zero for any time less than 2 or greater than 8 days.
The two questions are actually related: as we shall soon see, once we know the relative likelihood of completion at all times (compared to the maximum), we can work out its numerical value.
Since we don’t know anything about intermediate times (I’m assuming there is no historical data available, and I’ll have more to say about this later…), the simplest thing to do is to assume that the likelihood increases linearly (as a straight line) from 2 to 4 days and decreases in the same way from 4 to 8 days as shown in Figure 2. This gives us the well-known triangular distribution.
Note: The term distribution is simply a fancy word for a plot of likelihood vs. time.
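As an aside for readers who want to experiment, the triangular shape can be sampled directly with Python's standard library; the 2–4–8 day parameters below are those of the example:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

# Parameters from the example: best case 2 days, most likely 4, worst 8
low, mode, high = 2.0, 4.0, 8.0

samples = [random.triangular(low, high, mode) for _ in range(100_000)]

# All samples fall within the stated bounds...
assert all(low <= s <= high for s in samples)

# ...and the sample mean approaches the theoretical mean (low + mode + high) / 3
mean = sum(samples) / len(samples)
print(round(mean, 2))  # close to (2 + 4 + 8) / 3 ≈ 4.67
```

Note that the mean (about 4.67 days) is larger than the most likely value (4 days) because the distribution is skewed to the right – one reason single-number estimates mislead.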
Of course, this isn’t the only possibility; there are an infinite number of others. Figure 3 is another (admittedly weird) example.
Further, it is quite possible that the upper limit (8 days) is not a hard one. It may be that in exceptional cases the task could take much longer (say, if you call in sick for two weeks) or even not be completed at all (say, if you leave for that mythical greener pasture). Catering for the latter possibility, the shape of the likelihood might resemble Figure 4.
From the figures above, we see that uncertainties are shapes rather than single numbers, a notion popularised by Sam Savage in his book, The Flaw of Averages. Moreover, the “shape of things to come” depends on a host of factors, some of which may not even be on the radar when a future event is being estimated.
Making likelihood precise
Thus far, I have used the word “likelihood” without bothering to define it. It’s time to make the notion more precise. I’ll begin by asking the question: what common sense properties do we expect a quantitative measure of likelihood to have?
Consider the following:
- If an event is impossible, its likelihood should be zero.
- The sum of likelihoods of all possible events should equal complete certainty. That is, it should be a constant. As this constant can be anything, let us define it to be 1.
In terms of the example above, if we denote time by $t$ and the likelihood by $P(t)$, then:

$\sum P(t) = 1$

where $\sum$ denotes the sum of all non-zero likelihoods – i.e. those that lie between 2 and 8 days. In simple terms, this is the area enclosed by the likelihood curves and the x axis in Figures 2 to 4. (Technical note: since $t$ is a continuous variable, the sum should strictly be written as an integral, but this is a technicality that need not concern us here.)

$P(t)$ is, in fact, what mathematicians call probability – which explains why I have used the symbol $P$ rather than, say, $L$. Now that I’ve explained what it is, I’ll use the word “probability” instead of “likelihood” in the remainder of this article.
With these assumptions in hand, we can now obtain numerical values for the probability of completion for all times between 2 and 8 days. This can be figured out by noting that the area under the probability curve (the triangle in figure 2 and the weird shape in figure 3) must equal 1. I won’t go into any further details here, but those interested in the maths for the triangular case may want to take a look at this post where the details have been worked out.
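For the triangular case, the reasoning can be sketched in a few lines of code. The function name is mine, and the numbers follow from the 2–4–8 day example: the peak height comes from requiring the triangle's area to equal 1, and the probability of finishing by time t is the area to the left of t:

```python
# Parameters from the example: best case 2 days, most likely 4, worst 8
low, mode, high = 2.0, 4.0, 8.0

# Area of triangle = 0.5 * base * height = 1  =>  height = 2 / (high - low)
peak = 2.0 / (high - low)
print(round(peak, 4))  # 1/3: the maximum likelihood, at t = 4 days

def prob_finish_by(t: float) -> float:
    """Area under the triangular likelihood curve up to time t."""
    if t <= low:
        return 0.0
    if t >= high:
        return 1.0
    if t <= mode:  # area of the sub-triangle to the left of t
        return (t - low) ** 2 / ((high - low) * (mode - low))
    return 1.0 - (high - t) ** 2 / ((high - low) * (high - mode))

print(round(prob_finish_by(4), 4))  # 0.3333 -- only a 1 in 3 chance of
                                    # finishing by the most likely time!
print(prob_finish_by(8))            # 1.0 -- certain to finish within 8 days
```

The result for t = 4 is worth pausing on: because the distribution is right-skewed, finishing by the "most likely" time is actually improbable, which is precisely the trap single-point estimates set.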
The meaning of it all
(Note: parts of this section borrow from my post on the interpretation of probability in project management)
So now we understand how uncertainty is actually a shape corresponding to a range of possible outcomes, each with their own probability of occurrence. Moreover, we also know, in principle, how the probability can be calculated for any valid value of time (between 2 and 8 days). Nevertheless, we are still left with the question as to what a numerical probability really means.
As a concrete case from the example above, what do we mean when we say that there is 100% chance (probability=1) of finishing within 8 days? Some possible interpretations of such a statement include:
- If the task is done many times over, it will always finish within 8 days. This is called the frequency interpretation of probability, and is the one most commonly described in maths and physics textbooks.
- It is believed that the task will definitely finish within 8 days. This is called the belief interpretation. Note that this interpretation hinges on subjective personal beliefs.
- Based on a comparison to similar tasks, the task will finish within 8 days. This is called the support interpretation.
Note that these interpretations are based on a paper by Glen Shafer. Other papers and textbooks frame these differently.
The first thing to note is how different these interpretations are from each other. For example, the first one offers a seemingly objective interpretation whereas the second one is unabashedly subjective.
So, which is the best – or most correct – one?
A person trained in science or mathematics might claim that the frequency interpretation wins hands down because it lays out an objective, well-defined procedure for calculating probability: simply perform the same task many times and note the completion times.
Problem is, in real life situations it is impossible to carry out exactly the same task over and over again. Sure, it may be possible to do almost the same task, but even straightforward tasks such as vacuuming a room or baking a cake can hold hidden surprises (vacuum cleaners do malfunction and a friend may call when one is mixing the batter for a cake). Moreover, tasks that are complex (as is often the case in project work) tend to be unique and can never be performed in exactly the same way twice. Consequently, the frequency interpretation is great in theory but not much use in practice.
“That’s OK,” another estimator might say, “when drawing up an estimate, I compared it to other similar tasks that I had done before.”
This is essentially the support interpretation (interpretation 3 above). However, although this seems reasonable, there is a problem: tasks that are superficially similar will differ in the details, and these small differences may turn out to be significant when one is actually carrying out the task. One never knows beforehand which variables are important. For example, my ability to finish a particular task within a stated time depends not only on my skill but also on things such as my workload, stress levels and even my state of mind. There are many external factors that one might not even recognize as being significant. This is a manifestation of the reference class problem.
So where does that leave us? Is probability just a matter of subjective belief?
No, not quite: in reality, estimators will use some or all of the three interpretations to arrive at “best guess” probabilities. For example, when estimating a project task, a person will likely use one or more of the following pieces of information:
- Experience with similar tasks.
- Subjective belief regarding task complexity and potential problems. Also, their “gut feeling” of how long they think it ought to take. These factors often drive excess time or padding that people work into their estimates.
- Any relevant historical data (if available)
Clearly, depending on the situation at hand, estimators may be forced to rely on one piece of information more than others. However, when called upon to defend their estimates, estimators may use other arguments to justify their conclusions depending on who they are talking to. For example, in discussions involving managers, they may use hard data presented in a way that supports their estimates, whereas when talking to their peers they may emphasise their gut feeling based on differences between the task at hand and similar ones they have done in the past. Such contradictory representations tend to obscure the means by which the estimates were actually made.
Estimates are invariably made in the face of uncertainty. One way to get a handle on this is by estimating the probabilities associated with possible outcomes. Probabilities can be reckoned in a number of different ways. Clearly, when using them in estimation, it is crucial to understand how probabilities have been derived and the assumptions underlying these. We have seen three ways in which probabilities are interpreted corresponding to three different ways in which they are arrived at. In reality, estimators may use a mix of the three approaches so it isn’t always clear how the numerical value should be interpreted. Nevertheless, an awareness of what probability is and its different interpretations may help managers ask the right questions to better understand the estimates made by their teams.
The essential idea behind group estimation is that an estimate made by a group is likely to be more accurate than one made by an individual in the group. This notion is the basis for the Delphi method and its variants. In this post, I use arguments involving probabilities to gain some insight into the conditions under which group estimates are more accurate than individual ones.
An insight from conditional probability
Let’s begin with a simple group estimation scenario.
Assume we have two individuals of similar skill who have been asked to provide independent estimates of some quantity, say a project task duration. Further, let us assume that each individual has a probability $p$ of making a correct estimate.
Based on the above, the probability that they both make a correct estimate, $P(\text{both correct})$, is:

$P(\text{both correct}) = p \times p = p^2$
This is a consequence of our assumption that the individual estimates are independent of each other.
Similarly, the probability that they both get it wrong, $P(\text{both wrong})$, is:

$P(\text{both wrong}) = (1-p) \times (1-p) = (1-p)^2$
Now we can ask the following question:
What is the probability that both individuals make the correct estimate if we know that they have both made the same estimate?
This can be figured out using Bayes’ Theorem, which in the context of the question can be stated as follows:

$P(\text{both correct} \mid \text{same estimate}) = \dfrac{P(\text{same estimate} \mid \text{both correct}) \times P(\text{both correct})}{P(\text{same estimate})}$
In the above equation, $P(\text{both correct} \mid \text{same estimate})$ is the probability that both individuals get it right given that they have made the same estimate (which is what we want to figure out). This is an example of a conditional probability – i.e. the probability that an event occurs given that another, possibly related event has already occurred. See this post for a detailed discussion of conditional probabilities.
Similarly, $P(\text{same estimate} \mid \text{both correct})$ is the conditional probability that both estimators make the same estimate given that they are both correct. This probability is 1.
Answer: if both estimators are correct then they must have made the same estimate (i.e. they must both be within an acceptable range of the right answer).
Finally, $P(\text{same estimate})$ is the probability that both make the same estimate. This is simply the sum of the probabilities that both get it right and both get it wrong. Expressed in terms of $p$, this is:

$P(\text{same estimate}) = p^2 + (1-p)^2$
Now let’s apply Bayes’ theorem to the following two cases:
- Both individuals are good estimators – i.e. they have a high probability of making a correct estimate. We’ll assume they both have a 90% chance of getting it right ($p = 0.9$).
- Both individuals are poor estimators – i.e. they have a low probability of making a correct estimate. We’ll assume they both have a 30% chance of getting it right ($p = 0.3$).
Consider the first case. The probability that both estimators get it right given that they make the same estimate is:

$P(\text{both correct} \mid \text{same estimate}) = \dfrac{0.9 \times 0.9}{0.9 \times 0.9 + 0.1 \times 0.1} = \dfrac{0.81}{0.82} \approx 0.9878$
Thus we see that the group estimate has a significantly better chance of being right than the individual ones: a probability of 0.9878 as opposed to 0.9.
In the second case, the probability that both get it right is:

$P(\text{both correct} \mid \text{same estimate}) = \dfrac{0.3 \times 0.3}{0.3 \times 0.3 + 0.7 \times 0.7} = \dfrac{0.09}{0.58} \approx 0.1552$
The situation is completely reversed: the group estimate has a much smaller chance of being right than an individual estimate!
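The two cases can be checked with a one-line function implementing the formula derived above (the function name is mine):

```python
def p_both_correct_given_same(p: float) -> float:
    """P(both correct | same estimate) for two independent estimators,
    each with probability p of being right: p^2 / (p^2 + (1 - p)^2)."""
    return p**2 / (p**2 + (1 - p) ** 2)

print(round(p_both_correct_given_same(0.9), 4))  # 0.9878 -- better than 0.9
print(round(p_both_correct_given_same(0.3), 4))  # 0.1552 -- worse than 0.3
```

Plotting the function over p from 0 to 1 shows the crossover is at p = 0.5: agreement amplifies competence above that point and amplifies error below it.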
In summary: estimates provided by a group consisting of individuals of similar ability working independently are more likely to be right (compared to individual estimates) if the group consists of competent estimators and more likely to be wrong (compared to individual estimates) if the group consists of poor estimators.
Assumptions and complications
I have made a number of simplifying assumptions in the above argument. I discuss these below with some commentary.
- The main assumption is that individuals work independently. This assumption is not valid for many situations. For example, project estimates are often made by a group of people working together. Although one can’t work out what will happen in such situations using the arguments of the previous section, it is reasonable to assume that given the right conditions, estimators will use their collective knowledge to work collaboratively. Other things being equal, such collaboration would lead a group of skilled estimators to reinforce each others’ estimates (which are likely to be quite similar) whereas less skilled ones may spend time arguing over their (possibly different and incorrect) guesses. Based on this, it seems reasonable to conjecture that groups consisting of good estimators will tend to make even better estimates than they would individually whereas those consisting of poor estimators have a significant chance of making worse ones.
- Another assumption is that an estimate is either good or bad. In reality there is a range that is neither good nor bad, but may be acceptable.
- Yet another assumption is that an estimator’s ability can be accurately quantified using a single numerical probability. This is fine provided the number actually represents the person’s estimation ability for the situation at hand. However, typically such probabilities are evaluated on the basis of past estimates. The problem is, every situation is unique and history may not be a good guide to the situation at hand. The best way to address this is to involve people with diverse experience in the estimation exercise. This will often lead to a significant spread of estimates which may then have to be refined by debate and negotiation.
Real-life estimation situations have a number of other complications. To begin with, the influence that specific individuals have on the estimation process may vary – a manager who is a poor estimator may, by virtue of his position, have a greater influence than others in a group. This will skew the group estimate by a factor that cannot be estimated. Moreover, strategic behaviour may influence estimates in a myriad other ways. Then there is the groupthink factor as well.
…and I’m sure there are many others.
Finally I should mention that group estimates can depend on the details of the estimation process. For example, research suggests that under certain conditions competition can lead to better estimates than cooperation.
In this post I have attempted to make some general inferences regarding the validity of group estimates based on arguments involving conditional probabilities. The arguments suggest that, all other things being equal, a collective estimate from a bunch of skilled estimators will generally be better than their individual estimates whereas an estimate from a group of less skilled estimators will tend to be worse than their individual estimates. Of course, in real life, there are a host of other factors that can come into play: power, politics and biases being just a few. Though these are often hidden, they can influence group estimates in inestimable ways.
Thanks go out to George Gkotsis and Craig Brown for their comments which inspired this post.
More often than not, managerial decisions are made on the basis of uncertain information. To lend some rigour to the process of decision making, it is sometimes assumed that uncertainties of interest can be quantified accurately using probabilities. As it turns out, this assumption can be incorrect in many situations because the probabilities themselves can be uncertain. In this post I discuss a couple of ways in which such uncertainty about uncertainty can manifest itself.
The problem of vagueness
In a paper entitled, “Is Probability the Only Coherent Approach to Uncertainty?”, Mark Colyvan made a distinction between two types of uncertainty:
- Uncertainty about some underlying fact. For example, we might be uncertain about the cost of a project – that there will be a cost is a fact, but we are uncertain about what exactly it will be.
- Uncertainty about situations where there is no underlying fact. For example, we might be uncertain about whether customers will be satisfied with the outcome of a project. The problem here is the definition of customer satisfaction. How do we measure it? What about customers who are neither satisfied nor dissatisfied? There is no clear-cut definition of what customer satisfaction actually is.
The first type of uncertainty refers to the lack of knowledge about something that we know exists. This is sometimes referred to as epistemic uncertainty – i.e. uncertainty pertaining to knowledge. Such uncertainty arises from imprecise measurements, changes in the object of interest etc. The key point is that we know for certain that the item of interest has well-defined properties, but we don’t know what they are and hence the uncertainty. Such uncertainty can be quantified accurately using probability.
Vagueness, on the other hand, arises from an imprecise use of language. Specifically, the term refers to the use of criteria that cannot distinguish between borderline cases. Let’s clarify this using the example discussed earlier. A popular way to measure customer satisfaction is through surveys. Such surveys may be able to tell us that customer A is more satisfied than customer B. However, they cannot distinguish between borderline cases because any boundary between satisfied and not satisfied customers is arbitrary. This problem becomes apparent when considering an indifferent customer. How should such a customer be classified – satisfied or not satisfied? Further, what about customers who choose not to respond? It is therefore clear that any numerical probability computed from such data cannot be considered accurate. In other words, the probability itself is uncertain.
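The arbitrariness of the boundary can be made concrete with a small sketch using entirely hypothetical survey data (the scores, the cutoffs and the decision to drop non-respondents are all my illustrative assumptions). Moving the cutoff that separates “satisfied” from “not satisfied” changes the computed probability, even though the underlying responses are unchanged:

```python
# Hypothetical survey scores on a 1-5 scale (None = chose not to respond).
scores = [5, 4, 3, 3, 2, 5, 4, 3, 1, 3, None, 4, 3, None, 2]

def p_satisfied(scores, cutoff):
    """Relative frequency of 'satisfied', counting a respondent as
    satisfied if their score is >= cutoff. Non-respondents are simply
    dropped, which is itself an arbitrary choice."""
    answered = [s for s in scores if s is not None]
    return sum(s >= cutoff for s in answered) / len(answered)

# The computed probability swings with the arbitrary boundary:
print(p_satisfied(scores, cutoff=4))  # indifferent customers (3s) excluded
print(p_satisfied(scores, cutoff=3))  # indifferent customers (3s) included
```

The indifferent customers – the 3s on the scale – land on whichever side of the boundary we happen to draw, which is precisely the borderline-case problem described above.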
Ambiguity in classification
Although the distinction made by Colyvan is important, there is a deeper issue that can afflict uncertainties that appear to be quantifiable at first sight. To understand how this happens, we’ll first need to take a brief look at how probabilities are usually computed.
An operational definition of probability is that it is the ratio of the number of times the event of interest occurs to the total number of events observed. For example, if my manager notes my arrival times at work over 100 days and finds that I arrive before 8:00 am on 62 days, then he could infer that the probability of my arriving before 8:00 am is 0.62. Since the probability is assumed to equal the frequency of occurrence of the event of interest, this is sometimes called the frequentist interpretation of probability.
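The frequentist calculation is just this ratio, shown here as a sketch using the figures from the example:

```python
# Frequentist estimate: relative frequency of the event of interest.
days_observed = 100
early_arrivals = 62  # days on which I arrived before 8:00 am

p_early = early_arrivals / days_observed
print(p_early)  # 0.62
```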
The above seems straightforward enough, so you might be asking: where’s the problem?
The problem is that events can generally be classified in several different ways and the computed probability of an event occurring can depend on the way that it is classified. This is called the reference class problem. In a paper entitled, “The Reference Class Problem is Your Problem Too”, Alan Hajek described the reference class problem as follows:
“The reference class problem arises when we want to assign a probability to a proposition (or sentence, or event) X, which may be classified in various ways, yet its probability can change depending on how it is classified.”
Consider the situation I mentioned earlier. My manager’s approach seems reasonable, but there is a problem with it: not all days are the same as far as my arrival times are concerned. For example, it is quite possible that my arrival time is affected by the weather: I may arrive later on rainy days than on sunny ones. So, to get a better estimate, my manager should also factor in the weather. He would then end up with two probabilities, one for fine weather and the other for foul. However, that is not all: there are a number of other criteria that could affect my arrival times – for example, my state of health (I may call in sick and not come in to work at all), whether I worked late the previous day etc.
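The weather example can be sketched in a few lines. The split of the 100 days into sunny and rainy, and the per-class arrival counts, are my hypothetical numbers (chosen so the overall frequency matches the 0.62 from the earlier example); the point is that the computed probability depends on which reference class we condition on:

```python
# Hypothetical arrival log: (weather, arrived_before_8am) over 100 days.
# Illustrative numbers only: 70 sunny days, 30 rainy days.
log = [("sunny", True)] * 50 + [("sunny", False)] * 20 \
    + [("rainy", True)] * 12 + [("rainy", False)] * 18

def p_early(log, weather=None):
    """Relative frequency of early arrival, optionally restricted
    to the reference class of days with the given weather."""
    days = [early for w, early in log if weather is None or w == weather]
    return sum(days) / len(days)

print(p_early(log))                   # all days:   0.62
print(p_early(log, weather="sunny"))  # sunny only: ~0.71
print(p_early(log, weather="rainy"))  # rainy only: 0.40
```

Each additional criterion (health, late nights and so on) splits the data into ever finer classes, and each class yields a different probability – which is exactly the reference class problem.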
What seemed like a straightforward problem is no longer so because of the uncertainty regarding which reference class is the right one to use.
Before closing this section, I should mention that the reference class problem has implications for many professional disciplines. I have discussed its relevance to project management in my post entitled, “The reference class problem and its implications for project management”.
In this post we have looked at a couple of forms of uncertainty about uncertainty that have practical implications for decision makers. In particular, we have seen that probabilities used in managerial decision making can be uncertain because of vague definitions of events and/or ambiguities in their classification. The bottom line for those who use probabilities to support decision-making is to ensure that the criteria used to determine events of interest refer to unambiguous facts that are appropriate to the situation at hand. To sum up: decisions made on the basis of probabilities are only as good as the assumptions that go into them, and the assumptions themselves may be prone to uncertainties such as the ones described in this article.