Eight to Late

Sensemaking and Analytics for Organizations

Archive for August 2009

The Approach: a dialogue mapping story


Jack could see that the discussion was going nowhere: the group had been talking about the pros and cons of competing approaches for over half an hour, but they kept going around in circles. As he mulled this over, Jack got an idea – he’d been playing around with a visual notation called IBIS (Issue-Based Information System) for a while, and was looking for an opportunity to use it to map out a discussion in real time (Editor’s note: readers unfamiliar with IBIS may want to have a look at this post and this one before proceeding). “Why not give it a try,” he thought, “I can’t do any worse than this lot.” Decision made, he waited for a break in the conversation and dived in when he got one…

“I have a suggestion,” he said. “There’s this conversation mapping tool that I’ve been exploring for a while. I believe it might help us reach a common understanding of the approaches we’ve been discussing. It may even help facilitate a decision. Do you mind if I try it?”

“Pfft – I’m all for it if it helps us get to a decision,” said Max. He’d clearly had enough too.

Jack looked around the table. Mary looked puzzled,  but nodded her assent. Rick seemed unenthusiastic, but didn’t voice any objections. Andrew – the boss –  had a here-he-goes-again look on his face (Jack had a track record of  “ideas”)  but, to Jack’s surprise, said, “OK. Why not? Go for it.”

“Give me a minute to get set up,” said Jack. He hooked his computer up to the data projector. Within a couple of minutes, he had a blank IBIS map displayed on-screen. This done, he glanced up at the others: they were looking at the screen with expressions ranging from curiosity (Mary) to skepticism (Max).

“Just a few words about what I’m going to do,” he said. “I’ll be using a notation called IBIS – or issue based information system – to capture our discussion. IBIS has three elements: issues, ideas and arguments. I’ll explain these as we go along. OK – let’s get started with figuring out what we want out of the discussion. What’s our aim?” he asked.

His starting spiel done, Jack glanced at his colleagues: Max seemed a tad more skeptical than before; Rick ever more bored; Andrew and Mary stared at the screen. No one said anything.

Just as he was about to prompt them by asking another question, Mary said, “I’d say it’s to explore options for implementing the new system and find the most suitable one. Phrased as a question: How do we implement system X?”

Jack glanced at the others. They all seemed to agree – or at least, didn’t disagree – with Mary. “Excellent,” he said, “I think that’s a very good summary of what we’re trying to do.” He drew a question node on the map and continued: “Most discussions of this kind are aimed at resolving issues or questions. Our root question is: What are our options for implementing system X, or as Mary put it, How do we implement system X?”

Figure 1

“So, what’s next?” asked Max. He still looked skeptical, but Jack could see that he was intrigued. Not bad, he thought to himself….

“Well, the next step is to explore ideas that address or resolve the issue.  So, ideas should be responses to the question:  how should we implement system X? Any suggestions?”

“We’d have to engage consultants,” said Max. “We don’t have in-house skills to implement it ourselves.”

Jack created an idea node on the map and began typing. “OK – so we hire consultants,” he said. He looked up at the others and continued, “In IBIS, ideas are depicted by light bulbs. Since ideas  respond  to questions, I draw an arrow from the idea to the root question, like so:

Figure 2

“I think doing it ourselves is an option,” said Mary. “We’d need training and it might take us longer because we’d be learning on the job, but it is a viable option.”

“Good,” said Jack. “You’ve given us another option and some ways in which we might go about implementing it. Ideally, each node should represent a single – or atomic – point, so I’ll capture what you’ve said like so.” He typed as fast as he could, drawing nodes and filling in detail.

As he wrote he said, “Mary said we could do it ourselves – that’s clearly a new idea, an implementation option. She also partly described how we might go about it: through training to learn the technology and by learning on the job. I’ve added in “How” as a question and the two points that describe how we’d do it as ideas responding to that question.” He paused and looked around to check that everyone was with him, then continued. “But there’s more: she also mentioned a shortcoming of doing it ourselves – it will take longer. I’ll capture this as an argument against the idea: a con, depicted as a red minus in the map.”

He paused briefly to look at his handiwork on-screen, then asked, “Any questions?”

Figure 3

Rick asked, “Are there any rules governing how nodes are connected?”

“Good question!  In a nutshell: ideas respond to questions and arguments respond to ideas. Issues, on the other hand, can be generated from any type of node.  I can give you some links to references on the Web if you’re interested.”

“That might be useful for everyone,” said Andrew. “Send it out to all of us.”

“Sure. Let’s move on. So, does anyone have any other options?”

“Umm… not sure how it would work, but what about co-development?” suggested Rick.

“Do you mean collaborative development with external resources?” asked Jack as he began typing.

“Yes,” confirmed Rick.

Figure 4

“What about costs? We have a limited budget for this,” said Mary.

“Good point,” said Jack as he started typing. “This is a constraint that must be satisfied by all potential approaches.” He stopped typing and looked up at the others. “This is important: criteria apply to all potential approaches, so the criteria question should hang off the root node,” he said. “Does this make sense to everyone?”

Figure 5

“I’m not sure I understand,” said Andrew. “Why are the criteria separate from the approaches?”

“They aren’t separate. They’re a level higher than any specific approach because they apply to all solutions. Put another way, they relate to the root issue – How do we implement system X – rather than a specific solution.”

“Ah, that makes sense,” said Andrew. “This notation seems pretty powerful.”

“It is, and I’ll be happy to show you some more features later, but let’s continue the discussion for now. Are there any other criteria?”

“Well, we must have all the priority 1 features described in the scoping document implemented by the end of the year,”   said Andrew. One can always count on the manager to emphasise constraints.

“OK, that’s two criteria actually: must implement priority 1 features and must implement by year end,” said Jack, as he added in the new nodes. “No surprises here,” he continued, “we have the three classic project constraints – budget, scope and time.”

Figure 6

The others were now engaged with the map, looking at it, making notes etc. Jack wanted to avoid driving the discussion, so instead of suggesting how to move things forward, he asked, “What should we consider next?”

“I can’t think of any other approaches,” said Mary. “Does anyone have suggestions, or should we look at the pros and cons of the listed approaches?”

“I’ve said it before; I’ll say it again: I think doing it ourselves is a dum… sorry, not a good idea. It is fraught with too much risk…” started Max.

“No it isn’t,” countered Mary, “on the contrary, hiring externals is more risky because costs can blow out by much more than if we did it ourselves.”

“Good points,” said Jack, as he noted Mary’s point.  “Max, do you have any specific risks in mind?”

Figure 7

“Time – it can take much longer,” said Max.

“We’ve already captured that as a con of the do-it-ourselves approach,” said Jack.

“Hmm…that’s true, but I would reword it to state that we have a hard deadline. Perhaps we could say – may not finish in allotted time – or something similar.”

“That’s a very good point,” said Jack, as he changed the node to read: higher risk of not meeting deadline. The map was coming along nicely now, and had the full attention of folks in the room.

Figure 8

“Alright,” he continued, “so are there any other cons? If not, what about pros – arguments in support of approaches?”

“That’s easy, “ said  Mary,  “doing it ourselves will improve our knowledge of the technology; we’ll be able to support and maintain the system ourselves.”

“Doing it through consultants will enable us to complete the project quicker,” countered Max.

Jack added in the pros and paused, giving the group some time to reflect on the map.

Figure 9

Rick and Mary, who were sitting next to each other, had a whispered side-conversation going; Andrew and Max were writing something down. Jack waited.

“Mary and I have an idea,” said Rick. “We could take an approach that combines the best of both worlds – external consultants and internal resources. Actually, we’ve already got it down as a separate approach –  co-development, but we haven’t discussed it yet..” He had the group’s attention now. “Co-development allows us to use consultants’ expertise where we really need it and develop our own skills too. Yes, we’d need to put some thought into how it would work, but I think we could do it.”

“I can’t see how co-development will reduce the time risk – it will take longer than doing it through consultants.” said Max.

“True,” said Mary, “but it is better than doing it ourselves and, more important, it enables us to develop in-house skills that are needed to support and maintain the application. In the long run, this can add up to a huge saving. Just last week I read that maintenance can be anywhere between 60 to 80 percent of total system costs.”

“So you’re saying that it reduces implementation time and  results in a smaller exposure to cost blowout?” asked Jack.

“Yes,” said Mary.

Jack added in the pros and waited.

Figure 10

“I still don’t see how it reduces time,” said Max.

“It does, when compared to the option of doing it ourselves,” retorted Mary.

“Wait a second guys,” said Jack. “What if I reword the pros to read – Reduced implementation time compared to in-house option and Reduced cost compared to external option.”

He looked at Mary and Max – both seemed OK with this, so he typed in the changes.

Figure 11

Jack asked, “So, are there any other issues, ideas or arguments that anyone would like to add?”

“From what’s on the map, it seems that co-development is the best option,” said Andrew.  He looked around to see what the others thought: Rick and Mary were nodding; Max still looked doubtful.

Max asked, “How are we going to figure out who does what? It isn’t easy to partition work cleanly when teams have different levels of expertise.”

Jack typed this in as a con.

Figure 12

“Good point,” said Andrew. “There may be ways to address this concern. Do you think it would help if we brought some consultants in on a day-long engagement to workshop a co-development approach with the team? ”

Max nodded, “Yeah, that might work,” he said. “It’s worth a try anyway. I have my reservations,  but co-development seems the best of the three approaches.”

“Great,“ said Andrew, “I’ll get some consultants in next week to help us workshop an approach.”

Jack typed in this exchange, as the others started to gather their things. “Anything else to add?”  he asked.

Everyone looked up at the map. “No,  that’s it, I think,” said Mary.

“Looks good,” said Max. “Be sure to send us a copy of the map.”

Figure 13

“Sure, I’ll print copies out right away,” said Jack. “Since we’ve developed it together, there shouldn’t be any points of disagreement.”

“That’s true,” said Andrew, “another good reason to use this tool.” Gathering his papers, he asked, “Is there anything else?” He looked around the table. “Alright then, I’ll see you guys later. I’m off to get some lunch before my next meeting.”

Jack looked around the group. Helped along by IBIS and his facilitation, they’d overcome their differences and reached a collective decision. He hadn’t been sure it would work, but it had.

“Jack, thanks for your help with the discussion. IBIS seems to be a great way to capture discussions. Don’t forget to send us those references,” said Mary, gathering her notes.

“I’ll do that right away,” said Jack, “and I’ll also send you some information about Compendium – the open-source software tool I used to create the map.”

“Great,” said Mary. “I’ve got to run. See you later.”

“See you later,” replied Jack.


Further Reading:

1. Jeff Conklin’s book is a must-read for anyone interested in dialogue mapping. I’ve reviewed it here.

2.  For more on Compendium, see the Compendium Institute site.

3. See Paul Culmsee’s series of articles, The One Best Practice to Rule Them All, for an excellent and very entertaining introduction to issue mapping.

4. See this post and this one for examples of  how IBIS can be used to visualise written arguments.

5.  See this article for a dialogue mapping story by Jeff Conklin.

An introduction to the critical chain method



All project managers have to deal with uncertainty as part of their daily work. Project schedules, so carefully constructed, are riddled with assumptions and uncertainties – particularly in task durations. Most project management treatises (the PMBOK included) recognise this, and so exhort project managers to include uncertainties in their activity duration estimates. However, the same books have little to say on how these uncertainties should be integrated into the project schedule in a meaningful way. Sure, well-established techniques such as PERT incorporate probabilities into a schedule via an averaged or expected duration, but the final schedule is deterministic – i.e. each task is assigned a definite completion date, based on the expected duration. Any float that appears in the schedule is purely a consequence of an activity not being on the critical path. The float, such as it is, is not an allowance for uncertainty.

Since PERT was invented in the 1950s, there have been several other attempts to incorporate uncertainty into project scheduling, including Monte Carlo simulation and, more recently, Bayesian networks. Although these techniques have a sounder basis, they don’t really address the question of how uncertainty is to be managed in a project schedule, where individual tasks are strung together one after another. What’s needed is a simple technique to protect a project schedule from Murphy, Parkinson or any of the other variations that invariably occur during the execution of individual tasks. In the 1990s, Eliyahu Goldratt proposed just such a technique in his business novel, Critical Chain. This post presents a short, yet comprehensive introduction to Goldratt’s critical chain method.

[An Aside: Before proceeding any further I should mention that Goldratt formulated the critical chain method within the framework of his Theory of Constraints (TOC). I won’t discuss TOC in this article, mainly because of space limitations. Moreover, an understanding of TOC isn’t really needed to understand the critical chain method. For those interested in learning about TOC, the best starting point is Goldratt’s business novel, The Goal.]

I begin with a discussion of some general characteristics of activity or task estimates, highlighting the reason why task estimators tend to pad up their estimates.  This is followed by a discussion on why the buffers (or safety) that estimators build into individual activities don’t help  – i.e. why projects come in late despite the fact that most people add considerable safety factors on to their activity estimates. This then naturally leads on to the heart of the matter:  how buffers should be added in order to protect schedules effectively.

Characteristics of activity duration estimates

(Note:  Portions of this section have been published previously in my post on the inherent uncertainty of project task estimates)

Consider an activity that you do regularly – such as getting ready in the morning. You have a pretty good idea how long the activity takes on average. Say, it takes you an hour on average to get ready – from when you get out of bed to when you walk out of your front door. Clearly, on a particular day you could be super-quick and finish in 45 minutes, or even 40 minutes. However, there’s a lower limit to the early finish – you can’t get ready in 0 minutes! On the other hand, there’s really no upper limit. On a bad day you could take a few hours. Or if you slip in the shower and hurt your back, you may not make it at all.

If we were to plot the probability of activity completion for this example as a function of time, it might look something like what I’ve depicted in Figure 1. The distribution starts at a non-zero cutoff (corresponding to the minimum time for the activity); increases to a maximum (corresponding to the most probable time); and then falls off – rapidly at first, then with a long, slowly decaying tail. The mean (or average) of the distribution is located to the right of the maximum because of the long tail. In the example, t_{0} (30 mins) is the minimum time for completion, so the probability of finishing within 30 mins is 0%. There’s a 50% probability of completion within an hour, an 80% probability of completion within 2 hours and a 90% probability of completion within 3 hours. The large values for t_{80} and t_{90} compared to t_{50} are a consequence of the long tail. OK, this particular example may be an exaggeration – but you get my point: if you want to be really sure of completing any activity, you have to add a lot of safety because there’s a chance that you may “slip in the shower”, so to speak.

Figure 1: Probability distribution function for an activity

It turns out that many phenomena can be modeled by this kind of long-tailed distribution. Some of the better known long-tailed distributions include the lognormal and power law distributions. A quick (but admittedly informal) review of the project management literature revealed that lognormal distributions are more commonly used than power laws to model activity duration uncertainties. This may be because lognormal distributions have a finite mean and variance, whereas power law distributions can have infinite values for both (see this presentation by Michael Mitzenmacher, for example). [An Aside: If you’re curious as to why infinities are possible in the latter, it is because power laws decay more slowly than lognormal distributions – i.e. they have “fatter” tails, and hence enclose larger (even infinite) areas.] In any case, regardless of the exact form of the distribution for activity estimates, what’s important and non-controversial is the short cutoff, the peak and the long, decaying tail.
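The shape described above is easy to see numerically. The sketch below samples a lognormal distribution with illustrative parameters (a median of roughly 60 minutes, echoing the getting-ready example; the spread is an arbitrary choice) and reads off t_{50}, t_{80} and t_{90}, showing how far the higher percentiles sit from the median and why the mean lies to the right of it:

```python
import math
import random

random.seed(42)

# Lognormal with median ~60 minutes; sigma chosen (arbitrarily) to give a long tail.
mu, sigma = math.log(60), 0.8
samples = sorted(random.lognormvariate(mu, sigma) for _ in range(100_000))

def percentile(p):
    """Return the p-th percentile of the sorted samples."""
    return samples[int(p / 100 * len(samples))]

t50, t80, t90 = percentile(50), percentile(80), percentile(90)
mean = sum(samples) / len(samples)

print(f"t50 = {t50:.0f} min, t80 = {t80:.0f} min, t90 = {t90:.0f} min")
print(f"mean = {mean:.0f} min (to the right of the median, thanks to the tail)")
```

With these parameters, t_{80} and t_{90} come out roughly double and triple the median – which is exactly the gap an estimator is covering when they quote a “safe” number.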

Most activity estimators are intuitively aware of the consequences of the long tail. They therefore add a fair amount of “air” or safety in their estimates. Goldratt suggests that typical activity estimates tend to correspond to t_{80} or t_{90}. Despite this, real life projects still have difficulty in maintaining schedules. Why this is so is partially answered in the next section.

Delays accumulate; gains don’t

A schedule is essentially made up of several activities (of varying complexity and duration) connected sequentially or in parallel. What are the implications of uncertain activity durations on a project schedule? Well, let’s look at the case of sequential and parallel steps separately:

Sequential steps: If an activity finishes early, the successor activity rarely starts right away. More often, the successor activity starts only when it was originally scheduled to. Usually this happens because the resource responsible for the successor activity is not free – or hasn’t been told about the early finish of the predecessor activity. On the other hand, if an activity finishes late, the start of the successor activity is delayed by at least the same amount as the delay. The upshot of all this is that – delays accumulate but early finishes are rarely taken advantage of. So, in a long chain of sequential activities, you can be pretty sure that there will be delays.

Parallel steps: In this case, the longest duration activity dictates the finish time. For example, suppose we have three parallel activities that take 5 days each. If one of them ends up taking 10 days, the net effect is that the three activities, taken together, will complete only after 10 days. In contrast, an early finish will have no effect unless all activities finish early (and by the same amount!). Again we see that delays accumulate; early finishes don’t.

The above discussion assumed that activities are independent. In a real project activities can be highly dependent. In general this tends to make things worse – a delay in an activity is usually magnified by a dependent successor activity.
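The asymmetry at a merge point can be demonstrated with a small simulation (the numbers are illustrative, not from any real project). Each of three parallel tasks below is on time half the time and late half the time; because the slowest task dictates the finish, the merged result is late far more than half the time:

```python
import random

random.seed(1)

TRIALS = 100_000
ON_TIME, LATE = 5, 7  # planned vs delayed duration, in days

late_count = 0
for _ in range(TRIALS):
    # Three independent parallel tasks, each a coin flip between on time and late.
    durations = [random.choice((ON_TIME, LATE)) for _ in range(3)]
    if max(durations) > ON_TIME:  # the slowest parallel task dictates the finish
        late_count += 1

# Each task alone is late 50% of the time; the merged result is late
# 1 - 0.5**3 = 87.5% of the time.
print(f"merged path late in {late_count / TRIALS:.1%} of trials")
```

The merged path inherits the worst of its branches, so individual on-time performance of 50% degrades to roughly 12.5% at the merge – delays accumulate, gains don’t.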

This partially explains why projects come in late. However it’s not the whole story. According to Goldratt, there are a few other factors that lead to dissipation of safety. I discuss these next.

Other time wasters

In the previous section we saw that dependencies between activities can eat into safety significantly because delays accumulate while gains don’t. There are a couple of other ways safety is wasted. These are:

Multitasking: It is recognised that multitasking – i.e. working on more than one task concurrently – introduces major delays in completing tasks. See these articles by Johanna Rothman and Joel Spolsky for a discussion of why this is so. I’ve discussed techniques to manage multitasking in an earlier post.

Student syndrome: This should be familiar to anyone who’s been a student. When saddled with an assignment, the common tendency is to procrastinate until the last moment. This happens on projects as well. “Ah, there’s so much time. I’ll start later…” Until, of course, there isn’t very much time at all.

Parkinson’s Law: This law states that “work expands to fill the allocated time.” It is most often a consequence of there being no incentive to finish a task early. In fact, there’s a strong disincentive to finish early, because the early finisher may be a) accused of having overestimated the task or b) rewarded by being allocated more work. Consequently people tend to adjust their pace of work to just make the scheduled delivery date, thereby making the schedule a self-fulfilling prophecy.

Any effective project management system must address and resolve the above issues. The critical chain method does just that. Now with the groundwork in place, we can move on to a discussion of the technique. We’ll do this in two steps. First, we discuss the special case in which there is no resource contention – i.e. multitasking does not occur. The second, more general, case discusses the situation in which there is resource contention.

The critical chain – special case

In this section we look at the case where there’s no resource contention in the project schedule. In this (ideal) situation, where every resource is available when required, each task performer is ready to start work on a specific task just as soon as all its predecessor tasks are complete. Sure, we’ll  also need to put in place a process to notify successor task performers about when they need to be ready to start work,  but I’ll discuss this notification process a little later in this section. Let’s tackle Parkinson and the procrastinators first.

Preventing the student syndrome and Parkinson’s Law

To cure habitual procrastinators and followers of Parkinson, Goldratt suggests that project task duration estimates be based on a 50% probability of completion. This corresponds to an estimate that is equal to t_{50} for an activity (you may want to have another look at Figure 1 to remind yourself of what this means). Remember, as discussed earlier, estimates tend to be based on t_{80} or t_{90}, both of which are significantly larger than t_{50} because of the nature of the distribution. The reduction in time should encourage task performers to start the task on schedule, thereby avoiding the student syndrome. Further, it should also discourage people from deliberately slowing their work pace, thereby preventing Parkinson from taking hold.

As discussed earlier, a t_{50} estimate implies there’s a 50% chance that the task will not complete on time. So, to reassure task estimators / performers, Goldratt recommends implementing the following actions:

  1. Removal of individual activity completion dates from the schedule altogether. The only important date is the project completion date.
  2. No penalties for going over the t_{50} estimate. Management must accept that the estimate is based on t_{50}, so the activity is expected to overrun the estimate 50% of the time.

The above points should be explained clearly to project team members before attempting to elicit t_{50} estimates from them.

So, how does one get reliable  t_{50} estimates? Here are some approaches:

  1. Assume that the initial estimates obtained from team members are t_{80} or t_{90}, so simply halve these to get a rough t_{50}. This is the approach Goldratt recommends. However, I’m not a fan of this method because it is sure to antagonise folks.
  2. Another option is to ask the estimator how long a task is going to take. They’ll come back to you with a number. This is likely to be their t_{80} or t_{90}. Then ask them for their t_{50}, explaining what it means (i.e. estimate which you have a 50% chance of going over). They should come back to you with a smaller number. It may not be half the original estimate or less, but it should be significantly smaller.
  3. Yet another option is to calibrate estimators’ abilities to predict task durations based on their history (i.e. based on how good their earlier estimates were). In the absence of prior data, one can quantify an estimator’s reliability in making judgements by asking him or her to answer a series of trivia questions, giving an estimated probability of being correct along with each answer. An individual is said to be calibrated if the fraction of questions correctly answered coincides with (or is close to) their stated probability estimates. In theory, a calibrated individual’s duration estimates should be pretty good. However, it is questionable whether calibration as determined through trivia questions carries over to real-world estimates. See this site for more on evaluating calibration.
  4. Finally, project managers can use Monte Carlo simulations to estimate task durations. The hard part here is coming up with a probability distribution for the task duration. One commonly used approach is to ask task estimators to come up with best case, worst case and most likely estimates, and then fit these to a probability distribution. There are at least two problems with this approach: a) the only sensible fit to a three point estimate is a triangular distribution, but this isn’t particularly good because it ignores the long tail and b) the estimates still need to be quality assured through independent checks (historical comparison, for example)  or via calibration as discussed above – else the distribution is worthless.  See this paper for more on the use of Monte Carlo simulations in project management (Note added on 23 Nov 2009:  See my post on Monte Carlo simulations of project task durations for a quick introduction to the technique)
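As a rough sketch of option 4 (with made-up three-point estimates for three sequential tasks), one can sample each task from a triangular distribution and read percentiles off the simulated totals. As noted above, the triangular distribution understates the long tail, so treat the output as indicative only:

```python
import random

random.seed(7)

# Hypothetical three-point estimates (best, most likely, worst), in days,
# for three sequential tasks.
tasks = [(4, 5, 9), (8, 10, 16), (2, 3, 6)]

TRIALS = 50_000
totals = sorted(
    # random.triangular takes (low, high, mode)
    sum(random.triangular(low, high, mode) for (low, mode, high) in tasks)
    for _ in range(TRIALS)
)

t50 = totals[TRIALS // 2]
t90 = totals[int(0.9 * TRIALS)]
print(f"project t50 = {t50:.1f} days, t90 = {t90:.1f} days")
```

Note that the simulated t_{50} for the whole chain is noticeably higher than the sum of the most likely values, because each task’s distribution is skewed to the right – another way of seeing why “most likely” estimates summed along a chain are optimistic.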

Folks who’ve read my articles on cognitive biases in project management (see this post and this one)  may be wondering how these fit in to the above argument. According to Goldratt, most people tend to offer their t_{80} or t_{90} numbers,  rather than their t_{50} ones. The reason this happens is that folks tend to remember the instances when things went wrong,  so they pad up their estimates to avoid getting burned again – a case of the availability bias in action.

Getting team members to come up with reliable t_{50} numbers depends very much on how safe they feel doing so. It is important that management understands that there is a 50% chance of not meeting the t_{50} estimate for any individual task; the only important deadline is the completion of the project. This is why Goldratt and other advocates of the critical chain method emphasise that a change in organisational culture is required for the technique to work in practice. Details of how one might implement this change are out of scope for an introductory article, but readers should be aware that the biggest challenges are not technical ones.

The resource buffer

Readers may have noticed a problem arising from the foregoing discussion of t_{50} estimates: if there is no completion date for a task, how does a successor task performer know when he or she needs to be ready to start work? This problem is handled via a notification process that works as follows: the predecessor task performer notifies successor task performers of the expected completion date at regular, predetermined intervals. Further, a final confirmation is given a day or two before task completion, so that all successor task performers are ready to start work exactly when needed. Goldratt calls this notification process the resource buffer. It is a simple yet effective method of ensuring that a task starts exactly when it should. Early finishes are no longer wasted!

The project buffer

Alright, so now we’ve reduced activity estimates, removed completion dates for individual tasks and ensured that resources are positioned to pick up tasks when they have to. What remains? Well, the most important bit really – the safety! Since tasks now only have a 50% chance of completion within the estimated time, we need to put safety in somewhere. The question is, where should it go? The answer lies in recognising that the bottleneck (or constraint) in a project is the critical path. Any delay in the critical path necessarily implies a delay in the project. Clearly, we need to add the safety somewhere on the critical path. I hope the earlier discussion has convinced you that adding safety to individual tasks is an exercise in futility. Goldratt’s insight was the following: safety should be added to the end of the critical path as a non-activity buffer. He calls this the project buffer. If any particular activity is delayed, the project manager “borrows” time from the project buffer and adds it on to the offending activity. On the other hand, if an activity finishes early the gain is added to the project buffer. Figure 2 depicts a project network diagram with the project buffer added on to the critical path (C1-C2-C3 in the figure).


What size should the buffer be? As a rule of thumb, Goldratt proposed that the buffer should be 50% of the safety that was removed from the tasks. Essentially this makes the critical path 75% as long as it would have been with the original (t_{80} or t_{90}) estimates. Other methods of buffer estimation are discussed in this book on critical chain project management.
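Goldratt’s rule of thumb is easy to state in code. The task numbers below are hypothetical: assuming the original padded estimates correspond to t_{90} and the cut estimates to t_{50}, the project buffer is half the safety removed, and when each t_{50} is half its t_{90} the buffered chain works out to 75% of the original length:

```python
# Hypothetical critical-chain tasks: (t90 estimate, t50 estimate), in days.
tasks = [(10, 5), (20, 10), (8, 4), (12, 6)]

chain_length = sum(t50 for _, t50 in tasks)
safety_removed = sum(t90 - t50 for t90, t50 in tasks)
project_buffer = 0.5 * safety_removed  # Goldratt's 50% rule of thumb

buffered_chain = chain_length + project_buffer
original_chain = sum(t90 for t90, _ in tasks)

print(f"chain = {chain_length} d, buffer = {project_buffer} d, "
      f"buffered chain = {buffered_chain} d "
      f"({buffered_chain / original_chain:.0%} of the original {original_chain} d)")
```

The key design point is that the buffer is pooled: because individual task overruns and underruns partly cancel along the chain, a shared buffer can be much smaller than the sum of the individual paddings it replaces.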

The feeding buffer

As shown in Figure 2, the project buffer protects the critical path. However, delays can occur in non-critical paths as well (A1-A2 and B1-B2 in the figure). If long enough, these delays can affect subsequent critical path activities. To prevent this from happening, Goldratt suggests adding buffers at the points where non-critical paths join the critical path. He terms these feeding buffers.


Figure 3 depicts the same project network diagram as before with feeding buffers added in. Feeding buffers are sized the same way as project buffers are – i.e. based on a fraction of the safety removed from the activities on the relevant (non-critical) path.

The critical chain – a first definition

This completes the discussion of the case where there’s no resource contention. In this special case, the critical chain of the project is identical to the critical path. The activity durations for all tasks are based on t50 estimates, with the project buffer protecting the project from delays. In addition, the feeding buffers protect critical chain activities from delays in non-critical chain activities.

The critical chain – general case

Now for the more general case, where there is contention for resources. Resource contention implies that task performers are scheduled to work on multiple tasks simultaneously at one or more points along the project timeline. Although it is well recognised that multitasking is to be avoided, most algorithms for finding the critical path do not take resource contention into account. The first step, therefore, is to resource level the schedule – i.e. ensure that tasks that are to be performed by the same resource(s) are scheduled sequentially rather than simultaneously. Typically this changes the critical path from what it would otherwise be. This resource leveled critical path is the critical chain.

The above can be illustrated by modifying the example network shown in Figure 3. Assume tasks C1, B2 and A2 (marked X) are performed by the same resource. The resource leveled critical path thus changes from that shown in Figures 2 and 3 to that shown in Figure 4 (in red). As per the definition above, this is the critical chain. Notice that the feeding buffers change location, as (by definition) these have to be moved to the points where non-critical paths merge with the critical chain. The location of the project buffer remains unchanged.
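The effect of resource leveling on the example can be sketched with a toy scheduler. Everything below except the shared resource on C1, B2 and A2 is an assumption – the durations, the other resource names and the hand-picked processing order:

```python
# Toy resource-leveling sketch for the example network.
tasks = {
    #  name: (duration_days, predecessors, resource)
    "A1": (4, [], "r1"),
    "A2": (5, ["A1"], "X"),
    "B1": (3, [], "r2"),
    "B2": (6, ["B1"], "X"),
    "C1": (7, [], "X"),
    "C2": (4, ["C1"], "r3"),
    "C3": (5, ["C2"], "r4"),
}

finish = {}
resource_free = {}   # earliest time each resource is next available

# Process tasks in an order that respects precedence, with the contested
# resource X claimed by C1, then B2, then A2 (hand-ordered for this toy network).
for name in ["A1", "B1", "C1", "B2", "A2", "C2", "C3"]:
    duration, preds, resource = tasks[name]
    # A task starts once all predecessors are done AND its resource is free.
    start = max([finish[p] for p in preds] + [resource_free.get(resource, 0)])
    finish[name] = start + duration
    resource_free[resource] = finish[name]

print(finish)
```

With these made-up durations, the leveled schedule finishes at day 18 via C1 → B2 → A2 on the shared resource, even though the unleveled critical path C1-C2-C3 sums to only 16 days – the critical chain and the critical path differ, just as described above.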



This completes my introduction to the critical chain method. Before closing, I should mention that there has been some academic controversy regarding the critical chain method. In practice, though, the method seems to work well as evidenced by the number of companies offering consulting and software related to critical chain project scheduling.

I can do no better than to end with a list of online references which I’ve found immensely useful in learning about the method. Here they are, in no particular order:

Critical Chain Scheduling and Buffer Management . . . Getting Out From Between Parkinson’s Rock and Murphy’s Hard Place by Francis Patrick.

Critical Chain: a hands-on project application by Ernst Meijer.

The best place to start, however, is where it all began: Goldratt’s novel, Critical Chain.

(Note:  This essay is a revised version of my article on the critical chain,  first  published in 2007)

Written by K

August 20, 2009 at 10:50 pm

Cognitive biases as project meta-risks

with 16 comments

Introduction and background

A  comment by John Rusk on this post got me thinking about the effects of  cognitive  biases on the perception and analysis of project  risks.  A cognitive bias is a human tendency to base a judgement or decision on a flawed perception or understanding of data or events.  A recent paper suggests that cognitive biases may have played a role in some high profile project failures.   The author of the paper, Barry Shore, contends that the failures were caused by poor  decisions which could be traced back to specific biases.  A direct implication is that  cognitive biases can have a significant negative  effect on how project risks are perceived and acted upon.  If true, this has consequences for the practice of risk management in projects (and other areas, for that matter). This essay discusses the role of cognitive biases in risk analysis, with a focus on project environments.

Following the pioneering work of Daniel Kahneman and Amos Tversky, there has been a lot of applied research on the role of cognitive biases in various areas of social sciences (see Kahneman’s Nobel Prize lecture for a very readable account of his work on cognitive biases).  A lot of this research highlights the fallibility of intuitive decision making.  But even judgements ostensibly based on data are subject to cognitive biases.  An example of this is when data is misinterpreted to suit the decision-maker’s preconceptions (the so-called confirmation bias). Project risk management is largely about making decisions regarding uncertain events that might impact a project. It involves, among other things, estimating the likelihood of these events occurring and the resulting impact on the project. These estimates and the decisions based on them can be erroneous for a host of reasons.  Cognitive biases are an often overlooked, yet universal,  cause of error.

Cognitive biases as project meta-risks

So, what role do cognitive biases play in project risk analysis? Many researchers have considered specific cognitive biases as project risks: for example, in this paper, Flyvbjerg describes how the risks posed by optimism bias can be addressed using reference class forecasting (see my post on improving project forecasts for more on this). However, as suggested in the introduction, one can go further. The first point to note is that biases are part and parcel of the mental make-up of humans, so any aspect of risk management that involves human judgement is subject to bias. As such, cognitive biases may be thought of as meta-risks: risks that affect risk analyses themselves. Second, because they are part of the mental baggage of all humans, overcoming them involves understanding the thought processes that govern decision-making, rather than the externally-directed analyses that suffice for ordinary risks. The analyst has to understand how his or her perception of risks may be affected by these meta-risks.

The publicly available research and professional literature on meta-risks in business and organisational contexts is sparse. One relevant reference is a paper by Jack Gray on meta-risks in financial portfolio management.  The first few lines of the paper state,

“Meta-risks are qualitative, implicit risks that pass beyond the scope of explicit risks. Most are born out the complex interaction between the behaviour pattern of individuals and those of organizational structures” (italics mine).

Although he doesn’t use the phrase, Gray seems to be referring to cognitive biases – at least in part. This is confirmed by a reading of the paper. It describes, among other things, hubris (which roughly corresponds to the  illusion of control) and discounting evidence that conflicts with one’s views (which corresponds to confirmation bias) as meta-risks. From this (admittedly small) sampling of the literature, it seems that the notion of cognitive biases as meta-risks has some precedent.

Next, let’s look at how biases can manifest themselves as meta-risks in a project environment. To keep the discussion manageable, I’ll focus on a small set of biases:

Anchoring: This refers to the tendency of humans to rely on a single piece of information when making a decision. I have seen this manifest itself in task duration estimation – where “estimates plucked out of thin air” by management serve as an anchor for subsequent estimation by the project team. See this post for more on anchoring in project situations. Anchoring is a meta-risk because the over-reliance on a single piece of information about a risk can have an adverse effect on decisions relating to that risk.

Availability: This refers to the tendency of people to base decisions on information that can be easily recalled, neglecting potentially more important information. As an example, a project manager might give undue weight to his or her most recent professional experiences when analysing project risks. Here availability is a meta-risk because it is a barrier to an objective consideration of risks that are not immediately apparent to the analyst.

Representativeness: This refers to the tendency to make judgements based on seemingly representative, known samples. For example, a project team member might base a task estimate on another (seemingly) similar task, ignoring important differences between the two. Another manifestation of representativeness is when probabilities of events are estimated based on those of comparable, known events. An example of this is the gambler’s fallacy. This is clearly a meta-risk, especially where “expert judgement” is used as a technique to assess risk (Why? Because such judgements are invariably based on comparable tasks that the expert has encountered before.).

Selective perception: This refers to the tendency of individuals to give undue importance to data that supports their own views. Selective perception is a bias that we’re all subject to; we hear what we want to hear, see what we choose to see, and remain deaf  and blind to the rest. This is a meta-risk because it results in a skewed (or incomplete) perception of risks.

Loss Aversion: This refers to the tendency of people to give preference to avoiding losses (even small losses) over making gains. In risk analysis this might manifest itself as overcautiousness. Loss aversion is a meta-risk because it might, for instance, result in the assignment of an unreasonably large probability of occurrence to a risk.

A particularly common manifestation of loss aversion in project environments is the sunk cost bias. In situations where significant investments have been made in projects, risk analysts might be biased towards downplaying risks.

Information bias: This is the tendency of some analysts to seek as much data as they can lay their hands on prior to making a decision. The danger here is of being swamped by too much irrelevant information. Data by itself does not improve the quality of decisions (see this post by Tim van Gelder for more on the dangers of data-centrism). Over-reliance on data – especially when there is no way to determine its quality and relevance, as is often the case – can hinder risk analyses. Information bias is a meta-risk for two reasons already alluded to above: first, the data may not capture important qualitative factors; and second, the data may not be relevant to the actual risk.

I could work my way through a few more of the biases listed here, but I think I’ve already made my point: projects encompass a spectrum of organisational and technical situations, so just about any cognitive bias is a potential meta-risk.


Cognitive biases are meta-risks because they can affect decisions pertaining to risks – i.e. they are risks of risk analysis. Shore’s research suggests that the dangers posed by these meta-risks are very real: they can cause project failure. So, at a practical level, project managers need to understand how cognitive biases could affect their own risk-related judgements (or any other judgements, for that matter). The previous section provides illustrations of how selected cognitive biases can affect risk analyses; there are, of course, many more. Listing examples is illustrative, and helps make the point that cognitive biases are meta-risks. However, it is more useful and interesting to understand how biases operate and what we can do to overcome them. As I have mentioned above, overcoming biases requires an understanding of the thought processes through which humans make decisions in the face of uncertainty. Of particular interest is the role of intuition and rational thought in forming judgements, and the common mechanisms that underlie judgement-related cognitive biases. A knowledge and awareness of these mechanisms might help project managers consciously counter the operation of cognitive biases in their own decision making. I’m currently making some notes on these topics, with the intent of publishing them in a forthcoming essay – please stay tuned.


Part II of this post published here.

Written by K

August 9, 2009 at 9:59 pm
