Eight to Late

Sensemaking and Analytics for Organizations

Archive for the ‘Project Management’ Category

A gentle introduction to Monte Carlo simulation for project managers

with 6 comments

This article covers the why, what and how of Monte Carlo simulation using a canonical example from project management – estimating the duration of a small project. Before starting, however, I’d like to say a few words about the tool I’m going to use.

Despite the bad rap spreadsheets get from tech types – and I have to admit that many of their complaints are justified – the fact is, Excel remains one of the most ubiquitous “computational” tools in the corporate world. Most business professionals would have used it at one time or another. So, if you’re a project manager and want the rationale behind your estimates to be accessible to the widest possible audience, you are probably better off presenting them in Excel than in SPSS, SAS, Python, R or pretty much anything else. Consequently, the tool I’ll use in this article is Microsoft Excel. For those who know about Monte Carlo and want to cut to the chase, here’s the Excel workbook containing all the calculations detailed in later sections. However, if you’re unfamiliar with the technique, you may want to have a read of the article before playing with the spreadsheet.

In keeping with the format of the tutorials on this blog, I’ve assumed very little prior knowledge about probability, let alone Monte Carlo simulation. Consequently, the article is verbose and the tone somewhat didactic.

Introduction

Estimation is a key part of a project manager’s role. The most frequent (and consequential) estimates they are asked to deliver relate to time and cost. Often these are calculated and presented as point estimates: i.e. single numbers – as in, this task will take 3 days. Or, a little better, as two-point ranges – as in, this task will take between 2 and 5 days. Better still, many use a PERT-like approach wherein estimates are based on 3 points: best, most likely and worst case scenarios – as in, this task will take between 2 and 5 days, but it’s most likely that we’ll finish on day 3. We’ll use three-point estimates as a starting point for Monte Carlo simulation, but first, some relevant background.

It is a truism, well borne out by experience, that it is easier to estimate small, simple tasks than large, complex ones. Indeed, this is why one of the early to-dos in a project is the construction of a work breakdown structure. However, a problem arises when one combines the estimates for individual elements into an overall estimate for a project or a phase thereof. It is that a straightforward addition of individual estimates or bounds will almost always lead to a grossly incorrect estimation of overall time or cost. The reason for this is simple: estimates are necessarily based on probabilities and probabilities do not combine additively. Monte Carlo simulation provides a principled and intuitive way to obtain probabilistic estimates at the level of an entire project based on estimates of the individual tasks that comprise it.

The problem

The best way to explain Monte Carlo is through a simple worked example, so let’s consider the 4-task project shown in Figure 1. In the project, the second task is dependent on the first, and the third and fourth are dependent on the second but not on each other. The upshot of this is that the first two tasks have to be performed sequentially and the last two can be done at the same time, but can only be started after the second task is completed.

To summarise: the first two tasks must be done in series and the last two can be done in parallel.

Figure 1: A project with 4 tasks.

Figure 1 also shows the three-point estimates for each task – that is, the minimum, most likely and maximum completion times. For completeness I’ve listed them below:

  • Task 1 – Min: 2 days; Most Likely: 4 days; Max: 8 days
  • Task 2 – Min: 3 days; Most Likely: 5 days; Max: 10 days
  • Task 3 – Min: 3 days; Most Likely: 6 days; Max: 9 days
  • Task 4 – Min: 2 days; Most Likely: 4 days; Max: 7 days

OK, so that’s the situation as it is given to us. The first step to developing an estimate is to formulate the problem in a way that can be tackled using Monte Carlo simulation. This brings us to the important topic of the shape of uncertainty, aka probability distributions.

The shape of uncertainty

Consider the data for Task 1. You have been told that it most often finishes on day 4. However, if things go well, it could take as little as 2 days; but if things go badly it could take as long as 8 days. Therefore, your range of possible finish times (outcomes) is between 2 and 8 days.

Clearly, these outcomes are not all equally likely. The most likely outcome is that you will finish the task in 4 days (from what your team member has told you). Moreover, the likelihood of finishing in less than 2 days or more than 8 days is zero. If we plot the likelihood of completion against completion time, it would look something like Figure 2.

Figure 2: Likelihood of finishing on day 2, day 4 and day 8.

Figure 2 raises a couple of questions:

  1. What are the relative likelihoods of completion for all intermediate times – i.e. those between 2 and 4 days and between 4 and 8 days?
  2. How can one quantify the likelihood of intermediate times? In other words, how can one get a numerical value of the likelihood for all times between 2 and 8 days? Note that we know from the earlier discussion that this must be zero for any time less than 2 or greater than 8 days.

The two questions are actually related. As we shall soon see, once we know the relative likelihood of completion at all times (compared to the maximum), we can work out its numerical value.

Since we don’t know anything about intermediate times (I’m assuming there is no other historical data available), the simplest thing to do is to assume that the likelihood increases linearly (as a straight line) from 2 to 4 days and decreases in the same way from 4 to 8 days as shown in Figure 3. This gives us the well-known triangular distribution.

Jargon Buster: The term distribution is simply a fancy word for a plot of likelihood vs. time.

Figure 3: Triangular distribution fitted to the points in Figure 2

Of course, this isn’t the only possibility; there are an infinite number of others. Figure 4 is another (admittedly weird) example.

Figure 4: Another distribution that fits the points in Figure 2.

Further, it is quite possible that the upper limit (8 days) is not a hard one. It may be that in exceptional cases the task could take much longer (for example, if your team member calls in sick for two weeks) or even not be completed at all (for example, if she then leaves for that mythical greener pasture).  Catering for the latter possibility, the shape of the likelihood might resemble Figure 5.

Figure 5: A distribution that allows for a very long (potentially infinite) completion time

The main takeaway from the above is that uncertainties should be expressed as shapes rather than numbers, a notion popularised by Sam Savage in his book, The Flaw of Averages.

[Aside: you may have noticed that all the distributions shown above are skewed to the right – that is, they have a long tail. This is a general feature of distributions that describe the time (or cost) of project tasks. It would take me too far afield to discuss why this is so, but if you’re interested you may want to check out my post on the inherent uncertainty of project task estimates.]

From likelihood to probability

Thus far, I have used the word “likelihood” without bothering to define it.  It’s time to make the notion more precise.  I’ll begin by asking the question: what common sense properties do we expect a quantitative measure of likelihood to have?

Consider the following:

  1. If an event is impossible, its likelihood should be zero.
  2. The sum of likelihoods of all possible events should equal complete certainty. That is, it should be a constant. As this constant can be anything, let us define it to be 1.

In terms of the example above, if we denote time by t and the likelihood by P(t)  then:

P(t) = 0 for t < 2 and t > 8

And

\sum_{t}P(t) = 1 where 2\leq t\leq 8

where \sum_{t} denotes the sum of all non-zero likelihoods – i.e. those that lie between 2 and 8 days. In simple terms, this is the area enclosed by the likelihood curves and the x axis in figures 3 to 5. (Technical note: since t is a continuous variable, this should be denoted by an integral rather than a simple sum, but this is a technicality that need not concern us here.)

P(t) is, in fact, what mathematicians call probability – which explains why I have used the symbol P rather than L. Now that I’ve explained what it is, I’ll use the word “probability” instead of “likelihood” in the remainder of this article.

With these assumptions in hand, we can now obtain numerical values for the probability of completion for all times between 2 and 8 days. This can be figured out by noting that the area under the probability curve (the triangle in figure 3 and the weird shape in figure 4) must equal 1, and we’ll do this next.  Indeed, for the problem at hand, we’ll assume that all four task durations can be fitted to triangular distributions. This is primarily to keep things  simple. However, I should emphasise that you can use any shape so long as you can express it mathematically, and I’ll say more about this towards the end of this article.

The triangular distribution

Let’s look at the estimate for Task 1. We have three numbers corresponding to the minimum, most likely and maximum times. To keep the discussion general, we’ll call these t_{min}, t_{ml} and t_{max} respectively (we’ll get back to our estimator’s specific numbers later).

Now, what about the probabilities associated with each of these times?

Since t_{min} and t_{max} correspond to the minimum and maximum times, the probability associated with these is zero. Why? Because if it wasn’t zero, there would be a non-zero probability of completion for a time less than t_{min} or greater than t_{max} – which isn’t possible. [Note: this is a consequence of the assumption that the probability varies continuously – so if it takes on a non-zero value, p_{0}, at t_{min} then it must take on a value slightly less than p_{0} – but greater than 0 – at a time slightly smaller than t_{min}.] As far as the most likely time, t_{ml}, is concerned: by definition, the probability attains its highest value at time t_{ml}. So, assuming the probability can be described by a triangular function, the distribution must have the form shown in Figure 6 below.

Figure 6: Triangular distribution redux.

For the simulation, we need to know the equation describing the above distribution.  Although Wikipedia will tell us the answer in a mouse-click, it is instructive to figure it out for ourselves. First, note that the area under the triangle must be equal to  1 because the task must finish at some time between t_{min} and t_{max}.   As a consequence we have:

\frac{1}{2}\times{base}\times{altitude}=\frac{1}{2}\times{(t_{max}-t_{min})}\times{p(t_{ml})}=1\ldots\ldots{(1)}

where p(t_{ml}) is the probability corresponding to time t_{ml}.  With a bit of rearranging we get,

p(t_{ml})=\frac{2}{(t_{max}-t_{min})}\ldots\ldots(2)

To derive the probability for any time t lying between t_{min} and t_{ml}, we note that:

\frac{(t-t_{min})}{p(t)}=\frac{(t_{ml}-t_{min})}{p(t_{ml})}\ldots\ldots(3)

This is a consequence of the fact that the ratios on either side of equation (3)  are  equal to the slope of the line joining the points (t_{min},0) and (t_{ml}, p(t_{ml})).

Figure 7

Substituting (2) in (3) and simplifying a bit, we obtain:

p(t)=\frac{2(t-t_{min})}{(t_{ml}-t_{min})(t_{max}-t_{min})}\dots\ldots(4) for t_{min}\leq t \leq t_{ml}

In a similar fashion one can show that the probability for times lying between t_{ml} and t_{max} is given by:

p(t)=\frac{2(t_{max}-t)}{(t_{max}-t_{ml})(t_{max}-t_{min})}\dots\ldots(5) for t_{ml}\leq t \leq t_{max}

Equations 4 and 5 together describe the probability distribution function (or PDF)  for all times between t_{min} and t_{max}.
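If you find code easier to follow than algebra, here is a minimal sketch of equations (4) and (5) in Python (the language and function name are my choices for illustration; the workbook itself implements the same logic in Excel formulas):

def triangular_pdf(t, t_min, t_ml, t_max):
    """Probability density for the triangular distribution (equations 4 and 5)."""
    if t < t_min or t > t_max:
        return 0.0
    if t <= t_ml:
        # rising edge of the triangle: equation (4)
        return 2.0 * (t - t_min) / ((t_ml - t_min) * (t_max - t_min))
    # falling edge of the triangle: equation (5)
    return 2.0 * (t_max - t) / ((t_max - t_ml) * (t_max - t_min))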

As it turns out, in Monte Carlo simulations we don’t work directly with the probability distribution function. Instead, we work with the cumulative distribution function (or CDF), which is the probability, P, that the task is completed by time t. To reiterate, the PDF, p(t), is the probability of the task finishing at time t whereas the CDF, P(t), is the probability of the task completing by time t. The CDF, P(t), is essentially a sum of all probabilities between t_{min} and t. For t_{min}\leq t \leq t_{ml}, this is the area of the triangle with apexes at (t_{min}, 0), (t, 0) and (t, p(t)). Using the formula for the area of a triangle (1/2 base times height) and equation (4), we get:

P(t)=\frac{(t-t_{min})^2}{(t_{ml}-t_{min})(t_{max}-t_{min})}\ldots\ldots(6) for t_{min}\leq t \leq t_{ml}

Noting that for t \geq t_{ml}, the area under the curve equals the total area minus the area enclosed by the triangle with base between t and t_{max}, we have:

P(t)=1- \frac{(t_{max}-t)^2}{(t_{max}-t_{ml})(t_{max}-t_{min})}\ldots\ldots(7) for t_{ml}\leq t \leq t_{max}

As expected,  P(t)  starts out with a value 0 at t_{min} and then increases monotonically, attaining a value of 1 at t_{max}.
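Again, for those who prefer code, here is a Python sketch of equations (6) and (7) – a companion to the triangular_pdf sketch above rather than anything taken from the workbook:

def triangular_cdf(t, t_min, t_ml, t_max):
    """Probability that the task is completed by time t (equations 6 and 7)."""
    if t <= t_min:
        return 0.0
    if t >= t_max:
        return 1.0
    if t <= t_ml:
        # equation (6)
        return (t - t_min) ** 2 / ((t_ml - t_min) * (t_max - t_min))
    # equation (7)
    return 1.0 - (t_max - t) ** 2 / ((t_max - t_ml) * (t_max - t_min))

For the Task 1 numbers (t_{min}=2, t_{ml}=4, t_{max}=8), triangular_cdf(4.503, 2, 4, 8) comes out at roughly 0.49, which is consistent with the inverse transform example discussed later.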

To end this section, let’s plug in the numbers quoted by our estimator for Task 1: t_{min}=2, t_{ml}=4 and t_{max}=8. The resulting PDF and CDF are shown in figures 8 and 9.

Figure 8: PDF for triangular distribution (tmin=2, tml=4, tmax=8)

Figure 9 – CDF for triangular distribution (tmin=2, tml=4, tmax=8)

Monte Carlo in a minute

Now with all that conceptual work done, we can get to the main topic of this post:  Monte Carlo estimation. The basic idea behind Monte Carlo is to simulate the entire project (all 4 tasks in this case) a large number N (say 10,000) times and thus obtain N overall completion times.  In each of the N trials, we simulate each of the tasks in the project and add them up appropriately to give us an overall project completion time for the trial.  The resulting N overall completion times will all be different, ranging from the sum of the minimum completion times to the sum of the maximum completion times.  In other words, we will obtain the PDF and CDF for the overall completion time, which will enable us to answer questions such as:

  • How likely is it that the project will be completed within 17 days?
  • What’s the estimated time for which I can be 90% certain that the project will be completed? For brevity, I’ll call this the 90% completion time in the rest of this piece.

“OK, that sounds great”, you say, “but how exactly do we simulate a single task”?

Good question, and I was just about to get to that…

Simulating a single task using the CDF

As we saw earlier, the CDF for the triangular distribution has an S shape and ranges from 0 to 1 in value. It turns out that this shape is characteristic of all CDFs, regardless of the details of the underlying PDF. Why? Because the cumulative probability must lie between 0 and 1 (remember, probabilities can never exceed 1, nor can they be negative) and can only increase as time increases.

OK, so to simulate a task, we:

  • generate a random number between 0 and 1; this corresponds to the probability that the task will finish by time t.
  • find the time, t, that corresponds to this value of the probability. This is the completion time for the task for this trial.

Incidentally, this method is called inverse transform sampling.

An example might help clarify how inverse transform sampling works. Assume that the random number generated is 0.4905. From the CDF for the first task, we see that this value of probability corresponds to a completion time of 4.503 days, which is the completion time for this trial (see Figure 10). Simple!

Figure 10: Illustrating inverse transform sampling

In this case we found the time directly from the computed CDF. That’s not too convenient when you’re simulating the project 10,000 times. Instead, we need a programmable math expression that gives us the time corresponding to a given probability directly. This can be obtained by solving equations (6) and (7) for t. Some straightforward algebra yields the following two expressions for t:

t = t_{min} + \sqrt{P(t)(t_{ml} -  t_{min})(t_{max} - t_{min})} \ldots\ldots(8) for t_{min}\leq t \leq t_{ml}

And

t = t_{max} - \sqrt{[1-P(t)](t_{max} -  t_{ml})(t_{max} - t_{min})} \ldots\ldots(9) for t_{ml}\leq t \leq t_{max}

These can be easily combined in a single Excel formula using an IF function, and I’ll show you exactly how in a minute. Yes, we can now finally get down to the Excel simulation proper and you may want to download the workbook if you haven’t done so already.
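If you’d like to see the logic outside Excel as well, here is a hedged Python sketch of the sampling step – random.random() plays the role of RAND(), and the if/else mirrors the IF() formula in the workbook (the function name is mine):

import math
import random

def sample_completion_time(t_min, t_ml, t_max):
    """Draw one simulated completion time via inverse transform sampling (equations 8 and 9)."""
    p = random.random()                       # uniform random number between 0 and 1, like RAND()
    p_ml = (t_ml - t_min) / (t_max - t_min)   # cumulative probability at the most likely time
    if p < p_ml:
        # equation (8): the sampled time falls on the rising part of the triangle
        return t_min + math.sqrt(p * (t_ml - t_min) * (t_max - t_min))
    # equation (9): the sampled time falls on the falling part of the triangle
    return t_max - math.sqrt((1.0 - p) * (t_max - t_ml) * (t_max - t_min))

As a sanity check, a probability of 0.4905 maps to a completion time of about 4.5 days for Task 1, matching the reading off the CDF in Figure 10.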

The simulation

Open up the workbook and focus on the first three columns of the first sheet to begin with. These simulate the first task in Figure 1, which also happens to be the task we have used to illustrate the construction of the triangular distribution as well as the mechanics of Monte Carlo.

Rows 2 to 4 in columns A and B list the min, most likely and max completion times while the same rows in column C list the probabilities associated with each of the times. For t_{min} the probability is 0 and for t_{max} it is 1. The probability at t_{ml} can be calculated using equation (6) which, for t=t_{ml}, reduces to

P(t_{ml}) =\frac{(t_{ml}-t_{min})}{t_{max}-t_{min}}\ldots\ldots(10)

Rows 6 through 10005 in column A are simulated probabilities of completion for Task 1. These are obtained via the Excel RAND() function, which generates uniformly distributed random numbers lying between 0 and 1.  This gives us a list of probabilities corresponding to 10,000 independent simulations of Task 1.

The 10,000 probabilities need to be translated into completion times for the task. This is done using equation (8) or (9), depending on whether the simulated probability is less than or greater than P(t_{ml}), which is in cell C3 (and given by Equation (10) above). The conditional statement can be coded in an Excel formula using the IF() function.

Tasks 2-4 are coded in exactly the same way, with distribution parameters in rows 2 through 4 and simulation details in rows 6 through 10005 in the columns listed below:

  • Task 2 – probabilities in column D; times in column F
  • Task 3 – probabilities in column H; times in column I
  • Task 4 – probabilities in column K; times in column L

That’s basically it for the simulation of individual tasks. Now let’s see how to combine them.

For tasks in series (Tasks 1 and 2), we simply sum the completion times for each task to get the overall completion times for the two tasks.  This is what’s shown in rows 6 through 10005 of column G.

For tasks in parallel (Tasks 3 and 4), the overall completion time is the maximum of the completion times for the two tasks. This is computed and stored in rows 6 through 10005 of column N.

Finally, the overall project completion time for each simulation is then simply the sum of columns G and N (shown in column O).
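Putting the pieces together, a hedged Python equivalent of the workbook’s simulation columns might look like the sketch below; the parameters are those in Figure 1, sample_completion_time is the sketch from the earlier section, and the series/parallel logic is exactly as just described:

def simulate_project(n):
    """Simulate the 4-task project n times and return the overall completion times."""
    totals = []
    for _ in range(n):
        t1 = sample_completion_time(2, 4, 8)    # Task 1
        t2 = sample_completion_time(3, 5, 10)   # Task 2
        t3 = sample_completion_time(3, 6, 9)    # Task 3
        t4 = sample_completion_time(2, 4, 7)    # Task 4
        # Tasks 1 and 2 are in series; Tasks 3 and 4 run in parallel after Task 2
        totals.append((t1 + t2) + max(t3, t4))
    return totals

totals = simulate_project(10000)
prob_within_17 = sum(1 for t in totals if t <= 17) / len(totals)   # cf. the first question above
time_90 = sorted(totals)[int(0.9 * len(totals)) - 1]               # cf. the 90% completion time

Your numbers will differ slightly from run to run, but they should be close to the figures read off the charts in the next section.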

Sheets 2 and 3 are plots of the probability and cumulative probability distributions for overall project completion times. I’ll cover these in the next section.

Discussion – probabilities and estimates

The figure on Sheet 2 of the Excel workbook (reproduced in Figure 11 below) is the probability distribution function (PDF) of completion times. The x-axis shows the elapsed time in days and the y-axis the number of Monte Carlo trials with a completion time lying in the relevant time bin (of width 0.5 days). As an example, for the simulation shown in Figure 11, there were 882 trials (out of 10,000) with a completion time lying between 16.25 and 16.75 days. Your numbers will vary, of course, but you should have a maximum in the 16 to 17 day range and a trial number that is reasonably close to the one I got.

Figure 11: Probability distribution of completion times (N=10,000)
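If you’d like to reproduce this chart outside Excel, the binning takes only a couple of lines. A hedged sketch, reusing the totals list from the simulation sketch above (the 0.5-day bins are centred on multiples of half a day, so the bin centred at 16.5 covers times between 16.25 and 16.75):

from collections import Counter

# count the simulated completion times falling in each 0.5-day bin
histogram = Counter(round(t * 2) / 2 for t in totals)
for bin_centre in sorted(histogram):
    print(bin_centre, histogram[bin_centre])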

I’ll say a bit more about Figure 11 in the next section. For now, let’s move on to Sheet 3 of the workbook, which shows the cumulative probability of completion by a particular day (Figure 12 below). The figure shows the cumulative distribution function (CDF), which, for a given day, is the sum of the completion probabilities for all days up to and including that day.

Figure 12: Probability of completion by a particular day (N=10,000)

To reiterate a point made earlier,  the reason we work with the CDF  rather than the PDF is that we are interested in knowing the probability of completion by a particular date (e.g. it is 90% likely that we will finish by April 20th) rather than the probability of completion on a particular date (e.g. there’s a 10% chance we’ll finish on April 17th). We can now answer the two questions we posed earlier. As a reminder, they are:

  • How likely is it that the project will be completed within 17 days?
  • What’s the 90% likely completion time?

Both questions are easily answered by using the cumulative distribution chart on Sheet 3 (or Fig 12).  Reading the relevant numbers from the chart, I see that:

  • There’s a 60% chance that the project will be completed in 17 days.
  • The 90% likely completion time is 19.5 days.

How does the latter compare to the sum of the 90% likely completion times for the individual tasks? The 90% likely completion time for a given task can be calculated by solving Equation 9 for t, with appropriate values for the parameters t_{min}, t_{max} and t_{ml} plugged in, and P(t) set to 0.9. This gives the following values for the 90% likely completion times:

  • Task 1 – 6.5 days
  • Task 2 – 8.1 days
  • Task 3 – 7.7 days
  • Task 4 – 5.8 days

Summing up the first three tasks (remember, Tasks 3 and 4 are in parallel) we get a total of 22.3 days, which is clearly an overestimate. Now, with the benefit of having gone through the simulation, it is easy to see that the sum of the 90% likely completion times for individual tasks does not equal the 90% likely completion time for the sum of the relevant individual tasks – the first three tasks in this particular case. Why? Essentially because a Monte Carlo run in which the first three tasks all take as long as their (individual) 90% likely completion times is highly unlikely. Exercise: use the worksheet to estimate how likely this is.
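For those who’d like to check these figures in code rather than on the worksheet, here is a hedged sketch that reuses the sample_completion_time function from the earlier sketch. Equation (9) is the branch to invert here because 0.9 exceeds the cumulative probability at t_{ml} for every task, and the last few lines are one way to attempt the exercise: count how often the first three tasks all exceed their individual 90% likely times.

def ninety_percent_time(t_min, t_ml, t_max, p=0.9):
    """Solve equation (9) for t with P(t) = p (valid when p exceeds the cumulative probability at t_ml)."""
    return t_max - math.sqrt((1.0 - p) * (t_max - t_ml) * (t_max - t_min))

first_three = [(2, 4, 8), (3, 5, 10), (3, 6, 9)]                  # Tasks 1 to 3
t90s = [ninety_percent_time(*params) for params in first_three]   # roughly 6.5, 8.1 and 7.7 days

n, hits = 10000, 0
for _ in range(n):
    samples = [sample_completion_time(*params) for params in first_three]
    if all(s > limit for s, limit in zip(samples, t90s)):
        hits += 1
print(hits / n)   # for independent tasks this should come out near 0.1 ** 3, i.e. about 0.001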

There’s much more that can be learnt from the CDF. For example, it also tells us that the greatest uncertainty in the estimate is in the 5 day period from ~14 to 19 days because that’s the region in which the probability changes most rapidly as a function of elapsed time. Of course, the exact numbers are dependent on the assumed form of the distribution. I’ll say more about this in the final section.

To close this section, I’d like to reprise a point I mentioned earlier: that uncertainty is a shape, not a number. Monte Carlo simulations make the uncertainty in estimates explicit and can help you frame your estimates in the language of probability…and using a tool like Excel can help you explain these to non-technical people like your manager.

Closing remarks

We’ve covered a fair bit of ground: starting from general observations about how long a task might take, we saw how to construct simple probability distributions and then combine them using Monte Carlo simulation. Before I close, there are a few general points I should mention for completeness…and as a warning.

First up, it should be clear that the estimates one obtains from a simulation depend critically on the form and parameters of the distribution used. The parameters are essentially an empirical matter; they should be determined using historical data. The form of the function is another matter altogether: as pointed out in an earlier section, one cannot determine the shape of a function from a finite number of data points. Instead, one has to focus on the properties that are important. For example, is there a small but finite chance that a task can take an unreasonably long time? If so, you may want to use a lognormal distribution…but remember, you will need to find a sensible way to estimate the distribution parameters from your historical data.

Second, you may have noted from the probability distribution curve (Figure 11)  that despite the skewed distributions of the individual tasks, the distribution of the overall completion time is somewhat symmetric with a minimum of ~9 days, most likely time of ~16 days and maximum of 24 days.  It turns out that this is a general property of distributions that are generated by adding a large number of independent probabilistic variables. As the number of variables increases, the overall distribution will tend to the ubiquitous Normal distribution.

The assumption of independence merits a closer look. In the case at hand, it implies that the completion times for each task are independent of each other. As most project managers will know from experience, this is rarely the case: in real life, a task that is delayed will usually have knock-on effects on subsequent tasks. One can easily incorporate such dependencies in a Monte Carlo simulation. A formal way to do this is to introduce a non-zero correlation coefficient between tasks as I have done here. A simpler and more realistic approach is to introduce conditional inter-task dependencies. As an example, one could have an inter-task delay that kicks in only if the predecessor task takes more than 80% of its maximum time.

Thirdly, you may have wondered why I used 10,000 trials: why not 100, 1,000 or 20,000? This has to do with the tricky issue of convergence. In a nutshell, the estimates we obtain should not depend on the number of trials used. Why? Because if they did, they’d be meaningless.

Operationally, convergence means that any predicted quantity based on aggregates should not vary with the number of trials. So, if our Monte Carlo simulation has converged, our prediction of 19.5 days for the 90% likely completion time should not change substantially if I increase the number of trials from ten to twenty thousand. I did this and obtained almost the same value of 19.5 days. The average and median completion times (shown in cells Q3 and Q4 of Sheet 1) also remained much the same (16.8 days). If you wish to repeat the calculation, be sure to change the formulas on all three sheets appropriately. I was lazy and hardcoded the number of trials. Sorry!
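A rough but simple convergence check is to rerun the simulation at a few different trial counts and watch whether the aggregate estimates settle down. A hedged sketch, using the simulate_project function from the earlier sketch:

for n in (100, 1000, 10000, 20000):
    results = sorted(simulate_project(n))
    time_90 = results[int(0.9 * n) - 1]     # 90% likely completion time
    average = sum(results) / n              # average completion time
    print(n, round(time_90, 1), round(average, 1))
# if the simulation has converged, the last two columns should barely change between
# the larger trial counts (the text above quotes roughly 19.5 and 16.8 days respectively)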

Finally, I should mention that simulations can be usefully performed at a higher level than individual tasks. In their highly readable book, Waltzing With Bears: Managing Risk on Software Projects, Tom DeMarco and Timothy Lister show how Monte Carlo methods can be used for variables such as velocity, time, cost etc. at the project level as opposed to the task level. I believe it is better to perform simulations at the lowest possible level, the main reason being that it is easier, and less error-prone, to estimate individual tasks than entire projects. Nevertheless, high level simulations can be very useful if one has reliable data to base them on.

There are a few more things I could say about the usefulness of the generated distribution functions and Monte Carlo in general, but they are best relegated to a future article. This one is much too long already and I think I’ve tested your patience enough. Thanks so much for reading, I really do appreciate it and hope that you found it useful.

Acknowledgement: My thanks to Peter Holberton for pointing out a few typographical and coding errors in an earlier version of this article. These have now been fixed. I’d be grateful if readers could bring any errors they find to my attention.

Written by K

March 27, 2018 at 4:11 pm

The map and the territory – a project manager’s reflections on the Seven Bridges Walk

with 3 comments

Korzybski’s aphorism about the gap between the map and the territory tells a truth that is best understood by walking the territory.

The map

Some weeks ago my friend John and I did the Seven Bridges Walk, a 28 km affair organised annually by the NSW Cancer Council.  The route loops around a section of the Sydney shoreline, taking in north shore and city vistas, traversing seven bridges along the way. I’d been thinking about doing the walk for some years but couldn’t find anyone interested enough to commit a Sunday.  A serendipitous conversation with John a few months ago changed that.

John and I are both in reasonable shape as we are keen bushwalkers. However, the ones we do are typically in the 10 – 15 km range.  Seven Bridges, being about double that, presented a higher order challenge.  The best way to allay our concerns was to plan. We duly got hold of a map and worked out a schedule based on an average pace of 5 km per hour (including breaks), a figure that seemed reasonable at the time (Figure 1 – click on images to see full sized versions).

Figure 1: The map, the plan

Some key points:

  1. We planned to start around 7:45 am at Hunters Hill Village and have our first break at Lane Cove Village, around 5 to 6 km from the starting point. Our estimated time for this section was about an hour.
  2. The plan was to take the longer, more interesting route (marked in green). This covered bushland and parks rather than roads. The detours begin at sections of the walk marked as “Decision Points” on the map, and add a couple of kilometers to the walk, making it a round 30 km overall.
  3. If needed, we would stop at the 9 or 11 km mark (Wollstonecraft or Milson’s Point) for another break before heading on towards the city.
  4. We figured it would take us 4 to 5 hours (including breaks) to do the 18 km from Hunters Hill to Pyrmont Village in the heart of the city, so lunch would be between noon and 1 pm.
  5. The backend of the walk, the ~ 10 km from Pyrmont to Hunters Hill, would be covered at an easier pace in the afternoon. We thought this section would take us 2.5 to 3 hours giving us a finish time of around 4 pm.

A planned finish time of 4 pm meant we had enough extra time in hand if we needed it. We were very comfortable with what we’d charted out on the map.

The territory

We started on time and made our first crossing at around 8am:  Fig Tree Bridge, about a kilometer from the starting point.  John took this beautiful shot from one end, the yellow paintwork and purple Jacaranda set against the diffuse light off the Lane Cove River.

Figure 2: Lane Cove River from Fig Tree Bridge

Looking city-wards from the middle of the bridge, I got this one of a couple of morning kayakers.

Figure 3: Morning kayakers on the river

Scenes such as these convey a sense of what it was like to experience the territory, something a map cannot do.  The gap between the map and the territory is akin to the one between a plan and a project; the lived experience of a project is very different from the plan, and is also unique to each individual. Jon Whitty and Bronte van der Hoorn explore this at length in a fascinating paper that relates the experience of managing a project to the philosophy of Martin Heidegger.

The route then took us through a number of steep (but mercifully short) sections in the Lane Cove and Wollstonecraft area.  On researching these later, I was gratified to find that three are featured in the Top 10 Hill runs in Lane Cove.   Here’s a Google Street View shot of the top ranked one.  Though it doesn’t look like much, it’s not the kind of gradient you want to encounter in a long walk.

Figure 4: A bit of a climb

As we negotiated these sections, it occurred to me that part of the fun lay in not knowing they were coming up.  It’s often better not to anticipate challenges that are an unavoidable feature of the territory and deal with them as they arise.  Just to be clear, I’m talking  about routine challenges that are part of the territory, not those that are avoidable or have the potential to derail a project altogether.

It was getting to be time for that planned first coffee break. When drawing up our plan, we had assumed that all seven starting points (marked in blue in the map in Figure 1) would have cafes.  Bad assumption: the starting points were set off from the main commercial areas. In retrospect, this makes good sense: you don’t want to have thousands of walkers traipsing through a small commercial area, disturbing the peace of locals enjoying a Sunday morning coffee. Whatever the reason, the point is that a taken-for-granted assumption turned out to be wrong; we finally got our first coffee well past the 10 km mark.

Post coffee, as we continued city-wards through Lavender Street we got this unexpected view:

Figure 5: Harbour Bridge from Lavender St.

The view was all the sweeter because we realised we were close to the Harbour, well ahead of schedule (it was a little after 10 am).

The Harbour Bridge is arguably the most recognisable Sydney landmark.  So instead of yet another stereotypical shot of it, I took one that shows a walker’s perspective while crossing it:

Figure 6: A pedestrian’s view of The Bridge

The barbed wire and mesh fencing detract from what would be an absolutely breathtaking view. According to this report, the fence has been in place for safety reasons since 1934!  And yes, as one might expect, it is a sore point with tourists who come from far and wide to see the bridge.

Descriptions of things – which are but maps of a kind – often omit details that are significant. Sometimes this is done to downplay negative aspects of the object or event in question. How often have you, as a project manager, “dressed-up” reports to your stakeholders?  Not outright lies, but stretching the truth. I’ve done it often enough.

The section south of The Bridge took us through parks surrounding the newly developed Barangaroo precinct which hugs the northern shoreline of the Sydney central business district.  Another kilometer, and we were at crossing # 3, the Pyrmont Bridge in Darling Harbour:

Figure 7: Pyrmont Bridge

Though almost an hour and a half ahead of schedule, we took a short break for lunch at Darling Harbour before pressing on to Balmain and Anzac Bridge. John took this shot looking upward from Anzac Bridge:

Figure 8: View looking up from Anzac Bridge

Commissioned in 1995,  it replaced the Glebe Island Bridge, an electrically operated swing bridge constructed in 1903, which remained the main route from the city out to the western suburbs for over 90 years! As one might imagine, as the number of vehicles in the city increased many-fold from the 60s onwards, the old bridge became a major point of congestion. The Glebe Island Bridge,  now retired, is a listed heritage site.

Incidentally, this little nugget of history was related to me by John as we walked this section of the route. It’s something I would almost certainly have missed had he not been with me that day. Journeys, real and metaphoric, are often enriched by travelling companions who point out things or fill in context  that would otherwise be passed over.

Once past Anzac Bridge, the route took us off the main thoroughfare through the side streets of Rozelle. Many of these are lined by heritage buildings. Rozelle is in the throes of change as it is going to be impacted by a major motorway project.

The project reflects a wider problem in Australia: the relative neglect of public transport compared to road infrastructure. The counter-argument is that the relatively small population of the country makes the capital investments and running costs of public transport prohibitive. A wicked problem with no easy answers, but I do believe that the more sustainable option, though more expensive initially, will prove to be the better one in the long run.

Wicked problems are expected in large infrastructure projects that affect thousands of stakeholders, many of whom will have diametrically opposing views.  What is less well appreciated is that even much smaller projects – say IT initiatives within a large organisation – can have elements of wickedness that can trip up the unwary.  This is often magnified by management decisions  made on the basis of short-term expediency.

From the side streets of Rozelle, the walk took us through Callan Park, which was the site of a psychiatric hospital from 1878 to 1994 (see this article for a horrifying history of asylums in Sydney).  Some of the asylum buildings are now part of the Sydney College of The Arts.  Pending the establishment of a trust to manage ongoing use of the site, the park is currently managed by the NSW Government in consultation with the local municipality.

Our fifth crossing of the day was Iron Cove Bridge.  The cursory shot I took while crossing it does not do justice to the view; the early afternoon sun was starting to take its toll.

Figure 9: View from Iron Cove Bridge

The route then took us about a kilometer and a half through the backstreets of Drummoyne to the penultimate crossing: Gladesville Bridge, whose claim to fame is that it was for many years the longest single span concrete arch bridge in the world (another historical vignette that came to me via John). It has since been superseded by the Qinglong Railway Bridge in China.

By this time I was feeling quite perky, cheered perhaps by the realisation that we were almost done.  I took time to compose perhaps my best shot of the day as we crossed Gladesville Bridge.

Figure 10: View from Gladesville Bridge

…and here’s one of the aforementioned arch, taken from below the bridge:

Figure 11: A side view of Gladesville Bridge

The final crossing, Tarban Creek Bridge, was a short 100 metre walk from the Gladesville Bridge. We lingered mid-bridge to take a few shots as we realised the walk was coming to an end; the finish point was a few hundred metres away.

Figure 12: View from Tarban Creek Bridge

We duly collected our “Seven Bridges Completed” stamp at around 2:30 pm and headed to the local pub for a celebratory pint.

Figure 13: A well-deserved pint

Wrapping up

Gregory Bateson once wrote:

“We say the map is different from the territory. But what is the territory? Operationally, somebody went out with a retina or a measuring stick and made representations which were then put upon paper. What is on the paper map is a representation of what was in the retinal representation of the [person] who made the map; and as you push the question back, what you find is an infinite regress, an infinite series of maps. The territory never gets in at all. The territory is [the thing in itself] and you can’t do anything with it. Always the process of representation will filter it out so that the mental world is only maps of maps of maps, ad infinitum.”

One might think that a solution lies in making ever more accurate representations, but that is an exercise in futility. Indeed, as Borges pointed out in a short story:

“… In that Empire, the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province. In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it. The following Generations, who were not so fond of the Study of Cartography as their Forebears had been, saw that that vast map was Useless…”

Apart from being impossibly cumbersome, a complete map of a territory is impossible because a representation can never be the real thing. The territory remains forever ineffable; every encounter with it is unique and has the potential to  reveal new perspectives.

This is as true for a project as it is for a walk or any other experience.

Written by K

November 27, 2017 at 1:33 pm

Posted in Project Management

The law of requisite variety and its implications for enterprise IT

with 4 comments

Introduction

There are two  facets to the operation of IT systems and processes in organisations:  governance, the standards and regulations associated with a system or process; and execution, which relates to steering the actual work of the system or process in specific situations.

An example might help clarify the difference:

The purpose of project management is to keep projects on track. There are two aspects to this: one pertaining to the project management office (PMO), which is responsible for standards and regulations associated with managing projects in general, and the other relating to the day-to-day work of steering a particular project. The two sometimes work at cross-purposes. For example, successful project managers know that much of their work is about navigating their projects through the potentially treacherous terrain of their organisations, an activity that sometimes necessitates working around, or even breaking, rules set by the PMO.

Governance and steering share a common etymological root: the word kybernetes, which means steersman in Greek. It also happens to be the root word of Cybernetics, which is the science of regulation or control. In this post, I apply a key principle of cybernetics to a couple of areas of enterprise IT.

Cybernetic systems

An oft quoted example of a cybernetic system is a thermostat, a device that regulates temperature based on inputs from the environment.  Most cybernetic systems are way more complicated than a thermostat. Indeed, some argue that the Earth is a huge cybernetic system. A smaller scale example is a system consisting of a car + driver wherein a driver responds to changes in the environment thereby controlling the motion of the car.

Cybernetic systems vary widely not just in size, but also in complexity. A thermostat is concerned only with the ambient temperature whereas the driver of a car has to worry about a lot more (e.g. the weather, traffic, the condition of the road, kids squabbling in the back-seat etc.). In general, the more complex the system and its processes, the larger the number of variables associated with it. Put another way, complex systems must be able to deal with a greater variety of disturbances than simple systems.

The law of requisite variety

It turns out there is a fundamental principle – the law of requisite variety– that governs the capacity of a system to respond to changes in its environment. The law is a quantitative statement about the different types of responses that a system needs to have in order to deal with the range of  disturbances it might experience.

According to this paper, the law of requisite variety asserts that:

The larger the variety of actions available to a control system, the larger the variety of perturbations it is able to compensate.

Mathematically:

V(E) > V(D) – V(R) – K

Where V represents variety, E represents the essential variable(s) to be controlled, D represents the disturbance, R the regulation and K the passive capacity of the system to absorb shocks. The terms are explained in brief below:

V(E) represents the set of  desired outcomes for the controlled environmental variable:  desired temperature range in the case of the thermostat,  successful outcomes (i.e. projects delivered on time and within budget) in the case of a project management office.

V(D) represents the variety of disturbances the system can be subjected to (the ways in which the temperature can change, the external and internal forces on a project)

V(R) represents the various ways in which a disturbance can be regulated (the regulator in a thermostat, the project tracking and corrective mechanisms prescribed by the PMO)

K represents the buffering capacity of the system – i.e. stored capacity to deal with unexpected disturbances.
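To get a feel for the inequality, consider some purely illustrative numbers: if a function can be hit by V(D) = 10 distinct kinds of disturbance but has only V(R) = 6 distinct regulatory responses and enough slack to absorb the equivalent of K = 2 more, then the variety of outcomes cannot be brought below V(D) – V(R) – K = 10 – 6 – 2 = 2. The only ways to narrow the range of outcomes further are to increase V(R) (more ways of responding) or K (more buffering capacity).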

I won’t say any more about the law of requisite variety as it would take me too far afield; the interested and technically minded reader is referred to the link above or this paper for more.

Implications for enterprise IT

In plain English, the law of requisite variety states that only “variety can absorb variety.”  As stated by Anthony Hodgson in an essay in this book, the law of requisite variety:

…leads to the somewhat counterintuitive observation that the regulator must have a sufficiently large variety of actions in order to ensure a sufficiently small variety of outcomes in the essential variables E. This principle has important implications for practical situations: since the variety of perturbations a system can potentially be confronted with is unlimited, we should always try maximize its internal variety (or diversity), so as to be optimally prepared for any foreseeable or unforeseeable contingency.

This is entirely consistent with our intuitive expectation that the best way to deal with the unexpected is to have a range of tools and approaches at one’s disposal.

In the remainder of this piece, I’ll focus on the implications of the law for an issue that is high on the list of many corporate IT departments: the standardization of  IT systems and/or processes.

The main rationale behind standardizing an IT process is to handle all possible demands (or use cases) via a small number of predefined responses. When put this way, the connection to the law of requisite variety is clear: a request made upon a function such as a service desk or project management office (PMO) is a disturbance, and the way the function regulates or responds to it determines the outcome.

Requisite variety and the service desk

A service desk is a good example of a system that can be standardized. Although users may initially complain about having to log a ticket instead of calling Nathan directly, in time they get used to it, and may even start to see the benefits…particularly when Nathan goes on vacation.

The law of requisite variety tells us that successful standardization requires that all possible demands made on the system be known and regulated by the V(R) term in the equation above. In the case of a service desk this is dealt with by a hierarchy of support levels. 1st level support deals with routine calls (incidents and service requests in ITIL terminology) such as system access and simple troubleshooting. Calls that cannot be handled by this tier are escalated to the 2nd and 3rd levels as needed. The assumption here is that, between them, the three support tiers should be able to handle the majority of calls.

Slack (the K term) relates to unexploited capacity. Although needed in order to deal with unexpected surges in demand, slack is expensive to carry when one doesn’t need it. Given this, it makes sense to incorporate such scenarios into the repertoire of the standard system responses (i.e. the V(R) term) whenever possible. One way to do this is to anticipate surges in demand and hire temporary staff to handle them. Another way is to deal with infrequent scenarios outside the system – i.e. deem them out of scope for the service desk.

Service desk standardization is thus relatively straightforward to achieve provided:

  • The kinds of calls that come in are largely predictable.
  • The work can be routinized.
  • All non-routine work – such as an application enhancement request or a demand for a new system – is dealt with outside the system via (say) a change management process.

All this will be quite unsurprising and obvious to folks working in corporate IT. Now  let’s see what happens when we apply the law to a more complex system.

Requisite variety and the PMO

Many corporate IT leaders see the establishment of a PMO as a way to control costs and increase efficiency of project planning and execution.   PMOs attempt to do this by putting in place governance mechanisms. The underlying cause-effect assumption is that if appropriate rules and regulations are put in place, project execution will necessarily improve.  Although this sounds reasonable, it often does not work in practice: according to this article, a significant fraction of PMOs fail to deliver on the promise of improved project performance. Consider the following points quoted directly from the article:

  • “50% of project management offices close within 3 years (Association for Project Mgmt)”
  • “Since 2008, the correlated PMO implementation failure rate is over 50% (Gartner Project Manager 2014)”
  • “Only a third of all projects were successfully completed on time and on budget over the past year (Standish Group’s CHAOS report)”
  • “68% of stakeholders perceive their PMOs to be bureaucratic     (2013 Gartner PPM Summit)”
  • “Only 40% of projects met schedule, budget and quality goals (IBM Change Management Survey of 1500 execs)”

The article goes on to point out that the main reason for the statistics above is that there is a gap between what a PMO does and what the business expects it to do. For example, according to the Gartner review quoted in the article over 60% of the stakeholders surveyed believe their PMOs are overly bureaucratic.  I can’t vouch for the veracity of the numbers here as I cannot find the original paper. Nevertheless, anecdotal evidence (via various articles and informal conversations) suggests that a significant number of PMOs fail.

There is a curious contradiction between the case of the service desk and that of the PMO. In the former, process and methodology seem to work whereas in the latter they don’t.

Why?

The answer, as you might suspect, has to do with variety.  Projects and service requests are very different beasts. Among other things, they differ in:

  • Duration: A project typically goes over many months whereas a service request has a lifetime of days,
  • Technical complexity: A project involves many (initially ill-defined) technical tasks that have to be coordinated and whose outputs have to be integrated. A service request typically consists of one (or a small number) of well-defined tasks.
  • Social complexity: A project involves many stakeholder groups, with diverse interests and opinions. A service request typically involves considerably fewer stakeholders, with limited conflicts of opinions/interests.

It is not hard to see that these differences increase variety in projects compared to service requests. The reason that standardization (usually) works for service desks but (often) fails for PMOs is that PMOs are subjected to a greater variety of disturbances than service desks.

The key point is that the increased variety in the case of the PMO precludes standardisation. As the law of requisite variety tells us, there are two ways to deal with variety: regulate it or adapt to it. Most PMOs take the regulation route, leading to over-regulation and outcomes that are less than satisfactory. This is exactly what is reflected in the complaint about PMOs being overly bureaucratic. The simple and obvious solution is for PMOs to be more flexible – specifically, they must be able to adapt to the ever-changing demands made upon them by their organisations’ projects. In terms of the law of requisite variety, PMOs need to have the capacity to change the system response, V(R), on the fly. In practice this means recognising the uniqueness of requests by avoiding reflex, cookie-cutter responses that characterise bureaucratic PMOs.

Wrapping up

The law of requisite variety is a general principle that applies to any regulated system.  In this post I applied the law to two areas of enterprise IT – service management and project governance – and  discussed why standardization works well  for the former but less satisfactorily for the latter. Indeed, in view of the considerable differences in the duration and complexity of service requests and projects, it is unreasonable to expect that standardization will work well for both.  The key takeaway from this piece is therefore a simple one: those who design IT functions should pay attention to the variety that the functions will have to cope with, and bear in mind that standardization works well only if variety is known and limited.

Written by K

December 12, 2016 at 9:00 pm

Improving decision-making in projects

with 5 comments

An irony of organisational life is that the most important decisions on projects (or any other initiatives) have to be made at the start, when ambiguity is at its highest and information availability lowest. I recently gave a talk at the Pune office of BMC Software on improving decision-making in such situations.

The talk was recorded and simulcast to a couple of locations in India. The folks at BMC very kindly sent me a copy of the recording with permission to publish it on Eight to Late. Here it is:


Based on the questions asked and the feedback received, I reckon that a number of people found the talk  useful. I’d welcome your comments/feedback.

Acknowledgements: My thanks go out to Gaurav Pal, Manish Gadgil and Mrinalini Wankhede for giving me the opportunity to speak at BMC, and to Shubhangi Apte for putting me in touch with them. Finally, I’d like to thank the wonderful audience at BMC for their insightful questions and comments.

The Risk – a dialogue mapping vignette

with one comment

Foreword

Last week, my friend Paul Culmsee conducted an internal workshop in my organisation on the theme of collaborative problem solving. Dialogue mapping is one of the tools he introduced during the workshop.  This piece, primarily intended as a follow-up for attendees,  is an introduction to dialogue mapping via a vignette that illustrates its practice (see this post for another one). I’m publishing it here as I thought it might be useful for those who wish to understand what the technique is about.

Dialogue mapping uses a notation called Issue Based Information System (IBIS), which I have discussed at length in this post. For completeness, I’ll begin with a short introduction to the notation and then move on to the vignette.

A crash course in IBIS

The IBIS notation consists of the following three elements:

  1. Issues (or questions): these are issues that are being debated. Typically, issues are framed as questions on the lines of “What should we do about X?” where X is the issue that is of interest to a group. For example, in the case of a group of executives, X might be a rapidly changing market condition whereas in the case of a group of IT people, X could be an ageing system that is hard to replace.
  2. Ideas (or positions): these are responses to questions. For example, one of the ideas offered by the IT group above might be to replace the said system with a newer one. Typically the whole set of ideas that respond to an issue in a discussion represents the spectrum of participant perspectives on the issue.
  3. Arguments: these can be Pros (arguments for) or Cons (arguments against) an idea. The complete set of arguments that respond to an idea represents the multiplicity of viewpoints on it.

Compendium is a freeware tool that can be used to create IBIS maps – it can be downloaded here.

In Compendium, IBIS elements are represented as nodes as shown in Figure 1: issues are represented by blue-green question marks, positions by yellow light bulbs, pros by green + signs and cons by red – signs. Compendium supports a few other node types, but these are not part of the core IBIS notation. Nodes can be linked only in ways specified by the IBIS grammar, as I discuss next.

Figure 1: IBIS node types

The IBIS grammar can be summarized in three simple rules:

  1. Issues can be raised anew or can arise from other issues, positions or arguments. In other words, any IBIS element can be questioned.  In Compendium notation:  a question node can connect to any other IBIS node.
  2. Ideas can only respond to questions– i.e. in Compendium “light bulb” nodes can only link to question nodes. The arrow pointing from the idea to the question depicts the “responds to” relationship.
  3. Arguments  can only be associated with ideas– i.e. in Compendium “+” and “–“  nodes can only link to “light bulb” nodes (with arrows pointing to the latter)

The legal links are summarized in Figure 2 below.

Figure 2: Legal links in IBIS

 

…and that’s pretty much all there is to it.
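In fact, the grammar is compact enough to write down in a few lines of code. Here’s a hedged Python sketch (the representation and names are mine, not Compendium’s) that encodes the three linking rules:

# allowed "source node links to target node" pairs under the IBIS grammar
LEGAL_LINKS = {
    "issue": {"issue", "idea", "pro", "con"},   # rule 1: any element can be questioned
    "idea":  {"issue"},                         # rule 2: ideas respond only to issues
    "pro":   {"idea"},                          # rule 3: arguments attach only to ideas
    "con":   {"idea"},
}

def is_legal_link(source_type, target_type):
    """True if a node of source_type may point at a node of target_type."""
    return target_type in LEGAL_LINKS.get(source_type, set())

# for example: is_legal_link("idea", "issue") is True, is_legal_link("pro", "issue") is False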

The interesting (and powerful) aspect of IBIS is that the essence of any debate or discussion can be captured using these three elements. Let me try to convince you of this claim via a vignette from a discussion on risk.

The Risk – a Dialogue Mapping vignette

“Morning all,” said Rick, “I know you’re all busy people so I’d like to thank you for taking the time to attend this risk identification session for Project X.  The objective is to list the risks that we might encounter on the project and see if we can identify possible mitigation strategies.”

He then asked if there were any questions. The head waggles around the room indicated there were none.

“Good. So here’s what we’ll do,”  he continued. “I’d like you all to work in pairs and spend 10 minutes thinking of all possible risks and then another 5 minutes prioritising.  Work with the person on your left. You can use the flipcharts in the breakout area at the back if you wish to.”

Twenty minutes later, most people were done and back in their seats.

“OK, it looks as though most people are done…Ah, Joe, Mike have you guys finished?” The two were still working on their flip-chart at the back.

“Yeah, be there in a sec,” replied Mike, as he tore off the flip-chart page.

“Alright,” continued Rick, after everyone had settled in. “What I’m going to do now is ask you all to list your top three risks. I’d also like you to tell me why they are significant and what your mitigation strategies for them are.” He paused for a second and asked, “Everyone OK with that?”

Everyone nodded, except Helen, who asked, “Isn’t it important that we document the discussion?”

“I’m glad you brought that up. I’ll make notes as we go along, and I’ll do it in a way that everyone can see what I’m writing. I’d like you all to correct me if you feel I haven’t understood what you’re saying. It is important that  my notes capture your issues, ideas and arguments accurately.”

Rick turned on the data projector, fired up Compendium and started a new map. “Our aim today is to identify the most significant risks on the project – this is our root question,” he said, as he created a question node. “OK, so who would like to start?”

 

 

Figure 3: The root question

“Sure, we’ll start,” said Joe easily. “Our top risk is that the schedule is too tight. We’ll hit the deadline only if everything goes well, and everyone knows that things never do.”

“OK,” said Rick, as he entered Joe and Mike’s risk as an idea connecting to the root question. “You’ve also mentioned a point that supports your contention that this is a significant risk – there is absolutely no buffer.” Rick typed this in as a pro connecting to the risk. He then looked up at Joe and asked, “Have I understood you correctly?”

“Yes,” confirmed Joe.

 

Figure 4: Map in progress

“That’s pretty cool,” said Helen from the other end of the table, “I like the notation, it makes reasoning explicit. Oh, and I have another point in support of Joe and Mike’s risk – the deadline was imposed by management before the project was planned.”

Rick began to enter the point…

“Oooh, I’m not sure we should put that down,” interjected Rob from compliance. “I mean, there’s not much we can do about that, can we?”

…Rick finished the point as Rob was speaking.

 

Figure 5: Two pros for the idea

“I hear you, Rob, but I think it is important we capture everything that is said,” said Helen.

“I disagree,” said Rob. “It will only annoy management.”

“Slow down guys,” said Rick. “I’m going to capture Rob’s objection as ‘this is a management-imposed constraint rather than a risk’. Are you OK with that, Rob?”

Rob nodded his assent.

 

Figure 6: A con enters the picture

“I think it is important we articulate what we really think, even if we can’t do anything about it,” continued Rick. “There’s no point going through this exercise if we don’t say what we really think. I want to stress this point, so I’m going to add honesty and openness as ground rules for the discussion. Since ground rules apply to the entire discussion, they connect directly to the primary issue being discussed.”

Figure 7: A “criterion” that applies to the analysis of all risks

“OK, so any other points that anyone would like to add to the ones made so far?” queried Rick as he finished typing.

He looked up. Most of the people seated round the table shook their heads, indicating that there were none.

“We haven’t spoken about mitigation strategies. Any ideas?” asked Rick, as he created a question node marked “Mitigation?” connecting to the risk.

 

Figure 8: Mitigating the risk

“Yeah well, we came up with one,” said Mike. “We think the only way to reduce the time pressure is to cut scope.”

“OK,” said Rick, entering the point as an idea connecting to the “Mitigation?” question. “Did you think about how you are going to do this?” He entered the question “How?” connecting to Mike’s point.

Figure 9: Mitigating the risk

“That’s the problem,” said Joe, “I don’t know how we can convince management to cut scope.”

“Hmmm…I have an idea,” said Helen slowly…

“We’re all ears,” said Rick.

“…Well…you see a large chunk of time has been allocated for building real-time interfaces to assorted systems – HR, ERP etc. I don’t think these need to be real-time – they could be done monthly…and if that’s the case, we could schedule a simple job or even do them manually for the first few months. We can push those interfaces to phase 2 of the project, well into next year.”

There was a silence in the room as everyone pondered this point.

“You know, I think that might actually work, and would give us an extra month…maybe even six weeks for the more important upstream stuff,” said Mike. “Great idea, Helen!”

“Can I summarise this point as – identify interfaces that can be delayed to phase 2?” asked Rick, as he began to type it in as a mitigation strategy. “…and if you and Mike are OK with it, I’m going to combine it with the ‘Cut Scope’ idea to save space.”

“Yep, that’s fine,” said Helen. Mike nodded OK.

Rick deleted the “How?” node connecting to the “Cut scope” idea, and edited the latter to capture Helen’s point.

Figure 10: Mitigating the risk

“That’s great in theory, but who is going to talk to the affected departments? They will not be happy,” asserted Rob. One could always count on compliance to throw in a reality check.

“Good point,” said Rick as he typed that in as a con, “and I’ll take the responsibility of speaking to the department heads about this,” he continued, entering the idea into the map and marking it as an action point for himself. “Is there anything else that Joe, Mike…or anyone else would like to add here?” he asked, as he finished.

Figure 11: Completed discussion of first risk (click to view larger image)

“Nope,” said Mike, “I’m good with that.”

“Yeah me too,” said Helen.

“I don’t have anything else to say about this point,” said Rob, “but it would be great if you could give us a tutorial on this technique. I think it could be useful to summarise the rationale behind our compliance regulations. Folks have been complaining that they don’t understand the reasoning behind some of our rules and regulations.”

“I’d be interested in that too,” said Helen, “I could use it to clarify user requirements.”

“I’d be happy to do a session on the IBIS notation and dialogue mapping next week. I’ll check your availability and send an invite out… but for now, let’s focus on the task at hand.”

The discussion continued…but the fly on the wall was no longer there to record it.

Afterword

I hope this little vignette illustrates how IBIS and dialogue mapping can aid collaborative decision-making / problem solving by making diverse viewpoints explicit. That said, this is a story, and the problem with stories is that things  go the way the author wants them to.  In real life, conversations can go off on unexpected tangents, making them really hard to map. So, although it is important to gain expertise in using the software, it is far more important to practice mapping live conversations. The latter is an art that requires considerable practice. I recommend reading Paul Culmsee’s series on the practice of dialogue mapping or <advertisement> Chapter 14 of The Heretic’s Guide to Best Practices</advertisement> for more on this point.

That said, there are many other ways in which IBIS can be used that do not require as much skill. Some of these include mapping the central points in written arguments (what’s sometimes called issue mapping) and even thinking through decisions on personal matters.

To sum up: IBIS is a powerful means to clarify options and lay them out in an easy-to-follow visual format. Often this is all that is required to catalyse a group decision.

Sherlock Holmes and the case of the management fetish

with 2 comments

As narrated by Dr. John Watson, M.D.

As my readers are undoubtedly aware,  my friend Sherlock Holmes is widely feted for his powers of logic and deduction.  With all due modesty, I can claim to have played a small part in publicizing his considerable talents, for I have a sense for what will catch the reading public’s fancy and, perhaps more important, what will not. Indeed, it could be argued  that his fame is in no small part due to the dramatic nature of the exploits which I have chosen to publicise.

Management consulting, though far more lucrative than criminal investigation, is not nearly as exciting. Consequently, my work has become that much harder since Holmes reinvented himself as a management expert. Nevertheless, I am firmly of the opinion that the long-standing myths exposed by his recent work more than make up for any lack of suspense or drama.

A little known fact is that many of Holmes’ insights into flawed management practices have come after the fact, by discerning common themes that emerged from different cases. Of course this makes perfect sense:  only after seeing the same (or similar) mistake occur in a variety of situations can one begin to perceive an underlying pattern.

The conversation I had with him last night  is an excellent illustration of this point.

We were having dinner at Holmes’ Baker Street abode  when, apropos of nothing, he remarked, “It’s a strange thing, Watson, that our lives are governed by routine. For instance, it is seven in the evening, and here we are having dinner, much like we would on any other day.”

“Yes, it is,” I said, intrigued by his remark.  Dabbing my mouth with a napkin, I put down my fork and waited for him to say more.

He smiled. “…and do you think that is a good thing?”

I thought about it for a minute before responding. “Well, we follow routine because we like…or need… regularity and predictability,” I said. “Indeed, as a medical man, I know well that our bodies have built-in clocks that drive us to do things – such as eat and sleep – at regular intervals. That apart, routines give us a sense of comfort and security in an unpredictable world. Even those who are adventurous have routines of their own. I don’t think we have a choice in the matter; it’s the way humans are wired.” I wondered where the conversation was going.

Holmes cocked an eyebrow. “Excellent, Watson!” he said. “Our propensity for routine is quite possibly a consequence of our need for security and comfort ….but what about the usefulness of routines – apart from the sense of security we get from them?”

“Hmmm…that’s an interesting question. I suppose a routine must have a benefit, or at least a perceived benefit…else it would not have been made into a routine.”

“Possibly,” said Holmes, “ but let me ask you another question.  You remember the case of the failed projects do you not?”

“Yes, I do,” I replied. Holmes’ abrupt conversational U-turns no longer disconcert me, I’ve become used to them over the years. I remembered the details of the case like it had happened yesterday…indeed I should, as it was I who wrote the narrative!

“Did anything about the case strike you as strange?” he inquired.

I mulled over the case, which (in hindsight) was straightforward enough. Here are the essential facts:

The organization suffered from a high rate of project failure (about 70% as I recall). The standard prescription – project post-mortems followed by changes in processes aimed at addressing the top issues revealed – had failed to resolve the issue. Holmes’ insightful diagnosis was that the postmortems identified symptoms, not causes.  Therefore the measures taken to fix the problems didn’t work because they did not address the underlying cause. Indeed, the measures were akin to using brain surgery to fix a headache.  In the end, Holmes concluded that the failures were a consequence of flawed organizational structures and norms.

Of course flawed structures and norms are beyond the purview of a mere project or  program manager. So Holmes’ diagnosis, though entirely correct, did not help Bryant (the manager who had consulted us).

Nothing struck me as unduly strange as I went over the facts mentally. “No,” I replied, “but what on earth does that have to do with routine?”

He smiled. “I will explain presently, but I have yet another question for you before I do so.  Do you remember one of our earliest management consulting cases – the affair of the terminated PMO?”

I replied in the affirmative.

“Well then,  you see the common thread running through the two cases, don’t you?” Seeing my puzzled look, he added, “think about it for a minute, Watson, while I go and fetch dessert.”

He went into the kitchen, leaving me to ponder his question.

The only commonality I could see was the obvious one – both cases were related to the failure of PMOs. (Editor’s note: PMO = Project Management Office)

He returned with dessert a few minutes later. “So, Watson,” he said as he sat down, “have you come up with anything?”

I told him what I thought.

“Capital, Watson! Then you will, no doubt, have asked yourself the obvious next question.”

I saw what he was getting at. “Yes! The question is: can this observation be generalised? Do the majority of PMOs fail?”

“Brilliant, Watson. You are getting better at this by the day.” I know Holmes does not intend to sound condescending, but the sad fact is that he often does. “Let me tell you,” he continued, “research suggests that 50% of PMOs fail within three years of being set up. My hypothesis is that the failure rate would be considerably higher if the timeframe were increased to five or seven years. What’s even more interesting is that there is a single overriding complaint about PMOs: the majority of stakeholders surveyed felt that their PMOs are overly bureaucratic and generally hinder project work.”

“But isn’t that contrary to the aim of a  PMO – which, as I understand, is to facilitate project work?” I queried.

“Excellent, my dear Watson. You are getting close to the heart of the matter.”

“I am?”  To be honest, I was a little lost.

“Ah Watson, don’t tell me you do not see it,” said Holmes exasperatedly.

“I’m afraid you’ll have to explain,” I replied curtly. Really, he could be insufferable at times.

“I shall do my best. You see, there is a fundamental contradiction between the stated mission and actual operation of a typical PMO.  In theory, they are supposed to facilitate projects, but as far as executive management is concerned this is synonymous with overseeing and controlling projects. What this means is that in practice, PMOs inevitably end up policing project work rather than facilitating it.”

I wasn’t entirely convinced. “Maybe the reason that PMOs fail is that organisations do not implement them correctly,” I said.

“Ah, the famous escape clause used by purveyors of best practices – if our best practice doesn’t work, it means you aren’t implementing it correctly. Pardon me while I choke on my ale, because that is utter nonsense.”

“Why?”

“Well, one would expect after so many years, these so-called implementation errors would have been sorted out. Yet we see the same poor outcomes over and over again,” said Holmes.

“OK, but then why are PMOs still so popular with management?”

“Now we come to the crux of the matter, Watson,” he said, a tad portentously. “They are popular for the reasons we spoke of at the start of this conversation – comfort and security.”

“Comfort and security? I have no idea what you’re talking about.”

“Let me try explaining this in another way,” he said. “When you were a small child, you must have had some object that you carried around everywhere…a toy, perhaps…did you not?”

“I’m not sure I should tell you this, Holmes, but yes, I had a blanket.”

“A security blanket, I would never have guessed, Watson,” smiled Holmes. “…but as it happens that’s a perfect example, because PMOs and the methodologies they enforce are security blankets. They give executives and frontline managers a sense that they are doing something concrete and constructive to manage uncertainty…even though they actually aren’t. PMOs are popular, not because they work (and indeed, we’ve seen they don’t) but because they help managers contain their anxiety about whether things will turn out right. I would not be exaggerating if I said that PMOs and the methodologies they evangelise are akin to lucky charms or fetishes.”

“That’s a strong statement to make on rather slim grounds,” I said dubiously.

“Is it? Think about it, Watson,” he shot back, with a flash of irritation. “Many (though I should admit, not all) PMOs and methodologies prescribe excruciatingly detailed procedures to follow and templates to fill when managing projects. For many (though again, not all) project managers, managing a project is synonymous with following these rituals. Such managers attempt to force-fit  reality into standardised procedures and documents. But tell me, Watson – how can such project management by ritual work  when no two projects are the same?”

“Hmm….”

“That is not all, Watson,” he continued, before I could respond, “PMOs and methodologies enable people to live in a fantasy world where everything seems to be under control. Methodology fetishists will not see the gap between their fantasy world and reality, and will therefore miss opportunities to learn. They follow rituals that give them security and an illusion of efficiency, but at the price of a genuine engagement with people and projects.”

“ I’ll have to think about it,” I said.

“You do that,” he replied, as he pushed back his chair and started to clear the table. Unlike him, I had a lot more than dinner to digest. Nevertheless, I rose to help him as I do every day.

Evening conversations at 221B Baker Street are seldom boring. Last night was no exception.

Acknowledgement:

This tale was inspired by David Wastell’s brilliant paper, The fetish of technique: methodology as social defence (abstract only).

Written by K

April 29, 2015 at 8:37 pm

The danger within: internally-generated risks in projects

with 2 comments

Introduction

In their book, Waltzing with Bears, Tom DeMarco and Timothy Lister coined the phrase, “risk management is project management for adults”.  Twenty years on, it appears that their words have been taken seriously: risk management now occupies a prominent place in BOKs, and has also become a key element of project management practice.

On the other hand, if the evidence is to be believed (as per the oft-quoted Chaos Report, for example), IT projects continue to fail at an alarming rate. This is curious because one would have expected that a greater focus on risk management ought to have resulted in better outcomes. So, is it possible that risk management (as it is currently preached and practiced in IT project management) cannot address certain risks…or, worse, that certain risks are simply not recognized as risks?

Some time ago, I came across a paper by Richard Barber that sheds some light on this very issue. This post elaborates on the nature and importance of such “hidden” risks by drawing on Barber’s work as well as my experiences and those of my colleagues with whom I have discussed the paper.

What are internally generated risks?

The standard approach to risk is based on the occurrence of events. Specifically, risk management is concerned with identifying potential adverse events and taking steps to  reduce either their probability of occurrence or their impact. However, as Barber points out, this is a limited view of risk because it overlooks adverse conditions that are built into the project environment. A good example of this is an organizational norm that centralizes decision making at the corporate or managerial level. Such a norm would discourage a project manager from taking appropriate action when confronted with an event that demands an on-the-spot decision.  Clearly, it is wrong-headed to attribute the risk to the event because the risk actually has its origins in the norm. In other words, it is an internally generated risk.

(Note: the notion of an internally generated risk is akin to the risk as a pathogen concept that I discussed in this post many years ago.)

Barber defines an internally generated risk as one that has its origin within the project organisation or its host, and arises from [preexisting] rules, policies, processes, structures, actions, decisions, behaviours or cultures. Some other examples of such risks include:

  • An overly bureaucratic PMO.
  • An organizational culture that discourages collaboration between teams.
  • An organizational structure that has multiple reporting lines – this is what I like to call a pseudo-matrix organization 🙂

These factors are similar to those that I described in my post on the systemic causes of project failure. Indeed, I am tempted to call these systemic risks because they are related to the entire system (project + organization). However, that term has already been appropriated by the financial risk community.

Since the term is relatively new, it is important to draw distinctions between internally generated and other types of risks. It is easy to do so because the latter (by definition) have their origins outside the hosting organization. A good example of the latter is the risk of a vendor not delivering a module on time or, worse, going into receivership prior to delivering the code.

Finally, there are certain risks that are neither internally generated nor external. For example, using a new technology is inherently risky simply because it is new. Such a risk is inherent rather than internally generated or external.

Understanding the danger within

The author of the paper surveyed nine large projects with the intent of getting some insight into the nature of internally generated risks.  The questions he attempted to address are the following:

  1. How common are these risks?
  2. How significant are they?
  3. How well are they managed?
  4. What is the relationship between the ability of an organization to manage such risks and the organisation’s project management maturity level (i.e. the maturity of its project management processes)?

Data was gathered through group workshops and one-on-one interviews in which the author asked a number of questions that were aimed at gaining insight into:

  1. The key difficulties that project managers encountered on the projects.
  2. What they perceived to be the main barriers to project success.

The aim of the one-on-one interviews was to allow for a more private setting in which sensitive issues (politics, dysfunctional PMOs and brain-dead rules / norms) could be freely discussed.

The data gathered was studied in detail, with the intent of identifying internally generated risks. The author describes the techniques he used to minimize subjectivity and to ensure that only significant risks were considered. I will omit these details here, and instead focus on his findings as they relate to the questions listed above.

Commonality of internally generated risks

Since organizational rules and norms are often flawed, one might expect that internally generated risks would be fairly common in projects. The author found that this was indeed the case with the projects he surveyed: in his words, the smallest number of non-trivial internally generated risks identified in any of the nine projects was 15, and the highest was 30!  Note: the identification of non-trivial risks was done by eliminating those risks that a wide range of stakeholders agreed as being unimportant.

Unfortunately, he does not explicitly list the most common internally generated risks that he found, although he does name a few later in the article.

I suspect that experienced project managers would be able to name many more.

Significance of internally generated risks

Determining the significance of these risks is tricky because one has to figure out their probability of occurrence.  The impact is much easier to get a handle on, as one has a pretty good idea of the consequences of such risks should they eventuate. (Question: What happens if there is inadequate sponsorship? Answer: the project is highly likely to fail!).   The author attempted to get a qualitative handle on the probability of occurrence by asking relevant stakeholders to estimate the likelihood of occurrence. Based on the responses received, he found that a large fraction of the internally-generated risks are significant (high probability of occurrence and high impact).
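
As an aside, the qualitative probability-impact reasoning described above is easy to make concrete. The sketch below is purely illustrative: the 1–5 scales, the thresholds and the scores assigned to the example risks are my own assumptions, not figures from Barber’s study (only the risk names are drawn from this post).

```python
# Illustrative qualitative probability-impact scoring for risks.
# The 1-5 scales, thresholds and example scores are assumptions made for this
# sketch; they are not taken from Barber's study. Only the risk names come
# from the discussion in this post.

def significance(probability: int, impact: int) -> str:
    """Classify a risk from qualitative scores (1 = very low ... 5 = very high)."""
    score = probability * impact
    if score >= 15:
        return "significant"   # high probability of occurrence and high impact
    if score >= 8:
        return "moderate"
    return "minor"

internally_generated_risks = [
    ("Inadequate sponsorship", 4, 5),
    ("Overly bureaucratic PMO", 4, 3),
    ("Estimation by decree", 3, 4),
]

for name, p, i in internally_generated_risks:
    print(f"{name}: {significance(p, i)}")
```

The arithmetic is trivial, of course; as the author’s findings suggest, the real difficulty lies in getting such risks named and scored honestly in the first place.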

Management of internally generated risks

To find out whether internally generated risks are well managed, the author asked the relevant project teams to look at all the significant internal risks on their projects and indicate whether or not each had been identified by the team prior to the research. He found that in over half the cases, less than 50% of the risks had been identified. Worse still, most of the risks that had been identified were not actively managed!

The relationship between project management maturity and susceptibility to internally generated risk

Project management maturity refers to the level of adoption of  standard good practices within an organization. Conventional wisdom tells us that there should be an inverse correlation between maturity levels and susceptibility to internally generated risk –  the higher the maturity level, the lower the susceptibility.

The author assessed maturity levels by interviewing various stakeholders within the organization and also by comparing the processes used within the organization to well-known standards.  The results indicated a weak negative correlation – that is, organisations with a higher level of maturity tended to have a smaller number of internally generated risks. However, as the author admits, one cannot read much into this finding as the correlation was weak.

Discussion

The study suggests that internally generated risks are common and significant on projects. However, the small sample size also suggests that more comprehensive surveys are needed. Nevertheless, anecdotal evidence from colleagues I spoke with suggests that the findings are reasonably robust. Moreover, it is also clear (both from the study and my conversations) that these risks are not very well managed. There is a good reason for this: internally generated risks originate in human behavior and/or dysfunctional structures. These tend to be difficult to address in an organizational setting because people are unlikely to tell those above them in the hierarchy that they (the higher-ups) are the cause of a problem. A classic example of such a risk is estimation by decree – where a project team is told to just get it done by a certain date. Although most project managers are aware of such risks, they are reluctant to name them for obvious reasons.

Conclusion

I suspect most project managers who work in corporate environments will have had to grapple with internally generated risks in one form or another.  Although traditional risk management does not recognize these risks as risks, seasoned project managers know  from experience that people,  politics or even processes can pose major problems to smooth working of projects.  However, even when recognised for what they are, these risks can be hard to tackle because they  lie outside a project manager’s sphere of influence.  They therefore tend to become those proverbial pachyderms in the room – known to all but never discussed, let alone documented….and therein lies the danger within.

Written by K

December 10, 2014 at 7:23 pm
