Eight to Late

Sensemaking and Analytics for Organizations

Monte Carlo simulation of multiple project tasks – three examples and some general comments

Introduction

In my previous post I demonstrated the use of a Monte Carlo technique in simulating a single project task with completion times described by a triangular distribution. My aim in that article was to: a) describe a Monte Carlo simulation procedure in enough detail for someone interested to be able to reproduce the calculations and b) show that it gives sensible results in a situation where the answer is known. Now it’s time to take things further. In this post, I present simulations for two tasks combined in various ways – three examples in all. We shall see that, even with this small increase in complexity (from one task to two), the results obtained can be surprising. Specifically, small changes in inter-task dependencies can have a huge effect on the overall (two-task) completion time distribution. Although this is something that most project managers have experienced in real life, it is rarely taken into account by traditional scheduling techniques. As we shall see, Monte Carlo techniques predict such effects as a matter of course.

Background

The three simulations discussed here are built on the example that I used in my previous article, so it’s worth spending a few lines for a brief recap of that example.  The task simulated in the example was assumed to be described by a triangular distribution with  minimum completion time (t_{min}) of 2  hours,   most likely completion time (t_{ml}) of 4 hours and   a maximum completion time (t_{max}) of 8 hours.   The resulting triangular probability distribution function (PDF), p(t)  –  which gives the probability of completing the task at time t – is shown in Figure 1.

Figure 1 - PDF for triangular distribution (tmin=2, tml=4, tmax=8)

Figure 2 depicts the associated cumulative distribution function (CDF), P(t)  which gives the probability that a task will be completed by time t (as opposed to the PDF which specifies the probability of completion at time t). The value of the CDF at t=8 is 1 because the task must finish within 8 hrs.

Figure 2 - CDF for triangular distribution (tmin=2, tml=4, tmax=8)

The equations describing the PDF and CDF are listed in equations 4-7 of my previous article.  I won’t rehash them here as they don’t add anything new  to the discussion – please see the article for all the gory algebraic details and formulas.   Now, with that background, we’re ready to move on to the examples.

Two tasks in series with no inter-task delay

As a first example, let’s look at two tasks that have to be performed sequentially, with the second task starting the instant the first one is completed (no inter-task delay) – yes, this is unrealistic, but bear with me. To simplify things, we’ll also assume that the two tasks have identical (triangular) distributions, as described earlier and shown in Figure 1 (excepting, of course, that the distribution for the second task is shifted to the right, since it starts after the first one finishes). The simulation algorithm for the combined tasks is very similar to the one for a single task (described in detail in my previous post). Here’s the procedure:

  1. For each of the two tasks, generate a set of N random numbers. Each random number generated corresponds to the cumulative probability of completion for a single task on that particular run.
  2. For each random number generated, find the time corresponding to the cumulative probability by solving equation 6 or 7 in my previous post.
  3. Step 2 gives N sets of completion times. Each set has two completion times – one for each task.
  4. Add up the two numbers in each set to yield the completion time for the combined task. The resulting set of N times corresponds to N simulation runs for the combined task. (These steps are sketched in code below.)
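
For concreteness, here’s a minimal sketch of these four steps in Python with numpy (my choice of tools – the original calculations were done in TK Solver, and all variable names here are mine). Instead of backsolving numerically, the sketch inverts equations 6 and 7 of the previous post in closed form:

    import numpy as np

    def sample_triangular(n, t_min=2.0, t_ml=4.0, t_max=8.0, rng=None):
        # Steps 1 and 2: generate n cumulative probabilities and invert
        # the triangular CDF (equations 6 and 7 of the previous post).
        rng = rng or np.random.default_rng()
        u = rng.random(n)                         # random values of P(t)
        split = (t_ml - t_min) / (t_max - t_min)  # value of the CDF at t_ml
        return np.where(
            u <= split,
            t_min + np.sqrt(u * (t_ml - t_min) * (t_max - t_min)),          # eq. 6
            t_max - np.sqrt((1.0 - u) * (t_max - t_ml) * (t_max - t_min)),  # eq. 7
        )

    N = 10_000
    rng = np.random.default_rng()
    t1 = sample_triangular(N, rng=rng)  # completion times for the first task
    t2 = sample_triangular(N, rng=rng)  # completion times for the second task
    combined = t1 + t2                  # steps 3 and 4: N combined times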

I then divided the time interval from t=4 hours (the minimum possible completion time for the combined tasks) to t=16 hours (the maximum possible completion time) into bins of 0.25 hrs each, and then assigned each combined completion time to the appropriate bin. For example, if the predicted completion time for a particular run was 9.806532 hrs, it was assigned to the 9.75–10.0 hrs bin. The resulting histogram is shown in Figure 3 below.

Figure 3 - Frequency histogram for tasks in series with no inter-task delay

[An aside: compare the histogram in Figure 3 to the distribution for a single task (Figure 1): the single-task distribution is distinctly asymmetric (the peak is not at the centre of the distribution) whereas the two-task histogram is nearly symmetric. This surprising result is a consequence of the Central Limit Theorem (CLT) – which states that the sum of many independent random variables tends towards the Normal (bell-shaped) distribution, regardless of the shapes of the individual distributions. Note that the CLT holds even though the two task distributions are shifted relative to each other – i.e. the second task begins after the first one is completed.]

The simulation also enables us to compute the cumulative probability of completion for the combined tasks (the CDF). The value of the cumulative probability at a particular bin equals the sum of the number of simulation runs in every bin up to (and including) the bin in question, divided by the total number of simulation runs. In mathematical terms this is:

P(t_{i})=(n_{1}+n_{2}+...+n_{i})/ N \ldots \ldots (1)

where P(t_{i})  is the cumulative probability at the time corresponding to the  ith  bin, n_{i}, the number of simulation runs in the ith  bin and  N  the total number of simulation runs. Note that this formula is an approximate one because time is treated as a constant within each bin. The approximation can be improved by making the bins smaller (and hence increasing the number of bins).
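
In the Python sketch above, the binning and equation (1) take only a few lines:

    # 0.25-hr bins from t=4 to t=16 hrs; counts[i] is n_i in equation (1)
    edges = np.arange(4.0, 16.25, 0.25)
    counts, _ = np.histogram(combined, bins=edges)
    cdf = np.cumsum(counts) / N   # P(t_i) at the right edge of each bin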

The resulting cumulative probability function is shown in Figure 4. This allows us to answer questions such as: “What is the probability that the tasks will be completed within 10 hours?” Answer: .698, or approximately 70%. (Note: I obtained this number by interpolating between values obtained from equation (1), but this level of precision is uncalled for, particularly because the formula is an approximate one.)

Figure 4 - CDF for tasks in series with no inter-task delay

Many project scheduling techniques compute average completion times for component tasks and then add them up to get the expected completion time for the combined task. In the present case the average works out to 9.33 hrs – twice the 4.67 hr average for a single task (the mean of a triangular distribution being (t_{min}+t_{ml}+t_{max})/3). However, we see from the CDF that there is a significant probability (.43) that we will not finish by this time – and this in a “best-case” situation where the second task starts immediately after the first one finishes!

[An aside: If one applies the well-known PERT formula (t_{min}+4t_{ml}+t_{max})/6 to each of the tasks, one gets an expected completion time of 4.33 hrs per task, or 8.66 hrs for the combined task. From the CDF one can show that there is a probability of non-completion of 57% by t=8.66 hours (see Figure 4) – i.e. there’s a greater than even chance of not finishing by this time!]

As interesting as this case is, it is somewhat unrealistic because successor tasks seldom start the instant the predecessor is completed. More often than not, there is a cut-off time before which the successor cannot start – even if there are no explicit dependencies between the two tasks.  This observation is a perfect segue into my next example, which is…

Two tasks in series with a fixed earliest start for the successor

Now we’ll introduce a slight complication: let’s assume, as before, that the two tasks are done one after the other, but that the earliest the second task can start is at t=6 hours (as measured from the start of the first task). So, if the first task finishes within 6 hours, there will be a delay between its completion and the start of the second task. However, if the first task takes longer than 6 hours to finish, the second task will start as soon as the first one finishes. The simulation procedure is the same as described in the previous section excepting the last step – the completion time for the combined task is now given by:

t=t_{1}+t_{2} for t_{1} \geq 6 hrs, and t=6+t_{2} for t_{1} < 6 hrs (where t_{1} and t_{2} are the simulated completion times of the individual tasks)
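
In terms of the Python sketch from the first example, this condition is a one-liner (t1 and t2 being the simulated completion times of the two tasks):

    # the second task starts at t=6 hrs or when the first task finishes,
    # whichever is later
    combined = np.maximum(t1, 6.0) + t2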

I divided the time interval from t=4hrs to t=20 hrs into bins of 0.25 hr duration (much as I did before) and then assigned each combined completion time to the appropriate bin. The resulting histogram is shown in Figure 5.

Figure 5 - Frequency histogram for tasks in series with inter-task delay

Comparing Figure 5 to Figure 3, we see that the earliest possible finish time now increases from 4 hrs to 8 hrs. This is no surprise, as we built it into our assumptions. Further, as one might expect, the distribution is distinctly asymmetric – with a minimum completion time of 8 hrs, a most likely time between 10 and 11 hrs and a maximum observed completion time of about 15 hrs (the theoretical maximum is 16 hrs, but such extreme values occur too rarely to show up in the histogram).

Figure 6 shows the cumulative probability of completion for this case.

Figure 6 - CDF for tasks in series with inter-task delay

Because of the delay condition, it is impossible to calculate the average completion time from the formulas for the triangular distribution – we have to obtain it from the simulation. The average is calculated by adding up the completion times for all simulation runs and dividing by the total number of runs, N. In mathematical terms this is:

t_{av} = (t_{1} + t_{2} + ...+ t_{i} + ... + t_{N})/ N \ldots \ldots (2)

where t_{av} is the average time,  t_{i}  the completion time for the ith simulation run and   N the total number of simulation runs.
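
In code, equation (2) is just the sample mean of the simulated completion times:

    t_av = combined.mean()   # equation (2): about 10.8 hrs for this case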

This procedure gave me a value of about 10.8 hrs for the average. From the CDF in Figure 6 one sees that the probability that the combined task will finish by this time is only 0.60 – i.e. there’s only a 60% chance of finishing by the average time. Any naïve estimation of time would do just as badly unless, of course, one is overly pessimistic and assumes a completion time of 15-16 hrs.

From the above it should be evident that the simulation allows one to associate an uncertainty (or probability) with every estimate. If management imposes a time limit of 10 hours, the project manager can refer to the CDF in Figure 6 and figure out the probability of completing the task by that time (there’s a 40% chance of completion by 10 hrs). Of course, the reliability of the numbers depends on how good the distribution is. But the assumptions that have gone into the model are known – the optimistic, most likely and pessimistic times and the form of the distribution – and these can be refined as one gains experience.

Two tasks in parallel

My final example is the case of two identical tasks performed in parallel. As above, I’ll assume the uncertainty in each task is characterized by a triangular distribution with t_{min}, t_{ml}  and t_{max}  of 2, 4 and 8 hours respectively. The simulation procedure for this case is the same as in the first example, excepting the last step. Assuming the simulated completion times for the individual tasks are t_{1} and t_{2}, the completion time for the combined tasks is given by the greater of the two – i.e. the combined completion time t is given by t =max(t_{1},t_{2}).
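
The corresponding change to the Python sketch is again a single line:

    # tasks run in parallel: the pair finishes when the slower one does
    combined = np.maximum(t1, t2)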

To plot the histogram shown in Figure 7 , I divided the interval from t=2 hrs to t=8 hrs into bins of 0.25 hr duration each (Warning: Note the difference in the time axis scale  from Figures 3 and 5!).

Figure 7 - Frequency histogram for tasks in parallel

It is interesting to compare the above histogram with that for an individual task with the same parameters (i.e. the example that was used in my previous post). Figure 8 shows the histograms for the two examples on the same plot (the combined task in red and the single task in blue). As one might expect, the histogram for the combined task is shifted to the right – a consequence of the nonlinear max condition on the completion time.

Figure 8 - Histograms for tasks in parallel (red) and single task (blue)

What about the average? I calculated the average as before, by using equation (2) from the previous section. This gives an average of 5.38 hrs (compared to 4.67 hrs for either task, taken individually).   Note that the method to calculate the average is the same regardless of the form of the distribution. On the other hand,  computing the average from the equations would be a complicated affair, involving a stiff dose of algebra with an optional  sprinkling of  calculus.  Even worse – the calculations would vary from distribution to distribution. There’s no question that simulations are much easier.

The CDF for the overall completion time is also computed easily using equation (1). The resulting plot is shown in Figure 9  (Note the difference in the time axis scale  from Figures  4 and 6!). There are no surprises here – excepting how easy it is to calculate once the simulation is done.

Figure 9 - CDF for tasks in parallel

Let’s see what time corresponds to a 90% certainty of completion. A rough estimate for this number can be obtained from Figure 9 – just find the value of t (on the x axis) corresponding to a cumulative probability of 0.9 (on the y axis).  This is the graphical equivalent of solving the CDF for time, given the cumulative probability is 0.9. From Figure 9, we get a time of approximately 6.7 hrs. [Note: we could get a more accurate number by fitting the points obtained from equation (1) to a curve and then calculating the time corresponding to P=0.9]. The interesting thing is that the 90% certain completion time is not too different from that of a single task (as calculated from equation 7 of my previous post) – which works out to 6.45 hrs.
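
With the simulated completion times in hand, there’s a simpler alternative to curve fitting: read the 90% point straight off the sorted sample. In the Python sketch this is one line:

    t_90 = np.quantile(combined, 0.9)   # time by which 90% of runs finish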

Comparing the two histograms in Figure 8, we expect the biggest differences in cumulative probability to occur at about the t=4 hour mark, because by that time the probability for the individual task has peaked whereas that for the combined task is yet to peak. Let’s see if this is so: from Figure 9, the cumulative probability for the combined task at t=4 hrs is about .15 (for independent tasks the combined CDF is the square of the individual CDF, so the exact value is (1/3)^{2}\approx .11), and from the CDF for the triangular distribution (equation 6 from my previous post), the cumulative probability at t=4 hours (which is the most likely time) is .333 – more than double that of the combined task. This, again, isn’t too surprising (once one has Figure 8 on hand). The really nice thing is that we are able to attach uncertainties to all our estimates.

Conclusion

Although the examples discussed above are simple – two identical tasks with uncertainties described by a triangular distribution – they serve to illustrate some of the non-intuitive outcomes when tasks have dependencies.   It is also worth noting that although the distribution for the individual tasks is known, the only straightforward way to obtain the distributions for the combined tasks (figures 3, 5 and 7) is through simulations. So, even these simple examples are a good demonstration of the utility of Monte Carlo techniques. Of course, real projects are way more complicated, with diverse tasks distributed in many different ways.   To simplify simulations in such cases,  one could  perform  coarse-grained simulations on a small number of high-level tasks,  each consisting of a number of  low-level, atomic tasks. The high-level tasks could be constructed in such a way as to focus attention on areas of greatest complexity, and hence greatest uncertainty.

As I have mentioned several times in this article and the previous one: simulation results are only as good as the distributions on which they are based. This raises the question: how do we know what’s an appropriate distribution for a given situation? There’s no one-size-fits-all answer to this question. However, for project tasks there are some general considerations that apply. These are:

  1. There is a minimum time (t_{min}) before which a task cannot be completed.
  2. The probability will increase from 0 at t_{min} to a maximum at a “most likely” completion time, t_{ml}. This holds true for most atomic tasks – but perhaps not for composite tasks, which consist of many smaller tasks.
  3. The probability decreases as time increases beyond t_{ml},  falling to 0 at a time much larger than t_{ml}.   This is simply another way of saying that the distribution has a long (but not necessarily infinite!) tail.

Asymmetric triangular distributions (such as the one used in my examples) are the simplest distributions that satisfy these conditions. Furthermore, a three point estimate is enough to specify a triangular distribution completely – i.e. given a three point estimate there is only one triangular distribution that can be fitted to it. That said, there are several other distributions that can be used; of particular relevance are certain long-tailed distributions.

Finally, I should mention that I make no claims about the efficiency or accuracy of the method presented here: it should be seen as a demonstration rather than a definitive technique. The many commercial Monte Carlo tools available in the market probably offer far more comprehensive, sophisticated and reliable algorithms (Note: I’ve never used any of them, so I can’t make any recommendations!). That said, it is always helpful to know the principles behind such tools, if for no other reason than to understand how they work and, more importantly, how to use them correctly. The material discussed in this and the previous article came out of my efforts to develop an understanding of Monte Carlo techniques and how they can be applied to various aspects of project management (they can also be applied to cost estimation, for example). Over the last few weeks I’ve spent many enjoyable evenings developing and running these simulations, and learning from them. I’ll leave it here with the hope that you find my articles helpful in your own explorations of the wonderful world of Monte Carlo simulations.

Written by K

September 20, 2009 at 9:34 pm

An introduction to Monte Carlo simulation of project tasks

Introduction

In an essay on the uncertainty of project task estimates,  I  described how a task estimate corresponds to a  probability distribution.  Put simply, a task estimate is actually a range of possible completion times, each with a probability of occurrence specified by a distribution.   If one knows the distribution,  it is possible to answer questions  such as:  “What is the probability that the task will be completed within x days?”

The reliability of such predictions depends on how faithfully the distribution captures the actual  spread of task durations –  and therein lie at least a couple of problems.   First,  the probability distributions for task durations are generally hard to characterise because of the lack of reliable data (estimators are not very good at estimating, and historical data is usually not available).  Second,  many realistic distributions have complicated mathematical forms which can be hard to characterise and manipulate.

These problems are compounded by the fact that projects consist of several tasks, each one with its own duration estimate and  (possibly complicated) distribution.  The first issue is usually addressed by fitting distributions to  point estimates (such as optimistic, pessimistic and most likely times as in PERT)  and then  refining these estimates and distributions as one gains experience.  The second issue can be tackled by Monte Carlo techniques, which involve  simulating the task a number of times  (using an appropriate distribution) and then calculating expected completion times based on the results.   My aim in this post  is to present an  example-based  introduction to Monte Carlo simulation of project task durations.

Although my aim is to keep things reasonably simple (not too much beyond high-school maths and a basic understanding of probability), I’ll be covering a fair bit of ground. Given this, I’d best start with a brief description of my approach so that my readers know what’s coming.

Monte Carlo simulation is an umbrella term covering a range of approaches that use random sampling to simulate events described by known probability distributions. The first task, then, is to specify the probability distribution. However, as mentioned earlier, this is generally unknown for task durations. For simplicity, I’ll assume that task duration uncertainty can be described accurately using a triangular probability distribution – a distribution that is particularly easy to handle from the mathematical point of view. The advantage of using the triangular distribution is that simulation results can be validated easily.

Using the triangular distribution isn’t a limitation because the method I describe can be applied to arbitrarily shaped distributions. More important, the technique can be used to simulate what happens when multiple tasks are strung together as in a project schedule (I’ll cover this in a future post). Finally, I’ll demonstrate a Monte Carlo simulation method as applied to a single task described by a triangular distribution. Although a simulation is overkill in this case (because questions regarding durations can be answered exactly without one), the example serves to illustrate the steps involved in simulating more complex cases – such as those comprising more than one task and/or involving more complicated distributions.

So, without further ado, let me begin the journey by describing the triangular distribution.

The triangular distribution

Let’s assume that there’s a project task that needs doing, and the person who is going to do it reckons it will take between 2 and 8 hours to complete, with a most likely completion time of 4 hours. How the estimator comes up with these numbers isn’t important at this stage – maybe there’s some guesswork, maybe some padding or maybe it is really based on experience (as it should be). What’s important is that we have three numbers corresponding to a minimum, most likely and maximum time. To keep the discussion general, we’ll call these t_{min}, t_{ml} and t_{max} respectively (we’ll get back to our estimator’s specific numbers later).

Now, what about the probabilities associated with each of these times?

Since t_{min} and t_{max} correspond to the minimum and maximum times,  the probability associated with these is zero. Why?  Because if it wasn’t zero, then there would be a non-zero probability of completion for a time less than t_{min} or greater than t_{max} – which isn’t possible [Note: this is a consequence of the assumption that the probability varies continuously –  so if it takes on non-zero value, p_{0},  at t_{min} then it must take on a value slightly less than p_{0} – but greater than 0 –  at t slightly smaller than t_{min} ] .   As far as  the most likely time,  t_{ml},  is concerned:  by definition, the probability attains its highest value at time t_{ml}.    So, assuming the probability can be described by a triangular function, the distribution must have the form shown in Figure 1 below.

Figure 1: Triangular Distribution

For the simulation, we need to know the equation describing the above distribution.  Although Wikipedia will tell us the answer in a mouse-click, it is instructive to figure it out for ourselves. First, note that the area under the triangle must be equal to  1 because the task must finish at some time between t_{min} and t_{max}.   As a consequence we have:

\frac{1}{2}\times{base}\times{altitude}=\frac{1}{2}\times{(t_{max}-t_{min})}\times{p(t_{ml})}=1\ldots\ldots{(1)}

where p(t_{ml}) is the probability corresponding to time t_{ml}.  With a bit of rearranging we get,

p(t_{ml})=\frac{2}{(t_{max}-t_{min})}\ldots\ldots(2)

To derive the probability for any time t lying between t_{min} and t_{ml}, we note that:

\frac{(t-t_{min})}{p(t)}=\frac{(t_{ml}-t_{min})}{p(t_{ml})}\ldots\ldots(3)

This is a consequence of the fact that the ratios on either side of equation (3)  are  equal to the slope of the line joining the points (t_{min},0) and (t_{ml}, p(t_{ml})).

Figure 2

Substituting (2) in (3) and simplifying a bit, we obtain:

p(t)=\frac{2(t-t_{min})}{(t_{ml}-t_{min})(t_{max}-t_{min})}\ldots\ldots(4) for t_{min}\leq t \leq t_{ml}

In a similar fashion one can show that the probability for times lying between t_{ml} and t_{max} is given by:

p(t)=\frac{2(t_{max}-t)}{(t_{max}-t_{ml})(t_{max}-t_{min})}\ldots\ldots(5) for t_{ml}\leq t \leq t_{max}

Equations 4 and 5 together describe the probability distribution function (or PDF)  for all times between t_{min} and t_{max}.


Another quantity of interest is the cumulative distribution function (or CDF) which is the probability, P, that the task is completed by a time t. To reiterate, the PDF, p(t), is the probability of the task finishing at time t whereas the CDF, P(t), is the probability of the task completing by time t. The CDF, P(t), is essentially a sum of all probabilities between t_{min} and t. For t_{min}\leq t \leq t_{ml} this is the area under the triangle with apexes at (t_{min}, 0), (t, 0) and (t, p(t)). Using the formula for the area of a triangle (1/2 base times height) and equation (4) we get:

P(t)=\frac{(t-t_{min})^2}{(t_{ml}-t_{min})(t_{max}-t_{min})}\ldots\ldots(6) for t_{min}\leq t \leq t_{ml}

Noting that for t \geq t_{ml}, the area under the curve equals the total area minus the area enclosed by the triangle with base between t and t_{max}, we have:

P(t)=1- \frac{(t_{max}-t)^2}{(t_{max}-t_{ml})(t_{max}-t_{min})}\ldots\ldots(7) for t_{ml}\leq t \leq t_{max}
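
As a quick consistency check (my addition – it is implicit in the derivation): equations (6) and (7) give the same value at t_{ml}, so the two branches of the CDF join smoothly:

P(t_{ml})=\frac{(t_{ml}-t_{min})}{(t_{max}-t_{min})}

For our estimator’s numbers this works out to (4-2)/(8-2)=1/3 – a value we’ll meet again when validating the simulation.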

As expected,  P(t)  starts out with a value 0 at t_{min} and then increases monotonically, attaining a value of 1 at t_{max}.

To end this section let's plug in the numbers quoted by our estimator at the start of this section: t_{min}=2, t_{ml}=4 and t_{max}=8.  The resulting PDF and CDF are shown in figures 3 and 4.

Figure 3 - Triangular PDF (tmin=2, tml=4, tmax=8)

Figure 4 - Triangular CDF (tmin=2, tml=4, tmax=8)

Monte Carlo Simulation of  a Single Task

OK, so now we get to the business end of this essay – the simulation.  I’ll first outline the simulation procedure and  then discuss results for the case of  the task described in the previous section (triangular distribution with t_{min}=2, t_{ml}=4 and t_{max}=8).  Note that I used TK Solver – a mathematical package created by Universal Technical Systems – to do the simulations. TK Solver has built-in backsolving capability which is extremely helpful for solving some of the equations that come up in the simulation calculations. One could use Excel too, but my spreadsheet skills are not up to it :-(.

So, here’s my  simulation procedure:

  1. Generate a random number between 0 and 1.  Treat this number as the cumulative probability, P(t) for the simulation run. [Technical Note:  I used the random number generator that comes with the TK Solver package (the algorithm used by the generator is described here). Excel’s random number generator is even better.]
  2. Find the time, t, corresponding to P(t) by solving equation (6) or (7) for t. The resulting value of t is the time taken to complete the task. [Technical Note: Solving equation (6) or (7) for t isn’t straightforward because t appears in several places in the equations. One has a few options: a) use numerical techniques such as the bisection or Newton-Raphson method; b) use the backsolve (goal seek) functionality in Excel or other mathematical packages; or c) invert the equations in closed form, as sketched in the code after this list. I used the backsolving capability of TK Solver to obtain t for each random value of P generated. TK Solver backsolves equations automatically – no fiddling around with numerical methods – which makes it an attractive option for these kinds of calculations.]
  3. Repeat steps (1) and (2)  N times, where N is a “sufficiently large” number – say 10,000.
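
Here’s a minimal sketch of these steps in Python (my choice of tool – the original was done in TK Solver). It uses option c) from the note above: because the triangular CDF is piecewise quadratic, equations (6) and (7) can be inverted in closed form, which sidesteps backsolving altogether:

    import numpy as np

    T_MIN, T_ML, T_MAX = 2.0, 4.0, 8.0

    def invert_cdf(P):
        # solve equation (6) or (7) for t, given a cumulative probability P
        split = (T_ML - T_MIN) / (T_MAX - T_MIN)   # CDF value at t_ml
        if P <= split:  # equation (6) applies for t <= t_ml
            return T_MIN + np.sqrt(P * (T_ML - T_MIN) * (T_MAX - T_MIN))
        return T_MAX - np.sqrt((1.0 - P) * (T_MAX - T_ML) * (T_MAX - T_MIN))

    rng = np.random.default_rng()
    times = [invert_cdf(rng.random()) for _ in range(10_000)]  # steps 1 to 3

As a check, invert_cdf(0.4905) returns 4.503 hrs – the same value as the backsolved worked example described below.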

I did the calculations for N=10000 using the triangular distribution with parameters t_{min}=2, t_{ml}=4 and t_{max}=8. This gave me 10,000 values of P(t) and t.

As an example of how a simulation run proceeds, here’s the data from my first run: the random number generator returned 0.490474700804856 (call it 0.4905). This is the value of P(t). The time corresponding to this cumulative probability is obtained by solving equation (7) numerically for t. This gave t = 4.503057452476027 (call it 4.503), as shown in Figure 5. This is the completion time for the first run.

Figure 5

After completing 10,000 simulation runs, I sorted the resulting completion times into bins corresponding to time intervals of 0.25 hrs, starting from t=2 hrs through to t=8 hrs. The resulting histogram is shown in Figure 6. Each bar corresponds to the number of simulation runs that fall within that time interval.

Figure 6: Distribution of simulation runs

As one might expect, this looks like the triangular distribution shown in Figure 3. There is a difference though: Figure 3 plots probability as a continuous function of time whereas Figure 6 plots the number of simulation runs as a step function of time. To convince ourselves that the two are really the same, let’s look at the cumulative probability at t_{ml} – i.e. the probability that the task will be completed within 4 hrs. From equation (6) we get P(t_{ml})=0.3333. The corresponding number from the simulation is simply the number of simulation runs that had a completion time less than or equal to 4 hrs, divided by the total number of simulation runs. For my simulation this comes out to be 0.3383. The agreement’s not perfect, but it is convincing enough. Just to be sure, I performed the simulation a number of times – generating several sets of random numbers – and took the average of the predicted P(t_{ml}). The agreement between theory and simulation improved, as expected.
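
In the Python sketch above, this check is a one-liner:

    frac = np.mean(np.array(times) <= T_ML)  # fraction of runs done within 4 hrs
    # comes out near the theoretical 1/3 (0.3383 in the run described above)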

Wrap up

A limitation of the triangular distribution is that it imposes an upper cut-off at t_{max}. Long-tailed distributions may therefore be more realistic. In the end, though, the form of the distribution is neither here nor there, because the technique can be applied to any distribution. The real question is: how do we obtain reliable distributions for our estimates? There’s no easy answer to this one, but one can start with three point estimates (as in PERT) and then fit these to a triangular (or more complicated) distribution. Although it is best if one has historical data, in the absence of such data one can always start with reasonable guesses. The point is to refine these through experience.

Another point worth mentioning is that simulations can be done at a level higher than that of an individual task. In their brilliant book – Waltzing With Bears: Managing Risk on Software Projects – De Marco and Lister demonstrate the use of Monte Carlo methods to simulate various aspects of a project – velocity, time, cost etc. – at the project level (as opposed to the task level). I believe it is better to perform simulations at the lowest possible level (although it is a lot more work) – the main reason being that it is easier, and less error-prone, to estimate individual tasks than entire projects. Nevertheless, high level simulations can be very useful if one has reliable data to base them on.

I would be remiss if I didn’t mention some of the various Monte Carlo packages available in the market. I’ve never used any of these, but by all accounts they’re pretty good: see this commercial package or this one, for example. Both products use random number generators and sampling techniques that are far more sophisticated than the simple ones I’ve used in my example.

Finally, I have to admit that the example described in this post is a very complicated way of demonstrating the obvious – I started out with the triangular distribution and then got back the triangular distribution via simulation. My point, however, was to illustrate the method and show that it yields expected results in a situation where the answer is known. In a future post I’ll apply the method to more complex situations – for example, multiple tasks in series and parallel, with some dependency rules thrown in for good measure. Although I’ll use the triangular distribution for individual tasks, the results will be far from obvious: simulation methods really start to shine as complexity increases. But all that will have to wait for later. For now, I hope my example has helped illustrate how Monte Carlo methods can be used to simulate project tasks.

Note added on 21 Sept 2009:

Follow-up to this article published here.

Note added on 14 Dec 2009:

See this post for a Monte Carlo simulation of correlated project tasks.

Written by K

September 11, 2009 at 11:05 pm
