Eight to Late

Sensemaking and Analytics for Organizations


3 or 7, truth or trust


“It is clear that ethics cannot be articulated.” – Ludwig Wittgenstein

Over the last few years I’ve been teaching and refining a series of lecture-workshops on Decision Making Under Uncertainty. Audiences include data scientists and mid-level managers working in corporates and public service agencies. The course is based on the distinction between uncertainties in which the variables are known and can be quantified versus those in which the variables are not known upfront and/or are hard to quantify.

Before going any further, it is worth explaining the distinction via a couple of examples:

An example of the first type of uncertainty is project estimation. A project has an associated time and cost, and although we don’t know what their values are upfront, we can estimate them if we have the right data.  The point to note is this: because such problems can be quantified, the human brain tends to deal with them in a logical manner.

In contrast, business strategy is an example of the second kind of uncertainty. Here we do not know what the key variables are upfront. Indeed we cannot, because different stakeholders will perceive different aspects of a strategy to be paramount depending on their interests – consider, for example, the perspective of a CFO versus that of a CMO. Because of these differences, one cannot make progress on such problems until agreement has been reached on what is important to the group as a whole.  The point to note here is that since such problems involve contentious issues, our reactions to them  tend to be emotional rather than logical.

The difference between the two types of uncertainty is best conveyed experientially, so I have a few in-class activities aimed at doing just that. One of them is an exercise I call “3 or 7“, in which I give students a sheet with the following printed on it:

Circle either the number 3 or 7 below depending on whether you want 3 marks or 7 marks added to your Assignment 2 final mark. Yes, this offer is for real, but there is a catch: if more than 10% of the class select 7, no one gets anything.

Write your student ID on the paper so that Kailash can award you the marks. Needless to say, your choice will remain confidential: no one (but Kailash) will know what you have selected.

3              7

Prior to handing out the sheet, I tell them that they:

  • should sit far enough apart so that they can’t see what their neighbours choose,
  • are not allowed to communicate their choices to others until the entire class has turned in their sheets.

Before reading any further you may want to think about what typically happens.

–x–

Many readers would have recognized this exercise as a version of the Prisoner’s Dilemma and, indeed, many students in my classes recognize this too. Even so, there are always enough “win at the cost of others” types in the room to ensure that I don’t have to award any extra marks. I’ve run the exercise about 10 times, often with groups composed of highly collaborative individuals who work well together. Despite that, 15-20% of the class ends up opting for 7.

It never fails to surprise me that, even in relatively close-knit groups, there are invariably a number of individuals who, if given a chance to gain at the expense of their colleagues, will not hesitate to do so providing their anonymity is ensured.

–x–

Conventional management thinking deems that any organisational activity involving several people has to be closely supervised. Underlying this view is the assumption that individuals involved in the activity will, if left unsupervised, make decisions based on self-interest rather than the common good (as happens in the prisoner’s dilemma game). This assumption finds justification in rational choice theory, which predicts that individuals will act in ways that maximise their personal benefit without any regard to the common good. This view is exemplified in 3 or 7 and, at a societal level, in the so-called Tragedy of the Commons, where individuals who have access to a common resource over-exploit it,  thus depleting the resource entirely.

Fortunately, such a scenario need not come to pass: the work of Elinor Ostrom, one of the 2009 Nobel prize winners for Economics, shows that, given the right conditions, groups can work towards the common good even if it means forgoing personal gains.

Classical economics assumes that individuals’ actions are driven by rational self-interest – i.e. the well-known “what’s in it for me” factor. Clearly, though, a group that shares a common resource will achieve much better results as a whole if it exploits the resource cooperatively. There are several real-world examples where such cooperative behaviour has been successful in achieving outcomes for the common good (this paper touches on some). However, according to classical economic theory, such cooperative behaviour is simply not possible.

So the question is: what’s wrong with rational choice theory?  A couple of things, at least:

Firstly, implicit in rational choice theory is the assumption that individuals can figure out the best choice in any given situation.  This is obviously incorrect. As Ostrom has stated in one of her papers:

Because individuals are boundedly rational, they do not calculate a complete set of strategies for every situation they face. Few situations in life generate information about all potential actions that one can take, all outcomes that can be obtained, and all strategies that others can take.

Instead, they use heuristics (experience-based methods), norms (value-based techniques) and rules (mutually agreed regulations) to arrive at “good enough” decisions. Note that Ostrom makes a distinction between norms and rules, the former being implicit (unstated) rules determined by cultural attitudes and values.

Secondly, rational choice theory assumes that humans behave as self-centred, short-term maximisers. This assumption works in competitive situations such as the stock market, but not in situations that call for collective action, such as the prisoner’s dilemma.

Ostrom’s work essentially addresses the limitations of rational choice theory by outlining how individuals can work together to overcome self-interest.

–x–

In a paper entitled, A Behavioral Approach to the Rational Choice Theory of Collective Action, published in 1998, Ostrom states that:

…much of our current public policy analysis is based on an assumption that rational individuals are helplessly trapped in social dilemmas from which they cannot extract themselves without inducement or sanctions applied from the outside. Many policies based on this assumption have been subject to major failure and have exacerbated the very problems they were intended to ameliorate. Policies based on the assumptions that individuals can learn how to devise well-tailored rules and cooperate conditionally when they participate in the design of institutions affecting them are more successful in the field…[Note:  see this book by Baland and Platteau, for example]

Since rational choice theory aims to maximise individual gain,  it does not work in situations that demand collective action – and Ostrom presents some very general evidence to back this claim.  More interesting than the refutation of rational choice theory, though, is Ostrom’s discussion of the ways in which individuals “trapped” in social dilemmas end up making the right choices. In particular she singles out two empirically grounded ways in which individuals work towards outcomes that are much better than those offered by rational choice theory. These are:

Communication: In the rational view, communication makes no difference to the outcome.  That is, even if individuals make promises and commitments to each other (through communication), they will invariably break these for the sake of personal gain …or so the theory goes. In real life, however, it has been found that opportunities for communication significantly raise the cooperation rate in collective efforts (see this paper abstract or this one, for example). Moreover, research shows that face-to-face is far superior to any other form of communication, and that the main benefit achieved through communication is exchanging mutual commitment (“I promise to do this if you’ll promise to do that”) and increasing trust between individuals. It is interesting that the main role of communication is to enhance or reinforce the relationship between individuals rather than to transfer information.  This is in line with the interactional theory of communication.

Innovative Governance:  Communication by itself may not be enough; there must be consequences for those who break promises and commitments. Accordingly, cooperation can be encouraged by implementing mutually accepted rules for individual conduct, and imposing sanctions on those who violate them. This effectively amounts to designing and implementing novel governance structures for the activity. Note that this must be done by the group; rules thrust upon the group by an external authority are unlikely to work.

Of course, these factors do not come into play in artificially constrained and time-bound scenarios like 3 or 7.  In such situations, there is no opportunity or time to communicate or set up governance structures. What is clear, even from the simple 3 or 7 exercise,  is that these are required even for groups that appear to be close-knit.

Ostrom also identifies three core relationships that promote cooperation. These are:

Reciprocity: this refers to a family of strategies that are based on the expectation that people will respond to each other in kind – i.e. that they will do unto others as others do unto them.  In group situations, reciprocity can be a very effective means to promote and sustain cooperative behaviour.

Reputation: This refers to the general view of others towards a person. As such, reputation is a part of how others perceive a person, so it forms a part of the identity of the person in question. In situations demanding collective action, people might make judgements on a person’s reliability and trustworthiness based on his or her reputation.

Trust: Trust refers to expectations regarding others’ responses in situations where one has to act before others. And if you think about it, everything else in Ostrom’s framework is ultimately aimed at engendering or – if that doesn’t work – enforcing trust.

–x–

In an article on ethics and second-order cybernetics, Heinz von Foerster tells the following story:

I have a dear friend who grew up in Marrakech. The house of his family stood on the street that divided the Jewish and the Arabic quarter. As a boy he played with all the others, listened to what they thought and said, and learned of their fundamentally different views. When I asked him once, “Who was right?” he said, “They are both right.”

“But this cannot be,” I argued from an Aristotelian platform, “Only one of them can have the truth!”

“The problem is not truth,” he answered, “The problem is trust.”

For me, that last line summarises the lesson implicit in the admittedly artificial scenario of 3 or 7. In our search for facts and decision-making frameworks we forget the simple truth that in many real-life dilemmas these matter less than we think. Facts and frameworks cannot help us decide on ambiguous matters in which the outcome depends on what other people do. In such cases the problem is not truth; the problem is trust. From your own experience it should be evident that it is impossible to convince others of your trustworthiness by assertion; the only way to do so is by behaving in a trustworthy way. That is, by behaving ethically rather than talking about it, a point that is squarely missed by so-called business ethics classes.

Yes,  it is clear that ethics cannot be articulated.

Notes:

  1. Portions of this article are lightly edited sections from a 2009 article that I wrote on Ostrom’s work and its relevance to project management.
  2. Finally, an unrelated but important matter for which I seek your support for the common good: I’m taking on the 7 Bridges Walk to help those affected by cancer. Please donate via my 7 Bridges fundraising page if you can. Every dollar counts; all funds raised will help the Cancer Council work towards the vision of a cancer-free future.

Written by K

September 18, 2019 at 8:28 pm

Seven Bridges revisited – further reflections on the map and the territory


The  Seven Bridges Walk is an annual fitness and fund-raising event organised by the Cancer Council of New South Wales. The picturesque 28 km circuit weaves its way through a number of waterfront suburbs around Sydney Harbour and takes in some spectacular views along the way.  My friend John and I did the walk for the first time in 2017. Apart from thoroughly enjoying the experience, there was  another, somewhat unexpected payoff: the walk evoked some thoughts on project management and the map-territory relationship which I subsequently wrote up in a post on this blog.

Figure 1: The map, the plan

We enjoyed the walk so much that we decided to do it again in 2018. Now, it is a truism that one cannot travel exactly the same road twice. However, much is made of the repeatability of certain kinds of experiences. For example, the discipline of project management is largely predicated on the assumption that projects are repeatable.  I thought it would be interesting to see how this plays out in the case of a walk along a well-defined route, not the least because it is in many ways akin to a repeatable project.

To begin with, it is easy enough to compare the weather conditions on the two days: 29 Oct 2017 and 28 Oct 2018. A quick browse of this site gave me the data I was after (Figure 2).

Figure 2: Weather on 29 Oct 2017 and 28 Oct 2018

The data supports our subjective experience of the two walks. The conditions in 2017 were less than ideal for walking: clear and uncomfortably warm with a hot breeze from the north.  2018 was considerably better: cool and overcast with a gusty south wind – in other words, perfect walking weather. Indeed, one of the things we commented on the second time around was how much more pleasant it was.

But although weather conditions matter, they tell but a part of the story.

On the first walk, I took a number of photographs at various points along the way. I thought it would be interesting to take photographs at the same spots, at roughly the same time as I did the last time around, and compare how things looked a year on. In the next few paragraphs I show a few of these side by side (2017 left, 2018 right) along with some comments.

We started from Hunters Hill at about 7:45 am as we did on our first foray, and took our first photographs at Fig Tree Bridge, about a kilometre from the starting point.

Figure 3: Lane Cove River from Fig Tree Bridge (2017 Left, 2018 Right)

The purple Jacaranda that captivated us in 2017 looks considerably less attractive the second time around (Figure 3): the tree is yet to flower and what little there is does not show well in the cloud-diffused light. Moreover, the scaffolding and roof covers on the building make for a much less attractive picture. Indeed, had the scene looked so the first time around, it is unlikely we would have considered it worthy of a photograph.

The next shot (Figure 4), taken not more than a  hundred metres from the previous one, also looks considerably different:  rougher waters and no kayakers in the foreground. Too cold and windy, perhaps?  The weather and wind data in Fig 2 would seem to support that conclusion.

Figure 4: Morning kayakers on the river (2017 Left, 2018 Right)

The photographs in Figure 5 were taken at Pyrmont Bridge about four hours into the walk. We already know from Figure 4 that it was considerably windier in 2018. A comparison of the flags in the two shots in Figure 5 reveals an additional detail: the wind was from opposite directions in the two years. This is confirmed by the weather information in Figure 2, which tells us that the wind was from the north in 2017 and from the south the following year (which explains the cooler conditions). We can even get an approximate temperature: the photographs were taken around 11:30 am both years, and a quick look at Figure 2 reveals that the temperature at noon was about 30 C in 2017 and 18 C in 2018.

Figure 5: Pyrmont Bridge (2017 Left, 2018 Right)

The point about the wind direction and cloud conditions is also confirmed by comparing the photographs in Figure 6, taken at Anzac Bridge, a few kilometres further along the way (see the direction of the flag atop the pylon).

Figure 6: View looking up Anzac Bridge (2017 L, 2018 R)

Skipping over to the final section of the walk, here are a couple of shots I took towards the end: Figure 7 shows a view from Gladesville Bridge and Figure 8 shows one from Tarban Creek Bridge.  Taken together the two confirm some of the things we’ve already noted regarding the weather and conditions for photography.

Figure 7: View from Gladesville Bridge (2017 L, 2018 R)

Further, if you look closely at Figures 7 and 8, you will also see the differences in the flowering stage of the Jacaranda.

Figure 8: View from Tarban Creek Bridge (2017 L, 2018 R)

A detail that I did not notice until John pointed it out is that the boat at the bottom edge of both photographs in Fig. 8 is the same one (note the colour of the furled sail)! This was surprising to us, but it should not have been. It turns out that boat owners have to apply for private mooring licenses and are allocated positions at which they install a suitable mooring apparatus. Although this is common knowledge for boat owners, it likely isn’t so for others.

The photographs are a visual record of some of the things we encountered along the way. However, the details recorded in them have more to do with aesthetics than with the experience – in photography of this kind, one tends to favour what looks good over what happened. Sure, some of the photographs offer hints about the experience but much of this is incidental and indirect. For example, when taking the photographs in Figures 5 and 6, it was certainly not my intention to record the wind direction. Indeed, that would have been a highly convoluted way to convey information that is directly and more accurately described by the data in Figure 2. That said, even data has limitations: it can help fill in details such as the wind direction and temperature but it does not evoke any sense of what it was like to be there, to experience the experience, so to speak.

Neither data nor photographs are the stuff memories are made of. For that one must look elsewhere.

–x–

As Heraclitus famously said, one can never step into the same river twice. So it is with walks. Every experience of a walk is unique; although the map remains the same, the territory is invariably different on each traverse, even if only subtly so. Indeed, one could say that the territory is defined through one’s experience of it. That experience is not reproducible; there are always differences in the details.

As John Salvatier points out, reality has a surprising amount of detail, much of which we miss because we look but do not see. Seeing entails a deliberate focus on minutiae such as the play of morning light on the river or tree; the cool damp from last night’s rain; changes in the built environment, some obvious, others less so.  Walks are made memorable by precisely such details, but paradoxically  these can be hard to record in a meaningful way.  Factual (aka data-driven) descriptions end up being laundry lists that inevitably miss the things that make the experience memorable.

Poets do a better job. Consider, for instance, Tennyson‘s take on a brook:

“…I chatter over stony ways,
In little sharps and trebles,
I bubble into eddying bays,
I babble on the pebbles.

With many a curve my banks I fret
By many a field and fallow,
And many a fairy foreland set
With willow-weed and mallow.

I chatter, chatter, as I flow
To join the brimming river,
For men may come and men may go,
But I go on for ever….”

One can almost see and hear a brook. Not Tennyson’s, but one’s own version of it.

Evocative descriptions aren’t the preserve of poets alone. Consider the following description of Sydney Harbour, taken from DH Lawrence‘s Kangaroo:

“…He took himself off to the gardens to eat his custard apple-a pudding inside a knobbly green skin-and to relax into the magic ease of the afternoon. The warm sun, the big, blue harbour with its hidden bays, the palm trees, the ferry steamers sliding flatly, the perky birds, the inevitable shabby-looking, loafing sort of men strolling across the green slopes, past the red poinsettia bush, under the big flame-tree, under the blue, blue sky-Australian Sydney with a magic like sleep, like sweet, soft sleep-a vast, endless, sun-hot, afternoon sleep with the world a mirage. He could taste it all in the soft, sweet, creamy custard apple. A wonderful sweet place to drift in….”

Written in 1923, it remains a brilliant evocation of the Harbour even today.

Tennyson’s brook and Lawrence’s Sydney do a better job than photographs or factual description, even though the latter are considered more accurate and objective. Why?  It is because their words are more than mere description: they are stories that convey a sense of what it is like to be there.

–x–

The two editions of the walk covered exactly the same route, but our experiences of the territory on the two instances were very different. The differences were in details that ultimately added up to the uniqueness of each experience.  These details cannot  be captured by maps and visual or written records, even in principle. So although one may gain familiarity with certain aspects of a territory through repetition, each lived experience of it will be unique. Moreover, no two individuals will experience the territory in exactly the same way.

When bidding for projects, consultancies make much of their prior experience of doing similar projects elsewhere. The truth, however, is that although two projects may look identical on paper, they will invariably be different in practice. The map, as Korzybski famously said, is not the territory. What’s more, every encounter with the territory is different.

All this is not to say that maps (or plans or data) are useless; one needs them as orienting devices. However, one must accept that they offer limited guidance on how to deal with the day-to-day events and occurrences on a project. These tend to be unique because they are highly context dependent. The lived experience of a project is therefore necessarily different from the planned one. How can one gain insight into the former? Tennyson and Lawrence offer a hint: look to the stories told by people who have traversed the territory, rather than the maps, plans and data-driven reports they produce.

Written by K

February 15, 2019 at 8:24 am


A gentle introduction to Monte Carlo simulation for project managers


This article covers the why, what and how of Monte Carlo simulation using a canonical example from project management – estimating the duration of a small project. Before starting, however, I’d like to say a few words about the tool I’m going to use.

Despite the bad rap spreadsheets get from tech types – and I have to admit that many of their complaints are justified – the fact is, Excel remains one of the most ubiquitous “computational” tools in the corporate world. Most business professionals would have used it at one time or another. So, if you’re a project manager and want the rationale behind your estimates to be accessible to the widest possible audience, you are probably better off presenting them in Excel than in SPSS, SAS, Python, R or pretty much anything else. Consequently, the tool I’ll use in this article is Microsoft Excel. For those who know about Monte Carlo and want to cut to the chase, here’s the Excel workbook containing all the calculations detailed in later sections. However, if you’re unfamiliar with the technique, you may want to have a read of the article before playing with the spreadsheet.

In keeping with the format of the tutorials on this blog, I’ve assumed very little prior knowledge about probability, let alone Monte Carlo simulation. Consequently, the article is verbose and the tone somewhat didactic.

Introduction

Estimation is a key part of a project manager’s role. The most frequent (and consequential) estimates they are asked to deliver relate to time and cost. Often these are calculated and presented as point estimates: i.e. single numbers – as in, this task will take 3 days. Or, a little better, as two-point ranges – as in, this task will take between 2 and 5 days. Better still, many use a PERT-like approach wherein estimates are based on 3 points: best, most likely and worst case scenarios – as in, this task will take between 2 and 5 days, but it’s most likely that we’ll finish on day 3. We’ll use three-point estimates as a starting point for Monte Carlo simulation, but first, some relevant background.

It is a truism, well borne out by experience, that it is easier to estimate small, simple tasks than large, complex ones. Indeed, this is why one of the early to-dos in a project is the construction of a work breakdown structure. However, a problem arises when one combines the estimates for individual elements into an overall estimate for a project or a phase thereof. It is that a straightforward addition of individual estimates or bounds will almost always lead to a grossly incorrect estimation of overall time or cost. The reason for this is simple: estimates are necessarily based on probabilities and probabilities do not combine additively. Monte Carlo simulation provides a principled and intuitive way to obtain probabilistic estimates at the level of an entire project based on estimates of the individual tasks that comprise it.

The problem

The best way to explain Monte Carlo is through a simple worked example. So, let’s consider the 4-task project shown in Figure 1. In the project, the second task depends on the first, and the third and fourth depend on the second but not on each other. The upshot of this is that the first two tasks have to be performed sequentially and the last two can be done at the same time, but can only be started after the second task is completed.

To summarise: the first two tasks must be done in series and the last two can be done in parallel.

Figure 1: A project with 4 tasks.

Figure 1 also shows the three-point estimates for each task – that is, the minimum, most likely and maximum completion times. For completeness I’ve listed them below:

  • Task 1 – Min: 2 days; Most Likely: 4 days; Max: 8 days
  • Task 2 – Min: 3 days; Most Likely: 5 days; Max: 10 days
  • Task 3 – Min: 3 days; Most Likely: 6 days; Max: 9 days
  • Task 4 – Min: 2 days; Most Likely: 4 days; Max: 7 days

OK, so that’s the situation as it is given to us. The first step to developing an estimate is to formulate the problem in a way that can be tackled using Monte Carlo simulation. This brings us to the important topic of the shape of uncertainty, aka probability distributions.

The shape of uncertainty

Consider the data for Task 1. You have been told that it most often finishes on day 4. However, if things go well, it could take as little as 2 days; but if things go badly it could take as long as 8 days. Therefore, your range of possible finish times (outcomes) is between 2 and 8 days.

Clearly, these outcomes are not all equally likely. The most likely outcome is that you will finish the task in 4 days (from what your team member has told you). Moreover, the likelihood of finishing in less than 2 days or more than 8 days is zero. If we plot the likelihood of completion against completion time, it would look something like Figure 2.

Figure 2: Likelihood of finishing on day 2, day 4 and day 8.

Figure 2 raises a couple of questions:

  1. What are the relative likelihoods of completion for all intermediate times – i.e. those between 2 and 4 days and between 4 and 8 days?
  2. How can one quantify the likelihood of intermediate times? In other words, how can one get a numerical value of the likelihood for all times between 2 and 8 days?  Note that we know from the earlier discussion that this must be zero for any time less than 2 or greater than 8 days.

The two questions are actually related. As we shall soon see, once we know the relative likelihood of completion at all times (compared to the maximum), we can work out its numerical value.

Since we don’t know anything about intermediate times (I’m assuming there is no other historical data available), the simplest thing to do is to assume that the likelihood increases linearly (as a straight line) from 2 to 4 days and decreases in the same way from 4 to 8 days as shown in Figure 3. This gives us the well-known triangular distribution.

Jargon Buster: The term distribution is simply a fancy word for a plot of likelihood vs. time.

Figure 3: Triangular distribution fitted to points in Figure 1

Of course, this isn’t the only possibility; there are an infinite number of others. Figure 4 is another (admittedly weird) example.

Figure 4: Another distribution that fits the points in Figure 2.

Further, it is quite possible that the upper limit (8 days) is not a hard one. It may be that in exceptional cases the task could take much longer (for example, if your team member calls in sick for two weeks) or even not be completed at all (for example, if she then leaves for that mythical greener pasture).  Catering for the latter possibility, the shape of the likelihood might resemble Figure 5.

Figure 5: A distribution that allows for a very long (potentially) infinite completion time

The main takeaway from the above is that uncertainties should be expressed as shapes rather than numbers, a notion popularised by Sam Savage in his book, The Flaw of Averages.

[Aside: you may have noticed that all the distributions shown above are skewed to the right – that is, they have a long tail. This is a general feature of distributions that describe time (or cost) of project tasks. It would take me too far afield to discuss why this is so, but if you’re interested you may want to check out my post on the inherent uncertainty of project task estimates.]

From likelihood to probability

Thus far, I have used the word “likelihood” without bothering to define it.  It’s time to make the notion more precise.  I’ll begin by asking the question: what common sense properties do we expect a quantitative measure of likelihood to have?

Consider the following:

  1. If an event is impossible, its likelihood should be zero.
  2. The sum of likelihoods of all possible events should equal complete certainty. That is, it should be a constant. As this constant can be anything, let us define it to be 1.

In terms of the example above, if we denote time by t and the likelihood by P(t)  then:

P(t) = 0 for t < 2 and t > 8

And

\sum_{t}P(t) = 1 where 2\leq t\leq 8

Where \sum_{t} denotes the sum of all non-zero likelihoods – i.e. those that lie between 2 and 8 days. In simple terms this is the area enclosed by the likelihood curves and the x axis in figures 2 to 5.  (Technical Note:  Since t is a continuous variable, this should be denoted by an integral rather than a simple sum, but this is a technicality that need not concern us here)

P(t) is, in fact, what mathematicians call probability – which explains why I have used the symbol P rather than L. Now that I’ve explained what it is, I’ll use the word “probability” instead of “likelihood” in the remainder of this article.

With these assumptions in hand, we can now obtain numerical values for the probability of completion for all times between 2 and 8 days. This can be figured out by noting that the area under the probability curve (the triangle in figure 3 and the weird shape in figure 4) must equal 1, and we’ll do this next.  Indeed, for the problem at hand, we’ll assume that all four task durations can be fitted to triangular distributions. This is primarily to keep things  simple. However, I should emphasise that you can use any shape so long as you can express it mathematically, and I’ll say more about this towards the end of this article.

The triangular distribution

Let’s look at the estimate for Task 1. We have three numbers corresponding to a minimum, most likely and maximum time. To keep the discussion general, we’ll call these t_{min}, t_{ml} and t_{max} respectively (we’ll get back to our estimator’s specific numbers later).

Now, what about the probabilities associated with each of these times?

Since t_{min} and t_{max} correspond to the minimum and maximum times,  the probability associated with these is zero. Why?  Because if it wasn’t zero, then there would be a non-zero probability of completion for a time less than t_{min} or greater than t_{max} – which isn’t possible [Note: this is a consequence of the assumption that the probability varies continuously –  so if it takes on non-zero value, p_{0},  at t_{min} then it must take on a value slightly less than p_{0} – but greater than 0 –  at t slightly smaller than t_{min} ] .   As far as  the most likely time,  t_{ml},  is concerned:  by definition, the probability attains its highest value at time t_{ml}.    So, assuming the probability can be described by a triangular function, the distribution must have the form shown in Figure 6 below.

Figure 6: Triangular distribution redux.

For the simulation, we need to know the equation describing the above distribution.  Although Wikipedia will tell us the answer in a mouse-click, it is instructive to figure it out for ourselves. First, note that the area under the triangle must be equal to  1 because the task must finish at some time between t_{min} and t_{max}.   As a consequence we have:

\frac{1}{2}\times{base}\times{altitude}=\frac{1}{2}\times{(t_{max}-t_{min})}\times{p(t_{ml})}=1\ldots\ldots{(1)}

where p(t_{ml}) is the probability corresponding to time t_{ml}.  With a bit of rearranging we get,

p(t_{ml})=\frac{2}{(t_{max}-t_{min})}\ldots\ldots(2)

To derive the probability for any time t lying between t_{min} and t_{ml}, we note that:

\frac{(t-t_{min})}{p(t)}=\frac{(t_{ml}-t_{min})}{p(t_{ml})}\ldots\ldots(3)

This is a consequence of the fact that the ratios on either side of equation (3)  are  equal to the slope of the line joining the points (t_{min},0) and (t_{ml}, p(t_{ml})).

Figure 7

Substituting (2) in (3) and simplifying a bit, we obtain:

p(t)=\frac{2(t-t_{min})}{(t_{ml}-t_{min})(t_{max}-t_{min})}\dots\ldots(4) for t_{min}\leq t \leq t_{ml}

In a similar fashion one can show that the probability for times lying between t_{ml} and t_{max} is given by:

p(t)=\frac{2(t_{max}-t)}{(t_{max}-t_{ml})(t_{max}-t_{min})}\dots\ldots(5) for t_{ml}\leq t \leq t_{max}

Equations 4 and 5 together describe the probability distribution function (or PDF)  for all times between t_{min} and t_{max}.
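
For readers who prefer code to algebra, here is a minimal Python/NumPy sketch of equations (4) and (5); the function name and layout are my own, not something from the workbook:

```python
import numpy as np

def triangular_pdf(t, t_min, t_ml, t_max):
    """Probability density per equations (4) and (5): zero outside [t_min, t_max],
    rising linearly to a peak at t_ml, then falling linearly to zero at t_max."""
    t = np.asarray(t, dtype=float)
    p = np.zeros_like(t)
    width = t_max - t_min
    rising = (t >= t_min) & (t <= t_ml)
    falling = (t > t_ml) & (t <= t_max)
    p[rising] = 2 * (t[rising] - t_min) / ((t_ml - t_min) * width)    # equation (4)
    p[falling] = 2 * (t_max - t[falling]) / ((t_max - t_ml) * width)  # equation (5)
    return p

# Task 1: the peak density at t_ml = 4 should be 2/(t_max - t_min) = 1/3
print(triangular_pdf([2, 4, 8], t_min=2, t_ml=4, t_max=8))  # -> [0., 0.333..., 0.]
```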

As it turns out, in Monte Carlo simulations, we don’t directly work with the probability distribution function. Instead we work with the cumulative distribution function (or CDF), which is the probability, P, that the task is completed by time t. To reiterate, the PDF, p(t), is the probability of the task finishing at time t whereas the CDF, P(t), is the probability of the task completing by time t. The CDF, P(t), is essentially a sum of all probabilities between t_{min} and t. For t_{min}\leq t \leq t_{ml} this is the area under the triangle with vertices at (t_{min}, 0), (t, 0) and (t, p(t)).  Using the formula for the area of a triangle (1/2 base times height) and equation (4) we get:

P(t)=\frac{(t-t_{min})^2}{(t_{ml}-t_{min})(t_{max}-t_{min})}\ldots\ldots(6) for t_{min}\leq t \leq t_{ml}

Noting that for t \geq t_{ml}, the area under the curve equals the total area minus the area enclosed by the triangle with base between t and t_{max}, we have:

P(t)=1- \frac{(t_{max}-t)^2}{(t_{max}-t_{ml})(t_{max}-t_{min})}\ldots\ldots(7) for t_{ml}\leq t \leq t_{max}

As expected,  P(t)  starts out with a value 0 at t_{min} and then increases monotonically, attaining a value of 1 at t_{max}.
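
The CDF in equations (6) and (7) can be sketched the same way (again, the names are mine and this is just an illustration of the algebra above):

```python
import numpy as np

def triangular_cdf(t, t_min, t_ml, t_max):
    """Cumulative probability of completion by time t, per equations (6) and (7)."""
    t = np.asarray(t, dtype=float)
    P = np.zeros_like(t)
    width = t_max - t_min
    left = (t > t_min) & (t <= t_ml)
    right = (t > t_ml) & (t < t_max)
    P[left] = (t[left] - t_min) ** 2 / ((t_ml - t_min) * width)        # equation (6)
    P[right] = 1 - (t_max - t[right]) ** 2 / ((t_max - t_ml) * width)  # equation (7)
    P[t >= t_max] = 1.0
    return P

# Task 1: 0 at t_min, (t_ml - t_min)/(t_max - t_min) = 1/3 at the most likely time, 1 at t_max
print(triangular_cdf([2, 4, 8], t_min=2, t_ml=4, t_max=8))  # -> [0., 0.333..., 1.]
```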

To end this section let’s plug in the numbers quoted by our estimator at the start of this section: t_{min}=2, t_{ml}=4 and t_{max}=8.  The resulting PDF and CDF are shown in figures 8 and 9.

Figure 8: PDF for triangular distribution (tmin=2, tml=4, tmax=8)

Figure 9 – CDF for triangular distribution (tmin=2, tml=4, tmax=8)

Monte Carlo in a minute

Now with all that conceptual work done, we can get to the main topic of this post:  Monte Carlo estimation. The basic idea behind Monte Carlo is to simulate the entire project (all 4 tasks in this case) a large number N (say 10,000) times and thus obtain N overall completion times.  In each of the N trials, we simulate each of the tasks in the project and add them up appropriately to give us an overall project completion time for the trial.  The resulting N overall completion times will all be different, ranging from the sum of the minimum completion times to the sum of the maximum completion times.  In other words, we will obtain the PDF and CDF for the overall completion time, which will enable us to answer questions such as:

  • How likely is it that the project will be completed within 17 days?
  • What’s the estimated time for which I can be 90% certain that the project will be completed? For brevity, I’ll call this the 90% completion time in the rest of this piece.

“OK, that sounds great”, you say, “but how exactly do we simulate a single task”?

Good question, and I was just about to get to that…

Simulating a single task using the CDF

As we saw earlier, the CDF for the triangular distribution has an S shape and ranges from 0 to 1 in value. It turns out that the S shape is characteristic of all CDFs, regardless of the details of the underlying PDF. Why? Because the cumulative probability must lie between 0 and 1 (remember, probabilities can never exceed 1, nor can they be negative).

OK, so to simulate a task, we:

  • generate a random number between 0 and 1; this corresponds to the (cumulative) probability that the task will finish by time t.
  • find the time, t, that corresponds to this value of the probability. This is the completion time for the task for this trial.

Incidentally, this method is called inverse transform sampling.

An example might help clarify how inverse transform sampling works. Assume that the random number generated is 0.4905. From the CDF for the first task, we see that this value of probability corresponds to a completion time of 4.503 days, which is the completion time for this trial (see Figure 10). Simple!

Figure 10: Illustrating inverse transform sampling

In this case we found the time directly from the computed CDF. That’s not too convenient when you’re simulating the project 10,000 times. Instead, we need a programmable math expression that gives us the time corresponding to the probability directly. This can be obtained by solving equations (6) and (7) for t. Some straightforward algebra yields the following two expressions for t:

t = t_{min} + \sqrt{P(t)(t_{ml} -  t_{min})(t_{max} - t_{min})} \ldots\ldots(8) for t_{min}\leq t \leq t_{ml}

And

t = t_{max} - \sqrt{[1-P(t)](t_{max} -  t_{ml})(t_{max} - t_{min})} \ldots\ldots(9) for t_{ml}\leq t \leq t_{max}

These can be easily combined in a single Excel formula using an IF function, and I’ll show you exactly how in a minute. Yes, we can now finally get down to the Excel simulation proper and you may want to download the workbook if you haven’t done so already.
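
Before opening the workbook, here is a hedged Python rendering of that conditional logic, i.e. inverse transform sampling for the triangular distribution via equations (8) and (9). The function name is mine; the workbook itself does this with an Excel IF() formula:

```python
import numpy as np

def triangular_inverse_cdf(P, t_min, t_ml, t_max):
    """Map a cumulative probability P in [0, 1] to a completion time, using
    equation (8) below the break point P(t_ml) and equation (9) above it."""
    P = np.asarray(P, dtype=float)
    P_ml = (t_ml - t_min) / (t_max - t_min)  # CDF value at the most likely time
    below = t_min + np.sqrt(P * (t_ml - t_min) * (t_max - t_min))        # equation (8)
    above = t_max - np.sqrt((1 - P) * (t_max - t_ml) * (t_max - t_min))  # equation (9)
    return np.where(P <= P_ml, below, above)

# The worked example above: a random number of 0.4905 for Task 1 gives ~4.503 days
print(triangular_inverse_cdf(0.4905, t_min=2, t_ml=4, t_max=8))

# Simulating a task is then just a matter of feeding in uniform random numbers
rng = np.random.default_rng(0)
task1_times = triangular_inverse_cdf(rng.random(10_000), 2, 4, 8)
```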

The simulation

Open up the workbook and focus on the first three columns of the first sheet to begin with. These simulate the first task in Figure 1, which also happens to be the task we have used to illustrate the construction of the triangular distribution as well as the mechanics of Monte Carlo.

Rows 2 to 4 in columns A and B list the min, most likely and max completion times while the same rows in column C list the probabilities associated with each of the times. For t_{min} the probability is 0 and for t_{max} it is 1.  The probability at t_{ml} can be calculated using equation (6) which, for t=t_{ml}, reduces to

P(t_{ml}) =\frac{(t_{ml}-t_{min})}{t_{max}-t_{min}}\ldots\ldots(10)

Rows 6 through 10005 in column A are simulated probabilities of completion for Task 1. These are obtained via the Excel RAND() function, which generates uniformly distributed random numbers lying between 0 and 1.  This gives us a list of probabilities corresponding to 10,000 independent simulations of Task 1.

The 10,000 probabilities need to be translated into completion times for the task. This is done using equations (8) or (9) depending on whether the simulated probability is less or greater than P(t_{ml}), which is in cell C3 (and given by Equation (10) above). The conditional statement can be coded in an Excel formula using the IF() function.

Tasks 2-4 are coded in exactly the same way, with distribution parameters in rows 2 through 4 and simulation details in rows 6 through 10005 in the columns listed below:

  • Task 2 – probabilities in column D; times in column F
  • Task 3 – probabilities in column H; times in column I
  • Task 4 – probabilities in column K; times in column L

That’s basically it for the simulation of individual tasks. Now let’s see how to combine them.

For tasks in series (Tasks 1 and 2), we simply sum the completion times for each task to get the overall completion times for the two tasks.  This is what’s shown in rows 6 through 10005 of column G.

For tasks in parallel (Tasks 3 and 4), the overall completion time is the maximum of the completion times for the two tasks. This is computed and stored in rows 6 through 10005 of column N.

Finally, the overall project completion time for each simulation is then simply the sum of columns G and N (shown in column O)
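
If you would rather see the whole pipeline in code than in the workbook, here is a rough Python sketch of the same logic. It uses NumPy’s built-in triangular sampler instead of the hand-rolled inverse transform (the two are equivalent), and the variable names are my own:

```python
import numpy as np

rng = np.random.default_rng(2018)
N = 10_000  # number of Monte Carlo trials

# (t_min, t_ml, t_max) for each task, as given in Figure 1
tasks = {1: (2, 4, 8), 2: (3, 5, 10), 3: (3, 6, 9), 4: (2, 4, 7)}

# N simulated durations per task (one "column" per task, as in the workbook)
sim = {k: rng.triangular(lo, ml, hi, size=N) for k, (lo, ml, hi) in tasks.items()}

serial = sim[1] + sim[2]               # Tasks 1 and 2 in series: add their durations
parallel = np.maximum(sim[3], sim[4])  # Tasks 3 and 4 in parallel: take the maximum
total = serial + parallel              # overall completion time for each trial
```

The arrays serial, parallel and total play the same roles as columns G, N and O of the workbook.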

Sheets 2 and 3 are plots of the probability and cumulative probability distributions for overall project completion times. I’ll cover these in the next section.

Discussion – probabilities and estimates

The figure on Sheet 2 of the Excel workbook (reproduced in Figure 11 below) is the probability distribution function (PDF) of completion times. The x-axis shows the elapsed time in days and the y-axis the number of Monte Carlo trials with completion times that lie in the relevant time bin (of width 0.5 days). As an example, for the simulation shown in Figure 11, there were 882 trials (out of 10,000) with a completion time between 16.25 and 16.75 days. Your numbers will vary, of course, but you should have a maximum in the 16 to 17 day range and a trial number that is reasonably close to the one I got.

Figure 11: Probability distribution of completion times (N=10,000)

I’ll say a bit more about Figure 11 in the next section. For now, let’s move on to Sheet 3 of the workbook, which shows the cumulative probability of completion by a particular day (Figure 12 below). The figure shows the cumulative distribution function (CDF), which is the sum of the probabilities of all completion times from the earliest possible completion day up to the particular day.

Figure 12: Probability of completion by a particular day (N=10,000)

To reiterate a point made earlier,  the reason we work with the CDF  rather than the PDF is that we are interested in knowing the probability of completion by a particular date (e.g. it is 90% likely that we will finish by April 20th) rather than the probability of completion on a particular date (e.g. there’s a 10% chance we’ll finish on April 17th). We can now answer the two questions we posed earlier. As a reminder, they are:

  • How likely is it that the project will be completed within 17 days?
  • What’s the 90% likely completion time?

Both questions are easily answered by using the cumulative distribution chart on Sheet 3 (or Fig 12). Reading the relevant numbers from the chart (a short code sketch below the list offers a cross-check), I see that:

  • There’s a 60% chance that the project will be completed in 17 days.
  • The 90% likely completion time is 19.5 days.
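
For those following along in Python rather than Excel, both numbers can be read straight off an array of simulated totals like the one sketched earlier (exact values will wobble a little from run to run and won’t match the workbook’s random draws):

```python
import numpy as np

rng = np.random.default_rng(2018)
N = 10_000
tasks = {1: (2, 4, 8), 2: (3, 5, 10), 3: (3, 6, 9), 4: (2, 4, 7)}
sim = {k: rng.triangular(lo, ml, hi, size=N) for k, (lo, ml, hi) in tasks.items()}
total = sim[1] + sim[2] + np.maximum(sim[3], sim[4])

print(f"P(done within 17 days) ~ {np.mean(total <= 17):.0%}")        # roughly 60%
print(f"90% likely completion  ~ {np.quantile(total, 0.9):.1f} days") # roughly 19.5 days
```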

How does the latter compare to the sum of the 90% likely completion times for the individual tasks? The 90% likely completion time for a given task can be calculated by solving Equation 9 for t, with appropriate values for the parameters t_{min}, t_{max} and t_{ml} plugged in, and P(t) set to 0.9. This gives the following values for the 90% likely completion times:

  • Task 1 – 6.5 days
  • Task 2 – 8.1 days
  • Task 3 – 7.7 days
  • Task 4 – 5.8 days

Summing up the first three tasks (remember, Tasks 3 and 4 are in parallel) we get a total of 22.3 days, which is clearly an overestimation. Now, with the benefit of having gone through the simulation, it is easy to see that the sum of the 90% likely completion times for individual tasks does not equal the 90% likely completion time for the sum of the relevant individual tasks – the first three tasks in this particular case. Why? Essentially because a Monte Carlo run in which the first three tasks all take as long as their (individual) 90% likely completion times is highly unlikely. Exercise:  use the worksheet to estimate how likely this is.
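
As a rough cross-check in Python: the per-task 90% times follow from equation (9), and a quick simulation suggests how rarely the first three tasks are all that slow in the same trial (function and variable names are mine):

```python
import numpy as np

tasks = {1: (2, 4, 8), 2: (3, 5, 10), 3: (3, 6, 9), 4: (2, 4, 7)}

def p90_time(t_min, t_ml, t_max, P=0.9):
    # Equation (9); applicable here because 0.9 exceeds P(t_ml) for every task
    return t_max - np.sqrt((1 - P) * (t_max - t_ml) * (t_max - t_min))

p90 = {k: p90_time(*v) for k, v in tasks.items()}
print({k: round(v, 1) for k, v in p90.items()})  # {1: 6.5, 2: 8.1, 3: 7.7, 4: 5.8}
print(round(sum(p90[k] for k in (1, 2, 3)), 1))  # ~22.2 (22.3 when the rounded values are summed)

# Exercise, roughly: how often do Tasks 1-3 *all* take at least their 90% times in one trial?
rng = np.random.default_rng(0)
N = 100_000
slow = np.ones(N, dtype=bool)
for k in (1, 2, 3):
    lo, ml, hi = tasks[k]
    slow &= rng.triangular(lo, ml, hi, size=N) >= p90[k]
print(slow.mean())  # ~0.001, i.e. about one trial in a thousand
```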

There’s much more that can be learnt from the CDF. For example, it also tells us that the greatest uncertainty in the estimate is in the 5 day period from ~14 to 19 days because that’s the region in which the probability changes most rapidly as a function of elapsed time. Of course, the exact numbers are dependent on the assumed form of the distribution. I’ll say more about this in the final section.

To close this section, I’d like to reprise a point I mentioned earlier: that uncertainty is a shape, not a number. Monte Carlo simulations make the uncertainty in estimates explicit and can help you frame your estimates in the language of probability…and using a tool like Excel can help you explain these to non-technical people like your manager.

Closing remarks

We’ve covered a fair bit of ground: starting from general observations about how long a task takes, we saw how to construct simple probability distributions and then combine these using Monte Carlo simulation. Before I close, there are a few general points I should mention for completeness…and as a warning.

First up, it should be clear that the estimates one obtains from a simulation depend critically on the form and parameters of the distribution used. The parameters are essentially an empirical matter; they should be determined using historical data. The form of the function is another matter altogether: as pointed out in an earlier section, one cannot determine the shape of a function from a finite number of data points. Instead, one has to focus on the properties that are important. For example, is there a small but finite chance that a task can take an unreasonably long time? If so, you may want to use a lognormal distribution…but remember, you will need to find a sensible way to estimate the distribution parameters from your historical data.

Second, you may have noted from the probability distribution curve (Figure 11) that despite the skewed distributions of the individual tasks, the distribution of the overall completion time is somewhat symmetric, with a minimum of ~9 days, a most likely time of ~16 days and a maximum of ~24 days. It turns out that this is a general property of distributions that are generated by adding a large number of independent probabilistic variables. As the number of variables increases, the overall distribution will tend to the ubiquitous Normal distribution.

The assumption of independence merits a closer look. In the case at hand, it implies that the completion times for each task are independent of each other. As most project managers will know from experience, this is rarely the case: in real life, a task that is delayed will usually have knock-on effects on subsequent tasks. One can easily incorporate such dependencies in a Monte Carlo simulation. A formal way to do this is to introduce a non-zero correlation coefficient between tasks as I have done here. A simpler and more realistic approach is to introduce conditional inter-task dependencies. As an example, one could have an inter-task delay that kicks in only if the predecessor task takes more than 80% of its maximum time.
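
To illustrate the conditional approach, the sketch below adds a fixed one-day knock-on delay to Task 2 whenever Task 1 runs past 80% of its maximum time. Both the threshold and the size of the delay are made-up parameters for illustration, not something taken from the workbook:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 10_000
task1 = rng.triangular(2, 4, 8, size=N)
task2 = rng.triangular(3, 5, 10, size=N)

# Conditional inter-task dependency: if Task 1 drags past 80% of its
# maximum (6.4 days), assume Task 2 suffers a one-day knock-on delay.
knock_on = np.where(task1 > 0.8 * 8, 1.0, 0.0)
task2_dependent = task2 + knock_on

# The dependent version runs longer on average
print(np.mean(task1 + task2), np.mean(task1 + task2_dependent))
```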

Thirdly, you may have wondered why I used 10,000 trials: why not 100, 1,000 or 20,000? This has to do with the tricky issue of convergence. In a nutshell, the estimates we obtain should not depend on the number of trials used. Why? Because if they did, they’d be meaningless.

Operationally, convergence means that any predicted quantity based on aggregates should not vary with the number of trials. So, if our Monte Carlo simulation has converged, our prediction of 19.5 days for the 90% likely completion time should not change substantially if I increase the number of trials from ten to twenty thousand. I did this and obtained almost the same value of 19.5 days. The average and median completion times (shown in cells Q3 and Q4 of Sheet 1) also remained much the same (16.8 days). If you wish to repeat the calculation, be sure to change the formulas on all three sheets appropriately. I was lazy and hardcoded the number of trials. Sorry!
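
An informal way to check convergence is to repeat the simulation at several trial counts and watch the quantity of interest settle down; a rough sketch, with the same assumptions as before:

```python
import numpy as np

tasks = {1: (2, 4, 8), 2: (3, 5, 10), 3: (3, 6, 9), 4: (2, 4, 7)}

def p90_total(n_trials, seed=0):
    rng = np.random.default_rng(seed)
    sim = {k: rng.triangular(lo, ml, hi, size=n_trials) for k, (lo, ml, hi) in tasks.items()}
    return np.quantile(sim[1] + sim[2] + np.maximum(sim[3], sim[4]), 0.9)

for n in (100, 1_000, 10_000, 100_000):
    print(n, round(p90_total(n), 2))  # the 90% estimate should stabilise as n grows
```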

Finally, I should mention that simulations can be usefully performed at a higher level than individual tasks. In their highly readable book, Waltzing With Bears: Managing Risk on Software Projects, Tom DeMarco and Timothy Lister show how Monte Carlo methods can be used for variables such as velocity, time, cost etc. at the project level as opposed to the task level. I believe it is better to perform simulations at the lowest possible level, the main reason being that it is easier, and less error-prone, to estimate individual tasks than entire projects. Nevertheless, high level simulations can be very useful if one has reliable data to base them on.

There are a few more things I could say about the usefulness of the generated distribution functions and Monte Carlo in general, but they are best relegated to a future article. This one is much too long already and I think I’ve tested your patience enough. Thanks so much for reading, I really do appreciate it and hope that you found it useful.

Acknowledgement: My thanks to Peter Holberton for pointing out a few typographical and coding errors in an earlier version of this article. These have now been fixed. I’d be grateful if readers could bring any errors they find to my attention.

Written by K

March 27, 2018 at 4:11 pm
