# Eight to Late

Sensemaking and Analytics for Organizations

## Anchored and over-optimistic: why quick and dirty estimates are almost always incorrect

Some time ago, a sales manager barged into my office. “I’m sorry for the short notice,” she said, “but you’ll need to make some modifications to the consolidated sales report by tomorrow evening.”

I could see she was stressed and I wanted to help, but there was an obvious question that needed to be asked. “What do you need done? I’ll have to get some details before I can tell you if it can be done within the time,” I replied.

She pulled up a chair and proceeded to explain what was needed. Within a minute or two I knew there was no way I could get it finished by the next day. I told her so.

“Oh no…this is really important. How long will it take?”

I thought about it for a minute or so. “OK how about I try to get it to you by day after?”

“Tomorrow would be better, but I can wait till day after.” She didn’t look very happy about it though. “Thanks,” she said and rushed away, not giving me a chance to reconsider my off-the-cuff estimate.

After she left, I had a closer look at what needed to be done. Soon I realised it would take me at least twice as long if I wanted to do it right. As it was, I’d have to work late to get it done in the agreed time, and may even have to cut a corner or two (or three) in the process.

So why was I so wide of the mark?

I had been railroaded into giving the manager an unrealistic estimate without even realising it. When the manager quoted her timeline, my subconscious latched on to it as an initial value for my estimate. Although I revised the initial estimate upwards, I was “pressured” – albeit unknowingly – into quoting an estimate that was biased towards the timeline she’d mentioned. I was a victim of what psychologists call anchoring bias – a human tendency to base judgements on a single piece of information or data, ignoring all other relevant factors. In arriving at my estimate, I had focused on one piece of data (her timeline) to the exclusion of all other potentially significant information (the complexity of the task, other things on my plate etc.).

Anchoring bias was first described by Amos Tversky and Daniel Kahneman in their pioneering paper entitled, Judgement under Uncertainty: Heuristics and Biases. Tversky and Kahneman found that people often make quick judgements based on initial (or anchor) values that are suggested to them. As the incident above illustrates, the anchor value (the manager’s timeline) may have nothing to do with the point in question (how long it would actually take me to do the work). To be sure, folks generally adjust the anchor values based on other information. These adjustments, however, are generally inadequate. The final estimates arrived at are incorrect because they remain biased towards the initial value. As Tversky and Kahneman state in their paper:

In many situations, people make estimates by starting from an initial value that is adjusted to yield the final answer. The initial value, or starting point, may be suggested by the formulation of the problem, or it may be the result of a partial computation. In either case, adjustments are typically insufficient. That is, different starting points yield different estimates, which are biased toward the initial values. We call this phenomenon anchoring.

Although the above quote may sound somewhat academic, be assured that anchoring is very real. It affects even day-to-day decisions that people make. For example, in this paper Neil Stewart presents evidence that credit card holders repay their debt more slowly when their statements suggest a minimum payment. In other words, the minimum payment works as an anchor, causing the card holder to pay a smaller amount than they would have been prepared to (in the absence of an anchor).

Anchoring, however, is only part of the story. Things get much worse for complex tasks because another bias comes into play. Tversky and Kahneman found that subjects tended to be over-optimistic when asked to make predictions regarding complex matters. Again, quoting from their paper:

Biases in the evaluation of compound events are particularly significant in the context of planning. The successful completion of an undertaking, such as the development of a new product, typically has a conjunctive character: for the undertaking to succeed, each of a series of events must occur. Even when each of these events is very likely, the overall probability of success can be quite low if the number of events is large. The general tendency to overestimate the probability of conjunctive events leads to unwarranted optimism in the evaluation of the likelihood that a plan will succeed or that a project will be completed on time.

Such over-optimism in the face of complex tasks is sometimes referred to as the planning fallacy.1
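The arithmetic behind the quote is easy to check for oneself. Here’s a minimal sketch (the per-step success probability and step count are illustrative numbers, not from the paper):

```python
# Probability that a plan succeeds when it is a conjunction of steps,
# each of which must succeed (steps assumed independent for simplicity).
p_step = 0.95   # each individual step is very likely to succeed
n_steps = 20    # a modestly complex undertaking

p_success = p_step ** n_steps
print(round(p_success, 3))  # about 0.358 -- barely better than a coin toss
```

Twenty steps, each 95% likely to succeed, give a plan with roughly a one-in-three chance of going entirely to plan – which is exactly why intuition about complex undertakings tends to be over-optimistic.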

Of course, as discussed by Kahneman and Frederick in this paper, biases such as anchoring and the planning fallacy can be avoided by a careful, reflective approach to estimation – as opposed to a “quick and dirty” or intuitive one. Basically, a reflective approach seeks to eliminate bias by reducing the effect of individual judgements. This is why project management texts advise us (among other things) to:

• Base estimates on historical data for similar tasks. This is the basis of reference class forecasting which I have written about in an earlier post.
• Draft independent experts to do the estimation.
• Use multipoint estimates (best and worst case scenarios).
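As a sketch of the multipoint idea, here is the familiar three-point (PERT-style) calculation for a single task. The optimistic, most likely and pessimistic figures below are made up for illustration:

```python
# Three-point (PERT-style) estimate for a single task, in hours.
optimistic, most_likely, pessimistic = 2.0, 4.0, 12.0

# The classic weighted average and spread used in PERT-style estimating.
expected = (optimistic + 4 * most_likely + pessimistic) / 6
std_dev = (pessimistic - optimistic) / 6

print(expected, round(std_dev, 2))  # 5.0 1.67
```

Note that the expected value (5 hours) sits above the most likely value (4 hours) precisely because the pessimistic scenario drags it rightwards – a small built-in counterweight to anchoring on the optimistic figure.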

In big-bang approaches to project management, one has to make a conscious effort to eliminate bias because there are fewer chances to get it right. On the other hand, iterative / incremental methodologies have bias elimination built-in because one starts with initial estimates, which include inaccuracies due to bias, and subsequently refines these as one progresses. The estimates get better as one goes along because every refinement is based on an improved knowledge of the task.

Anchoring and the planning fallacy are examples of cognitive biases – patterns of deviation in judgement that humans display in a variety of situations. Since the pioneering work of Tversky and Kahneman, these biases have been widely studied by psychologists. It is important to note that these biases come into play whenever quick and dirty judgements are involved. They occur even when subjects are motivated to make accurate judgements. As Tversky and Kahneman state towards the end of their paper:

These biases are not attributable to motivational effects such as wishful thinking or the distortion of judgments by payoffs and penalties. Indeed, several of the severe errors of judgment reported earlier (in the paper) occurred despite the fact that subjects were encouraged to be accurate and were rewarded for the correct answers.

The only way to avoid cognitive biases in estimating is to proceed with care and consideration. Yes, that’s a time-consuming, effort-laden process, but that’s the price one pays for doing it right. To paraphrase Euclid, there is no royal road to estimation.

Footnotes:

1 The planning fallacy is related to optimism bias which I have discussed in my post on reference class forecasting.

Written by K

January 30, 2009 at 11:11 pm

Posted in Bias, Estimation, Project Management


## Improving project forecasts

Many projects are plagued by cost overruns and benefit shortfalls. So much so that a quick search on Google News almost invariably returns a recent news item reporting a high-profile cost overrun. In a 2006 paper entitled, From Nobel Prize to Project Management: Getting Risks Right, Bent Flyvbjerg discusses the use of reference class forecasting to reduce inaccuracies in project forecasting. This technique, which is based on theories of decision-making in uncertain (or risky) environments,1 forecasts the outcome of a planned action based on actual outcomes in a collection of actions similar to the one being forecast. In this post I present a brief overview of reference class forecasting and its application to estimating projects. The discussion is based on Flyvbjerg’s paper.

According to Flyvbjerg, the reasons for inaccuracies in project forecasts fall into one or more of the following categories:

• Technical – These are reasons pertaining to unreliable data or the use of inappropriate forecasting models.
• Psychological – This pertains to the inability of most people to judge future events in an objective way. Typically it manifests itself as undue optimism, unsubstantiated by facts; behaviour that is sometimes referred to as optimism bias. This is the reason for statements like, “No problem, we’ll get this to you in a day” – when the actual time is more like a week.
• Political – This refers to the tendency of people to misrepresent things for their own gain – e.g. one might understate costs and / or overstate benefits in order to get a project funded. Such behaviour is sometimes called strategic misrepresentation (commonly known as lying!).

Technical explanations are often used to explain inaccurate forecasts. However, Flyvbjerg rules these out as valid explanations for the following reasons. Firstly, inaccuracies attributable to data errors (technical errors) should be normally distributed with average zero, but actual inaccuracies were shown to be non-normal in a variety of cases. Secondly, if inaccuracies in data and models were the problem, one would expect this to get better as models and data collection techniques get better. However, this clearly isn’t the case, as projects continue to suffer from huge forecasting errors.
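Flyvbjerg’s first point can be illustrated with a toy calculation. If inaccuracies were purely technical noise, forecast errors should scatter around zero; a mean far from zero signals systematic bias instead. The overrun percentages below are made up purely for illustration:

```python
import statistics

# Hypothetical cost overruns (actual vs forecast, in %) for a set of
# projects -- illustrative numbers, not real data.
errors = [45, 20, 80, -5, 60, 30, 110, 15, 50, 25]

mean_error = statistics.mean(errors)
print(mean_error)  # 43.0 -- strongly positive, not centred on zero

# Random measurement error would average out near 0; a persistent
# positive mean points to optimism bias or strategic misrepresentation.
```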

Based on the above, Flyvbjerg concludes that technical explanations do not account for forecast inaccuracies as comprehensively as psychological and political explanations do. Both of the latter involve human bias. Such bias is inevitable when one takes an inside view, which focuses on the internals of a project – i.e. the means (or processes) through which a project will be implemented. Instead, Flyvbjerg suggests taking an outside view – one which focuses on outcomes of similar (already completed) projects rather than on the current project. This is precisely what reference class forecasting does, as I explain below.

Reference class forecasting is a systematic way of taking an outside view of planned activities, thereby eliminating human bias. In the context of projects, this amounts to creating a probability distribution of estimates based on data for completed projects that are similar to the one of interest, and then comparing the project of interest with that distribution in order to arrive at a most likely outcome. Basically, reference class forecasting consists of the following steps:

1. Collecting data for a number of similar past projects – these projects form the reference class. The reference class must encompass a sufficient number of projects to produce a meaningful statistical distribution, but individual projects must be similar to the project of interest.
2. Establishing a probability distribution based on (reliable!) data for the reference class. The challenge here is to get good data for a sufficient number of reference class projects.
3. Predicting most likely outcomes for the project of interest based on comparisons with the reference class distribution.
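The three steps above can be sketched in code. Everything below – the reference-class figures, the function name, the confidence level – is illustrative, and is not taken from Flyvbjerg’s published procedure:

```python
# Step 1: hypothetical cost-overrun ratios (actual cost / forecast cost)
# for completed projects similar to the one of interest.
reference_class = [1.10, 1.45, 0.95, 1.30, 1.80, 1.25, 1.60, 1.15, 2.10, 1.40]

def uplift(ratios, confidence):
    """Step 2: treat the sorted ratios as an empirical distribution and
    return the overrun ratio at the given confidence level."""
    ordered = sorted(ratios)
    idx = min(int(confidence * len(ordered)), len(ordered) - 1)
    return ordered[idx]

# Step 3: apply the reference-class uplift to the inside-view estimate.
base_estimate = 1_000_000  # the new project's own (inside-view) estimate
budget_p80 = base_estimate * uplift(reference_class, 0.8)
print(budget_p80)  # a budget that 80% of comparable projects stayed within
```

The key point is that the uplift comes entirely from the outcomes of past projects – the current project’s own optimistic internals never enter the calculation.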

In the paper, Flyvbjerg describes an application of reference class forecasting to large scale transport infrastructure projects. The processes and procedures used are published in a guidance document entitled Procedures for Dealing with Optimism Bias in Transport Planning, so I won’t go into details here. The trick, of course, is to get reliable data for similar projects. Not an easy task.

To conclude, project forecasts are often off the mark by a wide margin. Reference class forecasting is an objective technique that eliminates human bias from the estimating process. However, because of the cost and effort involved in building the reference distribution, it may only be practical to use it on megaprojects.

Footnotes:

1 Daniel Kahneman received the Nobel Prize in Economics in 2002 for his work on how people make decisions in uncertain situations. His work, which is called Prospect Theory, forms the basis of reference class forecasting.

Written by K

June 15, 2008 at 12:18 pm

Posted in Bias, Project Management


## On the inherent uncertainty of project task estimates

The accuracy of a project schedule depends on the accuracy of the individual activity (or task) duration estimates that go into it. Project managers know this from (often bitter) experience. Treatises such as the gospel according to PMBOK recognise this, and exhort project managers to estimate uncertainties and include them when reporting activity durations. However, the same books have little to say on how these uncertainties should be integrated into the project schedule in a meaningful way. Sure, well-established techniques such as PERT do incorporate probabilities into schedules via averaged or expected durations. But the resulting schedules are always treated as deterministic, with each task (and hence, the project) having a definite completion date. Schedules rarely, if ever, make explicit allowance for uncertainties.

In this post I look into the nature of uncertainty in project tasks – in particular I focus on the probability distribution of task durations. My approach is intuitive and somewhat naive. Having said that up front, I trust purists and pedants will bear with my somewhat loose use of terminology relating to probability theory.

Theory is good for theorists; practitioners prefer examples, so I’ll start with one. Consider an activity that you do regularly – such as getting ready in the morning. Since you’ve done it so often, you have a pretty good idea how long it takes on average. Say it takes you an hour on average – from when you get out of bed to when you walk out of your front door. Clearly, on a particular day you could be super-quick and finish in 45 minutes, or even 40 minutes. However, there’s a lower limit to the early finish – you can’t get ready in 0 minutes! Let’s say the lower limit is 30 minutes. On the other hand, there’s really no upper limit. On a bad day you could take a few hours. Or if you slip in the shower and hurt your back, you could take a few days! So, in terms of probabilities, we have a 0% probability at the lower limit and also at infinity (since the probability of taking an infinite time to get to work is essentially zero). In between, we’d expect the probability to hit a maximum at a lowish value of time (maybe 50 minutes or so). Beyond the maximum, the probability would decay rapidly at first, then slowly, becoming zero at an infinite time.

If we were to plot the probability of activity completion for this example as a function of time, it would look like the long-tailed function I’ve depicted in Figure 1 below. The distribution starts at a non-zero cutoff (corresponding to the minimum time for the activity); increases to a maximum (corresponding to the most probable time); and then falls off – rapidly at first, then with a long, slowly decaying tail. The mean (or average) of the distribution is located to the right of the maximum because of the long tail. In the example, $t_{0}$ (30 mins) is the minimum time for completion, so the probability of finishing within 30 mins is 0%. There’s a 50% probability of completion within an hour (denoted by $t_{50}$), an 80% probability of completion within 2 hours (denoted by $t_{80}$) and a 90% probability of completion in 3 hours (denoted by $t_{90}$). The large values of $t_{80}$ and $t_{90}$ compared to $t_{50}$ are a consequence of the long tail. In the example, the tail – which goes all the way to infinity – accounts for the remote possibility that you may slip in the shower, hurt yourself badly, and make it to work very late (or maybe not at all!).

It turns out that many phenomena can be modeled by this kind of long-tailed distribution. Some of the better known long-tailed distributions include lognormal and power law distributions. A quick, informal review of project management literature revealed that lognormal distributions are more commonly used than power laws to model activity duration uncertainties. This may be because lognormal distributions have a finite mean and variance, whereas power law distributions can have infinite values for both (see this presentation by Michael Mitzenmacher, for example). [An aside: if you’re curious as to why infinities are possible in the latter, it is because power laws decay more slowly than lognormal distributions – i.e. they have “fatter” tails, and hence enclose larger (even infinite) areas.] In any case, regardless of the exact form of the distribution for activity durations, what’s important and non-controversial is the short cutoff, the peak and the long, decaying tail. These characteristics are shared by essentially all probability distributions used to describe activity durations.
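For concreteness, here is how the $t_{50}$, $t_{80}$ and $t_{90}$ values of a lognormal duration model can be computed. The parameters are assumptions chosen to roughly reproduce the getting-ready example, not fitted to any data:

```python
import math
from statistics import NormalDist

# Lognormal duration model: log(duration in hours) ~ Normal(mu, sigma).
# With mu = 0 and sigma = 0.9 the median is 1 hour and the tail roughly
# matches the example in the text (t50 = 1h, t80 ~ 2h, t90 ~ 3h).
mu, sigma = 0.0, 0.9

def t_pct(p):
    """Duration below which the task completes with probability p."""
    return math.exp(mu + sigma * NormalDist().inv_cdf(p))

t50, t80, t90 = t_pct(0.5), t_pct(0.8), t_pct(0.9)
# t90 - t50 far exceeds t80 - t50: each extra slice of confidence
# costs disproportionately more time. That is the long tail at work.
```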

There’s one immediate consequence of the long tail: if you want to be really, really sure of completing any activity, you have to add a lot of “air” or safety, because there’s a chance that you may “slip in the shower”, so to speak. Hence, many activity estimators add large buffers to their estimates. Project managers who suffer the consequences of the resulting inaccurate schedule are thus victims of the tail.

Very few methodologies explicitly acknowledge  uncertainty in activity estimates, let alone present ways to deal with it. Those that do include  The Critical Chain Method, Monte Carlo Simulation and Evidence Based Scheduling.  The Critical Chain technique deals with uncertainty by slashing estimates to their $t_{50}$ values and consolidating safety or “air” into a single buffer, whereas the latter two techniques use simulations to generate expected durations (at appropriate confidence levels).  It would take me way past my self-imposed word limit to discuss these any further, but I urge you to follow the links listed above if you want to find out more.
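The simulation approach can be sketched in a few lines. The example below – five sequential tasks with lognormal durations, and all parameters – is illustrative only:

```python
import random

random.seed(0)  # make the sketch reproducible

# Monte Carlo sketch: a project of five sequential tasks, each with a
# long-tailed (lognormal) duration in hours.
N_RUNS = 20_000
totals = sorted(
    sum(random.lognormvariate(0.0, 0.9) for _ in range(5))
    for _ in range(N_RUNS)
)

p50 = totals[N_RUNS // 2]        # 50% confidence completion time
p90 = totals[int(0.9 * N_RUNS)]  # 90% confidence completion time
# p90 comfortably exceeds p50: the gap is the buffer a schedule needs
# if it is to be met 9 times out of 10 rather than every other time.
```

Note that simulating the whole chain is essential: percentiles don’t simply add, so the project-level $t_{90}$ cannot be obtained by summing task-level $t_{90}$ values.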

(Note: Portions of this post are based on my article on the Critical Chain Method)

Written by K

March 4, 2008 at 5:25 pm
