Six ways in which project estimates go wrong
Despite the increasing focus on project estimation, the activity still remains more guesswork than art or science. In his book on the fallacies of software engineering, Robert Glass has this to say about it:
Estimation, as you might imagine, is the process by which we determine how long a project will take and how much it will cost. We do estimation very badly in the software field. Most of our estimates are more like wishes than realistic targets. To make matters worse, we seem to have no idea how to improve on those very bad practices. And the result is, as everyone tries to meet an impossible estimation target, shortcuts are taken, good practices are skipped, and the inevitable schedule runaway becomes a technology runaway as well…
Moreover, he suggests that poor estimation is one of the top two causes of project failure.
Now, there are a number of reasons why project estimates go wrong, but in my experience there are a half-dozen standouts. Here they are, in no particular order:
1. False analogies: Project estimates based on historical data are generally considered to be more reliable than those developed using other methods such as expert judgement (see this article, from the MS Project support site, for example). This is all well and good, as long as one uses data from historical projects that resemble the one at hand in relevant ways. The problem is that one rarely knows what is relevant and what isn’t. It is all too easy to select a project that is superficially similar to the one at hand, but actually differs in critical ways. See my posts on false analogies and the reference class problem for more on this point.
2. False precision: Project estimates are often quoted as single numbers rather than ranges. Such estimates are incorrect because they ignore the fact that uncertain quantities should be quantified by a range of numbers (or more accurately, a distribution) rather than point values. As Dr. Sam Savage emphasises in his book, The Flaw of Averages: an uncertain quantity is a shape, not a number (see my review of the book for more on this point). In short, an estimate quoted as a single number is almost guaranteed to be incorrect.
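To make the point concrete, here is a small sketch (with made-up numbers) of why a single “most likely” figure can mislead. It models a task with a skewed triangular distribution of durations; the optimistic, most-likely and pessimistic values are assumptions for illustration only:

```python
import random

# Hypothetical task: optimistic 5 days, most likely 10, pessimistic 30.
# A skewed distribution like this is common in practice: there is far
# more room for overrun than for early finish.
random.seed(42)
samples = [random.triangular(5, 30, 10) for _ in range(100_000)]
mean = sum(samples) / len(samples)

# The mean of the distribution (~15 days) is well above the single
# "most likely" estimate of 10 days, so quoting 10 as "the" estimate
# understates the likely duration.
print(f"most likely: 10 days, mean of distribution: {mean:.1f} days")
```

The mean exceeds the mode precisely because the shape of the distribution matters; that shape is what a point estimate throws away.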
3. Estimation by decree: It should be obvious that estimation must be done by those who will do the work. Unfortunately this principle is one of the first to be sacrificed on Death March Projects. In such projects, schedules are shoe-horned into predetermined timelines, with estimates cooked up by those who have little or no idea of the actual effort involved in doing the work.
4. Subjectivity: This is where estimates are plucked out of thin air and “justified” based on gut-feel and other subjective notions. Such estimates are prone to overconfidence and a range of other cognitive biases. See my post on cognitive biases as project meta-risks for a detailed discussion of how these biases manifest themselves in project estimates.
5. Coordination neglect: Projects consist of diverse tasks that need to be coordinated and integrated carefully. Unfortunately, the time and effort needed for coordination and integration is often underestimated (or even totally overlooked) by project decision makers. This is referred to as coordination neglect. Coordination neglect is a problem in projects of all sizes, but is generally more significant for projects involving large teams (see this paper for an empirical study of the effect of team size on coordination neglect). As one might imagine, coordination neglect also becomes a significant problem in projects that consist of a large number of dependent tasks or have a large number of external dependencies.
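One rough way to see why coordination effort grows faster than team size is the pairwise-channel count, a heuristic popularised by Fred Brooks in The Mythical Man-Month (not a result from the paper mentioned above). A quick sketch:

```python
def communication_channels(team_size: int) -> int:
    """Number of pairwise communication channels in a team of n people:
    each pair can need to coordinate, giving n * (n - 1) / 2 channels."""
    return team_size * (team_size - 1) // 2

# Channels grow quadratically while headcount grows linearly, which is
# one reason coordination overhead is so easy to underestimate.
for n in (2, 5, 10, 20):
    print(f"team of {n}: {communication_channels(n)} channels")
```

Doubling a team from 10 to 20 people quadruples the potential channels from 45 to 190, yet estimates are often scaled as though effort were simply divisible by headcount.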
6. Too coarse grained: Large tasks are made up of smaller tasks strung together in specific ways. Consequently, estimates for large tasks should be built up from estimates for the smaller sub-tasks. Teams often short-circuit this process by attempting to estimate the large task directly. Such estimates usually turn out to be incorrect because sub-tasks are overlooked. Another problem is coordination neglect between sub-tasks, as discussed in the earlier point. It is true: the devil is always in the details.
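The bottom-up approach described above can be sketched as follows. The sub-tasks, their durations and the coordination allowance are all invented for illustration; the point is simply that the total is built from the parts, with integration effort made explicit rather than forgotten:

```python
# Hypothetical sub-task estimates for a small delivery, in days.
subtask_days = {"design": 4, "build": 10, "test": 5, "deploy": 2}

# Explicit allowance for coordination/integration between sub-tasks.
# The 15% figure is an assumption for this sketch, not a recommendation.
COORDINATION_OVERHEAD = 0.15

raw_total = sum(subtask_days.values())
estimate = raw_total * (1 + COORDINATION_OVERHEAD)

print(f"sum of sub-tasks: {raw_total} days")
print(f"with coordination allowance: {estimate:.1f} days")
```

Estimating the large task directly risks missing both a sub-task (say, deployment) and the coordination line item altogether, which is exactly how coarse-grained estimates come in low.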
I should emphasise that the above list is based on personal experience, not on any systematic study.
I’ll conclude this piece with another fragment from Glass, who is not very optimistic about improvements in the area of project estimation. As he states in his book:
The bottom line is that, here in the first decade of the twenty-first century, we don’t know what constitutes a good estimation approach, one that can yield decent estimates with good confidence that they will really predict when a project will be completed and how much it will cost. That is a discouraging bottom line. Amidst all the clamor to avoid crunch mode and end death marches, it suggests that so long as faulty schedule and cost estimates are the chief management control factors on software projects, we will not see much improvement.
True enough, but being aware of the ways in which estimates can go wrong is the first step towards improving them.