Archive for December 2012
It started with a presentation,
a proforma regurgitation:
a tired old story,
of a repository
for all data in an organization.
The business was duly seduced
by promises of costs reduced.
But the data warehouse,
so glibly espoused,
was not so simply produced.
For the team was soon in distress,
’cos the data landscape was a mess:
in databases and files countless.
And politics had them bogged down;
in circles they went round and round;
in a sea of data they drowned.
In the light of the following morn,
the truth upon them did dawn.
An enterprise data store
is IT lore
as elusive as the unicorn.
A couple of years ago, I wrote a post entitled, A project manager’s ruminations on quality, in which I discussed the meaning of the term quality as it pertains to project work. In that article I focused on how the standard project management definition of quality differs from the usual (dictionary) meaning of the term. Below, I expand on that post by presenting some alternate perspectives on quality.
Quality in mainstream project management
Let’s begin with a couple of dictionary definitions of quality to see how useful they are from a project management perspective:
- An essential or distinctive characteristic, property, or attribute.
- High grade; superiority; excellence
Clearly, these aren’t much help because they don’t tell us how to measure quality. Moreover, the second definition confuses quality and grade – two terms that the PMBOK assures us are as different as chalk and cheese.
So what is a good definition of quality from the perspective of a project manager? The PMBOK, quoting from the American Society for Quality (ASQ), defines quality as “the degree to which a set of inherent characteristics fulfil requirements.” This is clearly a much more practical definition for a project manager, as it links the notion of quality to what the end-user expects from deliverables. PRINCE2, similarly, keeps the end-user firmly in focus when it defines quality, rather informally, as “fitness for purpose.”
Project managers steeped in PRINCE2 and other methodologies would probably find the above unexceptional. The end-goal in project management is to deliver what’s agreed to, whilst working within the imposed constraints of resources and time. It is therefore no surprise that the definition of quality focuses on the characteristics of the deliverables, as they are specified in the project requirements.
Quality as an essential characteristic
The foregoing project management definitions beg the question:
Is “fitness for purpose” or “the degree to which product characteristics fulfil requirements” really a measure of quality?
The problem with these definitions is that they conflate quality with fulfilling requirements. But surely there is more to it than that. An easy way to see this is to note that one can have a high-quality product that does not satisfy user requirements or meet cost and schedule targets. For example, many people would agree that WordPress blogging software is of a high quality, yet it does not meet the requirement of, say, “a tool to manage projects.”
Indeed, Robert Glass states this plainly in his book, Facts and Fallacies of Software Engineering. Fact 47 in the book goes as follows:
Quality is not user satisfaction, meeting requirements, meeting cost and schedule targets or reliability.
So what is quality, then?
According to Glass, quality is a set of (product) attributes, including things such as:
- Reliability – does the product work as it should?
- Useability – is it easy to use?
- Modifiability – can it be modified (maintained) easily?
- Understandability – is it easy to understand how it works?
- Efficiency – does it make efficient use of resources (including storage, computing power and time)?
- Testability – can it be tested easily?
- Portability – can it be ported to other platforms? This isn’t an issue for all products – some programs need only run on one operating system.
Note that the above listing is not in order of importance. For some products useability may be more important than efficiency; in others it could be the opposite – the order depends very much on the product and its applications.
Glass notes that these attributes are highly technical. Consequently, they are best dealt with by people who are directly involved in creating the product, not their managers, not even the customers. In this view, the responsibility for quality lies not with project managers, but with those who do the work. To quote from the book:
…quality is one of the most deeply technical issues in the software field. Management’s job, far from taking responsibility for achieving quality, is to facilitate and enable technical people and then get out of their way.
Another point to note is that the above characteristics are indeed measurable (if only in a qualitative sense), which addresses the objection I noted in the previous section.
Quality as a means to an end
In our book, The Heretic’s Guide to Best Practices, Paul Culmsee and I discuss a couple of perspectives on quality which I summarise in this and the following section.
Our first contention is that quality cannot be an end in itself. This is a subtle point so I’ll illustrate with an example. Consider the two “ends-focused” definitions of quality mentioned earlier: quality as “fitness for purpose” and quality as a set of objective attributes. Chances are that different project stakeholders will have differing views on which definition is “right”. The problem, as we have seen in the earlier sections, is that the two definitions are not the same. Hence quality cannot be an end in itself.
Instead, we believe that a better definition comes from asking the question: “What difference would quality make to this project?” The answer determines an appropriate definition of quality for a particular project. Implicit here is the notion of quality as an enabler to achieve the desired project objective. In other words, quality here is a means to an end, not an end in itself.
Quality and time
Typically, project deliverables – be they software or buildings or anything else – have lifetimes that are much longer than the duration of the project itself. There are a couple of important implications of this:
- Deliverables may be used in ways that were not considered when the project was implemented.
- They may have side effects that were not foreseen.
Rarely, if ever, do project teams worry about the long-term consequences of their creations. Their time horizons are limited to the duration of their projects. This myopic view is perpetuated by the so-called iron triangle, which tells us that quality is a function of the cost, scope and time (i.e. duration) of a project.
The best way to see the short-sightedness of this view is through an example. Consider the Sydney Opera House as an example of a project output. As we state in our book:
It is a global icon and there are people who come to Sydney just to see it. In terms of economic significance to Sydney, it is priceless and irreplaceable. The architect who designed it, Jørn Utzon, was awarded the Pritzker Prize (architecture’s highest honour) for it in 2003.
But the million-dollar question is . . . “Was it a successful project?” If one were to ask one of the two million annual tourists who visit the place, we suspect that the answer would be an emphatic “Yes.” Yet, when we judge the project through the lens of the “iron triangle,” the view changes significantly. To understand why, consider these fun-filled facts about the Sydney Opera House.
- The Opera House was formally completed in 1973, having cost $102 million
- The original cost estimate in 1957 was $7 million
- The original completion date set by the government was 1963
- Thus, the project was completed ten years late and over-budget by more than a factor of fourteen
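The overrun figures in the last bullet are easy to verify; the short sketch below simply checks the arithmetic behind the quoted facts:

```python
# Quick check of the Opera House figures quoted above (amounts in millions of dollars).
estimate_1957 = 7
final_cost_1973 = 102

overrun_factor = final_cost_1973 / estimate_1957
print(round(overrun_factor, 1))  # → 14.6, i.e. over budget by more than a factor of fourteen

delay_years = 1973 - 1963
print(delay_years)  # → 10, i.e. ten years late
```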
If that wasn’t bad enough, Utzon, the designer of the Opera House, never set foot in the completed building. He left Australia in disgust, swearing never to come back after his abilities had been called into question and payments suspended. When the Opera House was opened in 1973 by Queen Elizabeth II, Utzon was not invited to the ceremony, nor was his name mentioned…
Judged by the criteria of the iron triangle, the project was an abject failure. However, judged through the lens of time, the project is an epic success! Quality must therefore also be viewed in terms of the legacy that the project leaves – how the deliverables will be viewed by future generations and what they will mean to them.
As we have seen, the issue of quality is a vexed one because how one understands it depends on which school of thought one subscribes to. We have seen that quality can refer to one of the following:
- The “fitness for purpose” of a product or its ability to “meet requirements.” (Source: PRINCE2 and PMBOK)
- An essential attribute of a product. This is based on the standard, dictionary definition of the term.
- A means of achieving a particular end. Here quality is viewed as a process rather than a project output.
Moreover, none of the above perspectives considers the legacy bequeathed by a project: how the deliverables will be perceived by future generations.
So where does that leave us?
Perhaps it is best to leave definitions of quality to pedants, for as someone wise once said, “What is good and what is not good, need we have anyone tell us these things?”
One of the ways in which we attempt to understand and explain natural and social phenomena is by building models of them. A model is a representation of a real-world phenomenon, and since the real world is messy, models are generally based on a number of simplifying assumptions. It is worth noting that models may be mathematical but they do not have to be – I present examples of both types of models in this article.
In this post I make two points:
- That all models are incomplete and are therefore wrong.
- That certain models are not only wrong, but can have harmful consequences if used thoughtlessly. In particular I will discuss a model of human behaviour that is widely taught and used in management practice, much to the detriment of organisations.
Before going any further I should clarify that I don’t “prove” that all models are wrong; that is likely an impossible task. Instead, I use an example to illustrate some general features of models which strongly suggest that no model can possibly account for all aspects of a phenomenon. Following that I discuss how models of human behaviour must be used with caution because they can have harmful consequences.
All models are wrong
Since models are based on simplifying assumptions, they can at best be only incomplete representations of reality. It seems reasonable to expect that all models will break down at some point because they are not reality. In this section, I illustrate this by looking at a real-world example drawn from the king of natural sciences, physics.
Theoretical physicists build mathematical models that describe natural phenomena. Sir Isaac Newton was a theoretical physicist par excellence. Among other things, he hypothesized that the force that keeps the earth in orbit around the sun is the same as the one that keeps our feet firmly planted on the ground. Based on observational inferences made by Johannes Kepler, Newton also figured out that the gravitational force between two bodies is inversely proportional to the square of the distance between them. That is: if the distance between two bodies is doubled, the gravitational force between them decreases four-fold. For those who are interested, there is a nice explanation of Newton’s law of gravitation here.
Newton’s law tells us the precise nature of the force of attraction between two bodies. It is universal in that it applies to all things that have a mass, regardless of the specific material they are made of. Its utility is well established: among other things, it enables astronomers and engineers to predict the trajectories of planets, satellites and spacecraft to extraordinary accuracy; on the flip side it also enables warmongers to compute the trajectories of missiles. Newton’s law of gravitation has been tested innumerable times since it was proposed in the late 1600s, and it has passed with flying colours every time.
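The inverse-square relationship described above can be captured in a few lines of code. The sketch below is mine, not Newton’s; the Earth–Moon figures are rough values chosen purely for illustration:

```python
G = 6.674e-11  # gravitational constant, in N·m²/kg²

def gravitational_force(m1: float, m2: float, r: float) -> float:
    """Newton's law of gravitation: the attractive force (in newtons)
    between masses m1 and m2 (in kg) separated by a distance r (in metres)."""
    return G * m1 * m2 / r**2

# Rough Earth and Moon figures, for illustration only.
earth, moon, distance = 5.97e24, 7.35e22, 3.84e8

f_near = gravitational_force(earth, moon, distance)
f_far = gravitational_force(earth, moon, 2 * distance)

# Doubling the distance decreases the force four-fold, as noted above.
print(f_near / f_far)  # → 4.0
```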
Yet, strictly speaking, it is wrong.
To understand why, we need to understand what it means to explain something. I’ll discuss this somewhat philosophical issue by sticking with gravity. Newton’s law enables us to predict the effects of gravity, but it does not tell us what gravity actually is. Yes, it’s a force, but what exactly is this force? How does it manifest itself? What is it that passes between two bodies to make them “aware” of each other’s existence?
Newton is silent on all these questions.
An explanation had to wait for over two centuries. In 1915, Einstein proposed that every body that has mass creates a distortion of space (actually, space and time) around it. He formalised this idea in his General Theory of Relativity, which tells us that gravity is a consequence of the curvature of space-time.
This is difficult to visualise, so perhaps an analogy will help. Think of space-time as a flat rubber sheet. A marble on the sheet causes a depression (or curvature) in the vicinity of the marble. Another marble close enough would sense the curvature and would tend to roll towards the original marble. To an observer who wasn’t aware of the curvature (imagine the rubber sheet to be invisible) the marbles would appear to be attracted to each other. Yet at a deeper level, the attraction is simply a consequence of geometry. In this sense then, Einstein’s theory “explains” gravity at a more fundamental level than Newton’s law does.
Now, one of the predictions of Einstein’s theory is that the force of gravitation is ever so slightly different from that predicted by Newton’s law. This difference is so small that it is unnoticeable in the case of spacecraft or even planets, but it does make a difference in the case of dense, massive bodies such as black holes. Many experiments have confirmed that Einstein’s theory is more accurate than Newton’s.
So Newton was wrong.
However, the story doesn’t end there because Einstein was wrong too. It turns out that Einstein’s theory of gravitation is not consistent with quantum mechanics, the theory that describes the microworld of atoms and elementary particles. One of the open problems in theoretical physics is the development of a quantum theory of gravity. To be honest, I don’t know much at all about quantum gravity, so if you want to know more about this other holy grail of physics, I’ll refer you to Lee Smolin’s excellent book, Three Roads to Quantum Gravity.
Anyway, the point I wish to make is not that these luminaries were wrong but that the limitations of their models were in a sense inevitable. Why? Well, because our knowledge of the real world is never complete; it is forever a work in progress. We build models based on what we know at a given time, which in turn is based on our current state of knowledge and the empirical data that supports it. The world, however, is much more complex than our limited powers of reasoning and observation, even if these are enhanced by instruments. Consequently any models that we construct are necessarily incomplete – and therefore, wrong.
Some models are harmful
The foregoing brings me to the second point of this post.
There’s nothing wrong with being wrong, of course; especially if our understanding of the world is enhanced in the process. I would be quite happy to leave it there if that were all there was to it. The problem is that there is something more insidious and dangerous: some models are not only wrong, they are positively harmful.
And no, I’m not referring to nuclear weapons; nuclear fission by itself is neither benign nor dangerous, it is what we do with it that makes it so. I’m referring to something far more commonplace, a model that underpins much of modern day management: it is the notion that humans are largely rational beings who make decisions based solely on their narrow self-interest. According to this view of humans as economic beings, we are driven by material gain to the exclusion of all other considerations. This is a narrow, one-dimensional view of humans but is one that is legitimised by mainstream economics and has been adopted enthusiastically by many management schools and their alumni.
Among other things, those who subscribe to this model believe that:
- Employees are inherently untrustworthy because they will act in their own personal interests, with no consideration of the greater good. Consequently their performance needs to be carefully “incentivised” and monitored.
- Management’s goals should be to maximise profits. Consequently they should be “incentivised” by bonuses that are linked solely to profit earned.
These beliefs are harmful because:
- Treating employees like potential shirkers who need to be “motivated” by a carrot-and-stick policy will only demotivate them.
- Linking senior management bonuses to financial performance alone encourages managers to follow strategies that boost short term profits regardless of the long term consequences.
The fact of the matter is that humans are not atoms or planets; they can (and will) change their behaviour depending on how they are treated.
To sum up
All models are wrong, but some models – especially those relating to human behaviour – are harmful. The danger of taking models of human behaviour literally is that they tend to become self-fulfilling prophecies. As Eliyahu Goldratt once famously said, “Tell me how you measure me and I’ll tell you how I’ll behave.” Measure managers by the profits they generate and they’ll work to maximise profits to the detriment of longer-term sustainability; treat employees like soulless economic beings and they’ll end up behaving like the self-serving souls the organisation deserves.