A quick search reveals that the topic of project failure gets a fair bit of attention on the Internet. Many of the articles identify factors such as lack of executive support or incomplete/misunderstood requirements as the prime causes of failure. In this post I use concepts from systems theory to argue that commonly identified “causes” of project failure – such as the ones noted above – are symptoms rather than causes. I then surface the real causes of project failure by taking a systems perspective – a viewpoint that considers the project and the hosting organisation as a whole.
As I will argue, project failures can often be traced back to dysfunctional structures and processes within the organisation. These factors usually have little to do with the project directly and are therefore not always obvious at first sight.
Setting the scene
Readers who have waded through the vast literature on failed projects will have noted that there are diverse opinions on the prime reasons for failure. It would take too long to survey all these articles, so I’ll just pick a source that is well known even if not entirely credible: The Chaos Report by the Standish Group. As per this summary the Chaos Report 2009 claimed the following were the top three causes of project failure:
- Lack of user input.
- Incomplete or changing requirements/specifications.
- Lack of executive support.
Although the report lists the top ten factors, in the interests of space I’ll focus on just these three.
I should mention that the report refers to the above as “Project Challenged Factors.” Grammar issues aside, this is a somewhat strange way of putting it. Anyway, I interpret this phrase to mean reasons (or driving factors) for failure.
Systems theory and projects
First up, what exactly is a system?
Here is a jargon-free definition from the wonderful book by Donella Meadows entitled, Thinking in Systems: A Primer. Incidentally, I highly recommend the book as an easy-to-read and engaging introduction to systems theory:
A system is a set of things interconnected in such a way that they produce their own pattern of behaviour over time. The system may be buffeted, constricted, triggered or driven by outside forces. But its response to these is characteristic of itself and is seldom simple in the real world. (italics mine)
The word interconnected is important because it tells us that when we study something from a systems perspective, we must identify all the important connections it has with its environment. In the case of projects, the important connections are fairly obvious. A project is usually carried out within an organisational setting so it would have many connections to the hosting organisation. Chief amongst these is the fact that a project is staffed and resourced by the hosting organisation (or by an organisation designated by it). Another important connection is that a project will affect different stakeholder groups within the hosting organisation in different ways. However, since all stakeholders have ongoing organisational roles that go beyond the project, the project is not their only interest. This is a key point to which we will return later.
The phrases “own pattern of behaviour over time” and “characteristic of itself” tell us that systems have a unique, characteristic pattern of behaviour. The important point to note is that characteristic behaviour implies that different systems with the same signature – i.e. the same identifying features – will tend to behave in the same way. From the perspective of projects, this tells us that projects within similar organisations will evolve in similar ways.
A systems view of the causes of project failure
With only this very brief look at systems theory we are now in a position to get some insights into the real causes of project failure. As mentioned earlier, we will focus on the top three reasons à la Standish.
Lack of user input
First up, consider lack of user input. Systems theory tells us that we need to look at the issue from the perspective of the project and the organisation. From this point of view it is clear that users who have been asked to work on the project in addition to their normal duties will view the project as a burden rather than something that may benefit them in the future.
This by itself is not a new insight. In fact, project management gurus have talked themselves hoarse about the need to free up resources etc. However, the point is that organisational structures typically work against this. Firstly, people who are asked to work on a project know that it is an initiative that will end in a finite (usually, reasonably short) time. Therefore, their long-term interests lie in their ongoing roles rather than in projects. Secondly, most organisations are still structured along functional lines and people’s identities are anchored within their functional reporting lines rather than ephemeral project hierarchies.
Incomplete or changing requirements
The issue of changing requirements and specifications can also be understood from a systems point of view. A characteristic of many systems is that they are stable – i.e. they resist change. Organisations are typically stable systems – they tend to retain their identity despite changes in their environment. One of the characteristics of such organisations is that people in them tend to think and act in set ways – that is, their actions and thinking processes follow well-worn patterns to the point where they do not need to think too deeply about what they do.
One of the consequences of this is that when they are asked for requirements, users often provide incomplete descriptions of what they do, leaving out significant items that are obvious to them (but not to the analysts who are gathering requirements!). Although I don’t have figures to back this up, I speculate that a fair proportion of changes in requirements result from inadequate detail or thought put into developing the initial requirements. The point is that users and sponsors don’t necessarily see these as changes, but project teams do.
Lack of executive support
Finally, let’s look at the problem of lack of executive support. Project sponsors usually hold important executive-level positions within the hosting organisation. By virtue of their positions, they have a number of important things that compete for their attention. A project – even an important one – is only one of many things going on in an organisation at any one time. Moreover, organisational priorities do shift, perhaps more often than executives may want to admit. So a project that was the key focus yesterday may be superseded by other priorities today.
There are of course many other ways in which project sponsors can be distracted, but I think I’ve made my point which is that lack of executive support is due to features that are inherent in organisations. So no amount of forcing executives to pay attention to their projects is going to work unless the entire system (project + organisation) changes. And this is difficult if not impossible to achieve because stable systems such as organisations tend to resist change, and therefore continue to display their characteristic patterns of behaviour.
So we see that the causes of project failure can be traced back to the organisations in which they are embedded. Specifically, they lie in unwritten norms and formal policies that dictate how the hierarchy operates and how things are done within the organisation. The most important consequence of this is that standard fixes (of encouraging user input and executive support, or instituting change management, say) will not cure the problem of project failure because they do not address the dysfunctional norms and policies that are the root cause of failure.
The above is not news. In fact, the matrix organisation structure was proposed as a response to the need for “project friendly” organisations. I’m no expert on matrix organisations so I will leave it to others to comment on how successful they are. The only point I would make is that in my experience multiple reporting lines (even if dotted and solid) do not work too well. There are always conflicting interests that cause divided loyalties.
So, the natural question is: what – if anything – can we do about this? The answer is implicit in the foregoing paragraphs. One has to align the project with the organisation, not just at the level of objectives or structure, but also in operational matters such as timelines, budget and resources – the very things that make up the so-called iron triangle. The best time to do this is at the front-end of the project, i.e. the start. At this time, the person(s) driving the project have to engage all stakeholders who will be affected by the initiative and find out their motivations, interests and – most importantly – their concerns regarding the project. If this discussion happens in an open and frank manner, it should surface the issues highlighted in the previous section. Since the discussion takes place even before the project starts, there is at least some hope of addressing these concerns.
There are many ways to structure and facilitate such discussions. Check out this post for an introduction to one and have a look at my book co-authored with Paul Culmsee for much more. That said, one doesn’t need any particular technique – the willingness to discuss difficult matters openly and an openness to other points of view is all that’s needed. That, however, is not always easy to come by…
We have seen that the top causes of project failure can be traced back to the hierarchies and incentive systems of the hosting organisation. Therefore, superficial attempts to fix the problem at the level of individual projects (or even a PMO) will not work. The only hope of addressing the root causes of project failure is to focus on the systemic dysfunctions that cause them.
The term learning organisation refers to an organisation that continually modifies itself in response to changes in its environment. Ever since Peter Senge coined the term in his book, The Fifth Discipline, assorted consultants and academics have been telling us that a learning organisation is an ideal worth striving towards. The reality, however, is that most organisations that undertake the journey actually end up in a place far removed from this ideal. Among other things, the journey may expose managerial hypocrisies that contradict the very notion of a learning organisation. In this post, I elaborate on the paradoxes of learning organisations, drawing on an excellent and very readable paper by Paul Tosey entitled, The Hunting of the Learning Organisation: A Paradoxical Journey.
(Note: I should point out that the term learning organisation should be distinguished from organisational learning: the latter refers to processes of learning whereas the former is about an ideal type of organisation. See this paper for more on the distinction.)
The journey metaphor
Consultants and other experts are quick to point out that the path to a learning organisation is a journey towards an ideal that can never be reached. Quoting from this paper, Tosey writes, “we would talk about the fact that, in some ways, the learning organization represented all of our collective best wishes for Utopia in the workplace.” As another example, Peter Senge writes of it being, “a journey in search of the experience of being a member of ‘a great team’.” Elsewhere, Senge suggests that the learning organisation is a vision that is essentially unattainable.
The metaphor of a journey seems an apt one at first, but there are a couple of problems with it. Firstly, the causal connection between initiatives that purport to get one to the goal and actual improvements in an organisation’s capacity to learn is tenuous and impossible to establish. This suggests the journey is one without a map. Secondly, the process of learning about learning within the organisation – how it occurs, and how it is perceived by different stakeholders – can expose organisational hypocrisies and double-speak that may otherwise have remained hidden. Thus instead of progressing towards the ideal one may end up moving away from it. Tosey explores these paradoxes by comparing the journey towards a learning organisation to the one described in Lewis Carroll’s poem, The Hunting of The Snark.
Hunting the Snark (and the learning organisation)
Carroll’s poem tells the story of ten characters who set off in search of a fabulous creature called a Snark. After many trials and tribulations, they end up finding out that the Snark is something else: a not-so-pleasant creature called a Boojum. Tosey comments that the quest described in the poem is a superb metaphor for the journey towards a learning organisation. As he states:
Initially, when reflecting on personal experience of organizational events… I was struck by the potential of the dream-like voyage of fancy on which Carroll’s characters embarked as an allegory of the quest for the learning organization. Pure allegory has limitations. Through writing and developing the article I came to view the poem more as a paradigm of the consequences of human desire for, and efforts at, progress through the striving for ideals. In other words the poem expresses something about our `hunting’. In this respect it may represent a mythological theme, a profound metaphor more than a mere cautionary moral tale.
There are many interesting parallels between the hunt for the Snark and the journey towards a learning organisation. Here are a few:
The expedition to find the Snark is led by a character called the Bellman who asserts: “What I tell you three times is true.” This is akin to the assurances (pleas?) from experts who tell us (several times over) that it is possible to transform our organisations into ones that continually learn.
The journey itself is directionless because the Bellman’s map is useless. In Carroll’s words:
Other maps are such shapes, with their islands and capes!
But we’ve got our brave Captain to thank:
(So the crew would protest) “that he’s bought us the best—
A perfect and absolute blank!
Finally, the Snark is never found. In its stead, the crew find a scary creature called a Boojum that has the power to make one disappear. Quoting from the poem:
In the midst of the word he was trying to say,
In the midst of his laughter and glee,
He had softly and suddenly vanished away—
For the Snark was a Boojum, you see.
The journey towards a learning organisation often reveals the Boojum-like dark side of organisations. One common example of this is when the process of learning surfaces questions that are uncomfortable for those in power. Tosey relates the following tale, which may be familiar to some readers:
…a multinational company intending to develop itself as a learning organization ran programmes to encourage managers to challenge received wisdom and to take an inquiring approach. Later, one participant attended an awayday, where the managing director of his division circulated among staff over dinner. The participant raised a question about the approach the MD had taken on a particular project; with hindsight, had that been the best strategy? `That was the way I did it’, said the MD. `But do you think there was a better way?’, asked the participant. `I don’t think you heard me’, replied the MD. `That was the way I did it’. `That I heard’, continued the participant, `but might there have been a better way?’. The MD fixed his gaze on the participant’s lapel badge, then looked him in the eye, saying coldly, `I will remember your name’, before walking away.
One could argue that a certain kind of learning – that of how the organisation learns – occurred here: the employee learnt that certain questions were out of bounds. I think it is safe to say, though, that this was not the kind of learning that was intended by those who initiated the program.
In the preface to the poem, Carroll notes that there is a rule – Rule 42 – which states, “No one shall speak to the Man at the Helm,” to which the Bellman (the leader) added, “and the Man at the Helm shall speak to no one.” This rendered communication between the helmsman and the crew impossible, and in such periods the ship was not steered. The parallels between this and organisational life are clear: there is rarely open communication between those steering the organisational ship and rank-and-file employees. Indeed, Tosey reformulates Rule 42 in organisational terms as, “the organization shall not speak to the supervision, and the supervision shall not speak to the organization.” This, he tells us, interrupts the feedback loop between individual experience and the organisation, which renders learning impossible.
In the poem, the ship sometimes sailed backwards when Rule 42 was in operation. Tosey draws a parallel between “sailing backwards” and the unexpected or unintended consequences of organisational rules. He argues that organisational actions can result in learning even if those actions were originally intended to achieve something else. The employee in the story above learnt something about the organisational hierarchy and how it worked.
Finally, it is a feature of Rule-42-like rules that they cannot be named. The employee in the story above could not have pointed out that the manager was acting in a manner that was inconsistent with the intent of the programme – at least not without putting his own position at risk. Perhaps that in itself is a kind of learning, though of a rather sad kind.
Experts and consultants have told us many times over that the journey towards a learning organisation is one worth making… and as the Bellman in Carroll’s poem says: “What I tell you three times is true.” Nevertheless, the reality is that instances in which learning actually occurs tend to be more a consequence of accident than plan, and tend to be transient rather than lasting. Finally, and perhaps most important, the Snark may turn out to be a Boojum: people may end up learning truths that the organisation would rather remained hidden. And therein lies the paradox of the learning organisation.
This post is inspired by a comment made by my elder son some years ago:
“Dad’s driving,” he said.
A simple statement, one would think, with not much scope for ambiguity or misunderstanding. Yet, as I’ll discuss below, the two words had deeper implications than suggested by their mere dictionary meanings.
The story begins in mid 2010, when I was driving my son Rohan back from a birthday party.
I’m not much of a driver – I get behind the wheel only when I absolutely have to, and then too with some reluctance. The reason I was driving was that my dear wife (who does most of the driving in our household) was pregnant with our second child and just a few weeks away from the big day. She therefore thought it would be a good idea for me to get some driving practice as I would soon need to do a fair bit.
Back to the story: as we started the trip home, my son (all of seven and a half at the time) said, “Dad, you should go by North Road, there’s a traffic light there, it will be easier for you to turn right.”
“Nah, I’ll go the shorter way.”
“Dad, the shorter way has no traffic light. It has a roundabout, you might have trouble making a right turn.” He sounded worried.
“Don’t worry, I can handle a simple right turn at a roundabout on a Sunday evening. You worry too much!”
As it happened I had an accident at the roundabout…and it was my fault.
I checked that he was OK then got out of the car to speak with the unfortunate whose car door I had dented. Rohan sat patiently in the car while I exchanged details with the other party.
I got back in and asked again if he was OK. He nodded. We set off and made it home without further incident.
My wife was horrified to hear about the whole thing of course. Being pretty philosophical about my ineptness at some of the taken-for-granted elements of modern existence, she calmed down very quickly. In her usual practical way she asked me if I had reported the accident to the police, which I hadn’t. I reported the accident and made an appointment with a smash repairer to fix up the damage to the bumper.
A week later my wife summoned me from work saying it was time. I duly drove her to the hospital without incident. A few hours later, our second son, Vikram, was born.
I pick up the story again a few days later, after we had just got used to having an infant in the house again. Sleep deficit was the order of the day, but life had to go on: Rohan had to get to school, regardless of how well or badly Vik had slept the previous night; and I had to get to work.
Soon Rohan and I had our morning routine worked out: we would walk to school together, then I would catch a bus from outside his school after dropping him there.
On the day Rohan uttered the words I started this post with, it was raining heavily – one of those torrential downpours that are a Sydney characteristic. It was clear that walking to school would be impossible, I would have to drive him there.
My wife gave him the bad news.
“Dad’s driving,” he said, in what appeared to be his usual matter of fact way.
However, if one listened carefully, there was a hint of a question, even alarm, in his words.
Given the back-story one can well understand why.
According to the most commonly accepted theory of truth, the validity of a statement depends on whether or not it is factually correct – i.e. a statement is true if it corresponds to some aspect of reality. Philosophers refer to this as the correspondence theory of truth. There are a few other well known theories of truth but it would take me too far afield to discuss them here. See my post on data, information and truth if you are interested in finding out more.
Of course, it is true that Rohan’s statement would in retrospect either be true (if I did drive him to school) or false (if I didn’t). But that was hardly the point: there was a lot more implied in his words than just an observation that I would be driving him to school that day. In other words, his meaning had little to do with any objective truth. Consider the following possibilities:
There was a hint of a question:
“Dad’s driving?” (…”You do remember what happened a couple of weeks ago, don’t you?…”)
or even alarm:
“Dad’s driving!” (I could almost hear the unspoken, “I’m not getting in the car with him.”)
Whatever the thoughts running through his head, it is clear that Rohan saw the situation quite differently from the way my wife or I did.
Indeed, the main problem with correspondence theories of truth is that they require the existence of an objective reality that we can all agree on – i.e. that we all perceive in the same way. This assumption is questionable, especially for issues that cannot be settled on logical grounds alone. Typical examples of such issues are those that are a matter of opinion – such as which political party is best or whether a certain book is worth reading…or even whether certain folks should be allowed to get behind the wheel. These are issues that are perceived differently by different people; there is no clear cut right/wrong, true/false or black/white.
There are other problems with correspondence theories too. For one, it isn’t clear how they would apply to statements that are not assertions about something. For example, it makes no sense to ask whether questions such as, “how much is this?” or “how are you?” are true or false. Nevertheless, these statements are perfectly meaningful when uttered in the right situations.
This brings us to the crux of the matter: in most social interactions, the meaning of a statement (or action, for that matter) depends very much on the context in which it is made. Indeed, context rather than language determines meaning in our everyday interactions. For example, my statement, “It is sunny outside,” could be:
- An observation about the weather conditions (which could be true or false, as per the correspondence theory)
- A statement of anticipation – it is sunny so I can play with my kids in the park.
- A statement of regret – it’s going to be a scorching hot day and we’ll have to stay indoors.
To find out which one of the above (or many other possibilities) I mean, you would need to know the context in which the statement is made. This includes things such as the background, the setting, the people present, the prior conversation, my mood, others’ moods …the list is almost endless.
Context is king when it comes to language and meaning in social situations. Paraphrasing the polymath Gregory Bateson, the phenomenon of context and the closely related phenomenon of meaning are the key difference between the natural and social sciences. It is possible in physics to formulate laws (of say, gravity) that are relatively independent of context (the law applies on Jupiter just the same as it does on earth). However, in the social sciences, general laws of this kind are difficult because context is important.
Indeed, this is why management models or best practices abstracted from context rarely work, if ever at all. They are not reality, but abstractions of reality. To paraphrase Bateson, all such approaches confuse the map with the territory.
I started this post almost three years ago, around the time the events related occurred. All I had written then were the lines I began this post with:
“Dad’s driving,” he said. A simple statement, one would think, with not much scope for ambiguity or misunderstanding…
The lines lay untouched in a forgotten file on my computer until last weekend, when I came across them while cleaning up some old folders. At the time I had been reading Bateson’s classic, Steps to an Ecology of Mind, and had been mulling over his ideas about meaning and context. With that as background, the story came back to me with all its original force. The way forward was clear and the words started to flow.
Bateson was right, you know – context illuminates meaning.
My thanks go out to Arati Apte for comments and suggestions while this piece was in progress.
The platitude “our people are our most important asset” reflects a belief that the survival and evolution of organisations depends on the intellectual and cognitive capacities of the individuals who comprise them. However, in view of the many well documented examples of actions that demonstrate a lack of foresight and/or general callousness about the fate of organisations or those who work in them, one has to wonder if such a belief is justified, or even if it is really believed by those who spout such platitudes.
Indeed, cases such as Enron or Worldcom (to mention just two) seem to suggest that stupidity may be fairly prevalent in present day organisations. This point is the subject of a brilliant paper by Andre Spicer and Mats Alvesson entitled, A stupidity-based theory of organisations. This post is an extensive summary and review of the paper.
The notion that the success of an organization depends on the intellectual and rational capabilities of its people seems almost obvious. Moreover, there is a good deal of empirical research that seems to support this. In the opening section of their paper, Alvesson and Spicer cite many studies which appear to establish that developing the knowledge (of employees) or hiring smart people is the key to success in an ever-changing, competitive environment.
These claims are mirrored in theoretical work on organizations. For example Nonaka and Takeuchi’s model of knowledge conversion acknowledges the importance of tacit knowledge held by employees. Although there is still much debate about tacit/explicit knowledge divide, models such as these serve to perpetuate the belief that knowledge (in one form or another) is central to organisational success.
There is also a broad consensus that decision making in organizations, though subject to bounded rationality and related cognitive biases, is by and large a rational process. Even if a decision is not wholly rational, there is usually an attempt to depict it as being so. Such behaviour attests to the importance attached to rational thinking in organization-land.
At the other end of the spectrum there are decisions that can only be described as being, well… stupid. As Rick Chapman discusses in his entertaining book, In Search of Stupidity, organizations occasionally make decisions that are plain dumb. However, such behaviour seldom remains hidden because of its rather obvious negative consequences for the organisation. Such stories thus end up being immortalized in business school curricula as canonical examples of what not to do.
Notwithstanding the above remarks on obvious stupidity, there is another category of foolishness that is perhaps more pervasive but remains unnoticed and unremarked. Alvesson and Spicer use the term functional stupidity to refer to such “organizationally supported lack of reflexivity, substantive reasoning, and justification.”
In their words, functional stupidity amounts to the “…refusal to use intellectual resources outside a narrow and ‘safe’ terrain.” It is reflected in a blinkered approach to organisational problems, wherein people display an unwillingness to consider or think about solutions that lie outside an arbitrary boundary. A common example of this is when certain topics are explicitly or tacitly deemed as being “out of bounds” for discussion. Many “business as usual” scenarios are riddled with functional stupidity, which is precisely why it’s often so hard to detect.
As per the definition offered above, there are three cognitive elements to functional stupidity:
- Lack of reflexivity: this refers to the inability or unwillingness to question claims and commonly accepted wisdom.
- Lack of substantive reasoning: This refers to reasoning that is based on a small set of concerns that do not span the whole issue. A common example of this sort of myopia is when organisations focus their efforts on achieving certain objectives with little or no questioning of the objectives themselves.
- Lack of justification: This happens when employees do not question managers or, on the other hand, do not provide explanations regarding their own actions. Often this is a consequence of power relationships in organisations. This may, for example, dissuade employees from “sticking their necks out” by asking questions that managers might deem out of bounds.
It should be noted that functional stupidity has little to do with limitations of human cognitive capacities. Nor does it have anything to do with ignorance, carelessness or lack of thought. Ignorance can be rectified through education and/or the hiring of consultants with the requisite knowledge, while carelessness and lack of thought can be addressed via the use of standardised procedures and checklists.
It is also important to note that functional stupidity is not necessarily a bad thing. For example, by placing certain topics out of bounds, organisations can avoid discussions about potentially controversial topics and can thus keep conflict and uncertainty at bay. This maintains harmony, no doubt, but it also strengthens the existing organisational order which in turn serves to reinforce functional stupidity.
Of course, functional stupidity also has negative consequences, the chief one being that it prevents organisations from finding solutions to issues that involve topics that have been arbitrarily deemed as being out of bounds.
Examples of functional stupidity
There are many examples of functional stupidity in recent history, a couple being the irrational exuberance in the wake of the internet boom of the 1990s, and the lack of critical examination of the complex mathematical models that led to the financial crisis of last decade.
However, one does not have to look much beyond one’s own work environment to find examples of functional stupidity. Many of these come under the category of “business as usual” or “that’s just the way things are done around here” – phrases that are used to label practices that are ritually applied without much thought or reflection. Such practices often remain unremarked because it is not so easy to link them to negative outcomes. Indeed, the authors point out that “most managerial practices are adopted on the basis of faulty reasoning, accepted wisdom and complete lack of evidence.”
The authors cite the example of companies adopting HR practices that are actually detrimental to employee and organisational wellbeing. Another common example is when organisations place a high value on gathering information which is then not used in a meaningful way. I have discussed this “information perversity” at length in my post entitled, The unspoken life of information in organisations, so I won’t rehash it here. Alvesson and Spicer point out that information perversity is a consequence of the high cultural value placed on information: it is seen as a prerequisite to “proper” decision making. However, in reality it is often used to justify questionable decisions or simply “hide behind the facts.”
These examples suggest that functional stupidity may be the norm rather than the exception. This is a scary thought…but I suspect it may not be surprising to many readers.
The dynamics of stupidity
Alvesson and Spicer claim that functional stupidity is a common feature of organisations. To understand why it is so pervasive, one has to look into the dynamics of stupidity – how it is established and the factors that influence it. They suggest that the root cause lies in the fact that organisations attempt to short-circuit critical thinking through what they call economies of persuasion: activities such as corporate culture initiatives, leadership training, team or identity building, relabelling positions with pretentious titles, and many other activities aimed at influencing employees through symbols and images rather than substance. Such symbolic manipulation, as the authors call it, is aimed at increasing employees’ sense of commitment to the organisation.
As they put it:
Organizational contexts dominated by widespread attempts at symbolic manipulation typically involve managers seeking to shape and mould the ‘mind-sets’ of employees. A core aspect of this involves seeking to create some degree of good faith and conformity and to limit critical thinking.
Although such efforts are not always successful, many employees do buy in to them and thereby identify with the organisation. This makes employees uncritical of the organisation’s goals and the means by which these will be achieved. In other words, it sets the scene for functional stupidity to take root and flourish.
Stupidity management and stupidity self-management
The authors use the term stupidity management to describe managerial actions that prevent or discourage organisational actors (employees and other stakeholders) from thinking for themselves. Some of the ways in which this is done include the reinforcement of positive images of the organisation, getting employees to identify with the organisation’s vision and myriad other organisational culture initiatives aimed at burnishing the image of the corporation. These initiatives are often backed by organisational structures (such as hierarchies and reward systems) that discourage employees from raising and exploring potentially disruptive issues.
The monitoring and sanctioning of activities that might disrupt the positive image of the organisation can be overt (in the form of warnings, say). More often, though, it is subtle. For example, in many meetings, participants know that certain issues cannot be raised. At other times, discussion and debate may be short circuited by exhortations to “stop thinking and start doing.” Such occurrences serve to create an environment in which stupidity flourishes.
The net effect of managerial actions that encourage stupidity is that employees start to cast aside their own doubts and questions and behave in corporately acceptable ways – in other words, they start to perform their jobs in an unreflective and unquestioning way. Some people may actually internalise the values espoused by management; others may psychologically distance themselves from the values but still act in ways that they are required to. The result of such stupidity self-management (as the authors call it) is that employees stop questioning what they are asked to do and just do it. After a while, doubts fade and this becomes the accepted way of working. The end result is the familiar situation that many of us know as “business as usual” or “that’s just the way things are done around here.”
The paradoxes and consequences of stupidity
Functional stupidity can cause both feelings of certainty and dissonance in members of an organisation. Suppressing critical thinking can result in an easy acceptance of the way things are. The feelings of certainty that come from suppressing difficult questions can be comforting. Moreover, those who toe the organisational line are more likely to be offered material rewards and promotions than those who don’t. This can act to reinforce functional stupidity because others who see stupidity rewarded may also be tempted to behave in a similar fashion.
That said, certain functionally stupid actions, such as ignoring obvious ethical lapses, can result in serious negative outcomes for an organisation. This has been amply illustrated in the recent past. Such events can prompt formal inquiries at the level of the organisation, no doubt accompanied by informal soul-searching at the individual level. However, as has also been amply illustrated, there is no guarantee that inquiries or self-reflection lead to any major changes in behaviour. Once the crisis passes, people seem all too happy to revert to business as usual.
In the end, though, when stark differences between the rhetoric and reality of the organisation emerge – as they eventually will – employees will see the contradictions between the real organisation and the one they have been asked to believe in. This can result in alienation from and cynicism about the organisation and its objectives. So, although stupidity management may have beneficial outcomes in the short run, there is a price to be paid in the longer term.
Nothing comes for free, not even stupidity…
The authors’ main message is that, despite the general belief that organisations enlist the cognitive and intellectual capacities of their members in positive ways, the truth is that organisational behaviour often exhibits a wilful ignorance of facts and/or a lack of logic. The authors term this behaviour functional stupidity.
Functional stupidity has the advantage of maintaining harmony, at least in the short term, but its longer-term consequences can be negative. Members of an organisation “learn” such behaviour by becoming aware that certain topics are out of bounds and that they broach these at their own risk. Conformance is rewarded by advancement or material gain, whereas dissent is met with overt or less obvious disciplinary action. Functional stupidity thus acts as a barrier that can stop members of an organisation from developing potentially interesting perspectives on the problems their organisations face.
The paper makes an interesting and very valid point about the pervasiveness of wilfully irrational behaviour in organisations. That said, I can’t help but think that the authors have written it with tongue firmly planted in cheek.
They came for me at 11:00 am.
I was just settling down to finishing that damned business case when I heard the rat-a-tat-tat on my office door. “Come in,” I said, with a touch of irritation in my voice.
The door opened and there they were. They looked at me as though I was something that had crawled out from under a rock. “Mr. Hersey, I presume,” said the taller, uglier one.
“Yes, that’s me.”
“Joe Hersey?” He asked, wanting to make sure before unloading on me.
“Yes, the one and only,” I said, forcing a smile. I had a deep sense of foreboding now: they looked like trouble; I knew they couldn’t be enquiring after my welfare.
“You need to come with us,” said the shorter one. I did imply he was the handsomer of the two, but I should clarify that it was a rather close call.
“I have better things to do than follow impolite summons from people I don’t know. I think you should talk to my manager. In fact, I will take you to him,” I replied, rising from my chair. “He won’t be happy that you’ve interrupted my business case. He wants it done by lunchtime,” I added, a tad smugly.
“We’ve already seen him. He knows. I would advise you to come with us. It would make life easier for everyone concerned,” I forget which one of the two said this.
“What is going on?” I asked, toning down my irritation. To be honest, I had no clue what they were on about.
“We’re the methodology police,” they said in unison. I guess they’d had a fair bit of practice scaring the crap out of hapless project managers. “We’re from the PMO,” they added unnecessarily – I mean, where else could they be from?
“Holy s**t,” I said to myself. I was in big trouble.
“Well, Hersey,” said the short one, “I think you owe the PMO an explanation.” Ah, I loved his use of the third person – not “us” but “the PMO.”
We were seated at a table in a meeting room deep in the bowels of the PMO: windowless, with low wattage lighting sponsored by one of those new-fangled, energy-saving, greenie bulbs. The three chairs were arranged in interrogation mode, with the two goons on one side and me – Joseph M. Hersey, Project Manager Extraordinaire – on the other.
I was in trouble alright, but I have this perverse streak in me. “I don’t know what you are talking about,” I said, feeling a bit like a hero from a Raymond Chandler novel. I knew what I had done, of course. But I also knew that I was one of the good guys. The clowns sitting opposite me were the forces of evil…such thoughts, though perverse, lifted my spirits.
I must have smiled because the tall one said, “You think this is funny, do you? We have a direct line to the board and we could make life really unpleasant for you if you continue this uncooperative attitude.”
That was bad. I did not want to be hauled up in front of the big cheese. If I was branded a troublemaker at that level, there would be no future for me in the company. And to be absolutely honest, I actually enjoyed working here – visits from the methodology police excepted, of course.
“OK, tell me what you want to know,” I said resignedly.
“No, you tell us, Hersey. We want to hear the whole story of your subversion of process in your own words. We’ll stop you if we need any clarification.” Again, I forget which one of the two said this. Understandable, I think – I was pretty stressed by then.
Anyway, there is no sense in boring you with all the PMO and process stuff. Suffice to say, I told them how I partitioned my big project into five little ones, so that each mini project would fall below the threshold criteria for major projects and thus be exempt from following the excruciating methodology that our PMO had instituted.
Process thus subverted, I ran each of the mini projects separately, with deliverables from one feeding into the next. I’d got away with it; with no onerous procedures to follow I was free to devise my own methodology, involving nothing more complicated than a spreadsheet updated daily following informal conversations with team members and stakeholders. All this held together – and, sorry, this is going to sound corny – by trust.
The methodology cops’ ears perked up when they heard that word. “Trust!” they exclaimed. “What do you mean by trust?”
“That’s when you believe people will do as they say they will,” I said. Then added, “A concept that may be foreign to you.” I regretted that snide aside as soon as I said it.
“Look,” said the uglier guy, “I suggest you save the wisecracks for an audience that may appreciate them. You are beginning to annoy me, and a report to the board is looking like a distinct possibility if you continue in this vein.”
I have to say, this guy had a lot of patience if he was only just “beginning to get annoyed.” I was aware that I had been baiting him for a while. Yes, I do know when I do that. My wife keeps telling me it will get me into trouble one day. Maybe today’s the day.
“…I do know what trust is,” the man continued, “but I also know that you cannot run a project on warm and fuzzy notions such as trust, sincerity, commitment etc. The only things I will trust are written, signed-off project documents.”
Ah, the folly, the folly. “Tell me this: what would you prefer – project documentation as per the requirements of your methodology, or a successful project?”
“The two are not mutually exclusive. In fact, methodology improves the chance of success.”
“No it doesn’t,” I retorted.
“It does,” he lobbed back.
Jeez, this was beginning to sound like recess in the local kindergarten. “Prove it,” I said, staking my claim to the title of King of Kindergarten Debates.
“There are several studies that prove the methodology’s efficacy,” said the short one, “but that is not the point.”
“All those studies are sponsored by the Institute,” I said, referring to the August Body that maintains the standard, “so there is a small matter of vested interest…. anyway, you say that isn’t the point. So what is your point, then?”
“The methodology is an internal requirement, so you have to follow it regardless. We could have a lot of fun debating it, but that is neither here nor there. Compliance is mandatory; you have no choice.”
“I did comply,” I said, “none of my projects were over the threshold, so I did not need to follow the methodology.”
“That was subterfuge – it was one project that you deliberately divided into five so that you could bypass our processes.”
I was getting tired and it was close to my lunchtime. “OK, fair point,” I said, “I should not have done that. I will not do it again. Can I go now?”
“Hmm,” they said in unison. I don’t think either of them believed me. “That’s not good enough.”
I sighed. “What do you want then?” I asked, weary of this pointless drama.
“You will read and sign this form,” said the short one, “declaring you have been trained in the PMO processes – which you were last year, as you well know – and that you will follow the processes henceforth. I particularly urge you to read and digest the bit about the consequences of non-compliance.” He flicked the form in my direction.
I was not surprised to see that the form was a multi-page affair, written in 8pt bureaucratese, utterly incomprehensible to mere mortals such as I. I knew I would continue to bypass or subvert processes that made no sense to me, but I also knew that they needed me to sign that form – their boss would be very unhappy with them if I didn’t. Besides, I didn’t want to stay in that room a second longer than necessary.
“OK, where do I sign,” I said, picking up a pen that lay on the table.
“Don’t you want to read it?”
“Nah,” I said, “I have a pretty fair idea of what it’s about.”
“We’re done, Hersey. You can go back to your business case now. But you can be sure that you are on our radar now. We are watching you.”
“Well Gents, enjoy the show. I promise to lead a faultless life henceforth. I will be a model project manager,” I said as I rose to leave.
“We’re counting on it Hersey. One more violation and you are in deep trouble.”
I refrained from responding with a wisecrack as I exited, leaving them to the paperwork that is their raison d’être.
The yearly performance review
is something we all must go through.
So you may well know
the story below
…it may’ve even happened to you.
The boss, a hawk not a dove,
dictated the goals from above.
He said, “You will do
as I tell you to,
and that should be more than enough.”
The year whizzed by like a race.
(Isn’t that always the case?)
Soon it was time
for that moment sublime,
when performance would be appraised.
And as the review progressed,
the minion suffered much stress,
because it was clear
he’d be marked a failure
even though he’d given his best.
In the end he said, “OK, that’s fine,
but we were never aligned.
I know you don’t care
but it just ain’t fair
that these were your goals, not mine.”
In the last few decades two technology trends have changed much of the thinking about corporate IT infrastructures: commoditisation and the cloud. As far as the first trend is concerned, the availability of relatively cheap hardware and packaged “enterprise” software has enabled organisations to create their own IT infrastructures. Yet, despite best efforts of IT executives and planners, most of these infrastructures take on lives of their own, often increasing in complexity to the point where they become unmanageable.
The maturing of cloud technologies in the last few years appears to offer IT decision makers an attractive solution to this problem: that of outsourcing their infrastructure headaches. Notwithstanding the wide variety of mix-and-match options of commodity and cloud offerings, the basic problem still remains: one can create as much of a mess in the cloud as one can in an in-house data center. Moreover, the advertised advantages of cloud-based enterprise solutions can be illusory: customers often find that solutions are inflexible and major changes can cost substantial sums of money.
Conventional wisdom tells us that these problems can be tackled by proper planning and control. In this post I draw on Claudio Ciborra’s book, From Control to Drift: The Dynamics of Corporate Information Infrastructures, to show why such a view is simplistic and essentially untenable.
The effects of globalisation and modernity
The basic point made by Ciborra and Co. is that initiatives to plan and control IT infrastructures via centrally-driven, standards-based governance structures are essentially misguided reactions to the unsettling effects of globalisation and modernity, terms that I elaborate on below.
Globalisation refers to the processes of interaction and integration between people of different cultures across geographical boundaries. The increasing number of corporations with a global presence is one of the manifestations of globalisation. For such organisations, IT infrastructure systems are seen as a means to facilitate globalisation and also to control it.
There are four strategies that an organisation can choose from when establishing a global presence. These are:
- Multinational: Where individual subsidiaries are operated autonomously.
- International: Where work practices from the parent company diffuse through the subsidiaries (in a non-formal way).
- Global: Where local business activities are closely controlled by the parent corporation.
- Transnational: This (ideal) model balances central control and local autonomy in a way that meets the needs of the corporation while taking into account the uniqueness of local conditions.
These four business strategies map to two corporate IT strategies:
- Autonomous: where individual subsidiaries have their own IT strategies, loosely governed by corporate.
- Headquarters-driven: where IT operations are tightly controlled by the parent corporation.
Neither is perfect; both have downsides that start to become evident only after a particular strategy is implemented. Given this, it is no surprise that organisations tend to cycle between the two strategies, with cycle times varying from five to ten years; a trend that corporate IT minions are all too familiar with. Typically, though, executive management tends to favour the centrally-driven approach since it holds the promise of higher control and reduced costs.
Another consequence of globalisation is the trend towards outsourcing IT infrastructure and services. This is particularly popular for operational IT – things like infrastructure and support – so it is no surprise that organisations often choose to outsource IT development and support to external vendors. Equally unsurprising, perhaps, is that the quality of service often does not match expectations and there’s little that can be done about it. The reason is simple: complex contracts are hard to manage and, perhaps more importantly, not everything can be contractualised. See my post on the transaction cost economics of outsourcing for more on this point.
The effect of modernity
The phenomenon of modernity forms an essential part of the backdrop against which IT systems are implemented. According to a sociological definition due to Anthony Giddens, modernity is “associated with (1) a certain set of attitudes towards the world, the idea of the world as open to transformation by human intervention; (2) a complex of economic institutions, especially industrial production and a market economy; (3) a certain range of political institutions, including the nation-state and mass democracy.”
Modernity is characterised by the following three “forces” that have a direct impact on information infrastructures:
- The separation of space and time: This refers to the ways in which technology enables us to reconfigure our notions of geographical space and time. For instance, coordinating activities in distant locations is now possible - global supply chains and distributed project teams being good examples. The important consequence of this ability, relevant to IT infrastructures such as ERP and CRM systems, is that it makes it possible (at least in principle) for organisations to increase their level of surveillance and control of key business processes across the globe.
- The development of disembedding mechanisms: As I have discussed at length in this post, organisations often “import” procedures that have worked well in other organisations. The assumption underlying this practice is that the procedures can be lifted out of their original context and implemented in another one without change. This, in turn, tacitly assumes that those responsible for implementing the procedure in the new context understand the underlying cause-effect relationships completely. This world-view, where organisational processes and procedures are elevated to the status of universal “best practices,” is an example of a disembedding mechanism at work. Disembedding mechanisms are essentially processes via which certain facts are abstracted from their context and ascribed a universal meaning. Indeed, most “enterprise” class systems claim to implement such “best practices.”
- The reflexivity of knowledge and practice: Reflexive phenomena are those for which cause-effect relationships are bi-directional – i.e. causes determine effects which in turn modify the causes. Such phenomena are unstable in the sense that they are continually evolving – in potentially unpredictable ways. Organisational practices (which are based on organisational knowledge) are reflexive in the sense that they are continually modified in the light of their results or effects. This conflicts with the main rationale for IT infrastructures such as ERP systems, which is to rationalise and automate organisational processes and procedures in a relatively inflexible manner.
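The instability of reflexive processes can be illustrated with a toy numerical sketch (my own illustration, not from Ciborra's book). Here the logistic map stands in for a practice that is repeatedly revised in light of its own outcomes: two organisations that start with almost identical practices end up in very different places, which is precisely the kind of unpredictable drift the book describes. The function name and parameter values are arbitrary choices for the sake of the example.

```python
def revise_practice(practice, adjustment=3.9, steps=30):
    """Iterate a practice -> outcome -> revised-practice feedback loop.

    `practice` is a number in (0, 1) representing the current state of a
    practice; each step, the outcome it produces feeds back and reshapes
    it (logistic-map feedback). Returns the full history of states.
    """
    history = [practice]
    for _ in range(steps):
        # The outcome depends on the practice, and the practice is then
        # revised in light of the outcome -- a bi-directional cause-effect.
        practice = adjustment * practice * (1.0 - practice)
        history.append(practice)
    return history

# Two near-identical starting practices drift apart unpredictably:
a = revise_practice(0.500)
b = revise_practice(0.501)
print(max(abs(x - y) for x, y in zip(a, b)))
```

The point of the sketch is not the mathematics but the moral: when practices feed back on themselves, even tight initial standardisation does not guarantee convergent, predictable behaviour.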
Implications for organisations
One of the main implications of globalisation and modernity is that the world is now more interconnected than ever before. This is illustrated by the global repercussions of the financial crises that have occurred in recent times. For globalised organisations this manifests itself in not-so-obvious dependencies of the organisation’s well-being on events within the organisation and outside it. These events are usually not within the organisation’s control, so they have to be managed as risks.
A standard response to risk is to increase control. Arguably, this may well be the most common executive-level rationale behind decisions to impose stringent controls and governance structures around IT infrastructures. Yet, paradoxically, the imposition of controls often leads to undesirable outcomes because of unforeseen side effects and the inability to respond to changing business needs in a timely manner.
A bit about standards
Planners of IT infrastructures spend a great deal of time worrying about which standards they should follow. This makes sense if for no other reason than the fact that corporate IT infrastructures are embedded in a larger (external) ecosystem that is made up of diverse organisations, each with their own infrastructures. Standards ease the problem of communication between interconnected organisations. For example, organisations often have to exchange information electronically in various formats. Without (imposed or de-facto) standards, this would be very difficult as IT staff would have to write custom programs to convert files from one format to another.
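To make the file-format point concrete, here is the kind of small custom conversion program that IT staff end up writing in the absence of a shared format (a minimal sketch of my own; the field names and sample data are hypothetical):

```python
import csv
import io
import json

def csv_to_json(csv_text):
    """Convert CSV text (first row = column headers) into a JSON array
    of records, one object per data row."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return json.dumps(rows, indent=2)

# A hypothetical exchange file received from a partner organisation:
incoming = "id,amount\n1,250\n2,300\n"
print(csv_to_json(incoming))
```

Multiply this by every pair of organisations and every pair of formats, and the appeal of an agreed standard becomes obvious: with one, such glue code largely disappears.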
The example of file formats illustrates why those who plan and implement IT infrastructures prefer to go with well established technologies and standards rather than with promising (but unproven) new ones. The latter often cause headaches because of compatibility problems with preexisting technologies. There are other reasons, of course, for staying with older technologies and established standards – acceptance, maturity and reliability being a few important ones.
Although the rationale for adopting standards seems like a sound one, there are a few downsides too. Consider the following:
- Lock-in: This refers to the fact that once a technology is widely adopted, it is very difficult for competing technologies to develop. The main reason for this is that the dominant technology will attract a large number of complementary products, which make it more attractive to stick with the dominant standard. Additionally, contractual commitments, availability of expertise and switching costs make it unviable for customers to move to competitor products.
- Inefficiency: This refers to the fact that a dominant standard is not necessarily the best. There are many examples of cases where a dominant standard is demonstrably inferior to a less popular competitor. My favourite example is the Waterfall project management methodology which became a standard for reasons other than its efficacy. See this paper for details of this fascinating story.
- Incompatibility: In recent years, consumer devices such as smartphones and tablets have made their way into corporate computing environments, primarily because of pressures and demands from technology savvy end-users. These devices pose problems for infrastructure planners and administrators because they are typically incompatible with existing corporate technology standards and procedures. As an example, organisations that have standardised on a particular platform such as Microsoft Windows may face major challenges when introducing devices such as iPads in their environments.
Finally, and most importantly, the evolution of standards causes major headaches for corporate IT infrastructure planners. Anyone who has been through a major upgrade of an operating system at an organisation-wide level will have lived this pain. Indeed, it is such experiences that have driven IT decision-makers to cloud offerings. The cloud brings with it a different set of problems, but that’s another story. Suffice to say that the above highlights, once again, the main theme of the book: that infrastructure planning is well and good, but planners have to be aware that the choices they make constrain them in ways that they will not have foreseen.
The main argument that Ciborra and his associates make is that corporate information infrastructures drift because they are subject to unpredictable forces within and outside the hosting organisation. Standards and processes may slow the drift (if at all) but they cannot arrest it entirely. Infrastructures are therefore best seen as ever-evolving constructs made up of systems, people and processes that interact with each other in (often) unforeseen ways. As Ciborra so elegantly puts it:
Corporate information infrastructures are puzzles, or better collages, and so are the design and implementation processes that lead to their construction and operation. They are embedded in larger, contextual puzzles and collages. Interdependence, intricacy, and interweaving of people, systems, and processes are the culture bed of infrastructure. Patching, alignment of heterogeneous actors and making do are the most frequent approaches…irrespective of whether management [is] planning or strategy oriented, or inclined to react to contingencies.
And therein lies an important message for those who plan and oversee information infrastructures.
Sections of this post are drawn from my article entitled, The ERP Paradox.