Management, as it is practised, is largely about “getting things done.” Consequently, management education and research tend to focus on improving the means by which specified ends are achieved. The ends themselves are not questioned as rigorously as they ought to be. The truth of this is reflected in the high-profile corporate scandals that have come to light over the last decade or so, not to mention the global financial crisis.
Today, more than ever, there is a need for a new kind of management practice, one in which managers critically reflect on the goals they pursue and the means by which they aim to achieve them. In their book Making Sense of Management: A Critical Introduction, management academics Mats Alvesson and Hugh Willmott describe what such an approach to management entails. This post is a summary of the central ideas described in the book.
Critical theory and its relevance to management
The body of work that Alvesson and Willmott draw from is Critical Theory, a discipline based on the belief that knowledge ought to rest on dialectical reasoning – i.e. reasoning through dialogue – rather than scientific rationality alone. The main reason is that science (as it is commonly practised) is value-free and is therefore incapable of addressing problems that have a social or ethical dimension. This idea is not new; even scientists such as Einstein have commented on the limits of scientific reasoning.
Although Critical Theory has its roots in the Renaissance and Enlightenment, its modern avatar is largely due to a group of German social philosophers who were associated with the Frankfurt-based Institute of Social Research which was established in the 1920s. Among other things, these philosophers argued that knowledge in the social sciences (such as management) can never be truly value-free or objective. Our knowledge of social matters is invariably coloured by our background, culture, education and sensibilities. This ought to be obvious, but it isn’t: economists continue to proclaim objective truths about the right way to deal with economic issues, and management gurus remain ready to show us the one true path to management excellence.
The present-day standard bearer of the Frankfurt School is the German social philosopher Juergen Habermas, who is best known for his theory of communicative rationality – the idea that open dialogue, free from any constraints, is the most rational way to decide on matters of importance. For a super-quick introduction to the basic ideas of communicative rationality and its relevance in organisational settings, see my post entitled, More than just talk: rational dialogue in project environments. For a more detailed (and dare I say, entertaining) introduction to communicative rationality with examples drawn from The Borg and much more, have a look at Chapter 7 of my book, The Heretic’s Guide to Best Practices, co-written with Paul Culmsee.
The demise of command and control?
Many professional managers see their jobs in purely technical terms, involving things such as administration, planning and monitoring. They tend to overlook the fact that these technical functions are carried out within a particular social and cultural context. More to the point, and this is crucial, managers work under constraints of power and domination: they are not free to do what they think is right but have to do whatever their bosses direct, and so, in turn, they behave in exactly the same way towards their subordinates.
As Alvesson and Willmott put it:
Managers are intermediaries between those who hire them and those whom they manage. Managers are employed to coordinate, motivate, appease and control the productive efforts of others. These ‘others’ do not necessarily share managerial agendas…
Despite the talk of autonomy and empowerment, modern-day management is still very much about control. However, modern-day employees are unlikely to accept a command-and-control approach to being managed, so organisations have resorted to subtler means of achieving the same result. For example, organisational culture initiatives aimed at getting employees to “internalise” the values of the organisation are attempts to “control sans command.”
A critical look at the status quo
A good place to start with a critical view of management is in the area of decision-making. Certain decisions, particularly those made at executive levels, can have a long-term impact on an organisation and its employees. Business schools and decision theory texts tell us that decision-making is a rational process. Unfortunately, reality belies that claim: decisions in organisations are more often made on the basis of politics and ideology than on objective criteria. This being the case, it is important that decisions be subject to critical scrutiny. Indeed, it is possible that many of the crises of the last decade could have been avoided had the decisions that led to them been subjected to a critical review.
Many of the initiatives that are launched in organisation-land have their origins in executive-level decisions that are made on flimsy grounds such as “best practice” recommendations from Big 4 consulting companies. Mid-level managers who are required to see these through to completion are then faced with the problem of justifying these initiatives to the rank and file. Change management in modern organisation-land is largely about justifying the unjustifiable or defending the indefensible.
The critique, however, goes beyond just the practice of management. For example, Alvesson and Willmott also draw attention to things such as the objectives of the organisation. They point out that short-sighted objectives such as “maximising shareholder value” are what led to the downfall of companies such as Enron. Moreover, they also remind us of an issue that is becoming increasingly important in today’s world: that natural resources are not unlimited and should be exploited in a judicious, sustainable manner.
As interesting and important as these “big picture” issues are, in the remainder of this post I’ll focus attention on management practices that impact mid and lower level employees.
A critical look at management specialisations
Alvesson and Willmott analyse organisational functions such as Human Resource Management (HRM), Marketing and Information Systems (IS) from a critical perspective. It would take far too many pages to do justice to their discussion so I’ll just present a brief summary of two areas: HR and IS.
The rhetoric of HRM in organisations stands in stark contradiction to its actions. Despite platitudinous sloganeering about empowerment etc., the actions of most HR departments are aimed at getting people to act and behave in organisationally acceptable ways. Seen in a critical light, seemingly benign HR initiatives such as organisational culture events or self-management initiatives are exposed as being but subtle means of managerial control over employees (see this paper for an example of the former and this one for an example of the latter).
Since the practice of IS centres largely on technology, IS research and practice tend to focus on technology trends and “best practices.” As might be expected, the focus is on the “fad of the month” and thus turns stale rather quickly. As examples: the 1990s saw an explosion of papers and projects in business process re-engineering; the flavour of the decade in the 2000s was service-oriented architecture; more recently, we’ve seen a great deal of hot air about the cloud. Underlying a lot of technology-related decision-making is the tacit assumption that choices pertaining to technology are value-free and can be decided on the basis of technical and financial criteria alone. The profession as a whole tends to take an overly scientific/rational approach to design and implementation, often ignoring issues such as power and politics. It can be argued that many failures of large-scale IS projects are due to the hyper-rational approach taken by many practitioners.
In a similar vein, most management specialisations can benefit from the insights that come from taking a critical perspective. Alvesson and Willmott discuss marketing, accounting and other functions. However, since my main interest is in solutions rather than listing the (rather well-known) problems, I’ll leave it here, directing the interested reader to the book for more.
Towards an enlightened practice of management
In the modern workplace it is common for employees to feel disconnected from their work, at least from time to time if not always. In a prior post, I discussed how this sense of alienation is a consequence of our work and personal lives being played out in two distinct spheres – the system and the lifeworld. In brief, the system refers to the professional and administrative sphere in which we work and/or interact with institutional authority, and the lifeworld is the everyday world that we share with others. Actions in the lifeworld are based on a shared understanding of the issue at hand whereas those in the system are not.
From the critical analysis of management specialisations presented in the book, it is evident that the profession, being mired in a paradigm of prescriptive, top-down practices, serves to perpetuate the system by encroaching on the lifeworld values of employees. There are those who will say that this is exactly how it should be. However, as Alvesson and Willmott have stated in their book, this kind of thinking is perverse because it is ultimately self-defeating:
The devaluation of lifeworld properties is perverse because …At the very least, the system depends upon human beings who are capable of communicating effectively and who are not manipulated and demoralized to the point of being incapable of cooperation and productivity.
Alvesson and Willmott use the term emancipation to describe any process whereby employees are freed, even if only partially, from the shackles of system-oriented thinking (Note: here I’m using the term system in the sense defined above – not to be confused with systems thinking, which is another beast altogether). Acknowledging that it is impossible to do this at the level of an entire organisation or even a department, they coin the term micro-emancipation to describe any process whereby sub-groups within organisations are empowered to think through issues and devise appropriate actions by themselves, free (to the extent possible) from management constraints or directives.
Although this might sound much too idealistic to some readers, be assured that it is eminently possible to implement micro-emancipatory practices in real world organisations. See this paper for one possible framework that can be used within a multi-organisation project along with a detailed case study that shows how the framework can be applied in a complex project environment.
Alvesson and Willmott warn that emancipatory practices are not without costs, both for employers and employees. For example, employees who have gained autonomy may end up being less productive, which will in turn affect their job security. In my view, this issue can be addressed through an incrementalist approach wherein both employers and employees work together to come up with micro-emancipatory projects at the grassroots level, as in the case study described in the paper mentioned in the previous paragraph.
…and so to conclude
Despite the rhetoric of autonomy and empowerment, much of present-day management is stuck in a Taylorist/Fordist paradigm. In modern-day organisations command and control may not be overt, but it often sneaks in through the back door. For example, employees almost always know that certain things are simply “out of bounds” for discussion, and that the consequences of breaching those unstated boundaries can be severe.
In its purest avatar, a critical approach to management seeks to remove those boundaries altogether. This is unrealistic because nothing will ever get done in an organisation in which everything is open for discussion; as in all social systems, compromise is necessary. The concept of micro-emancipation offers just such a compromise. To be sure, one has to go beyond the rhetoric of empowerment to actually creating an environment that enables people to speak their minds and debate issues openly. Though it is impossible to do this at the level of an entire organisation, it is definitely possible to achieve it (albeit approximately) in small workgroups.
To conclude: the book is worth a read, not just by management researchers but also by practising managers. Unfortunately, the overly academic style may be a turn-off for practitioners, the very people who need to read it the most.
Did I write this review because I wanted to, or is it because my background and circumstances compelled me to?
Some time ago, the answer to this question would have been obvious to me but after reading Free Will by Sam Harris, I’m not so sure.
In brief: the book makes the case that the widely accepted notion of free will is little more than an illusion because our (apparently conscious) decisions originate in causes that lie outside of our conscious control.
Harris begins by noting that the notion of free will is based on the following assumptions:
- We could have behaved differently than we actually did in the past.
- We are the originators of our present thoughts and actions.
Then, in the space of eighty-odd pages (perhaps no more than 15,000 words), he argues that these assumptions are incorrect and looks into some of the implications of his arguments.
The two assumptions are actually interrelated: if it is indeed true that we are not the originators of our present thoughts and actions then it is unlikely that we could have behaved differently than we did in the past.
A key part of Harris’ argument is the scientifically established fact that we are consciously aware of only a small fraction of the activity that takes place in our brains. This has been demonstrated (conclusively?) by some elegant experiments in neurophysiology. For example:
- Activity in the brain’s motor cortex can be detected 300 milliseconds before a person “decides” to move, indicating that the thought about moving arises before the subject is aware of it.
- Magnetic resonance scanning of certain brain regions can reveal the choice that will be made by a person 7 to 10 seconds before the person consciously makes the decision.
These and other similar experiments pose a direct challenge to the notion of free will: if my brain has already decided on a course before I am aware of it, how can I claim to be the author of my decisions and, more broadly, my destiny? As Harris puts it:
…I cannot decide what I will think next or intend until a thought or intention arises. What will my next mental state be? I do not know – it just happens. Where is the freedom in that?
The whole notion of free will, he argues, is based on the belief that we control our thoughts and actions. Harris notes that although we may feel that we are in control of the decisions we make, this is but an illusion: we feel that we are free, but this freedom is illusory because our actions are already “decided” before they appear in our consciousness. To be sure, there are causes underlying our thoughts and actions, but the majority of these lie outside our awareness.
If we accept the above then the role that luck plays in determining our genes, circumstances, environment and attitudes cannot be overstated. Although we may choose to believe that we make our destinies, in reality we don’t. Some people may invoke demonstrations of willpower – conscious mental effort to do certain things – as proof against Harris’ arguments. However, as Harris notes,
You can change your life and yourself through effort and discipline – but you have whatever capacity for effort and discipline you have in this moment, and not a scintilla more (or less). You are either lucky in this department or you aren’t – and you can’t make your own luck.
Although I may choose to believe that I made the key decisions in my life, a little reflection reveals the tenuous nature of this belief. Sure, some decisions I have made resulted in experiences that I would not have had otherwise. Some of those experiences undoubtedly changed my outlook on life, causing me to do things I would not have done had I not undergone those experiences. So to that extent, those original choices changed my life.
The question is: could I have decided differently when making those original choices?
Or, considering an even more immediate example: could I have chosen not to write this review? Or, having written it, could I have chosen not to publish it?
Harris tells us that this question is misguided because you will do what you do. As he states,
…you can do what you decide to do – but you cannot decide what you will decide to do.
We feel that we are free to decide, but the decision we make is the only one we could have made. We may choose to believe that we are free to decide, but that belief is an illusion because our decisions arise from causes that we are unaware of. This is the central point of Harris’ argument.
There are important moral and ethical implications of the loss of free will. For example, what happens to the notion of moral responsibility for actions that harm others? Harris argues that we do not need to invoke the notion of free will in order to see that such actions are wrong – as he tells us, what we condemn in others is the conscious intent to do harm.
Harris is careful to note that his argument against free will does not amount to a laissez-faire approach wherein people are free to do whatever comes to their minds, regardless of consequences for society. As he writes:
….we must encourage people to work to the best of their abilities and discourage free riders wherever we can. And it is wise to hold people responsible for their actions when doing so influences their behavior and brings benefits to society….[however this does not need the] illusion of free will. We need only acknowledge that efforts matter and that people can change. [However] we do not change ourselves precisely – because we have only ourselves with which to do the changing – but we continually influence, and are influenced by, the world around us and the world within us. [italics mine]
Before closing I should mention some shortcomings of the book:
Firstly, Harris does not offer detailed support for his argument. Much of what he claims rests on the results of research in neurophysiology demonstrating the lag between the genesis of a thought in our brains and our conscious awareness of it, yet he describes only a handful of experiments in detail. That said, there are references to many others in the notes.
Secondly, those with training in philosophy may find the book superficial, as Harris does not discuss alternate perspectives on free will. Such a discussion would have provided much-needed balance, and some critics have taken him to task for the omission (see this analysis or this review, for example).
Although the book has the shortcomings I’ve noted, I have to say I enjoyed it because it made me think. More specifically, it made me think about the way I think. Maybe it will do the same for you, maybe not – what happens in your case may depend on thoughts that are beyond your control.
Much of the research literature and educational material on project communication focuses on artefacts such as business cases, status reports and lessons learned reports. In an earlier post I discussed how these seemingly unambiguous documents are open to being interpreted in different or even contradictory ways. However, documents are only a small part of the story. Much of the communication that takes place in a project involves direct interaction via dialogue between stakeholders. In this post, I discuss this interactional aspect of project communication, drawing on a book by Paul Watzlawick entitled, The Pragmatics of Human Communication.
The pragmatics of communication
Those who have done a formal course on communication may already be familiar with Watzlawick’s book. I have to say, I was completely ignorant of his work until I stumbled on it a few months ago. Although the book was published in 1967, it remains a popular text and an academic bestseller. It is a classic that should be mandatory reading for project managers and others who work in group settings.
Much of the communication literature focuses on syntactics (the rules of constructing messages) and semantics (the content, or information contained in messages). Watzlawick tells us that there is a third aspect, one that is often neglected: pragmatics, which refers to the behavioural or interactional aspect of communication. An example might help clarify what this means.
Let’s look at the case of a project manager who asks a team member about the status of a deliverable. The way the question is asked and the nature of the response says a lot about the relationship between the project manager and his or her team. Consider the following dialogues, for example:
“What is the status of the module?” asks the manager.
“There have been some delays; I may be a couple of days late.”
“That’s unacceptable,” says the manager, shaking his head.
As opposed to:
“What is the status of the module?” asks the manager.
“There have been some delays. I may be a couple of days late.”
“Is there anything I can do to help speed things up?”
Among other things, the book presents informal rules or axioms that govern such exchanges.
The axioms of interactional communication and their relevance to project communication
In this section I discuss the axioms of interactional communication, using the example above to demonstrate their relevance to project communication.
In the presence of another person, it is impossible not to communicate: This point is so obvious as to often be overlooked: silence amounts to communicating that one does not want to communicate. For example, if in the first conversation above, the team member chooses not to respond to his manager’s comment that the delay is unacceptable, the manager is likely to see it as disagreement or even insubordination. The point is, there is nothing the team member can do that does not amount to a response of some kind. Moreover, the response the team member chooses to give determines the subsequent course of the conversation.
Every communication has two aspects to it: content and relationship: Spoken words and how they are strung together form the content of communication. Most communication models (such as the sender-receiver model) focus on the coding, transmission and decoding (or interpretation) of content. However, communication is more than just content; what matters is not only what is said, but how it is said and the context in which it is said. For instance, the initial attitude of the manager in the above example sets the tone for the entire exchange: if he takes an adversarial attitude, the team member is likely to be defensive; on the other hand, if his approach is congenial the team member is more likely to look for ways to speed things up. What is really important is that relationship actually defines content. In other words, how a message is understood depends critically on the relationship between participants.
The relationship is defined by how participants perceive a sequence of exchanges: A dialogue consists of a sequence of exchanges between participants. However, the participants will punctuate the sequence differently. What the word punctuate means in this context can be made clear by referring back to our example above. If the team member feels (from previous experience) that the manager’s query is an assertion of authority, he may respond by challenging the basis of the question. For instance, he may say that he had to deal with other work that was more important. This may provoke the project manager to assert his authority even more strongly, thereby escalating discord…and so on. This leads to a situation that can be represented graphically as shown in Figure 1.
The important point here is that both participants believe they are reacting to the other’s unreasonableness: the team member perceives groupings 1-2-3, 3-4-5, etc., where his challenges are a consequence of the “over-assertive” behaviour of the project manager, whereas the project manager perceives groupings 2-3-4, 4-5-6, etc., where his assertive behaviour is a consequence of the team member’s “gratuitous” challenges. In other words, each participant punctuates the sequence of events in a way that rationalises their own responses. The first step to resolving this problem lies in developing an understanding of the other’s punctuation – i.e. in reaching a shared understanding of the reason(s) behind the differing views.
Human communication consists of verbal and non-verbal elements: This axiom asserts that communication is more than words. The non-verbal elements include (but are not limited to) gestures, facial expressions etc. Since words can either be used or not used, verbal communication has a binary (on/off) aspect to it. Watzlawick refers to verbal communication as digital communication (and yes, it seems strange to use the term digital in this context, but the book was published in 1967). In contrast, non-verbal communication is more subtle: a frown may convey perplexity or anger in varying intensities, depending on the other expressions and/or gestures that accompany it. Watzlawick terms such communication analogic.
In the context of our example, the digital aspects of the communication refer to the words spoken by the team member and the project manager whereas the analogic aspects refer to all other non-verbal cues – including emotions – that the participants choose to display. The important point to note is that digital communication has a highly developed syntax but lacks the semantics to express relationships, whereas analogic communication has the semantics to express relationships well, but lacks the syntax. In lay terms, words cannot express how I feel; my gestures and facial expressions can, but they can also be easily misunderstood. This observation accounts for many of the misunderstandings that occur in project and other organizational dialogues.
All communicational interchanges are either symmetrical or complementary, depending on the relationship between those involved: Symmetry and complementarity refer to whether the relationship is based on equality of the participants or on differences between them. For example, the relationship illustrated in Figure 1 is symmetrical – the PM and the team member communicate in a manner that suggests they see each other as peers. On the other hand, if the team member had taken a submissive attitude towards the PM, the exchange would have been complementary. Seen in this light, symmetrical interactions are based on the minimisation of differences between the two communicators whereas complementary relationships are based on the maximisation of differences. It should be noted that neither type of interaction is better than the other – they are simply two different ways in which communication-based interactions occur.
Communication can be improved by strengthening relationships
In the interactional approach to communication, the relationship between participants is considered to be more important than the content of their communication. Unfortunately, the relational aspects are the hardest to convey because of the ambiguity in sequence punctuation and the semantics of analogic communication. These ambiguities are the cause of many vicious cycles of communication – an example being the case illustrated in Figure 1.
Indeed, the interactional view questions the whole notion of an objective reality of a particular communicative situation. In the end, it matters little as to whose view is the “right” one. What’s more important is the recognition that a person’s perception of a particular communicative situation depends critically on how he or she punctuates it. As Watzlawick puts it:
In the communicational perspective, the question whether there is such a thing as an objective reality of which some people are more clearly aware than others is of relatively little importance compared to the significance of different views of reality due to different punctuations.
Watzlawick and his co-authors also point out that it is impossible for participants to be fully aware of the relational aspects of their communication (such as punctuation) because it is not possible to analyse a relationship objectively when one is living it. As they put it:
… awareness of how one punctuates is extremely difficult owing to another basic property of communication. Like all other complex conceptual systems which attempt to make assertions about themselves (e.g. language, logic, mathematics) communication typically encounters the paradoxes of self-reflexivity when trying to apply itself to itself. What this amounts to is that the patterns of communication existing between oneself and others cannot be fully understood, for it is simply impossible to be both involved in a relationship (which is indispensable in order to be related) and at the same time stand outside it as a detached, uninvolved observer…
The distinction between content and relationship is an important one. Among other things, it explains why those with opposing viewpoints fail to reach a genuine shared understanding even when each understands the content of the other’s position. The difficulty arises because they fail to relate to each other in an empathetic way. Techniques such as dialogue mapping help address relational issues by objectifying issues, ideas and arguments. Such approaches can take some of the emotion out of the debate and thus help participants gain a better appreciation of opposing viewpoints.
To sum up
The interactional view of communication tells us that relationships are central to successful communication. Although traditional project communication tools and techniques can help with the semantic and syntactical elements of communication, the relational aspects can only be addressed by strengthening relationships between stakeholders and using techniques that foster open dialogue.
Once implemented, IT systems can evolve in ways that can be quite different from their original intent and design. One of the reasons for this is that enterprise systems are based on simplistic models that do not capture the complexities of real organisations. The gap between systems and reality is the subject of a fascinating book by Claudio Ciborra entitled, The Labyrinths of Information. Among other things, the book presents an alternative viewpoint on systems development, one that focuses on reasons for divergence between design and reality. It also discusses other aspects of system development that tend to be obscured by mainstream development methodologies and processes. This post is a summary and review of the book.
The standard treatment of systems development in corporate environments is based on the principles of scientific management. Yet, as Ciborra tells us,
…science-based, method-driven approaches can be misleading. Contrary to their promise, they are deceivingly abstract and removed from practice. Everyone can experience this when he or she moves from the models to the implementation phase. The words of caution and pleas for ‘change management’ interventions that usually accompany the sophisticated methods and polished models keep reminding us of such an implementation gap. However, they offer no valid clue on how to overcome it…
Just to be clear, Ciborra offers no definitive solutions either. However, he offers “clues on how to bridge the gap” by looking into some of the informal techniques and approaches that people “on the ground” – users, designers, developers or managers – use to work and cope with technology. He is not concerned with techniques or methodologies per se, but rather with how people deal with the messy day-to-day business of working with technology in organisations.
The book is organised as a collection of essays based on Ciborra’s research papers spanning a couple of decades – from the mid 1980s until a few years prior to his death in 2005. I discuss each of the chapters in order below, providing links to the original papers where I could find them.
The divergence between models and reality
Most of the tools and techniques used in systems evaluation, design and development are based on simplified models of organisational reality. However, organisations do not function according to organograms, data flow diagrams or entity-relationship models. Models used by systems professionals abstract away much of the messiness of real life. The methods that come out of such simplifications cannot deal with the complexities of a real organisation. As Ciborra states, “…concern with method is one of the key aspects of our discipline and possibly the true origin of its crisis…”
Indeed, as any systems professional will attest to, unforeseen occurrences and situations inevitably encountered in real life are what cause the biggest headaches in the implementation and acceptance of systems. Those on the ground deal with such exceptions by creative but essentially ad-hoc approaches. Much of the book is a case-study based discussion of such improvised approaches to systems development.
Making (do) with what is at hand
Ciborra argues that successful systems are invariably imitated by competitors, so any competitive advantage offered by such systems is, at best, limited. A similar argument holds for standards and best practices – they promote uniformity rather than distinction. Given this, organisations should strive towards practices that cannot be copied. They should work towards inimitability.
In art, bricolage refers to the process of creating a work from whatever is at hand. Among other things it involves tinkering, improvising and generally making do with what is available. Ciborra argues that many textbook cases of strategic systems in fact evolved through bricolage, tinkering and serendipity, rather than through planning. Among the cases he discusses are the Sabre reservation system developed by American Airlines, and the development of email (as part of the ARPANET project). Moreover, although the Sabre system afforded American Airlines a competitive advantage for a while, it soon became part of the travel reservation infrastructure, thereby becoming an operational necessity rather than an advantage. This is much the same point that Nicholas Carr made in his article, IT Doesn’t Matter.
The question that you may be asking at this point is: “All this is well and good, but does Ciborra have any solutions to offer?” Well, that’s the problem: Ciborra tells us that bricolage and improvisation ought to be encouraged, but offers little advice on how this can be done. For example, he tells us to “Value bricolage strategically”, “Design tinkering” and “Establish systematic serendipity” – sounds great in theory, but what does it really mean? It is platitudinous advice that is hard to action.
Nevertheless his main point is a good one: managers should encourage informal, creative practices instead of clamping down on them. This advice has not generally been heeded. Indeed, corporate IS practices have gone the other way, down the road of standardisation and best practices. Ciborra tells us in no uncertain terms that this is not a good thing.
The enframing effect of technology
This is, in my opinion, the most difficult chapter in the book. It is based on a paper by Ciborra and Hanseth entitled, From tool to Gestell: Agendas for managing the information infrastructure. In German the term Gestell means shelf or rack. The philosopher Martin Heidegger used the term to describe the way in which technology frames the way we view (or “organise”) the world. Ciborra highlights the way in which existing infrastructure affects the success of business processes and practices, and emphasises that technology-based enterprise initiatives are doomed to fail unless they pay due attention to:
- Existing or installed infrastructure.
- Local needs and concerns.
Instead of attempting to oust old technology, system designers and implementers need to co-opt or cultivate the installed base (and the user community) if they are to succeed at all. In this sense installed infrastructure is an actor (like an individual) with its own interest and agenda. It provides a context for the way people think and also influences future development.
The notion of Gestell thus reminds us of how existing technology influences and limits the way we think. To get around this, Ciborra suggests that we should:
- Be aware of technology and standards, but not be captive to them.
- Think imaginatively, but pay attention to the installed base (existing platforms and users).
- Remember that top down technology initiatives rarely succeed.
The drifting of information infrastructure
Ciborra uses Donald Schoen’s metaphor of the high ground and the swamp to highlight the gap between the theory and practice of information systems (see this paper by Schoen, for a discussion of the metaphor). The high ground is the executive management view, where methodologies and management theories hold sway, while the swamp is the coalface where the messy, day-to-day reality of organisational work unfolds. In the swamp of day-to-day work, people tend to use available technology in any way possible to solve real (and messy) problems. So, although a particular technology may have an espoused or intended aim, it may well be used in ways that are completely unforeseen by its designers.
The central point of this essay is that the full implications of a technology are often realised only after it has been implemented and used for a while. In Ciborra’s words, technology drifts – that is, it is put to uses that cannot be foreseen. Moreover, it may never be used in the ways intended by its designers. Although Ciborra lists several cases that demonstrate this point, in my opinion, his blanket claim that technology drifts is a bit over the top. Sure, in some cases, technologies may be used in unforeseen ways, but by and large they are used in ways that are intended and planned.
The organisation as a host
Reactions to a new technology in an organisation are generally mixed – some people may view the technology with some trepidation (because of the changes to their work routines, for instance) while others may welcome it (because of promised efficiencies, say). In metaphorical terms, the new technology is a “guest,” whose “desires” and “intentions” aren’t fully known. Seen in the light of this metaphor, the notion of hospitality makes sense: as Ciborra puts it, the organisation hosts the technology.
To be sure, the idea of hospitality applying to objects such as information systems will probably cause a few raised eyebrows. However it isn’t as “out there” as it sounds. Consider, for example, the following implications of the metaphor:
- Interaction between the host and guest can change both parties.
- If the technology is perceived as unfriendly, it will be rejected (or even ejected!).
- System development and operations methodologies are akin to cultural rituals (it is how we “deal with” the guest).
- Technologies, like guests, stay for a while but not forever.
Ciborra’s intent in this and most of the other essays is to make us ponder over the way we design, develop and run systems, and possibly view what we do in a different light.
The organisation as a platform
In this essay Ciborra looks at the way in which successful technology organisations adapt and adjust to rapidly changing environments. It is based on his paper entitled, The Platform Organization: Recombining Strategies, Structures and Surprises. Using a case study, he makes the point that the only way organisations can respond to rapidly evolving technology markets is to be open to recombining available resources in flexible ways: it is impossible to start from scratch; one has to work with what is at hand, using it in creative ways.
Another point he makes is that an organisation’s structure (hierarchy, reporting lines and so on) at any particular time is less important than how it got there, where it is headed and what obstacles lie in the way. To quote from the book:
…analysing and evaluating the platform organisation at a fixed point in time is of little use: it may look like a matrix, or a functional hierarchy, and one may wonder how well its particular form fits the market for that period and what its level of efficiency really is. What should be appreciated, instead, is the whole sequence of forms adopted over time, and the speed and friction in shifting from one to the other.
However, the identification of such a trajectory can be misleading – despite after-the-fact rationalisations, management in such situations is often based on improvised actions rather than carefully laid plans. Although this may not always be so, I suspect it is more common than managers would care to admit.
Improvisation and mood
By now the reader would have noted that Ciborra’s focus is squarely on the unexpected occurrences in day-to-day organisational work. So it will come as no surprise that the last essay in the book deals with improvisation.
Ciborra argues that most studies on improvisation have a cognitive focus – that is, they deal with how people respond to emerging situations by “quick thinking.” In his opinion, such studies ignore the human aspect of improvised actions, the emotions and moods evoked by situations that call for improvisation. These, he suggests, can be the difference between improvised actions and panic.
As he puts it, people are not cognitive robots – their moods will determine whether they respond to a situation with indifference or with interest and engagement. This human dimension of improvisation, though elusive, is the key to understanding improvisation (and indeed, any creative or innovative action).
He also discusses the relationship between improvisation and time – something I have discussed at length in an earlier post, so I’ll say no more about it here.
A methodological postscript
In a postscript to the book, Ciborra discusses his research philosophy – the thread that links the essays in the book. His basic contention is that methodologies and organisational models are based on after-the-fact rationalisations of real phenomena. More often than not such methods and models are idealisations that omit the messiness of real life organisations. They are abstractions, not reality. As such they can guide us, but we should be ever open to the surprises that real life may afford us.
The essential message that Ciborra conveys is a straightforward one – that the real world is a messy place and that the simplistic models on which systems are based cannot deal with this messiness in full. Despite our best efforts there will always be stuff that “leaks out” of our plans and models. Ciborra’s book celebrates this messiness and reminds us that people matter more than systems or processes.
I’ll begin with an example. Assume you’re having a dishwasher installed in your kitchen. This (simple?) task requires the services of a plumber and an electrician, and both of them need to be present to complete the job. You’ve asked them to come in at 7:30 am. Going from previous experience, these guys are punctual 50% of the time. What’s the probability that work will begin at 7:30 am?
At first sight, it seems there’s a 50% chance of starting on time. However, this is incorrect – the chance of starting on time is actually 25%, the product of the individual probabilities for each of the tradesmen. This simple example illustrates the central theme of a book by Sam Savage entitled, The Flaw of Averages: Why We Underestimate Risk in the Face of Uncertainty. This post is a detailed review of the book.
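The arithmetic can be checked with a quick simulation. The sketch below (mine, not the book’s) estimates the probability that both tradesmen turn up on time, assuming the two are independent:

```python
import random

def prob_both_on_time(p_plumber=0.5, p_electrician=0.5, trials=100_000):
    """Estimate by simulation the chance that two independent
    tradespeople are both punctual on the same morning."""
    hits = sum(
        1
        for _ in range(trials)
        if random.random() < p_plumber and random.random() < p_electrician
    )
    return hits / trials

random.seed(0)  # for reproducibility
# Analytically: 0.5 * 0.5 = 0.25; the simulation should land close to that.
estimate = prob_both_on_time()
```

The same logic generalises: for n independent tradespeople the on-time probability is the product of the n individual probabilities, which shrinks rapidly as n grows.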
The key message that Savage conveys is that uncertain quantities cannot be represented by single numbers; rather, they are ranges of numbers, each value with a different probability of occurrence. Hence such quantities cannot be manipulated using standard arithmetic operations. The example in the previous paragraph illustrates this point. This is well known to those who work with uncertain numbers (actuaries, for instance), but is not so well understood by business managers and decision makers. Hence the executive who asks his long-suffering subordinate for a projected sales figure for next month, and then treats the quoted number as 100% certain. Sadly, such stories are the norm rather than the exception, so there is clearly a need for a better understanding of how uncertain quantities should be interpreted. The main aim of the book is to help those with little or no statistical training achieve that understanding.
Developing an intuition for uncertainty
Early in the book, Savage presents five tools that can be used to develop a feel for uncertainty. He refers to these tools as mindles – or mind handles. His five mindles for uncertainty are:
- Risk is in the eye of the beholder, uncertainty isn’t. Basically this implies that uncertainty does not equate to risk. An uncertain event is a risk only if there is a potential loss or gain involved. See my review of Douglas Hubbard’s book on the failure of risk management for more on risk vs. uncertainty.
- An uncertain quantity is a shape (or a distribution of numbers) rather than a single number. The broadness of the shape is a measure of the degree of uncertainty. See my post on the inherent uncertainty of project task estimates for an intuitive discussion of how a task estimate is a shape rather than a number.
- A combination of several uncertain numbers is also a shape, but the combined shape is very different from those of the individual uncertainties. Specifically, if the uncertain quantities are independent, the combined shape can be narrower (i.e. less uncertain) than that of the individual shapes. This provides the justification for portfolio diversification, which tells us not to put all our money on one horse, or eggs in one basket etc. See my introductory post on Monte Carlo simulations to see an example of how multiple uncertain quantities can combine in different ways.
- If the individual uncertain quantities (discussed in the previous point) aren’t independent, the overall uncertainty can increase or decrease depending on whether the quantities are positively or negatively related. The nature of the relationship (positive or negative) can be determined from a scatter plot of the quantities. See my post on simulation of correlated project tasks for examples of scatter plots. The post also discusses how positive relationships (or correlations) can increase uncertainty.
- Plans based on average numbers are incorrect on average. Using average numbers in plans usually entails manipulating them algebraically and/or plugging them into functions. Savage explains how the form of the function can lead to an overestimation or underestimation of the planned value. Although this sounds somewhat abstruse, the basic idea is simple: manipulating an average number using mathematical operations will amplify the error caused by the flaw of averages.
Savage explains the above concepts using simple arithmetic supplemented with examples drawn from a range of real-life business problems.
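The last mindle is easy to demonstrate numerically. The sketch below (my own example with made-up figures, not one of Savage’s) plans profit around an average demand and compares that with the average of the actual profits; because a capacity constraint makes the profit function nonlinear, the plan based on the average is wrong on average:

```python
import random

random.seed(1)
PRICE, CAPACITY = 10, 100

def profit(demand):
    # You can only sell what you can supply: a kinked (nonlinear) function.
    return PRICE * min(demand, CAPACITY)

demands = [random.gauss(100, 30) for _ in range(100_000)]
avg_demand = sum(demands) / len(demands)

plan_profit = profit(avg_demand)  # plug the average demand into the plan
avg_profit = sum(profit(d) for d in demands) / len(demands)  # average the outcomes

# avg_profit < plan_profit: shortfalls in demand hurt, but surpluses
# can't help beyond capacity, so the plan overestimates average profit.
```

This is essentially Jensen’s inequality at work: for a concave function, the function of the average exceeds the average of the function.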
The two forms of the flaw of averages
The book makes a distinction between two forms of the flaw of averages. In its first avatar, the flaw states that the combined average of two uncertain quantities equals the sum of their individual averages, but the shape of the combined uncertainty can be very different from the sum of the individual shapes (Recall that an uncertain number is a shape, but its average is a number). Savage calls this the weak form of the flaw of averages. The weak form applies when one deals with uncertain quantities directly. An example of this is when one adds up probabilistic estimates for two independent project tasks with no lead or lag between them. In this case the average completion time is the sum of the average completion times for the individual tasks, but the shape of the distribution of the combined tasks does not resemble the shape of the individual distributions. The fact that the shape is different is a consequence of the fact that probabilities cannot be “added up” like simple numbers. See the first example in my post on Monte Carlo simulation of project tasks for an illustration of this point.
In contrast, when one deals with functions of uncertain quantities, the combined average of the functions does not equal the sum of the individual averages. This happens because functions “weight” random variables in a non-uniform manner, thereby amplifying certain values of the variable. An example of this is where we have two sequential tasks with an earliest possible start time for the second. The earliest possible start time for the second task introduces a nonlinearity in cases where the first task finishes early (essentially because there is a lag between the finish of the first task and the start of the second in this situation). The constraint causes the average of the combined tasks to be greater than the sum of the individual averages. Savage calls this the strong form of the flaw of averages. It applies whenever one deals with nonlinear functions of uncertain variables. See the second example in my post on Monte Carlo simulation of multiple project tasks for an illustration of this point.
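The strong form can be seen in a small simulation. The sketch below (a hypothetical example with invented durations, not one of Savage’s cases) models two sequential tasks where the second cannot start before a fixed time; the true average finish exceeds the finish computed from average durations:

```python
import random

random.seed(42)
TRIALS = 100_000
EARLIEST_START_B = 5.0  # task B cannot start before t = 5

def finish_time(duration_a, duration_b):
    # B starts at the later of A's finish and its earliest allowed start.
    start_b = max(duration_a, EARLIEST_START_B)
    return start_b + duration_b

durations_a = [random.uniform(3, 7) for _ in range(TRIALS)]
durations_b = [random.uniform(3, 7) for _ in range(TRIALS)]

avg_a = sum(durations_a) / TRIALS  # close to 5
avg_b = sum(durations_b) / TRIALS  # close to 5
naive_finish = max(avg_a, EARLIEST_START_B) + avg_b  # plan from averages: ~10

true_avg_finish = sum(
    finish_time(a, b) for a, b in zip(durations_a, durations_b)
) / TRIALS  # larger: early finishes of A are wasted, late ones are not
```

The `max()` is the nonlinearity: it clips the benefit of A finishing early while passing through the full cost of A finishing late, so the average of the function exceeds the function of the averages.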
Much of the book presents real-life illustrations of the two forms of the flaw in risk assessment, drawn from areas ranging from finance to the film industry and from petroleum to pharmaceutical supply chains. Savage also covers the average-based abuse of statistics in discussions of topical “hot-button” issues such as climate change and health care.
A layperson-friendly feature of the book is that it explains statistical terms in plain English. As an example, Savage spends an entire chapter demystifying the term correlation using scatter plots. Another term that he explains is the Central Limit Theorem (CLT), which states that the sum of a large number of independent random variables tends towards the Normal (or bell-shaped) distribution. A consequence of the CLT is that one can reduce investment risk by diversifying one’s investments – i.e. making several (small) independent investments rather than a single (large) one – this is essentially mindle #3 discussed earlier.
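The diversification consequence can be illustrated numerically. In the sketch below (my own invented figures), the same total stake is placed either on one risky asset or spread across fifty independent ones with the same risk profile; the average outcome is the same, but the spread of outcomes shrinks dramatically:

```python
import random
import statistics

random.seed(7)
TRIALS = 20_000

def one_large_bet():
    # Put $1000 on a single risky asset returning -20% or +30%.
    return 1000 * random.choice([0.8, 1.3])

def many_small_bets(n=50):
    # Spread $1000 across n independent assets with the same risk profile.
    return sum((1000 / n) * random.choice([0.8, 1.3]) for _ in range(n))

large = [one_large_bet() for _ in range(TRIALS)]
small = [many_small_bets() for _ in range(TRIALS)]

# Same average outcome, but diversification shrinks the spread.
spread_large = statistics.stdev(large)
spread_small = statistics.stdev(small)
```

For n independent bets the spread falls roughly as 1/√n, which is the CLT-based rationale for not putting all one’s eggs in one basket.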
Towards the middle of the book, Savage makes a foray into decision theory, focusing on the concept of the value of information. Since decisions are (or should be) made on the basis of information, one needs to gather pertinent information prior to making a decision. Now, information gathering costs money (and time, which translates to money). This raises the question of how much one should spend on collecting information relevant to a particular decision. It turns out that in many cases one can use decision theory to put a dollar value on a particular piece of information. Surprisingly, organisations often overspend on gathering irrelevant information. Savage spends a few chapters discussing how one can compute the value of information using simple techniques of decision theory. As interesting as this section is, however, I think it is somewhat disconnected from the rest of the book.
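The idea can be made concrete with a textbook-style calculation of the expected value of perfect information (EVPI). The payoffs and probability below are invented for illustration and are not taken from the book:

```python
# Two actions, two demand states, hypothetical payoffs ($k).
P_HIGH = 0.4
payoff = {
    "launch":      {"high": 100, "low": -60},
    "dont_launch": {"high": 0,   "low": 0},
}

def expected(action):
    return P_HIGH * payoff[action]["high"] + (1 - P_HIGH) * payoff[action]["low"]

# Without further information: pick the action with the best expected payoff.
ev_without_info = max(expected(a) for a in payoff)

# With perfect information: learn the state first, then pick the best action.
ev_with_info = (
    P_HIGH * max(payoff[a]["high"] for a in payoff)
    + (1 - P_HIGH) * max(payoff[a]["low"] for a in payoff)
)

# EVPI is the most any information about demand could possibly be worth.
evpi = ev_with_info - ev_without_info
```

Real information is imperfect, so its value is lower still; the EVPI is a ceiling, which is why paying more than it for market research is overspending by construction.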
Curing the flaw: SIPs, SLURPS and Probability Management
The last part of the book is dedicated to outlining a solution (or as Savage calls it, a cure) to average-based – or flawed – statistical thinking. The central idea is to use pre-generated libraries of simulation trials for variables of interest. Savage calls such a packaged set of simulation trials a Stochastic Information Packet (SIP). Here’s an example of how it might work in practice:
Most business organisations worry about next year’s sales. Different divisions in the organisation might forecast sales using different techniques. Further, they may use these forecasts as the basis for other calculations (such as profit and expenses, for example). The forecasted numbers cannot be compared with each other because each calculation is based on different simulations or, worse, different probability distributions. The upshot is that forecasted sales results can’t be combined or even compared. The problem can be avoided if everyone in the organisation uses the same SIP for forecasted sales: the results of calculations can then be compared, and even combined, because they are based on the same simulation.
Calculations that are based on the same SIP (or set of SIPs) form a set of simulations that can be combined and manipulated using arithmetic operations. Savage calls such sets of simulations Scenario Library Units with Relationships Preserved (or SLURPS). The name reflects the fact that each of the calculations is based on the same set of sales scenarios (or results of simulation trials). Regarding the terminology: I’m not a fan of laboured acronyms, but concede that they can serve as good mnemonics.
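In code, a SIP is just a shared array of pre-generated trials that every downstream model indexes in the same order. The sketch below (a toy example of mine, not from the book) derives costs and profits trial by trial from one shared sales SIP, so the results stay comparable and combinable:

```python
import random

random.seed(0)
TRIALS = 10_000

# A SIP: one shared, pre-generated library of trials for next year's sales.
sales_sip = [random.gauss(1_000_000, 150_000) for _ in range(TRIALS)]

# Two downstream models consume the SAME SIP trial by trial:
cost_sip = [400_000 + 0.5 * s for s in sales_sip]            # a cost model
profit_sip = [s - c for s, c in zip(sales_sip, cost_sip)]    # profit per trial

# Because every derived quantity is indexed by the same trials, the
# relationship between sales, cost and profit is preserved (a SLURP),
# and summary statistics can be computed from any of them coherently.
avg_profit = sum(profit_sip) / TRIALS
```

Had each division generated its own independent sales trials, `profit_sip[i]` would no longer correspond to `cost_sip[i]`, and combining the results would reintroduce the flaw of averages.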
The proposed approach ensures that the results of the combined calculations avoid the flaw of averages and exhibit the correct statistical behaviour. However, it assumes that there is an organisation-wide authority responsible for generating and maintaining appropriate SIPs. This authority – the probability manager – will be responsible for a “database” of SIPs that covers all uncertain quantities of interest to the business, and make these available to everyone in the organisation who needs to use them. To quote from the book, probability management involves:
…a data management system in which the entities being managed are not numbers, but uncertainties, that is, probability distributions. The central database is a Scenario Library containing thousands of potential future values of uncertain business parameters. The library exchanges information with desktop distribution processors that do for probability distributions what word processors did for words and what spreadsheets did for numbers.
Savage sees probability management as a key step towards managing uncertainty and risk in a coherent manner across organisations. He mentions that some organizations (Shell and Merck, for instance) have already started down this route. The book can thus also be seen as a manifesto for the new discipline of probability management.
I have come across the flaw of averages in various walks of organizational life, ranging from project scheduling to operational risk analysis. Most often, the folks responsible for analysing uncertainty are aware of the flaw and have the requisite knowledge of statistics to deal with it. However, such analyses can be hard to explain to those who lack this knowledge – hence the managers who demand a single number. Yes, such attitudes betray a lack of understanding of what uncertain numbers are and how they can be combined, but that’s the way it is in most organizations. The book is directed largely at that audience.
To sum up: the book is an entertaining and informative read on some common misunderstandings of statistics. Along the way the author translates many statistical principles and terms from “jargonese” to plain English. The book deserves to be read widely, especially by those who need it the most: managers and other decision-makers who need to understand the arithmetic of uncertainty.
Any future-directed activity has a degree of uncertainty, and uncertainty implies risk. Bad stuff happens – anticipated events don’t unfold as planned and unanticipated events occur. The main function of risk management is to deal with this negative aspect of uncertainty. The events of the last few years suggest that risk management as practiced in many organisations isn’t working. A book by Douglas Hubbard entitled, The Failure of Risk Management – Why it’s Broken and How to Fix It, discusses why many commonly used risk management practices are flawed and what needs to be done to fix them. This post is a summary and review of the book.
Interestingly, Hubbard began writing the book well before the financial crisis of 2008 began to unfold. So although he discusses matters pertaining to risk management in finance, the book has a much broader scope. For instance, it will be of interest to project and program/portfolio management professionals because many of the flawed risk management practices that Hubbard mentions are often used in project risk management.
The book is divided into three parts: the first part introduces the crisis in risk management; the second deals with why some popular risk management practices are flawed; the third discusses what needs to be done to fix these. My review covers the main points of each section in roughly the same order as they appear in the book.
The crisis in risk management
There are several risk management methodologies and techniques in use; a quick search will reveal some of them. Hubbard begins his book by asking the following simple questions about these:
- Do these risk management methods work?
- Would any organisation that uses these techniques know if they didn’t work?
- What would be the consequences if they didn’t work?
His contention is that for most organisations the answers to the first two questions are negative. To answer the third question, he gives the example of the crash of United Flight 232 in 1989. The crash was attributed to the simultaneous failure of three independent (and redundant) hydraulic systems. This happened because the systems were located at the rear of the plane and debris from a damaged turbine cut the lines to all of them. This is an example of common mode failure – a single event causing multiple systems to fail. The probability of such an event occurring had been estimated at less than one in a billion. However, the reason the turbine broke up was that it hadn’t been inspected properly (i.e. human error). The probability estimate hadn’t considered human oversight, which is far more likely than one in a billion. Hubbard uses this example to make the point that a weak risk management methodology can have huge consequences.
Following a very brief history of risk management from historical times to the present, Hubbard presents a list of common methods of risk management. These are:
- Expert intuition – essentially based on “gut feeling”
- Expert audit – based on expert intuition of independent consultants. Typically involves the development of checklists and also uses stratification methods (see next point)
- Simple stratification methods – risk matrices are the canonical example of stratification methods.
- Weighted scores – assigned scores for different criteria (scores usually assigned by expert intuition), followed by weighting based on perceived importance of each criterion.
- Non-probabilistic financial analysis – techniques such as computing the financial consequences of best and worst case scenarios
- Calculus of preferences – structured decision analysis techniques such as multi-attribute utility theory and analytic hierarchy process. These techniques are based on expert judgements. However, in cases where multiple judgements are involved these techniques ensure that the judgements are logically consistent (i.e. do not contradict the principles of logic).
- Probabilistic models – involves building probabilistic models of risk events. Probabilities can be based on historical data, empirical observation or even intuition. The book essentially builds a case for evaluating risks using probabilistic models, and provides advice on how these should be built
The book also discusses the state of risk management practice (at the end of 2008) as assessed by surveys carried out by The Economist, Protiviti and Aon Corporation. Hubbard notes that the surveys are based largely on self-assessments of risk management effectiveness. One cannot place much confidence in these because self-assessments of risk are subject to well known psychological effects such as cognitive biases (tendencies to base judgements on flawed perceptions) and the Dunning-Kruger effect (overconfidence in one’s abilities). The acid test for any assessment is whether or not it uses sound quantitative measures. Many of the firms surveyed fail on this count: they do not quantify risks as well as they claim they do. Assigning weighted scores to qualitative judgements does not count as a sound quantitative technique – more on this later.
So, what are some good ways of measuring the effectiveness of risk management? Hubbard lists the following:
- Statistics based on large samples – the use of this depends on the availability of historical or other data that is similar to the situation at hand.
- Direct evidence – this is where the risk management technique actually finds some problem that would not have been found otherwise. For example, an audit that unearths dubious financial practices
- Component testing – even if one isn’t able to test the method end-to-end, it may be possible to test specific components that make up the method. For example, if the method uses computer simulations, it may be possible to validate the simulations by applying them to known situations.
- Check of completeness – organisations need to ensure that their risk management methods cover the entire spectrum of risks, else there’s a danger that mitigating one risk may increase the probability of another. Further, as Hubbard states, “A risk that’s not even on the radar cannot be managed at all.” As far as completeness is concerned, there are four perspectives that need to be taken into account. These are:
- Internal completeness – covering all parts of the organisation
- External completeness – covering all external entities that the organisation interacts with.
- Historical completeness – this involves covering worst case scenarios and historical data.
- Combinatorial completeness – this involves considering combinations of events that may occur together; those that may lead to common-mode failure discussed earlier.
Finally, Hubbard closes the first section with the observation that it is better not to use any formal methodology than to use one that is flawed. Why? Because a flawed methodology can lead to an incorrect decision being made with high confidence.
Why it’s broken
Hubbard begins this section by identifying the four major players in the risk management game. These are:
- Actuaries: These are perhaps the first modern professional risk managers. They use quantitative methods to manage risks in the insurance and pension industry. Although the methods actuaries use are generally sound, the profession is slow to pick up new techniques. Further, many investment decisions made by insurance companies do not come under the purview of actuaries. So actuaries typically do not cover the entire spectrum of organizational risks.
- Physicists and mathematicians: Many rigorous risk management techniques came out of statistical research done during the second world war. Hubbard therefore calls this group War Quants. One of the notable techniques to come out of this effort is the Monte Carlo method – originally proposed by Nicholas Metropolis, John von Neumann and Stanislaw Ulam as a technique to calculate the averaged trajectories of neutrons in fissile material (see this article by Metropolis for a first-person account of how the method was developed). Hubbard believes that Monte Carlo simulations offer a sound, general technique for quantitative risk analysis. Consequently he spends a fair few pages discussing these methods, albeit at a very basic level. More about this later.
- Economists: Risk analysts in investment firms often use quantitative techniques from economics. Popular techniques include modern portfolio theory and models from options theory (such as the Black-Scholes model). The problem is that these models are often based on questionable assumptions. For example, the Black-Scholes model assumes that the rate of return on a stock is normally distributed (i.e. its value is lognormally distributed) – an assumption that’s demonstrably incorrect, as witnessed by the events of the last few years. Another way in which economics plays a role in risk management is through behavioural studies, in particular the recognition that decisions regarding future events (be they risks or stock prices) are subject to cognitive biases. Hubbard suggests that the role of cognitive biases in risk management has been consistently overlooked. See my post entitled Cognitive biases as meta-risks and its follow-up for more on this point.
- Management consultants: In Hubbard’s view, management consultants and standards institutes are largely responsible for many of the ad-hoc approaches to risk management. A particular favourite of these folks is the ad-hoc scoring method, which involves ordering risks based on subjective criteria. The scores assigned to risks are thus subject to cognitive bias. Even worse, some of the tools used in scoring can end up ordering risks incorrectly. Bottom line: many of the risk analysis techniques used by consultants and standards institutes have no rigorous justification.
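To make the Monte Carlo method mentioned above concrete, here is a minimal sketch in Python. It is my own illustration, not an example from the book: the cost components, their distributions and the 10% overrun risk are all invented for the purpose. The idea is simply that repeatedly sampling each uncertain input and summing yields a distribution of outcomes rather than a single-point estimate.

```python
import random

def simulate_project_cost(n_trials=100_000, seed=42):
    """Monte Carlo estimate of total project cost from uncertain components.

    Each component cost is drawn from a hypothetical distribution; repeating
    the draw many times yields a distribution of total cost, from which we
    can read off summary statistics such as the mean and 90th percentile.
    """
    rng = random.Random(seed)
    totals = []
    for _ in range(n_trials):
        labour = rng.normalvariate(100.0, 15.0)  # mean 100, sd 15 (assumed)
        hardware = rng.uniform(20.0, 40.0)       # equally likely in range (assumed)
        # A discrete risk event: 10% chance of a 50-unit overrun (assumed)
        overrun = 50.0 if rng.random() < 0.10 else 0.0
        totals.append(labour + hardware + overrun)
    totals.sort()
    return {
        "mean": sum(totals) / n_trials,
        "p90": totals[int(0.90 * n_trials)],  # cost exceeded only 10% of the time
    }

print(simulate_project_cost())
```

Note how the output is a range of outcomes: the 90th-percentile cost is well above the mean because of the discrete overrun risk, which is exactly the kind of information a single-point estimate hides.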
Following the discussion of the main players in the risk arena, Hubbard discusses the confusion associated with the definition of risk. There is a plethora of definitions of risk, most of which originated in academia. Hubbard shows how some of these contradict each other while others are downright non-intuitive and incorrect. In doing so, he clarifies some of the academic and professional terminology around risk. As an example, he takes exception to the notion of risk as a “good thing” – as in the PMI definition, which views risk as “an uncertain event or condition that, if it occurs, has a positive or negative effect on a project objective.” This definition contradicts common (dictionary) usage of the term risk (which generally includes only bad stuff). Hubbard’s opinion on this may raise a few eyebrows (and hackles!) in project management circles, but I reckon he has a point.
In my opinion, the most important sections of the book are chapters 6 and 7, where Hubbard discusses why “expert knowledge and opinions” (favoured by standards and methodologies) are flawed and why a very popular scoring method (risk matrices) is “worse than useless.” See my posts on the limitations of scoring techniques and Cox’s risk matrix theorem for detailed discussions of these points.
A major problem with expert estimates is overconfidence. To overcome this, Hubbard advocates using calibrated probability assessments to quantify analysts’ abilities to make estimates. Calibration assessments involve getting analysts to answer trivia questions and eliciting confidence intervals for each answer. The confidence intervals are then checked against the proportion of correct answers. Essentially, this assesses experts’ abilities to estimate by tracking how often they are right. It has been found that people can improve their ability to make subjective estimates through calibration training – i.e. repeated calibration testing followed by feedback. See this site for more on probability calibration.
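The calibration check itself is simple enough to sketch in a few lines of Python. The intervals and "true answers" below are hypothetical stand-ins for trivia questions; the point is only to show how overconfidence shows up as a hit rate well below the stated confidence level:

```python
def calibration_score(intervals, true_values):
    """Fraction of stated 90% confidence intervals that contain the true value.

    A well-calibrated estimator should score close to 0.90; an overconfident
    one (whose intervals are too narrow) scores well below that.
    """
    hits = sum(lo <= truth <= hi
               for (lo, hi), truth in zip(intervals, true_values))
    return hits / len(true_values)

# Hypothetical trivia answers from an overconfident analyst's 90% intervals
intervals = [(1900, 1910), (300, 400), (5, 10), (1000, 2000), (50, 60)]
true_values = [1912, 350, 8, 1500, 75]

print(calibration_score(intervals, true_values))  # → 0.6, well short of 0.9
```

In calibration training, an analyst repeats this exercise with feedback until the hit rate converges toward the stated confidence level.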
Next Hubbard tackles several “red herring” arguments that are commonly offered as reasons not to manage risks using rigorous quantitative methods. Among these are arguments that quantitative risk analysis is impossible because:
- Unexpected events cannot be predicted.
- Risks cannot be measured accurately.
Hubbard states that the first objection is invalid because although some events (such as spectacular stockmarket crashes) may have been overlooked by models, it doesn’t prove that quantitative risk analysis as a whole is flawed. As he discusses later in the book, many models go wrong by assuming Gaussian probability distributions where fat-tailed ones would be more appropriate. Of course, given limited data it is difficult to figure out which distribution is the right one. So, although Hubbard’s argument is correct, it offers little comfort to the analyst who has to model events before they occur.
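How much does the Gaussian-vs-fat-tailed choice matter? The following sketch compares the chance of a ">4 sigma" move under a normal distribution and under a Student-t distribution with 3 degrees of freedom (my choice of stand-in fat-tailed distribution, not one the book prescribes), using only the standard library:

```python
import math
import random

def tail_prob(sampler, threshold=4.0, n=200_000, seed=1):
    """Monte Carlo estimate of P(|X| > threshold) for a given sampler."""
    rng = random.Random(seed)
    hits = sum(abs(sampler(rng)) > threshold for _ in range(n))
    return hits / n

def standard_normal(rng):
    return rng.gauss(0.0, 1.0)

def student_t(rng, df=3):
    # A t-variate built from a normal draw and a chi-square draw
    # (a chi-square with df degrees of freedom is Gamma(df/2, 2))
    z = rng.gauss(0.0, 1.0)
    chi2 = rng.gammavariate(df / 2.0, 2.0)
    return z / math.sqrt(chi2 / df)

p_normal = tail_prob(standard_normal)
p_fat = tail_prob(student_t)
print(p_normal, p_fat)  # the fat-tailed model makes extreme moves far more likely
```

Under the normal distribution a 4-sigma event is a once-in-decades rarity; under the t(3) distribution it happens a few percent of the time. A model that picks the wrong family will be wrong by orders of magnitude precisely where it matters most.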
As far as the second is concerned, Hubbard has written another book on how just about any business variable (even intangible ones) can be measured. The book makes a persuasive case that most quantities of interest can be measured, but there are difficulties. First, figuring out the factors that affect a variable is not a straightforward task. It depends, among other things, on the availability of reliable data, the analyst’s experience etc. Second, much depends on the judgement of the analyst, and such judgements are subject to bias. Although calibration may help reduce certain biases such as overconfidence, it is by no means a panacea for all biases. Third, risk-related measurements generally involve events that are yet to occur. Consequently, such measurements are based on incomplete information. To make progress one often has to make additional assumptions which may not be justifiable a priori.
Hubbard is a strong advocate for quantitative techniques such as Monte Carlo simulations in managing risks. However, he believes that they are often used incorrectly. Specifically:
- They are often used without empirical data or validation – i.e. their inputs and results are not tested through observation.
- They are generally used piecemeal – i.e. used in some parts of an organisation only, and often to manage low-level, operational risks.
- They frequently focus on variables that are not important (because these are easier to measure) rather than those that are important. Hubbard calls this perverse occurrence measurement inversion. He contends that analysts often exclude the most important variables because these are considered to be “too uncertain.”
- They use inappropriate probability distributions. The Normal distribution (or bell curve) is not always appropriate. For example, see my posts on the inherent uncertainty of project task estimates for an intuitive discussion of the form of the probability distribution for project task durations.
- They do not account for correlations between variables. Hubbard contends that many analysts simply ignore correlations between risk variables (i.e. they treat variables as independent when they actually aren’t). This almost always leads to an underestimation of risk because correlations can cause feedback effects and common mode failures.
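The last point in the list above can be demonstrated with a small simulation. This is my own illustration, not an example from the book: two unit-variance Gaussian loss drivers are summed, and the 95th-percentile total loss is compared with and without correlation between them.

```python
import random

def var95(losses):
    """95th-percentile of a list of simulated losses (a VaR-style measure)."""
    ordered = sorted(losses)
    return ordered[int(0.95 * len(ordered))]

def simulate_total_loss(rho, n=100_000, seed=7):
    """Sum of two unit-variance loss drivers with correlation rho."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n):
        z1 = rng.gauss(0, 1)
        z2 = rng.gauss(0, 1)
        x1 = z1
        x2 = rho * z1 + (1 - rho**2) ** 0.5 * z2  # correlated with x1
        totals.append(x1 + x2)
    return totals

independent = var95(simulate_total_loss(rho=0.0))
correlated = var95(simulate_total_loss(rho=0.8))
print(independent, correlated)  # the correlated case has a markedly larger tail loss
```

An analyst who models the two drivers as independent when they actually move together will understate the tail loss substantially, which is exactly Hubbard's point about correlations, feedback effects and common mode failures.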
Hubbard dismisses the argument that rigorous quantitative methods such as Monte Carlo are “too hard.” I agree, the principles behind Monte Carlo techniques aren’t hard to follow – and I take the opportunity to plug my article entitled An introduction to Monte Carlo simulations of project tasks 🙂 . As far as practice is concerned, there are several commercially available tools that automate much of the mathematical heavy-lifting. I won’t recommend any, but a search using the key phrase monte carlo simulation tool will reveal many.
How to Fix it
The last part of the book outlines Hubbard’s recommendations for improving the practice of risk management. Most of the material presented here draws on the previous section of the book. His main suggestions are to:
- Adopt the language, tools and philosophy of uncertain systems. To do this he recommends:
- Using calibrated probabilities to express uncertainties. Hubbard believes that any person who makes estimates that will be used in models should be calibrated. He offers some suggestions on how people can improve their ability to estimate through calibration – discussed earlier and on this web site.
- Employing quantitative modelling techniques to model risks. In particular, he advocates the use of Monte Carlo methods to model risks. He also provides a list of commercially available PC-based Monte Carlo tools. Hubbard makes the point that modelling forces analysts to decompose the systems of interest and understand the relationships between their components (see point 2 below).
- Developing an understanding of the basic rules of probability, including independent events, conditional probabilities and Bayes’ Theorem. He gives examples of situations in which these rules can help analysts extrapolate from the data they have.
To this, I would also add that it is important to understand the idea that an estimate isn’t a number, but a probability distribution – i.e. a range of numbers, each with a probability attached to it.
- Build, validate and test models using reality as the ultimate arbiter. Models should be built iteratively, testing each assumption against observation. Further, models need to incorporate mechanisms (i.e. how and why the observations are what they are), not just raw observations. This is often hard to do, but at the very least models should incorporate correlations between variables. Note that correlations are often (but not always!) indicative of an underlying mechanism. See this post for an introductory example of Monte Carlo simulation involving correlated variables.
- Lobbying for risk management to be given appropriate visibility in organisations.
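As an illustration of the basic probability rules recommended above, here is a worked Bayes' Theorem example. The scenario and numbers are invented: suppose 5% of projects fail, a "red" status report appears in 70% of failing projects, but also in 10% of healthy ones. What should we conclude from seeing a red report?

```python
def bayes(prior, p_evidence_given_h, p_evidence_given_not_h):
    """P(H | evidence) via Bayes' Theorem.

    The denominator is the total probability of the evidence,
    summed over the hypothesis being true and being false.
    """
    p_evidence = (prior * p_evidence_given_h
                  + (1 - prior) * p_evidence_given_not_h)
    return prior * p_evidence_given_h / p_evidence

# Hypothetical numbers: base failure rate 5%, red report in 70% of
# failures and 10% of healthy projects
posterior = bayes(prior=0.05, p_evidence_given_h=0.70, p_evidence_given_not_h=0.10)
print(round(posterior, 3))  # → 0.269
```

The red report raises the failure probability from 5% to about 27% – a substantial update, but far from certainty, because the base rate of failure is low. Intuition routinely gets this kind of calculation wrong, which is why Hubbard wants analysts to know the rules.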
In the penultimate chapter of the book, Hubbard fleshes out the characteristics or traits of good risk analysts. As he mentions several times in the book, risk analysis is an empirical science – it arises from experience. So, although the analytical and mathematical (modelling) aspects of risk are important, a good analyst must, above all, be an empiricist – i.e. believe that knowledge about risks can only come from observation of reality. In particular, testing models by seeing how well they match historical data and tracking model predictions are absolutely critical aspects of a risk analyst’s job. Unfortunately, many analysts do not measure the performance of their risk models. Hubbard offers some excellent suggestions on how analysts can refine and improve their models via observation.
Finally, Hubbard emphasises the importance of creating an organisation-wide approach to managing risks. This ensures that organisations will tackle the most important risks first, and that their risk management budgets will be spent in the most effective way. Many of the tools and approaches that he suggests in the book are most effective if they are used in a consistent way across the entire organisation. In reality, though, risk management languishes way down in the priorities of senior executives. Even those who profess to understand the importance of managing risks in a rigorous way rarely offer risk managers the organisational visibility and support they need to do their jobs.
Whew, that was quite a bit to go through, but for me it was worth it. Hubbard’s views impelled me to take a closer look at the foundations of project risk management and I learnt a great deal from doing so. Regular readers of this blog would have noticed that I have referenced the book (and some of the references therein) in a few of my articles on risk analysis.
I should add that I’ve never felt entirely comfortable with the risk management approaches advocated by project management methodologies. Hubbard’s book articulates these shortcomings and offers solutions to fix them. Moreover, he does so in a way that is entertaining and accessible. If there is a gap, it is that he does not delve into the details of model building, but then his other book deals with this in some detail.
To summarise: the book is a must read for anyone interested in risk management. It is especially recommended for project professionals who manage risks using methods that are advocated by project management standards and methodologies.
I’ll say it at the outset: once in a while there comes along a book that inspires and excites because it presents new perspectives on old, intractable problems. In my opinion, Dialogue Mapping : Building a Shared Understanding of Wicked Problems by Jeff Conklin falls into this category. This post presents an extensive summary and review of the book.
Before proceeding, I think it is only fair that I state my professional views (biases?) upfront. Some readers of this blog may have noted my leanings towards the “people side” of project management (see this post, for example). Now, that’s not to say that I don’t use methodologies and processes. On the contrary, I use project management processes in my daily work, and appreciate their value in keeping my projects (and job!) on track. My problem with processes is when they become the only consideration in managing projects. It has been my long-standing belief (supported by experience) that if one takes care of the people side of things, the right outcomes happen more easily, without undue process obsession on the part of the manager. (I should clarify that I’m not encouraging some kind of a laissez-faire, process-free approach, merely one that balances both people and processes). I’ve often wondered if it is possible to meld these two elements into some kind of “people-centred process”, which leverages the collective abilities of people in a way that facilitates and encourages their participation. Jeff Conklin’s answer is a resounding “Yes!”
Dialogue mapping is a process that is aimed at helping groups achieve a shared understanding of wicked problems – complex problems that are hard to understand, let alone solve. If you’re a project manager, that might make your ears perk up; developing a shared understanding of complex issues is important in all stages of a project: at the start, all stakeholders must arrive at a shared understanding of the project goals (eg, what are we trying to achieve in this project?); in the middle, project team members may need to come to a common understanding (and resolution) of tricky implementation issues; at the end, the team may need to agree on the lessons learned in the course of the project and what could be done better next time. But dialogue mapping is not restricted to project management – it can be used in any scenario involving diverse stakeholders who need to arrive at a common understanding of complex issues. This book provides a comprehensive introduction to the technique.
Although dialogue mapping can be applied to any kind of problem – not just wicked ones – Conklin focuses on the latter. Why? Because wickedness is one of the major causes of fragmentation: the tendency of each stakeholder to see a problem from his or her particular viewpoint, ignoring other, equally valid, perspectives. The first chapter of this book discusses fragmentation and its relationship to wickedness and complexity. Fragmentation is a symptom of complexity – one would not have diverse, irreconcilable viewpoints if the issues at hand were simple. According to Conklin, fragmentation is a function of problem wickedness and social complexity, i.e. the diversity of stakeholders. Technical complexity is also a factor, but a minor one compared to the other two. All too often, project managers fall into the trap of assuming that technical complexity is the root cause of many of their problems, ignoring problem complexity (wickedness) and social complexity. The fault isn’t entirely ours; the system is partly to blame: the traditional, process-driven world is partially blind to the non-technical aspects of complexity. Dialogue mapping helps surface issues that arise from these oft-ignored dimensions of project complexity.
Early in the book, Conklin walks the reader through the solution process for a hypothetical design problem. His discussion is aimed at highlighting some limitations of the traditional approach to problem solving. The traditional approach is structured; it works methodically through gathering requirements, analysing them, formulating a solution and finally implementing it. In real-life, however, people tend to dive headlong into solving the problem. Their approach is far from methodical – it typically involves jumping back and forth between hypothesis formulation, solution development, testing ideas, following hunches etc. Creative work, like design, cannot be boxed in by any methodology, waterfall or otherwise. Hence the collective angst on how to manage innovative product development projects. Another aspect of complexity arises from design polarity: what’s needed (features requested) vs. what’s feasible (features possible) – sometimes called the marketing and development views. Design polarity is often the cause of huge differences of opinion within a team; that is, it manifests itself as social complexity.
Having set the stage in the first chapter, the rest of the book focuses on describing the technique of dialogue mapping. Conklin’s contention is that fragmentation manifests itself most clearly in meetings – be they project meetings, design meetings or company board meetings. The solution to fragmentation must, therefore, focus on meetings. The solution is for the participants to develop a shared understanding of the issues at hand, and a shared commitment to a decision and action plan that addresses them. The second chapter provides an informal discussion of how these are arrived at via dialogue that takes place in meetings. Dialogue mapping provides a process – yes, it is a process – to arrive at these.
The second chapter also introduces some of the elements that make up the process of dialogue mapping. The first of these is a visual notation called IBIS (Issue Based Information System). The IBIS notation was invented by Horst Rittel, the man who coined the term wicked problem. IBIS consists of three elements depicted in Figure 1 below – Issues (or questions), Ideas (that generally respond to questions) and Arguments (for and against ideas – pros and cons) – which can be connected according to a specified grammar (see this post for a quick introduction to IBIS or see Paul Culmsee’s series of posts on best practices for a longer, far more entertaining one). Questions are at the heart of dialogues (or meetings) that take place in organisations – hence IBIS, with its focus on questions, is ideally suited to mapping out meeting dialogues.
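One way to make the IBIS grammar concrete is to write it down as a set of allowed links between node types. The sketch below is my own simplified reading of the grammar, not Conklin's formal definition: ideas respond to questions, arguments attach to ideas, and any node can be questioned further.

```python
# Allowed parent types for each node type, in a simplified IBIS grammar.
# (This encoding is the author's illustration, not Conklin's notation.)
ALLOWED_LINKS = {
    "idea": {"question"},      # an idea responds to a question
    "pro": {"idea"},           # a supporting argument attaches to an idea
    "con": {"idea"},           # an objecting argument attaches to an idea
    "question": {"question", "idea", "pro", "con"},  # anything can be questioned
}

def can_link(child_type, parent_type):
    """Check whether a node of child_type may attach to one of parent_type."""
    return parent_type in ALLOWED_LINKS.get(child_type, set())

assert can_link("idea", "question")      # ideas answer questions
assert can_link("con", "idea")           # cons object to ideas
assert not can_link("pro", "question")   # arguments attach to ideas, not questions
```

Even this toy version shows why the notation keeps discussions coherent: every contribution must find a legitimate place in the map, which forces participants to say what kind of move they are making.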
The basic idea in dialogue mapping is that a skilled facilitator maps out the core points of the dialogue in real-time, on a shared display which is visible to all participants. Participants see their own and collective contributions to the debate, while the facilitator fashions these into a coherent whole. Conklin believes that this can be done, no matter how complex the issues are or how diverse and apparently irreconcilable the opinions. Although I have limited experience with the technique, I believe he is right: IBIS in the hands of a skilled facilitator can help a group focus on the real issues, blowing away the conversational chaff. Although the group as a whole may not reach complete agreement, they will at least develop a real understanding of other perspectives. The third chapter, which concludes the first part of the book, is devoted to an example that illustrates this point.
The second part of the book delves into the nuts and bolts of dialogue mapping. It begins with an introduction to IBIS – which Conklin calls a “tool for all reasons.” The book provides a nice informal discussion, covering elements, syntax and conventions of the language. The coverage is good, but I have a minor quibble : one has to read and reread the chapter a few times to figure out the grammar of the language. It would have been helpful to have an overview of the grammar collected in one place (say in a diagram, like the one shown in Figure 2). Incidentally, Figures 1 and 2 also show how an IBIS map is structured: starting from a root question (placed on the left of the diagram) and building up to the right as the discussion proceeds.
A good way to gain experience with IBIS is to use it to create issue maps of arguments presented in articles. See this post for an example of an issue map based on Fred Brooks’ classic article, No Silver Bullet.
Dialogue mapping is issue mapping plus facilitation. The next chapter – the fifth one in the book – discusses facilitation skills required for dialogue mapping. The facilitator (or technographer, as the person is sometimes called) needs to be able to listen to the conversation, guess at the intended meaning, write (or update) the map and validate what’s written; then proceed through the cycle of listening, guessing, writing and validating again as the next point comes up and so on. Conklin calls this the dialogue mapping listening cycle (see Figure 3 below). As one might imagine, this skill, which is the key to successful dialogue mapping, takes lots of practice to develop. In my experience, a good way to start is by creating IBIS maps of issues discussed in meetings involving a small number of participants. As one gains confidence through practice, one shares the display thereby making the transition from issue mapper to dialogue mapper.
One aspect of the listening cycle is counter-intuitive – validation may require the facilitator to interrupt the speaker. Conklin emphasises that it is OK to do so as long as it is done in the service of listening. Another important point is that when capturing a point made by someone, the technographer will need to summarise or interpret the point. The interpretation must be checked with the speaker. Hence validation – and the interruption it may entail – is not just OK, it is absolutely essential. Conklin also emphasises that the facilitator should focus on a single person in each cycle – it is possible to listen to only one person at a time.
A side benefit of interruption is that it slows down the dialogue. This is a good thing because everyone in the group gets more time to consider what’s on the screen and how it relates (or doesn’t) to their own thoughts. All too often, meetings are rushed, things are done in a hurry, and creative ideas and thoughts are missed in the bargain. A deliberate slowing down of the dialogue counters this.
The final part of the book – chapters six through nine – is devoted to advanced dialogue mapping skills.
The sixth chapter presents a discussion of the types of questions that arise in most meetings. Conklin identifies seven types of questions:
Deontic: These are questions that ask what should be done in order to deal with the issue at hand. For example: What should we do to improve our customer service? The majority of root questions (i.e. starting questions) in an IBIS map are deontic.
Instrumental: These are questions that ask how something should be done. For example: How can we improve customer service? These questions generally follow on from deontic questions. Typically root questions are either deontic or instrumental.
Criterial: These questions ask about the criteria that any acceptable ideas must satisfy. Typically ideas that respond to criterial questions will serve as a filter for ideas that might come up. Conklin sees criterial and instrumental questions as being complementary. The former specify high-level constraints (or criteria) for ideas whereas the latter are nuts-and-bolts ideas on how something is to be achieved. For example, a criterial question might ask: what are the requirements for improving customer service or how will we know that we have improved customer service.
Conklin makes the point that criterial questions typically connect directly to the root question. This makes sense: the main issue being discussed is usually subject to criteria or constraints. Further, ideas that respond to criterial questions (in other words, the criteria) have a correspondence with the arguments for and against ideas responding to the root question. This makes sense: the pros and cons that come up in a meeting would generally correspond to the criteria that have been stated. This isn’t an absolute requirement – there’s nothing to say that all arguments must correspond to at least one criterion – but it often serves as a check on whether a discussion is taking all constraints into account.
Conceptual: These are questions that clarify the meaning of any point that’s raised. For example, what do we mean by customer service? Conklin makes the point that many meetings go round in circles because of differences in understanding of particular terms. Conceptual questions surface such differences.
Factual: These are questions of fact. For example: what’s the average turnaround time to respond to customer requests? Often meetings will debate such questions without having any clear idea of what the facts are. Once a factual question is identified as such, it can be actioned for someone to do research on it, thereby saving a lot of pointless debate.
Background: These are questions of context surrounding the issue at hand. An example is: why are we doing this initiative to improve customer service? Ideas responding to such questions are expected to provide the context as to why something has become an issue.
Stakeholder: These are the “who” questions. An example: who should be involved in the project? Such questions can be delicate in situations where there are conflicting interests (cross-functional projects, say), but need to be asked in order to come up with a strategy to handle differences of opinion. One can’t address everyone’s concerns until one knows who all constitute “everyone”.
Following the classification of questions, Conklin discusses the concept of a dialogue meta-map – an overall pattern of how certain kinds of questions naturally follow from certain others. The reader may already be able to discern some of these patterns from the above discussion of question types. Also relevant here are artful questions – open questions that keep the dialogue going in productive directions.
The seventh chapter is entitled Three Moves of Discourse. It describes three conversational moves that propel a discussion forward, but can also upset the balance of power in the discussion and evoke strong emotions. These moves are:
- Making an argument for an idea or proposal (a Pro)
- Making an argument against an idea (a Con)
- Challenging the context of the entire discussion.
Let’s look at the first two moves to start with. In an organisation, these moves have a certain stigma attached to them: anyone making arguments for or against an idea might be seen as being opinionated or egotistical. The reason is that these moves generally involve contradicting someone else in the room. Conklin contends that dialogue mapping removes these negative connotations because the move is seen as just another node in the map. Once on the map, it is no longer associated with any person – it is objectified as an element of the larger discussion. It can be discussed or questioned just as any other node can.
Conklin refers to the last move – challenging the context of a discussion – as “grenade throwing.” This is an apt way of describing such questions because they have the potential to derail the discussion entirely. They do this by challenging the relevance of the root question itself. But dialogue mapping takes these grenades in its stride; they are simply captured as any other conversational move – i.e. a node on the map, usually a question. Better yet, in many cases further discussion shows how these questions might connect up with the rest of the map. Even if they don’t, these “grenade questions” remain on the map, in acknowledgement of the dissenter and his opinion. Dialogue mapping handles such googlies (curveballs to baseball aficionados) with ease, and indicates how they might connect up with the rest of the discussion – but connection is neither required nor always desirable. It is OK to disagree, as long as it is done respectfully. This is a key element of shared understanding – the participants might not agree, but they understand each other.
Related to the above is the notion of a “left hand move”. Occasionally a discussion can generate a new root question which, by definition, has to be tacked on to the extreme left of the map. Such a left hand move is extremely powerful because it generally relates two or more questions or ideas that were previously unrelated (some of them may even have been seen as a grenade).
By now it should be clear that dialogue mapping is a technique that promotes collaboration – as such it works best in situations where openness, honesty and transparency are valued. In the penultimate chapter, the author discusses some situations in which it may not be appropriate to use the technique. Among these are meetings in which decisions are made by management fiat. Other situations in which it may be helpful to “turn the display off” are those which are emotionally charged or involve interpersonal conflict. Conklin suggests that the facilitator use his or her judgement in deciding where it is appropriate and where it isn’t.
In the final chapter, Conklin discusses how decisions are reached using dialogue mapping. A decision is simply a broad consensus to mark one of the ideas on the map as a decision. How does one choose the idea that is to be anointed as the group’s decision? Well quite obviously: the best one. Which one is that? Conklin states, the best decision is the one that has the broadest and deepest commitment to making it work. He also provides a checklist for figuring out whether a map is mature enough for a decision to be made. But the ultimate decision on when a decision (!) is to be made is up to group. So how does one know when the time is right for a decision? Again, the book provides some suggestions here, but I’ll say no more except to hint at them by paraphrasing from the book: “What makes a decision hard is lack of shared understanding. Once a group has thoroughly mapped a problem (issues) and its potential solutions (ideas) along with their pros and cons, the decision itself is natural and obvious.”
Before closing, I should admit that my experience with dialogue mapping is minimal – I’ve done it a few times in small groups. I’m not a brilliant public speaker or facilitator, but I can confirm that it helps keep a discussion focused and moving forward. Although Conklin’s focus is on dialogue mapping, one does not need to be a facilitator to benefit from this book; it also provides a good introduction to issue mapping using IBIS. In my opinion, this alone is worth the price of admission. Further, IBIS can also be used to augment project (or organisational) memory. So this book potentially has something for you, even if you’re not a facilitator and don’t intend to use IBIS in group settings.
This brings me to the end of my long-winded summary and review of the book. My discussion, as long as it is, does not do justice to the brilliance of the book. By summarising the main points of each chapter (with some opinions and annotations for good measure!), I have attempted to convey a sense of what a reader can expect from the book. I hope I’ve succeeded in doing so. Better yet, I hope I have convinced you that the book is worth a read, because I truly believe it is.