In this instalment of my sensemakers series, I chat with Dr. Neil Preston, an Organisational Psychologist based in Perth, about the very topical issue of organizational change. In a wide-ranging conversation, Neil draws interesting connections between myths that are deeply embedded in Western thought and the way we think about and implement change…and also how we could do it so much better.
KA: Hi Neil, thanks for being a guest on my ongoing series of interviews with sensemakers. You and I have corresponded for at least a year now via email, so it’s a real pleasure to finally meet you, albeit virtually. I’d like to kick things off by asking you to say a bit about yourself and your work.
NP: Well, I’m Dr. Neil Preston. I’m an organizational psychologist…what that means is that I’m specially registered in the area of organizational psychology, much like a clinical psychologist. My background professionally is that I originally worked in mental health, as a senior research psychologist. I’ve published 30 to 40 peer-reviewed papers in psychiatry, mental health and psychometrics, so I know my way around empirical psychology. My real love, however, has always been in organizational and industrial psychology, so in 2006 I decided to leave the Health Department of Western Australia and move into full time consulting.
Consulting work has led me mainly into infrastructure projects – these are very large, complex projects where organisations from both the private and public sectors have to get together and create alliances in order to get the work done. My job on these projects – as I often put it to people – is to make the Addams Family look like the Brady Bunch [laughter]. The idea is to get different value systems and organizational cultures to align, with the aim of getting to a shared understanding of project goals and a shared commitment to achieving them.
My original approach was very diagnostic – which is the way psychologists are taught their trade – but as problems have become more complex, I’ve had to resort to dialogical (rather than diagnostic) approaches. As you well know, dialogue is more commensurate with complexity than diagnosis, so dialogical approaches are more appropriate for so-called wicked problems. This approach then led me to complex systems theory, which in turn led to an area of work that Paul Culmsee, you and I are looking into: emergent design practices. (Editor’s note: This refers to a method of problem solving in which solutions are not imposed up front but emerge from dialogue between various stakeholders.)
KA: OK, so could you tell us a bit about the kinds of problems you get called in to tackle?
NP: Very broadly speaking, I’m generally called in when organisations have goals that are incommensurate with each other. For example: a billion dollar road that has to be on time and on budget…but, by the way, the alignment of the road also takes out a nesting site of the Carnaby’s white-tailed cockatoo, which triggers environmental biodiversity protection legislation, which in turn triggers issues with local councils and so on.
Complexity in projects often arises from situations like these, where the issue is not just about delivering on time and on budget, but also creating a sustainable habitat and ensuring alignment with local governments etc.
KA: So very broadly, I guess one could say that your work deals with the problems associated with change. The reason I put it in this way is that change is something that most people who work in organisations would have had to deal with – either as executives who initiate the change, managers who are charged with implementing it or employees who are on the receiving end of it. The one thing I’ve noticed through experience – initially as a consultant and then working in big organisations – is that change is formulated and implemented in a very prescriptive way. However, the end results are often less than satisfactory because there are many unintended consequences (loss of morale, drop in productivity etc.) – much like the unintended consequences of large infrastructure projects. I’ve long wondered why this is so: why, after decades of research and experience, do we still get it so wrong?
NP: Let me give you an answer from a psychologist’s perspective. There are a couple of sub-disciplines of psychology called depth and archetypal psychology that look at myth. The kind of change management programs that we enact are driven by a (predominantly) Western myth of heroic intervention.
James Hillman, an archetypal psychologist, once said that a myth is what is real. This is somewhat contrary to the usual sense in which the word is used because we usually think of a myth as being something that is not real. However, Hillman is right because a myth is really an archetype – an overarching way of seeing the world in a way that we believe to be true. The myth of the hero – the good guy overcoming all adversity to slay the bad guy – is essentially an interventionist one. It is based on the Graeco-Roman notion of the exercise of individual will. Does that make sense so far?
KA: Yeah absolutely. Please go on.
NP: OK, so this myth is dominant in the Western imagination. For example, any movie that a kid might go to see like, say, Star Wars is really about the exercise of the individual will. In much the same way, the paradigm in which your typical change management program operates is very much (individual) action and intervention oriented. Even going back to Homeric times – the Iliad and Odyssey are essentially stories about individuals exercising authority, power…and excellence is another word that crops up often too. The objective of all this of course is to effect dramatic, full-frontal change.
However, there is a problem with this myth, and it is that it assumes that things are not complex. It assumes that simple linear, cause-effect explanations hold – that if you do A then B will happen (if you restructure you will save costs, for example). Such models are convenient because they seem rational on the surface, perhaps because they are easy to understand. However, they overlook the little details that often trip things up. As a result, such change often has unforeseen consequences.
Unfortunately, much of the stuff that comes out of the Big 4 consultancies is based on this myth. The thing to note is that they do it not because it works but because it is in tune with the dominant myth of the Western business world.
KA: What you are saying definitely strikes a chord. What’s strange to me, however, is that there have been people challenging this for quite a while now. You mentioned the predominantly linear approach – A causes B sort of thinking – that change management practitioners tend to adopt. Now, as you well know, systems theorists and cyberneticists have proposed alternate approaches that are more cognizant of the multifaceted nature of change, and they did so over fifty years ago! What happened to all that? When I read some of the papers, I see that they really speak to the problems we face now, but they seem to have been all but forgotten (Editor’s note: see this post that draws on work by the prominent cyberneticist, Heinz von Foerster, for example). One can’t help but wonder why that is so….
NP: Well that’s because myths are incredibly sticky. We are talking about an ancient myth of the exercise of the individual human will. And, by the way, it’s a very Western thing: I remember once hearing on the radio that the Western notion of the “squeaky wheel getting the grease” has an Eastern counterpart that goes something like, “the loudest goose is first to lose his head.” The point is, the two cultures have a very different way of looking at the world. That myth – the hero myth – is very much brought into the way we tell stories about organisations.
Now, why does that matter? Well, J. Richard Hackman, an organizational psychologist, said it quite brilliantly. He called our fixation on the hero myth (in the context of change) the leadership attribution error – he argues that we tend to over-attribute the success of a change process to the salient things that we can see – which is (usually) the leader. As a result we tend to overlook the hidden factors which give rise to the actual performance of the organization. These factors usually relate to the latent conditions present in the organization rather than specific causes like a leader’s actions.
So there are two types of change: planned change and emergent change. Planned change is the way organisations usually think about change. It is a causal view in which certain actions give rise to certain outcomes. But here is the problem: the causal approach focuses primarily on salient features, ignoring all the other things that might be going on.
Now, cybernetics and systems theory do a better job of taking into account features that are hidden. However, as you mentioned, they have not had much uptake. I think the reason for this is that myths are incredibly sticky…that is the best answer I can give.
KA: Hmm that’s interesting…I’d never thought of it that way – the stickiness of myths as blinding us to other viewpoints. Is there something in the nature of human thought or human minds that make us latch on to over-simplified explanations?
NP: Well, there’s this notion of cognitive bias – persistent biases in human perception or judgement (Editor’s note: also see this post on the role of cognitive bias in project failure). The leadership attribution error is precisely such a bias. I should point out that these biases aren’t necessarily a problem; they just happen to be the way humans think. And there are good evolutionary reasons for the existence of biases: we can’t process every little bit of information that comes to us through our senses, and these biases offer a means to filter out what is unimportant. Unfortunately, sometimes they cause us to overlook what is important. They are heuristics and, like all heuristics, they don’t always work.
So in the case of leadership attribution bias – yes, leadership does have an effect, but it is not as great as people think. In fact, work done by Ruth Wageman (who worked with Hackman) shows that what is more important for team performance are the conditions in which teams work, rather than the qualities or abilities of the leader.
KA: From experience I would have to say that rings true: conditions trump causes any day as far as team performance is concerned.
NP: Yeah and there’s a good reason for it; and it is so simple that we often overlook it. Take the example of sending a rocket to the moon. If you set up the right conditions for the rocket – the right amount of fuel, the right load and so forth, then everything that is necessary for the performance of the rocket is already set up. The person who actually steers the rocket is not as critical to the performance as the conditions are. And the conditions are already present when the rocket is in flight.
Similarly, in the case of organizational change, we should not be looking for causes – be it leadership or planned actions or whatever – but for the conditions that might give rise to emergent change.
KA: Yeah, but conditions are causes too, aren’t they?
NP: Yes they are, but the point is that they aren’t salient ones – that is, they aren’t immediately obvious. Moreover, and this is the important point: you do not know the exact outcomes of those causes except that they will in general be positive if the conditions are right and negative if they aren’t.
KA: That makes sense. Now I’d like to ask you about a related matter. When dealing with change or anything else, organisations invariably seem to operate at the limits of their capacity. Leaders always talk about “pushing ourselves” or “pushing the envelope” and so on. On the other hand, there’s also a great deal of talk about flexibility and the capacity for change, but we never seem to build this into our organisations. Is there a way one can do this?
NP: Yes, you can actually build in resilience. Organisations generally like to keep their systems and processes tightly coupled – that is, highly dependent on each other. This tends to make them fragile or prone to breakdown. So, one of the things organisations can do to build resilience is to keep systems and processes loosely coupled. (Editor’s note: for example, devolve decision-making authority to the lowest possible level in the organization. This increases flexibility and responsiveness while having the added benefit of reducing management overhead).
Conditions also play a role here. One of the things that organisations like to talk about is innovation. The point is you can’t put in place processes for innovation but you can create conditions that might foster it. You can’t ask people whether they “did their 15 minutes of innovation today” but you can give them the discretionary freedom to do things that have nothing to do with their work…and they just might do something that goes above and beyond their regular jobs. But of course what underpins all this is trust. Without trust you simply cannot build in flexibility or resilience.
KA: This really strikes a chord and let me tell you why. I read Taleb’s book a while ago. As you probably know, the book is about antifragility, which he defines as the ability to benefit from uncertainty rather than just being resilient to it. After I read the book I wrote a post on what an antifragile IT strategy might look like…and in an uncanny resonance with what you just said, I made the claim that trust would be the single most important element of the strategy [laughter].
NP: Yeah, and trust is not something you receive as much as something you give. So, as a psychologist, I know why its betrayal is so damaging to people. You know, “Et tu, Brute?” – Caesar’s famous line – it was the betrayal of trust that was so devastating. Once trust is gone there’s nothing left.
KA: Indeed, I sometimes feel that the key job of a manager is to develop trust-based relationships with his or her peers and subordinates. However, what I see in the workplace is often (though definitely not always) the opposite: people simply do not trust their managers because managers are quick to pass the blame down (or even across) the hierarchy rather than absorbing it…which arguably, and ethically, is their job. They should be taking the heat so that people can get on with actual work. Unfortunately managers who do this are not as common as they should be.
NP: We’re getting into a complex area here, and it is one that I deal with at length in my masterclass on collaborative maturity and leadership. This is the old scapegoating mechanism at work, and it is related to the leadership attribution error and the hero myth. If the attribution is back to the individual, then the blame must also be attributable to an individual. In fact, I have this slide in one of my presentations that goes, “a scapegoat is almost as useful as the solution to a problem.” [laughter]
Now, there are two questions here. “The scapegoat” is the answer to the question “Who is responsible?” However, it is more important to look at conditions rather than causes, so the real question is, “How did this situation come about?” When you look at “Who” questions, you are immediately going into questions of character. It elicits responses like “Yeah, it’s Kailash’s fault because he is that kind of a guy…he is an INTP or whatever.” What’s happening here is that the problem is explained away because it is attributed to Kailash’s character. You see what is going on…and why it is so dangerous?
KA: Yeah, that’s really interesting.
NP: And you see, then they’ll say something like, “…so let’s take Kailash out and put Neil in”…but the point is that if the conditions remain the same, Neil will fall down the same hole.
KA: It’s interesting the way you tie both things back to the individual – the individual as hero and the individual as scapegoat.
NP: Yes, it’s two sides of the same coin. Followership acquiesces to leadership: Kailash will follow Neil, say, to the Promised Land. If we get there, Neil gets the credit but if we don’t, he gets the blame.
KA: Very interesting, but this brings up another question. Managers and leaders might turn around and say, “It’s all very well to criticize the way we operate, but the fact is that it is impossible to involve all stakeholders in determining, say, a strategy. So in a sense, we are forced to take on the role of ‘heroes,’ as you put it.”
So my question is: what are some of the ways in which organisations can address the difficulties associated with collective decision-making?
NP: Of course, it is often impossible to include all stakeholders in a decision-making process, particularly around matters such as organisational strategy. What you have to do first is figure out who needs to be involved so that all interests are fairly represented. Second, I’m attracted to the whole idea of divergent (open-ended) and convergent (decisive) thinking. For example, if a problem is wicked or complex, there is no point attempting to use expert knowledge or analysis exclusively (Editor’s note: because no single expert holds the answers and there isn’t enough information for a sensible, unbiased analysis). Instead, one has to use collective intelligence or the wisdom of the crowd by seeking opinions from all groups who have a stake in the problem. This is divergent thinking.
However, there comes a time when one has to “make an incision in reality” – i.e. stop consultation and make the best possible decision based on both data and ethics – one has to use both IQ and EQ. This is the convergent side of the coin.
Another problem is that one often has the data one needs to make the right decision, but the decision does not get made for reasons of ideology. Then it becomes a question of power rather than collective intelligence: a solution is imposed rather than allowed to emerge.
KA: Well that happens often enough – this “short-circuiting” of the decision-making process by those in positions of power.
NP: Yes, and it is why I think deliberative decision-making – which comes from the Western notion of deliberative democracy, i.e. decision-making based on dialogue and consultation – is the best way forward, though it can be a challenge to implement. Democracy is slow, but it is generally more accurate…
KA: Yes, that’s true, but it can also meander.
NP: Sure, everything is bound by certain limitations (like time) and that’s why you have to know when to intervene. One of the important things for a leader to have in this connection is negative capability – which is not “negative” in the usual sense of the word, but rather the ability to be comfortable with ambiguity and to intervene in ambiguous situations in a way that produces some kind of useful outcome.
Of course, acting in such situations also means that one has to have good feedback mechanisms in place; one must know how things are actually working on the ground so that one can take corrective actions if needed. But, in the end, the success of this way of working depends critically on having the right conditions in place. If you don’t set up the right conditions, any intervention can have catastrophic consequences.
If I may talk politically for a minute – the current situation in the Middle East is a classic example of a planned intervention: direct, frontal, dramatic, causal, linear and supposedly rational. However, if the right conditions are not in place, such interventions can have unforeseen consequences that completely overshadow the alleged benefits. And that is exactly what we have seen.
In general I would say that emergent change is more likely to succeed than large-scale, direct, planned change. The example one hears all the time is that of continuous improvement – where small changes are put in place and then adjusted based on feedback on how they are working.
KA: This is a matter of some frustration for me: in general people will agree that collaboration and collective decision-making are good, but when the time comes, they revert to their old, top-down ways of working.
NP: Yes, well when I go into a consulting engagement on collaborative maturity, one of the first things I ask people is whether they want to use the collaborative process to inform people or to influence them. Often I find that they only want to use it to inform people. There is a big difference between the two: influencing is emergent, informing isn’t.
KA: This raises a question: say you walk into an organization where people say that they want to use collaborative processes to influence rather than inform, but you see that the culture is all wrong and it isn’t going to work. Do you actually tell them, “Hey, this is not going to work in your organization”?
NP: Well if people don’t feel safe to speak their truth then it isn’t going to work. That’s why I’m so interested in Hackman’s work on conditions over causes. Coming to your question I don’t necessarily tell people that it’s not going to work because I believe it is more productive to invite them to explore the implications of doing things in a certain way. That way, they get to see for themselves how some of the things they are doing might actually be improved. One doesn’t preach but one hands things back to them.
In psychology there are these terms, transference and countertransference. In this context transference would be where a consultant thinks, “I’m a consultant so I’m going to assume a consultant persona by acting and behaving like I have all the answers”, and countertransference would be where the client reinforces this by saying something like, “you are the expert and you have all the answers.” Handing back stops this transference-countertransference cycle. So what we do is to get people to explore the consequences of their actions and thus see things that might have been hidden from their view. It is not to say “I told you so,” but rather “what are the implications of going down this path.” The idea is to appeal to the ethical or good side in human beings…and I believe that human beings are fundamentally good rather than not.
KA: I like your use of the word “ethical” here. I think that is really important and is what is often missing. One hears a lot about ethics in business these days, but it is most often taught and talked about in a very superficial way. The reality, however, is that the resolution of most wicked problems involves ethical considerations rather than logic and rationality…and this is something that many people do not understand. It isn’t about doing things right, rather it is about doing the right things.
NP: Yes, and this is related to what I call “meaning over motivation” – the idea being that instead of attempting to motivate people to do something, try providing them with meaning. When you do this you will often find that change comes for free. And it is worth noting that meaning has both an emotional and a rational component – or, put a little differently, an ethical and a logical one. In one of his books, Daniel Pink makes the point that uncoupling ethics from profit can have catastrophic consequences…and we have good examples of that in recent history.
The broad lesson here is that if the conditions aren’t right then it is inevitable that unethical behavior will dominate.
KA: Yeah well human nature will ensure that won’t it?
NP: [laughs] Yeah, and you don’t need a psychologist to tell you that.
KA: [laughs] Indeed…and I think that would be a good note on which to bring this conversation to a close. Neil, thanks so much for your time and insights. It’s been a pleasure to chat with you and I look forward to catching up with you again…hopefully in person, in the not too distant future.
NP: Yeah, Singapore and Perth are not that far apart…
The classic aim of automation is to replace human manual control, planning and problem solving by automatic devices and computers. However… even highly automated systems, such as electric power networks, need human beings for supervision, adjustment, maintenance, expansion and improvement. Therefore one can draw the paradoxical conclusion that automated systems still are man-machine systems, for which both technical and human factors are important. This paper suggests that the increased interest in human factors among engineers reflects the irony that the more advanced a control system is, so the more crucial may be the contribution of the human operator.
These lines are from Lisanne Bainbridge’s classic 1983 paper, Ironies of Automation. They were written over thirty years ago, but are ever more apt today – such paradoxes are rife, not only in automation, but in any field in which technology plays an important part. To illustrate my point, I highlight a couple of ironies drawn from a domain that is likely to be familiar to many readers of this blog: the world of enterprise IT. I also present a brief discussion of how these ironies of enterprise IT can be avoided.
Ironies of enterprise IT
In the last few decades information technology has found its way into diverse organisational functions. This trend has been accompanied by an explosive growth in new technologies. As a result of this, corporate IT infrastructures have become ever more complex and the costs of maintaining them have burgeoned. Quite naturally, the focus has thus turned to taming both complexity and cost. The favoured approaches to tackling this problem are standardisation and/or outsourcing. However, as I discuss below, both often lead to ironic outcomes.
An irony of standardisation
Enterprise IT environments tend to evolve rapidly, reflecting the many demands made on them by the organisational functions they support. This is good because it means that IT is doing what it should be doing: supporting the work of the parent organisation. On the other hand, this can result in unwieldy environments that are difficult (not to mention, expensive) to maintain. One way to address this is to impose standards relating to processes (such as ITIL) and infrastructure (such as SAP or any other enterprise application).
The question is, how well does such standardisation work in practice?
In his book From Control to Drift, Claudio Ciborra pointed out that IT infrastructures in organisations tend to drift – i.e. they escape processes, plans and standards, and take on a life of their own. The reason they drift is that they are subject to unpredictable forces within and outside the hosting organisation. The imposition of standards may slow the drift but cannot arrest it entirely. Infrastructures are therefore best seen as ever-evolving constructs consisting of systems, people and processes that interact with each other in often unforeseen ways. As he put it:
Corporate information infrastructures are puzzles, or better collages, and so are the design and implementation processes that lead to their construction and operation. They are embedded in larger, contextual puzzles and collages. Interdependence, intricacy, and interweaving of people, systems, and processes are the culture bed of infrastructure. Patching, alignment of heterogeneous actors and making do are the most frequent approaches…irrespective of whether management [is] planning or strategy oriented, or inclined to react to contingencies.
The essential message here is that standards and processes overlook the fact that enterprises are complex social systems that are subject to internal and external influences which cannot always be foreseen. Dealing with these, more often than not, entails the implementation of hacks and workarounds that violate the imposed standards and thus nullify the benefits of standardisation.
In summary, “standardised” IT environments often end up with a plethora of non-standard hacks and workarounds that are necessary, but are generally messy and expensive to maintain.
An irony of outsourcing
One of the main reasons for outsourcing IT is to reduce costs. Yes, I am aware that many decision-makers claim that their primary reason is to reduce complexity rather than cost, but the choices they make often belie their claims. The irony is that in their eagerness to control costs, they often end up increasing them because they overlook hidden factors. I explain this in brief below, drawing on my post on the transaction costs of outsourcing.
The basic idea is simple – it is that the upfront fee quoted by the vendor is but a fraction of the total cost that will be incurred by the customer. Some of the costs that are generally not included in the upfront quote are:
1. Search/selection costs: these are the costs associated with searching for and shortlisting vendors.
2. Bargaining costs: these are costs associated with negotiating a mutually acceptable contract.
3. Costs of coordinating work: these are costs associated with coordinating external and internal work. This is particularly important in the case of software-as-a-service because the effort required to interface cloud applications with in-house systems is often underestimated.
4. Costs of enforcement and change: these are costs associated with enforcing the terms of the contract and with handling change.
The point to note is that these costs are rarely if ever mentioned by the vendor, but almost always show up in one form or another. It is therefore important for the customer to try and get a handle on them before entering into any commercial agreements. The problem is, some of these costs (particularly 3 and 4 above) are hard if not impossible to estimate upfront. For example, if the relationship turns sour, the only solution might be to switch vendors. The cost associated with this is often significant and is borne entirely by the customer. A lack of awareness of such costs associated with outsourcing will invariably result in ironical outcomes.
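To make this concrete, here is a toy calculation in Python. All figures are hypothetical, invented purely for illustration – the point is the shape of the arithmetic, not the numbers:

```python
# Toy illustration of outsourcing transaction costs.
# All figures are hypothetical, chosen purely for illustration.

upfront_quote = 500_000  # the vendor's quoted fee

hidden_costs = {
    "search/selection": 40_000,     # finding and shortlisting vendors
    "bargaining": 30_000,           # negotiating the contract
    "coordination": 150_000,        # interfacing external and internal work
    "enforcement/change": 120_000,  # policing the contract, handling change
}

total_cost = upfront_quote + sum(hidden_costs.values())
print(f"Upfront quote:            ${upfront_quote:,}")
print(f"Hidden transaction costs: ${sum(hidden_costs.values()):,}")
print(f"Total cost:               ${total_cost:,}")
print(f"Quote as share of total:  {upfront_quote / total_cost:.0%}")
```

With these made-up numbers the quoted fee is only about 60% of the true cost. The exact proportion will vary, of course; what matters is that the gap is largely invisible at contract time.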
In summary: attempts to control costs by outsourcing IT can have the contrary effect of increasing them.
Avoiding ironical outcomes
So how does one avoid ironical outcomes?
I have only one piece of advice to offer here: when planning IT architectures or outsourcing initiatives, use an incremental or emergent approach that avoids big designs or commitments upfront. Using an emergent approach not only limits risk, it also provides opportunities for learning. Most important, it enables one to verify that the envisaged benefits are real, rather than just wishful thinking on the part of architects or managers.
Below I outline what such an approach might entail for the two ironies discussed earlier:
- For infrastructures/systems: avoid grandiose system designs that attempt to span the “enterprise” – remember that one size will not fit all of your users. Consequently, enterprise architectures and governance systems should provide guidelines rather than detailed prescriptions. As Anders Jensen-Waud puts it in this post: they should foster resilience and adaptability rather than conformance.
- For outsourcing: start small, possibly with a small project or system. This will help you get a sense for how outsourcing would work in your environment and help you figure out whether the vendor you have selected is really right for you. Remember, no two environments are identical so others’ lessons learned may be considerably less useful than you think. Finally, if you’re going to the cloud, be sure to factor in costs and technical challenges associated with interfacing external apps with in-house ones.
Yes, there’s nothing particularly profound here, it is just common sense…but you know what they say about the commonality of common sense.
In this post I have highlighted some ironies of enterprise information systems and have briefly outlined an emergent approach to avoiding them. I believe but cannot prove that ironical outcomes are almost guaranteed if one takes a monolithic, enterprise-style approach or a let’s-outsource-it-all attitude to enterprise information technology. Such a view overlooks the messy little details and differences that trip up big designs and grandiose plans. In the end, the only way to avoid ironical outcomes is to start small, learn from experience and incorporate that learning in an incremental manner in whatever you’re building or doing. Yes, you might end up with something you did not envisage at the start, but you will have learnt much along the way. More important, perhaps, is that you will be able to rest assured that it works.
Much can be learnt about an organization by observing what management does when things go wrong. One reaction is to hunt for a scapegoat, someone who can be held responsible for the mess. The other is to take a systemic view that focuses on finding the root cause of the issue and figuring out what can be done in order to prevent it from recurring. In a highly cited paper published in 2000, James Reason compared and contrasted the two approaches to error management in organisations. This post is an extensive summary of the paper.
The author gets to the point in the very first paragraph:
The human error problem can be viewed in two ways: the person approach and the system approach. Each has its model of error causation and each model gives rise to quite different philosophies of error management. Understanding these differences has important practical implications for coping with the ever present risk of mishaps in clinical practice.
Reason’s paper was published in the British Medical Journal and hence his focus on the practice of medicine. His arguments and conclusions, however, have a much wider relevance as evidenced by the diverse areas in which his paper has been cited.
The person approach – which, I think is more accurately called the scapegoat approach – is based on the belief that any errors can and should be traced back to an individual or a group, and that the party responsible should then be held to account for the error. This is the approach taken in organisations that are colloquially referred to as having a “blame culture.”
To an extent, looking around for a scapegoat is a natural emotional reaction to an error. The oft unstated reason behind scapegoating, however, is to avoid management responsibility. As the author tells us:
People are viewed as free agents capable of choosing between safe and unsafe modes of behaviour. If something goes wrong, it seems obvious that an individual (or group of individuals) must have been responsible. Seeking as far as possible to uncouple a person’s unsafe acts from any institutional responsibility is clearly in the interests of managers. It is also legally more convenient…
However, the scapegoat approach has a couple of serious problems that hinder effective risk management.
Firstly, an organization depends on its frontline staff to report any problems or lapses. Clearly, staff will do so only if they feel that it is safe to do so – something that is simply not possible in an organization that takes a scapegoat approach. The author suggests that the Chernobyl disaster can be attributed to the lack of a “reporting culture” within the erstwhile Soviet Union.
Secondly, and perhaps more important, is that the focus on a scapegoat leaves the underlying cause of the error unaddressed. As the author puts it, “by focusing on the individual origins of error it [the scapegoat approach] isolates unsafe acts from their system context.” As a consequence, the scapegoat approach overlooks systemic features of errors – for example, the empirical fact that the same kinds of errors tend to recur within a given system.
The system approach accepts that human errors will happen. However, in contrast to the scapegoat approach, it views these errors as being triggered by factors that are built into the system. So, when something goes wrong, the system approach focuses on the procedures that were used rather than the people who were executing them. This shift in focus makes a world of difference.
The system approach looks for generic reasons why errors or accidents occur. Organisations usually have a series of measures in place to prevent errors – e.g. alarms, procedures, checklists, trained staff etc. Each of these measures can be looked upon as a “defensive layer” against error. However, as the author notes, each defensive layer has holes which can let errors “pass through” (more on how the holes arise a bit later). A good way to visualize this is as a series of slices of Swiss Cheese (see Figure 1).
The important point is that the holes on a given slice are not at a fixed position; they keep opening, closing and even shifting around, depending on the state of the organization. An error occurs when the ephemeral holes on different layers temporarily line up to “let an error through”.
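For readers who like to see a model in code, here is a minimal sketch – my own, not from Reason’s paper – of the Swiss Cheese idea: each defensive layer has some probability of presenting a hole at any given moment, and an error gets through only when holes happen to line up across every layer. The hole probabilities are made-up numbers:

```python
import random

# Minimal Monte Carlo sketch of the Swiss Cheese model: an error propagates
# only if it finds a hole in every defensive layer at the same moment.
# Hole probabilities are made-up numbers, purely for illustration.
HOLE_PROBABILITIES = [0.10, 0.20, 0.05, 0.15]  # one per defensive layer
TRIALS = 100_000

def error_gets_through(hole_probs):
    """True if the (randomly shifting) holes line up across all layers."""
    return all(random.random() < p for p in hole_probs)

breaches = sum(error_gets_through(HOLE_PROBABILITIES) for _ in range(TRIALS))
print(f"Observed breach rate: {breaches / TRIALS:.5f}")
# The expected rate is the product of the hole probabilities:
# 0.10 * 0.20 * 0.05 * 0.15 = 0.00015 - small, but never zero.
```

The sketch makes the same point as the figure: no single layer needs to be particularly leaky for an occasional error to slip through all of them.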
There are two reasons why holes arise in defensive layers:
- Active errors: These are unsafe acts committed by individuals. Active errors could be violations of set procedures or momentary lapses. The scapegoat approach focuses on identifying the active error and the person responsible for it. However, as the author points out, active errors are almost always caused by conditions built into the system, which brings us to…
- Latent conditions: These are flaws that are built into the system. The author uses the term resident pathogens to describe these – a nice metaphor that I have explored in a paper review I wrote some years ago. These “pathogens” are usually baked into the system by poor design decisions and flawed procedures on the one hand, and ill-thought-out management decisions on the other. Manifestations of the former include faulty alarms, unrealistic or inconsistent procedures or poorly designed equipment; manifestations of the latter include things such as unrealistic targets, overworked staff and the lack of funding for appropriate equipment.
The important thing to note is that latent conditions can lie dormant for a long period before they are noticed. Typically a latent condition comes to light only when an error caused by it occurs…and only if the organization does a root cause analysis of the error – something that is simply not done in an organization that takes a scapegoat approach.
The author draws a nice analogy that clarifies the link between active errors and latent conditions:
…active failures are like mosquitoes. They can be swatted one by one, but they still keep coming. The best remedies are to create more effective defences and to drain the swamps in which they breed. The swamps, in this case, are the ever present latent conditions.
“Draining the swamp” is not a simple task. The author draws upon studies of high performance organisations (combat units, nuclear power plants and air traffic control centres) to understand how they minimised active errors by reducing system flaws. He notes that these organisations:
- Accept that errors will occur despite standardised procedures, and train their staff to deal with and learn from them.
- Practice responses to known error scenarios and try to imagine new ones on a regular basis.
- Delegate responsibility and authority, especially in crisis situations.
- Do a root cause analysis of any error that occurs and address the underlying problem by changing the system if needed.
In contrast, an organisation that takes a scapegoat approach assumes that standardisation will eliminate errors, ignores the possibility of novel errors occurring, centralises control and, above all, focuses on finding scapegoats instead of fixing the system.
Figure 1 was taken from the Patient Safety Education website of Duke University Hospital.
The Swiss Cheese model was first proposed in 1991. It has since been applied in many areas, including recent applications and extensions to project management.
Information technology (IT) is an integral part of any modern-day business. Indeed, as Bill Gates once put it, “Information technology and business are becoming inextricably interwoven. I don’t think anybody can talk meaningfully about one without talking about the other.” Although this is true, decision makers often display ambivalent, even contradictory attitudes towards enterprise IT. For example, depending on the context, an executive might view IT as a cost of doing business or as a strategic advantage: the former view is common when budgets are being drawn up whereas the latter may come to the fore when a bold new e-marketing initiative is being discussed.
In this post I discuss some of these dilemmas of IT and show how the opposing viewpoints embodied in them need to be managed rather than resolved. I illustrate my point by describing one way in which this can be done.
The dilemmas in brief
Many of the dilemmas of IT are consequences of conflicting views of what IT is and/or how it should be managed. I’ll describe some of these in brief below, leaving a discussion of their implications to the next section:
- IT as a cost of doing business versus IT as strategic asset: This distinction highlights the ambivalent attitudes that senior executives have towards IT. On the one hand, IT is seen as offering strategic advantages to the organization (for example a custom built application for customer segmentation). On the other, it is seen as an operational necessity (for example, core banking systems in the financial industry).
- Centralised IT versus Autonomous IT: This refers to the debate about whether an organisation’s IT environment should be tightly controlled from head office or whether subsidiaries should be given a degree of autonomy. This is essentially a debate between top-down versus bottom-up approaches to IT planning.
- Planning versus Improvisation: This refers to the tension between the structure offered by a plan and process-driven approach to IT and the necessity to step outside of plans and processes in order to come up with improvised solutions suited to the situation at hand. I have written about this paradox in a post on planning and improvisation.
There are other dilemmas – for example, technology driven IT versus business driven IT. However, for the purpose of this discussion the three listed above will suffice.
The poles of a dilemma
In his book entitled Polarity Management, Barry Johnson described how complex organizational issues can often be analysed in terms of their mutually contradictory facets. He termed these facets poles or polarities. In this and the next section, I elaborate on Johnson’s notion of polarity and show how it offers a means to understand and manage the dilemmas of enterprise IT.
The key features of poles are as follows:
Each pole has associated positives and negatives. For example, the up side of viewing IT as a cost is that the organisation focuses on IT efficiency and value for money; the downside is that exploration and experimentation that is necessary for IT innovation would likely be seen as risky. On the other hand, the positive side of IT as a strategic asset is that it is seen as a means to enable an organisation’s growth and development; the negative is that it can encourage unproven technologies (since new technologies are more likely to offer competitive advantages) and uncontrolled experimentation along with their attendant costs.
Most organisations oscillate between poles. At any given time the organisation will be “living” in one pole. In such situations, some stakeholders will perceive the negatives of that pole strongly and will thus see the other pole as being more desirable (the “grass is greener on the other side” syndrome). Johnson labels such stakeholders “crusaders” – those who want to rush off into the new world. On the other hand, there are “tradition bearers” – those who want to stay put. When an organisation has spent a fair bit of time in one pole, the influence of the crusaders tends to wax while that of the tradition bearers wanes, because the negatives of the current pole become apparent to more and more people.
A concrete example may help clarify this point:
Consider a situation where all subsidiaries of a multinational have autonomous IT units (and have had these for a while). The main benefits of such a model are responsiveness and relevance: local IT units will be able to respond quickly to local needs and will also be able to deliver solutions that are tailored to the specific needs of the local business. However, this model has many negative aspects: for example, high costs, duplication of effort, a massive software portfolio with attendant costs, high cost of interfacing between subsidiaries etc.
When the model has been in operation for a while, it is quite likely that IT decision makers will perceive the negatives of this pole more clearly than the positives. They will then initiate a reform to centralize IT because they perceive the positives of that pole – i.e. low costs, centralization of services etc. – as being worth striving for. However, when the new world has been in place and operating for a while, the organisation will begin to see its downside: bureaucracy, lack of flexibility, applications that don’t meet specific local business needs etc. They will then start to delegate responsibility back to the subsidiaries…and so the polarity merry-go-round goes on.
Managing enterprise IT dilemmas
As discussed above, any option will have its supporters and detractors. For example, finance folks may see IT as a cost of doing business whereas those in IT will consider it to be a strategic asset. What’s important, however, is that most organisations “resolve” such contradictions by taking sides. That is, one side “wins” and their point of view gets implemented as a “solution.” The concerns of the “losing” side are overlooked entirely.
Although such a “solution” appears to solve the problem, it does not take long for the negative aspects of the winning pole to manifest themselves; the rumbles of discontent from those whose concerns have been ignored grow louder with time. In this sense, issues that can be defined in terms of polarities are wicked problems – they are perceived in different ways by different stakeholders and so are difficult to define, let alone solve.
As we have seen above, however, the poles of a dilemma are but different facets of a single reality. Hence, the first step towards managing a dilemma lies in realizing that it cannot be resolved definitively; regardless of the path chosen, there will always be a group whose concerns remain unaddressed. The best one can do is to be aware of the positives and negatives of each pole and ensure that the entire spectrum of stakeholders is aware of these. A shared awareness can help the group in figuring out ways to mitigate the worst effects of the negatives.
One way in which this can be done is via a facilitated session involving people who represent the two sides of the issue. To begin with, the facilitator helps the group identify the poles. She then helps the group create a polarity map which shows the contradictory aspects of the issue along with their positives and negatives. A rudimentary polarity map for the autonomous/centralized IT dilemma is shown in Figure 1 below.
To ensure completeness of the map, the group must include stakeholders who represent both sides of the dilemma (and also those who hold views that lie between).
As mentioned in the previous section, organisations are not static, they oscillate between poles. Moreover, Johnson claimed that they follow a specific path in the map. Quoting from the book I wrote with Paul Culmsee:
According to Johnson, organisations tended to oscillate between poles. If you accept the notion of a wicked problem as a polarity, the overall pattern traced as one moves between these poles resembles an infinity symbol. The typical path is L- to R+, to R-, across to L+ and Johnson argued that the trajectory could not be avoided. All we can do is focus on minimizing our time spent in the lower quadrants.
Again, it is worth emphasizing that the conflict between the two groups of stakeholders cannot be resolved definitively. The best one can do is to get the two sides to understand each other’s point of view and hence attempt to minimize the downsides of each option.
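For the programmatically inclined, here is a minimal sketch of a polarity map as a data structure, using the autonomous/centralised IT dilemma from Figure 1. The entries simply paraphrase the discussion above and are illustrative rather than exhaustive; the quadrant labels follow Johnson’s L/R convention from the quote:

```python
# A rudimentary polarity map for the autonomous (L) vs centralised (R)
# IT dilemma. Entries paraphrase the discussion in the text; they are
# illustrative, not exhaustive.
polarity_map = {
    "L: autonomous IT": {
        "+": ["responsive to local needs", "solutions tailored to local business"],
        "-": ["high costs", "duplicated effort", "costly interfacing between subsidiaries"],
    },
    "R: centralised IT": {
        "+": ["lower costs", "centralised services", "less duplication"],
        "-": ["bureaucracy", "inflexibility", "poor fit with local business needs"],
    },
}

# Johnson's typical trajectory through the map: the organisation lives with
# the downside of one pole (L-), crusades toward the upside of the other (R+),
# discovers its downside (R-), then hankers after the first pole's upside (L+).
trajectory = ["L-", "R+", "R-", "L+"]  # ...and around again, tracing an infinity loop
```

Writing the map down in this four-quadrant form makes it harder to fall into the trap described at the end of this post: deciding on the basis of one pole’s upside and the other pole’s downside alone.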
Finally, polarity management is but one way to manage the dilemmas associated with enterprise IT or any other organizational decision. There are many others – and <advertisement> I highly recommend my book if you’re interested in finding out more about these </advertisement>. In the end, though, the point I wished to make in this post is less about any particular technique and more about the need to air and acknowledge differing perspectives on issues pertaining to enterprise IT or any other decision with organization-wide implications.
The dilemmas of enterprise IT are essentially consequences of mutually contradictory, yet equally valid perspectives. Is IT a cost of doing business or is it a strategic asset? The answer depends on the perspective one takes…and there is no objectively right or wrong answer. Given this, it is important to be aware of both the upside and the downside of each perspective (or pole) before one makes a decision. Unfortunately, decisions are most often made on the basis of the upside of one option and the downside of the other. As should be evident by now, a decision based on such a selective consideration of viewpoints invariably invites conflict and leads to undesirable outcomes.
Welcome to the second post in my conversations series. This time around I chat with my friend and long-time collaborator Paul Culmsee who, among many other things, is a skilled facilitator and a master of the craft of dialogue mapping (more on that below).
In an hour-long conversation recorded a couple of weeks ago, Paul and I talked about the art of sensemaking. (Editor’s note: the conversation has been lightly edited for clarity)
What is sensemaking?
KA: Hi Paul, in this conversation I wanted to focus on sensemaking. From our association over the years, I know that’s a specialty of yours. Incidentally, I checked out your LinkedIn profile and saw that you announce yourself as an IT veteran of many years and a sensemaker. So, to begin with, could you tell us what sensemaking is?
PC: [Laughs] First up, thanks for stalking me on LinkedIn. Well, the “IT veteran” part is easy – it’s what I’ve been doing ever since I left university in 1989. Sensemaking came a while later.
In a nutshell, sensemaking is about helping groups make sense of complex situations that might otherwise lead them into tense or adversarial conditions. A lot of projects exhibit such situations from time to time. Sensemaking seeks to help groups develop a shared understanding of these sorts of situations.
KA: OK, so what I’m hearing is that it is about helping people get clarity on an ambiguous situation or may be, even define what the problem is.
PC: Yeah absolutely…and we alluded to this in our Heretic’s book. It is staggering when one realizes how many individuals and teams in organisations spend a stack of money and time without being aligned on the problem they’re solving. Often this lack of alignment becomes evident only in the wash, long after anything can be done. In some ways it beggars belief that that could happen; that a project or initiative could go on for long without alignment, but it does happen quite often. Sensemaking seeks to eliminate that up front.
There are various tools, techniques and collaborative approaches that one can use to bring people together to air and reconcile different viewpoints. Of course, this assumes that people genuinely want to see things improve, and in my experience this is often the case. A lot of the time, therefore, sensemaking is simply about helping groups reach a shared understanding so that subsequent actions can be taken with full commitment from everybody concerned.
KA: All that sounds very reasonable, even obvious. Why do you think this has been neglected for so long? Why have people overlooked this?
PC: Mate, I’m glad we’re having beers as we talk about this [takes a swig].
Look I think it is often seen as an excuse to have a talk-fest, and I think that criticism is actually quite fair. I think organisations…or, rather, the people within them…tend to have a very strong drive to move to action. The idea of stopping and thinking is seen as not being a particularly productive use of time.
Actually, if you delve into it, sensemaking has been around for years and years. In fact, pretty much anyone who is a facilitator is a sensemaker as well in that he or she seeks to help people [overcome an issue that they’re facing as a group]. The problem is that a lot of the techniques used in sensemaking are rooted in theories or philosophies that aren’t seen as being particularly practical. To a certain extent, the theories themselves are to blame. For example, the first time I heard about soft systems theory, I had no idea what the person was talking about. (Editor’s note: a theory that underlies many facilitation techniques)
KA: Yes, that’s absolutely right. Systems theory itself has been around for a long time….since the 1950s I think. It’s also been resurrected in various guises ever since, but has always had this reputation (perhaps unfair) of being somewhat impractical. So, I’m curious as to how you actually get around that. How do you sell what you do? (Editor’s note: Systems theory is the precursor of soft systems theory)
PC: By example. It’s really as simple as that. Take the example of dialogue mapping, which is a practical facilitation approach involving the visual capture of rationale using a graphical notation. Even that – which is a practical tool – is much easier to show by example than to explain conceptually. If I were to try to explain what it is in words, I’d have to say something like “I sit in a room and get paid to draw maps. I map the conversations and facilitate at the same time.” People might then say, “What’s that? Is it like mind mapping?” Then I have no choice but to say, “Well, yeah…but there’s a lot more to it than that.”
So I’ve long since given up on explaining it to people conceptually; it’s much easier to just show them. Moreover, in a lot of the situations where I do engagements for clients, I discourage the sponsor from making a big deal about the technique. I’d rather just let the technique “sell itself”.
KA: Yeah that rings true. You know, I was trying to write a blog post once on dialogue mapping, and realized it would be much better to tell it through a story (Editor’s note: …and the result was this post).
OK, so you’ve told us a bit about dialogue mapping, and I know it is a mainstay of your practice. Could you tell us a bit about how you came to it and what it has done for you?
PC: Oh, it’s changed my career. In terms of what it’s done for me – well, where I am now is a direct result of my taking up that craft. And I call it a craft because it took a damn lot of practice. It is not something where you can read a book and go “Oh that makes sense,” and then expect to facilitate a group of twenty people or anything like that.
How I came to it was as follows: I had come off a large failed project and was asking myself what I could have done differently. In hindsight, the problem was pretty obvious: there were times when things were said by certain stakeholders and I should have gone, “Right, stop!” But I didn’t. Of course, such mistakes are part of a learning journey. I subsequently did some research on techniques that might have helped resolve such issues and came to dialogue mapping directly as a result of that research.
Then, through sheer luck I got to apply the technique in areas other than my discipline. As I said earlier, I’m an IT guy and have been in IT for a long time, but I was lucky to get an early opportunity to use dialogue mapping in an area that was very different from IT (urban planning, to be precise). I sucked at it completely in that first engagement, but did enough that the client got value out of it and asked me back.
That engagement was a sink or swim situation, and I managed to do just enough to stay afloat. I should also say that the group I worked with really wanted to succeed: even though they were deadlocked, the group as a whole had a genuine intent to address the problem. Fortunately we were able to make a small breakthrough in the first session. We ended up doing six more sessions and had a really good outcome for the organisation.
KA: That’s interesting…but also a little bit scary. A lot of people would find a situation like that quite daunting to facilitate. In particular, when you walk into a situation where you know a group has been grappling with a problem for a long time, you first need to make sense of it yourself. How do you do that?
PC: Yeah, well as you do more of it, you gain experience of different situations and domains – for example, not-for-profit organizations, executives of a business, the public sector or what have you. You then begin to notice that the patterns behind complex problems are actually quite similar across different areas. Although I can’t quite put my finger on what exactly I do, I would say that it is largely about “listening to the situation” and “asking the right questions”.
When Jeff Conklin teaches dialogue mapping, he talks about the seven different question types (Editor’s note: Jeff Conklin is the inventor of dialogue mapping. See this post for more on his question types). He really gets you to think about the questions you’re going to ask. It took me a while to realize just how important that is: the power of asking questions in the right way or framing them appropriately. Indeed, the real learning for me began when I realized this, and it happened long after I had mastered the notation.
The fact is: each situation is unique. I approach strategic planning, team development or business analysis in completely different ways. I can’t give you a generic answer on the approach, but certainly nowadays when I’m presented with a scenario, I find that there is not much that is unfamiliar. I’ve seen most of the territory now, perhaps.
KA: So it’s almost like you’ve got a “library” of patterns from which you can find a match to the situation you’re in?
PC: Yes that’s right…and I should also mention that the guys I worked with in my early days of using the craft were also sensitive to this, even though they did not practice dialogue mapping. One of my earliest gigs was to develop a procurement strategy for a major infrastructure project. We spent half a day – from 8:30 in the morning to 1:00 in the afternoon – just trying to figure out what the first question should be. It’s conversations like that in the early days, followed by trial and error in actual facilitation scenarios that aided my learning.
KA: That’s interesting, and I’d like to pick up on what you said earlier about the power of asking the right questions. Jeff Conklin has his seven question types which he elaborates at length in his book (and we also talk about them in the Heretic’s Guide). However, since then, I know that your thinking on this has advanced considerably. Could you tell us a bit more about this?
PC: Yeah, if we ever do a second edition of the Heretic’s Guide, I’ll definitely be covering this kind of stuff. But, let me try to explain some of the ideas here in brief.
To set the context, I’ll start with one of Jeff’s question types. An important question type is the deontic question, which is a question that a lot of maps start with. A deontic question asks “What ought to be done?” – for example, “What should we do about X?” The aim of such a question is to open up a conversation.
However, deontic questions can be poorly framed. To take a concrete example, say one were to ask, “What should we do about increasing X?” Such a question implicitly suggests a course of action – i.e. one that increases X. A well-framed deontic question doesn’t do that; it solicits information in a neutral, open way. (Editor’s Note: For example, a well-framed alternative to the foregoing question would be, “What should we do about X?”)
All that is well and good, and is something I teach in my dialogue mapping courses, as does Jeff in his. However, I once taught dialogue mapping to a bunch of business analysts, and of course told them about the importance of asking deontic questions. Some of the guys told me that they intended to use it at work the following week. Well, I saw them again a few weeks later and naturally asked, “So how did it go?” They said, “Hey, that deontic question just didn’t work!”
I kind of realized then – and in fact I had mentioned it to them (but maybe not stressed it enough) – that questions need to be framed to suit the situation. You can ask an open deontic question in a really bad way…or even lead at the wrong time with the wrong question.
The more I thought about it, the more I realized that there are patterns to [framing] questions. In other words, there are ways of asking questions that will lead to better outcomes. As an example, a deontic question might be, “What should our success indicators be on this project?” This is a perfectly valid open question (as per Conklin’s question types). However, in a workshop setting this is probably not a good question to ask because the conversation will go all over the place without reaching any consensus. Moreover, people who don’t have tolerance for ambiguity would be uncomfortable with a question like that.
A better option would be to reframe the question and ask something like, “If this initiative were highly successful and we looked back on it after, say, two years, how would things be different to now?” With that question, what you’re saying to people is: let’s not even worry about problem definition, context, criteria and all that stuff that comes with a deontic question; instead, you’re asking them to tell you about the difference between now and an aspirational future. This is much easier to answer. A lot of people will say things like – we’ll have more of X and less of Y and so on.
On a related note, if you want to understand the long-term implications of “more of X” or “less of Y”, it is not helpful to ask a question like, “What do you think will happen in the long-term?” People won’t intuitively understand that, so you won’t get a useful answer. Instead, it is better to ask a question like, “What behaviours do you think will change if we do all this sort of stuff?” Now, if you think about it, the immediate outcomes of projects are things like “increased awareness of something” or “better access to something”, but over time you’re looking for changes in behaviours because that’s when you know that the changes wrought by an initiative have really taken root.
Subtle reframings of this kind yield richer answers that are more meaningful to people. Moreover, when you solicit responses from a group in this manner, you’ll start to see common themes emerge. These are the sorts of subtleties I have come to understand and appreciate through my practice of sensemaking.
KA: That’s fascinating. So what you’re saying is that rather than asking a direct question it is better to ask an oblique one. Is that right?
PC: Yes, and I think that point is worth elaborating. You used the word “oblique” and I know you’re using it deliberately because we’ve talked about this in the past. Essentially, I think the “law of asking questions” is that the more direct the question, the less likely it is that you will get a useful answer.
Seriously, asking a question like “What should our vision be?” is a completely brainless way of getting to a vision. You’re more likely to get a useful answer from a question like “What would our organization look like three years from now, if we achieved all we are setting out to do?” The themes that come out of the answers to these kinds of questions help in answering the direct question.
I’ve learnt that the question that everyone wants answered is never the one you start with. If you start with the direct question, the conversation will meander into all kinds of weird places.
I came across the notion of obliquity in an article (…and I think it was one of the rare times I sent you something instead of the other way around). In the article, the author (John Kay) made the observation that organisations that chased KPIs like earnings per share (for example) generally did not do as well as organisations that had a more holistic vision (Editor’s note: I also recommend Kay’s book on obliquity). One example Kay gives is that of Microsoft, whose objective in the 90s was to put “a PC on every desk in the world.” Microsoft achieved the earnings per share all right, but pursued them via an oblique goal.
Organisations that chase earnings per share or other financial metrics tend to be like the folks who “seek happiness” directly instead of trying to find it by, say, immersing themselves in activities that they enjoy. I guess I observed that the principle of obliquity – that things are best achieved indirectly – also applies to the art of asking questions.
KA: That makes a lot of sense. Indeed, after we exchanged notes on the article and Kay’s book, I’ve noticed this idea of obliquity popping up in all different kinds of contexts. I’m not sure why this is, but I think it has something to do with the fact that we don’t really know how the future is going to unfold, and obliquity helps open our minds up to possibilities that we would overlook if we took a “straight line from A to B” kind of approach.
PC: Yep, and that brings up an interesting aspect of oblique questions as well. You know, some people – especially those trained in a standard business school curriculum – will be surprised if you ask them an oblique question because it seemingly makes no sense. They might say, “Well, why would you ask that? What we really want to answer is this.”
Well, I’ve found a way to deal with this, and I learnt this from working with a facilitator who is a professor at one of the business schools here (in Perth). This was in a strategic planning workshop that we co-facilitated. Before starting, she walked up to the whiteboard and sketched out a very simple strategic planning model – literally a diagram that said here’s our vision, and the vision leads to a mission, which leads to areas of focus which, in turn, lead to processes…a simple causal diagram with a few boxes connected by arrows.
She spent only a minute or so explaining this model; she didn’t do it in any detail. Then she pointed to a particular box and said, “We’re going to talk about this particular one now.” And I don’t know why this is, but when you present a little model like that (which is familiar to the audience) and say that you’re going to focus on a particular aspect of it, people seem to become more receptive to ambiguity, and you can then get as oblique as you like. Perhaps this is because the narrative is then aligned with a mental model that is familiar to them.
What I’ve learnt, in effect, is that you can’t talk about the wonders of complex systems theory to a bunch of rational project managers. Conversely, when I’m dealing with a group of facilitators (who love all that systems theory stuff) I would never draw a management model going from vision to mission to execution. But when dealing with the corporate world, I will often use a model like that. Not to educate them – they already know the model – but purely to reduce their anxiety. After that I can ask them the questions I really want to ask. It’s a subtle trick: you put things in a familiar frame and then, once you have done that, you can get as oblique as you like.
Tying this back to a question you asked earlier about how I prepare for a facilitation session: I usually try to figure out the audience first. If I’m dealing with the public sector I might set the stage by talking about wicked problems, whereas with corporate clients I might start with a Strategic Planning 101 sort of presentation. Either way, I find a frame that is familiar to them and then – almost like a sleight of hand – I switch to the questions I really want to ask. Does that make sense?
KA: Yeah, so to summarise: you give them a security blanket and then scare the hell out of them [laughs]
PC: That’s actually pretty well summarized [laughs]; I like where you’re going with that…but I’d put it slightly differently. It’s actually a bit like when you’re trying to get little kids to eat something they don’t want to eat – you go, “see, here’s the choo-choo train” or something like that, and then get them to have a spoonful while they’re focusing on that. In a way it’s like creating a distraction. But the aim is really to couch things in such a way as to get to a point where you can start to have a productive dialogue. And the dialogue itself is driven by powerful questions.
From obliquity to directness
KA: By powerful, I guess you mean oblique…
PC: Yeah, the oblique aspect of questions is a common thread that runs through much of what I do now. Mind you, I don’t stay oblique all the way through a session. I start obliquely because I want to unpack a problem. Eventually, though, as people start to get insights and themes begin to emerge, I become more direct; I start to ask things such as who, when etc…putting names and dates down on the map.
KA: So you get more direct in your questioning as the group starts to reach a common understanding of a problem.
PC: That’s exactly right. But there’s the other side to it (and by the way, you should have a conversation with my colleague Neil on this kind of stuff): you typically have a mixed audience, the “left-brainers” who are rational engineer types as well as the “hippies” (the so-called “right-brainers”) who want to stay out in conceptual-land. Both groups like to stay in their comfort zone: the engineers don’t like moving to conceptual-land because they see it as a waste of time; on the other hand conceptual people don’t like moving to action because the conceptual world feels safer to them. So I sort of trick the engineers into doing conceptual stuff while also pushing conceptual guys into answering more direct questions.
Other techniques and skills
KA: Interesting. Let’s talk a bit about techniques – I know dialogue mapping is a mainstay of much of what you do. What are some of the other techniques you use [to draw people out of their comfort zones]? You mentioned soft systems theory and a few others; you seem to have quite a tool-chest of techniques to draw upon.
PC: Yeah well, when I got into mapping, I also looked at other techniques. I was interested in what else you could do, so I looked at various gamestorming techniques, graphic facilitation and, of course, many methods based on the principles of soft systems and related theories.
I use techniques from both the right-brained and left-brained ends of the spectrum…and by the way, I apologize to any neuroscientists who might be reading/listening to this because I know they hate the term left/right brained. However, I do find it useful sometimes.
Anyway, a popular technique on the right-brained end of the spectrum is Open Space, which operates almost entirely in conceptual-land. It relies on the wisdom of the crowd; there is no preset agenda, just a theme. People sit in a circle, there’s a Tibetan bell…and on the surface it all seems quite hippy. However, I’ve actually done it on construction projects, where folks who have come straight off a building site – dressed in their safety gear, hard hats and all – have participated. And it does work, despite its touchy-feely, hippy image.
On the other hand, once you’ve conceptualized a project you need to get down to the hard work of getting stuff done. One of the first questions that comes up is, “How do we measure success?” This usually boils down to defining KPIs. Now, I would never dialogue map or open space a conversation on KPIs. You might get a few themes from dialogue mapping, but definitely not enough detail. Instead, one of the things I often do is go to an online KPI library (like http://kpilibrary.com, which has over 7000 KPIs) where you can find KPIs relating to any area you can think of, ranging from project management to customer service to quality or sustainability. I’ll print the relevant ones on cards and use a card-sorting technique: I put people in different groups and ask them to look at specific focus areas [that emerged from the conceptualization phase] and figure out which KPIs are relevant to each.
Why do I do that? Not because I think they will find the KPI. They probably won’t. It’s more because such a process avoids those inevitable epic arguments on what a KPI actually is.
A very effective technique is to spend half a day unpacking a problem via dialogue mapping and getting key themes to emerge. This “conceptualization phase” is done with the whole group. Then, when you want to drill down into detailed actions, it helps to use a divide-and-conquer approach. This is why I split people up into smaller groups and get them to go off and work on themes that emerged from the conceptualization phase. The aim is to get them to come up with concrete KPIs or even actions. If I’m feeling really evil, when there’s only 10 minutes left, I’ll tell them that they can present only their top four actions or KPIs. This forces them to prioritise things according to value. It’s a bit like a Delphi technique, really. Finally, the groups come back together and present their findings, which I then dialogue map. Once that is done, the larger group turns to the map and synthesises the outputs of what the smaller groups have done. This example is quite typical of the kind of stuff that I do.
Another example: I did some strategic planning work for local government – this was in the area of urban planning. Now, we did not want them to just copy someone else’s community development plan and “cookie cutter” it. So the very first thing we did was a dialogue mapping session geared towards answering a couple of questions: 1) if the community development plan for this organization was highly successful, how would things be different from how they are now? And, 2) what is unique about this particular area (shire)?
Then, in the second workshop we got some of the best community development plans from around Australia and put each one at a different table. We split people up into groups and got them to spend time at each table. Their job was to note down, on flipcharts, the pros and cons of each plan. The first iteration took about half an hour – presumably because this was the first time many of them were reading a development plan. Once the first iteration was done, people moved to the next table and so on, in round robin fashion.
By the end of that exercise, everyone was a world expert in reading community development plans. By the time people got to their third plan, they were flicking through the pages pretty fast, noting down the things they agreed or disagreed with. Then they came back together and synthesised the common themes – what was good, what was bad and so on.
Finally, we dialogue mapped again, and this time the question was, “Given what we have seen in all of the other plans, what are we going to do differently to mitigate the issues we have seen with some of them?” That pretty much nailed what they were going to do with their plan.
The need to improvise
KA: From what you’re saying it seems that almost every situation you walk into is different, and you almost have to design your approach as you go along. I suppose you would make a guess or some tentative plans based on your knowledge of the make-up of the audience, but wouldn’t you also have to adjust a lot on-the-fly?
PC: Oh yeah, all the time. And in fact, that is more a help than a hindrance. I’ll tell you why…by example again.
Often groups will tell you what they aspire to do. They might say, “as a general principle, we will do this” or something along those lines. For me that’s gold because I can use it on them [laughs].
In fact, I did this a couple of days ago in a workshop. Earlier in the session, the group had said, “It’s OK to make mistakes as long as we are honest with each other and upfront about it.” I totally used that on them towards the end of the workshop when I said, “Given that you guys are honest with each other, the question I’d like you to answer is – what keeps you up at night with this project?”
My colleague uses the phrase, “hang them by their own petard” when we do this kind of stuff [laughter]. I guess what we’re doing, though, is calling them out on what they espouse, and getting them to live it. If you can do that in a workshop, it is brilliant. So I’m always on the lookout for opportunities to improvise like that, particularly when it is a matter of (espoused) principle.
There are many times when I’ve been in workshops where the corporate values are hanging on a wall – in a boardroom, right – and I’m witnessing them get completely trashed in the conversation that’s happening. So I like to hold people to account for what’s stated…and these are sneaky, subtle ways in which one can do that.
KA: I’m sure you come across situations where a certain approach doesn’t go down very well – maybe people start to get defensive or even question the approach. Does that happen, and how do you deal with it?
PC: I’ve never had a situation where people question the approach I use. I guess that’s because we’re able to deal with such issues as they happen. For example, if I’m going a bit too “hippie” on a group and I see that they need more structure, I’ll change my approach to suit them and then gently nudge the group back to where I want it to be.
I also co-facilitate with other people…and sometimes they’re the ones who design the workshops, or I co-design with them. Often it is their tolerance for ambiguity that can be a roadblock. One facilitator I work with loves emergence. This is my crass generalization, but anyone who treats complex systems theory as the answer to everything will be happy to let a group get mired in ambiguity. The group might be struggling, but as far as the facilitator is concerned that’s OK, because he or she believes that ambiguity is necessary for an emergent outcome. What they forget is that not everyone has the same level of tolerance for ambiguity.
On the other hand, I also work with highly structured facilitators who follow a set path – “we’ll do this, then we’ll do that and so on”. This approach might not go so well with people who prefer more open-ended approaches.
These sorts of experiences have been handy. When designing my own workshops, I’m the ultimate bowerbird: I cherry-pick whatever I need and improvise on the fly. So I tend not to worry about the risk of people not finding the workshop of value. That probably comes from a level of confidence too: we’re reasonably confident that we know our craft and have enough experience to deal with unexpected situations.
Coda – capturing organisational knowledge
KA: Thanks for the insights into sensemaking. Now, if you don’t mind, I’d like to switch tack and talk about something that your organization – Seven Sigma – is currently involved in. I know you guys started out as a SharePoint outfit, and you’ve been doing some interesting things in SharePoint relating to knowledge management. Could you tell us a bit about that before we wrap up?
PC: Sure. To begin with, dialogue maps are a pretty good knowledge artifact. Anyone who has used the Compendium software will know this (Editor’s note: Compendium is a free software tool that can be used for dialogue mapping). I’ve used it extensively for the last five years and have an “encyclopedia of conversations” that I have mapped. When I go and look at them, they’re as vivid to me as on the day I mapped them. So I’ve always been fascinated by the power of dialogue maps as a visceral way to capture the wisdom of a group at a point in time.
Now, SharePoint is a collaborative platform that’s often used for intranets, project portals, knowledge management portals and so forth. It’s a fairly versatile platform. The Compendium software, on the other hand, is not a multi-user, collaborative platform. It’s more like Photoshop or Word in that you use it to create an artefact – a map – but if someone else wants to see the map then he or she has to install Compendium. And installing Compendium can be a bit of a pain in the butt, as it is a free, open-source product that doesn’t really fit in an enterprise environment. We’ve therefore always wanted the ability to import maps into SharePoint, and my colleague Chris [Tomich] had already started writing some code to do that around the time we first got into dialogue mapping.
However, my own Aha moment came later; and come to think of it, the fact that we’re having beers in this conversation is relevant to this story…
I was dialogue mapping a group of executives about 4-5 years ago; it was a team-building exercise built around a lessons learned workshop. The purpose of the workshop was to improve the collaborative and team maturity of this group. [As a part of this exercise] the group was reviewing some old projects, doing a sort of retrospective lessons learned. We got to this one project, and someone complained about an organisational policy that had caused an issue on the project. Now as it happened, there was this guy in the group who knew how this policy had come about (I knew this guy, by the way, and I also knew that he was about to retire). He said, “Oh yeah, well that happened about 30 years ago, and it was on so-and-so project.” He then proceeded to elaborate on it.
Well, I knew this guy was about to retire. I also knew that the organization does this “phased retirement” thing, where people who are about to retire write documentation about what they do, mentor their successors etc. before they leave. I remember thinking to myself, “there’s no way in hell that he would have written that down in his documentation.” I just knew it: he had to be in that particular conversation for him to have remembered this.
Then my next thought…and as I’m mapping, I’m having this thought … “man, someone just ought to give him a beer, sit him down, and ask him about these kinds of old projects. I could map that video…how hard could that be? If one can map live conversations then surely one can map videos.” In fact, videos should be simpler because you can pause them, which is something you can’t do with live conversations.
So that was literally the little spark. It started with me thinking about how great it would be if this guy could spend even half his time recording his reflections…and this could take different forms: maybe a grad asking him questions in a mentoring scenario, or another person he has worked with for years, the two of them reminiscing over various projects. The possibilities were endless. But the basic idea was simple: to try to capture those sorts of water cooler or pub conversations, or the ones you have at conferences. That’s where we get many of our insights – it’s the stories, the war stories, through which we learn. That kind of stuff never gets into the manuals or knowledge-base articles.
Indeed, stories are the key to those unwritten insights about, say, when it is OK to break the rules. That kind of stuff can never be captured in the processes, manuals or procedures. One of your pieces highlights this beautifully – it’s one of the parables you’ve written I think, where an experienced project manager suggested to the novice that he should be listening to the stories rather than focusing on the body of knowledge he was studying. And that is completely true.
So that was, to coin a pun, the “glimmer of an idea” – because the product is called Glyma. The idea was to capture expert knowledge by mapping it and storing the map in SharePoint. SharePoint has a great search engine and we already knew that dialogue maps are a great way to capture conversations in a way that makes it easy to understand and navigate rationale (or the logic of a conversation). If we could do dialogue maps live, then we sure as hell could map videos. Moreover SharePoint also offers the possibility of tagging, adding feeds etc. – the kinds of things that portals these days are good at. It occurred to us that no one had really done that before.
Sure, there are plenty of story-capture initiatives, where people record their reflections on video. But because the resulting videos tend to be quite long, they are usually edited down to a 15-minute, “elevator pitch” type presentation. But then all the good stuff is gone; indeed, you and I have had many of these brief conversations where you’re summarising something terrific you’ve read and I’ll go, “Yeah, well, that doesn’t sound so interesting to me.” The point is: you can’t compress insight into a convenient 10-minute video with nice music. So our idea was – well, don’t do that; take the video as it is and map it. Then, if you click on a node – say an idea node or a question node – Glyma will play the video from the point in time where the idea or question came up. You don’t have to sit through the entire thing. Moreover, when you do a search and get a series of results, you can click on a result and it plays that bit straight away.
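(Editor’s note: to make the mechanics concrete, here is a minimal sketch of the idea in TypeScript. The types and function names are hypothetical – invented purely for illustration – and do not represent Glyma’s actual data model or API; the point is simply that each map node can carry the offset of the moment in the recording where that point was raised, so clicks and search results can jump straight to it.)

```typescript
// Hypothetical types, for illustration only - not Glyma's actual data model.
type NodeKind = "question" | "idea" | "pro" | "con";

interface MapNode {
  id: string;
  kind: NodeKind;
  label: string;          // e.g. "Why did that policy come about?"
  videoOffsetSec: number; // where in the recording this point was raised
}

// Clicking a node seeks the video straight to the captured moment,
// so the viewer doesn't have to sit through the whole recording.
function playFromNode(video: HTMLVideoElement, node: MapNode): void {
  video.currentTime = node.videoOffsetSec;
  void video.play();
}

// A search over node labels returns nodes rather than whole videos;
// clicking a result can then play just the relevant bit via playFromNode.
function findNodes(nodes: MapNode[], query: string): MapNode[] {
  const q = query.toLowerCase();
  return nodes.filter((n) => n.label.toLowerCase().includes(q));
}
```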
So that was really the inspiration for Glyma…and it will see the light of day very soon.
Actually, we’ll be putting a beta site out early next week (Editor’s note: the site has since gone live; I urge you to check it out). By the way, Glyma has been four years in the making. One of the last things on our bucket list of things to do while running a consultancy was to put an innovative new product out and see if people like it. So that’s where we are going with that.
KA: That sounds very interesting. The timing should work out well because this conversation will be posted in a week or two as well. I wish you luck with Glyma; I’ve seen some early versions of it and it looks really good. I look forward to seeing how it does in the marketplace and what sort of reception it receives. I certainly hope it gets the reception it deserves because it is a tremendous idea.
PC: Well thank you; I appreciate your saying that…and we’ll see if you still say that once you’ve mapped this video because that might be your homework. [laughs]
KA: [laughs] Alright, great mate. Well, thanks for your time. I think that’s been a really interesting conversation. We’ll chat about Glyma further after it’s been out for a while.
PC: Yeah absolutely.
KA: Cheers mate.
PC: See ya.
In his thought-provoking book on antifragility, Nassim Taleb makes the point that the opposite of fragility is not robustness or resilience, rather it is the ability to thrive on or benefit from uncertainty. There is no word in the English language to describe such behavior, and that is what led him to coin the term antifragile.
Nature is an excellent example of an antifragile system: whenever subjected to a cataclysmic event (like this one that occurred ~66 million years ago), nature manages not only to recover, but does so in novel and arguably better ways. Unlike nature, however, most human-made systems tend to be fragile. An example that Taleb highlights is the global financial system, not just prior to the 2008 financial crisis but even now.
The broader lesson to be learnt from the financial crisis is that it is impossible to predict the future in any detail. Systems should therefore be designed to cope with (if not take advantage of) the irreducible uncertainty associated with this lack of predictability. Human-made systems that overlook this inescapable fact tend to be brittle by design.
The above is true not only of systems, but also of future-directed activities such as strategic planning. Overlooking the role of irreducible uncertainty in planning invariably locks an organization into an inflexible course of action. Unfortunately, this is not always appreciated by those who run organisations. As Taleb puts it:
Corporations are in love with the idea of the strategic plan. They need to pay to figure out where they are going. Yet there is no evidence that strategic planning works —we even seem to have evidence against it. A management scholar, William Starbuck, has published a few papers debunking the effectiveness of planning [see this paper, for example]—it makes the corporation option-blind, as it gets locked into a non-opportunistic course of action.
The trick, as Taleb hints in the above passage (and elaborates on in his book), is to plan in such a way as to take advantage of options that we are unaware of now, but might emerge in the future.
That, of course, is easier said than done.
In this post, I draw on Taleb’s book and my own experiences to discuss how one can formulate an IT strategy that thrives on uncertainty. Although my focus is primarily on IT, the points discussed have a wider applicability to strategic planning in general.
Towards an antifragile IT strategy
Before we get into antifragility, it is useful to take a brief look at how IT strategy is usually formulated.
Although some IT leaders will contest this point, a good majority of organisations tend to view IT as a cost rather than a strategic focus area. As a consequence, the objectives of an IT strategy are generally geared towards cost reduction and increased efficiency. The obvious ways in which to do this are through strict governance, standardization and/or outsourcing. Unfortunately, these actions tend to make organisations less flexible and hence more susceptible to uncertainty…and thus more fragile.
So, the key to an antifragile strategy is flexibility…but what exactly is flexibility?
The best definition of flexibility I have come across is the one proposed by Gregory Bateson who defined it as uncommitted potential for change (see this post for more on Bateson’s definition of flexibility). Only if one is flexible in this sense can one take advantage of unexpected events when they occur. The problem of formulating an antifragile IT (or any other!) strategy thus boils down to finding ways in which one can increase one’s flexibility. With that in mind, here are some suggestions.
Decentralise
This one is going to raise some eyebrows because the general trend in the world of corporate IT is to move in exactly the opposite direction – i.e. towards greater centralisation. The drive to centralisation manifests itself in many different ways: from top-down decision-making to the deployment of standardized processes and pan-organisational “enterprise” applications (single-instance ERP systems being an extreme example).
The justification offered by advocates of centralisation is that it increases efficiency and reduces cost by using a one-size-fits-all approach. In reality, however, such an approach almost always has undesirable features. For example:
- It overlooks the unique features of different structural units of the organization (subsidiaries in different countries, for example). Indeed, this is precisely where attempts at platform standardization fail – see my post entitled The ERP paradox for more on this point.
- It increases coupling between different structural units. Since systems and processes have a global reach, an unexpected glitch in any of these will affect all structural units within the organization.
Decentralisation basically amounts to giving structural units the autonomy they need in order to make decisions and choices that affect them. To be sure, this must be balanced with some oversight and direction from a central authority, but the overall aim should be a federal structure rather than a centralized one. A few examples of things that can be controlled centrally include network infrastructure, security…and possibly even things such as preferred vendors, especially from the perspective of getting volume discounts on pricing. There is no black and white here: choices need to be made judiciously and revisited if they don’t work.
Be agile
I use the term agile here in the sense of adaptability rather than as a reference to the slew of methodologies that go under the banner of Agile. Indeed, agility in the sense I use it here is more about a mindset than a methodology: if you are adapting to a shifting environment by changing your approach and priorities appropriately, then you are being agile in the sense of adaptability.
So, what does agility entail? Here are some things I see as being important:
- Responding to changes within and outside the organization…but only after determining that they need to be responded to. The qualifier is important: one must be able to distinguish between changes that merit a response and those that don’t. Moreover, any change should be instituted in a gradual or incremental fashion so that one can adjust one’s approach and take corrective actions if needed. Agility does not imply rapid, large-scale change.
- Sensing (or even creating!) new opportunities and taking advantage of them. The term intrapreneurial is often used to describe such a mindset. Many IT leaders are aware of the need to do this, but don’t always know how. In my experience, instituting a dedicated innovation group isn’t the best way to go about it. Instead, it may be better to focus on creating an environment in which people feel inspired to try new things. One of the ways to do this is to actively encourage staff to learn by experimenting on company time – say, one Friday afternoon per month – with no expectation of useful outcomes.
- Building flexibility into your external contracts so that you can respond to changes that weren’t foreseen when the contract was drawn up. Essentially this amounts to building a trust-based relationship with your vendors (see the last point in the present post for more on this) and factoring in transaction costs in your outsourcing deals.
An agile mindset is unlikely to thrive in an IT department that is bogged down by overly onerous rules and procedures. To be sure, rules and processes are necessary, but not at the expense of flexibility.
Diversify
One of the keys to being antifragile in financial investing is to spread one’s investments over a range of different products. By analogy, one of the best ways to develop an antifragile IT strategy is to diversify elements of your IT environment, especially those that are likely to be negatively affected by uncertainty.
Here are some examples:
- For coverage in times of trouble, ensure that your team consists of people with overlapping sets of skills. This should be reinforced by periodic cross-training of staff in all key technologies that are used within your organisation.
- Hire people with different thinking styles. Your teams should contain a mix of people with analytic and synthetic approaches to problem solving. Most uncertain situations require both types of approaches.
- Diversify your vendor base. Among other things this means do not…and I repeat, do not…tie yourself to a single vendor by signing a multi-year, multi-million dollar contract!
- Set up small, low-cost skunkworks projects to explore technologies and ideas that have the potential to provide your business an edge.
- Seek to understand diverse viewpoints. Any important decision should be made only after soliciting and understanding viewpoints that are different from yours. Such an understanding will lead to better decisions than those made by relying on gut instinct or advice from a single source.
…and I’m sure there are many other possibilities.
Creating an environment of trust
I kept this for last because it is possibly the hardest to put into practice. An antifragile IT strategy will only work if there is mutual trust between all parties involved in a business relationship – be they managers and employees, or IT folks and the businesses they serve. Although much has been written and spoken about trust, the fact is that it is conspicuous by its absence in the present-day corporate world. Indeed, use of the word in corporate circles tends to evoke cynical reactions from the rank and file; it is seen as a platitude rather than a word of significance.
Why is trust important?
Elinor Ostrom’s prize-winning work established that trust is one of the core relationships that promote cooperation (see this post for more on this point). In situations of uncertainty, those who work in a high-trust environment will generally be willing to step outside their regular roles and work with others to fix the problem. In contrast, those in a low-trust environment are likely to switch off or, worse, start apportioning blame. On another note, people are more likely to share their ideas in a high-trust environment than in one that is riven by mistrust and unhealthy competition. I’m pretty sure most readers will have experienced low-trust environments and will know firsthand that such workplaces are fragile: they simply fall apart under stress.
It should be noted that trust is also important in external relationships, such as those with vendors. Although purchasing and legal departments are quick to advise us about the importance of rock-solid contracts, in my experience it is far better to rely on trust. Indeed, it has been suggested that contracts can destroy trust!
Finally, just in case it is not clear: the onus for creating an environment of trust lies with management rather than the rank and file.
I offer the above as suggestions aimed at making your IT environment less susceptible – and even responsive – to unexpected external or internal events. Indeed, I believe that in times of uncertainty they are likely to work much better than some of the well-worn but discredited command-and-control approaches that remain inexplicably popular.
To sum up: IT strategy is usually focused on improving efficiency and reducing cost. The typical ways to address these aims are tighter governance and outsourcing, usually implemented in ways that reduce flexibility. As a result, most IT strategies make no provision to deal with, let alone benefit from, uncertainty. In this post I have outlined some of the key elements of an antifragile IT strategy that can correct this oversight.
When I reviewed this piece just prior to posting it, I was struck by the fact that the points I have mentioned have more to do with social or ethical matters than technology. This reminded me of Heinz von Foerster’s ethical imperative:
“Act always so as to increase the number of choices.”
And that, quite possibly, is the perfect one-line summary of an antifragile strategy.