Eight to Late

Six heresies for enterprise architecture


Introduction

In a recent article in Forbes, Jason Bloomberg asks if Enterprise Architecture (EA) is “completely broken”.  He reckons it is, and that EA frameworks, such as TOGAF and the Zachman Framework, are at least partly to blame.  Here’s what he has to say about frameworks:

EA generally centers on the use of a framework like The Open Group Architecture Framework (TOGAF), the Zachman Framework™, or one of a handful of others. Yet while the use of such frameworks can successfully lead to business value, [they] tend to become self-referential… that [Enterprise Architects end up] spending all their effort working with the framework instead of solving real problems.

From experience, I’d have to agree: many architects are so absorbed by frameworks that they overlook their prime imperative, which is to deliver tangible value rather than pretty diagrams.

In this post I present six (possibly heretical!) practices that underpin an evolutionary or emergent approach to enterprise architecture. I believe that these will go some way towards addressing the issues that Bloomberg and others bemoan.

The heresies

Here they are, in no particular order:

1. Use an incremental approach

As Bloomberg notes in the passage above, many enterprise architects spend a great deal of their time creating blueprints and plans based on frameworks.  The problem is,  this activity rarely leads to anything of genuine business value because blueprints or plans are necessarily:

  1. Incomplete – they are high-level abstractions that omit messy, ground-level details. Put another way, they are maps that should not be confused with the territory.
  2. Static – they are based on snapshots of an organization and its IT infrastructure at a point in time. Even roadmaps that are intended to describe how organisations’ processes and technologies will evolve in time are, in reality, extrapolations from information available at a point in time.

The straightforward way to address this is to shun big architectures upfront and embrace an iterative and incremental approach instead. The Agile movement has shown the way forward for software development.  There are many convincing real-life demonstrations of the use of agile approaches to enterprise-scale initiatives (Ralph Hughes’ approach to Agile data warehousing, for example).  Such examples suggest that the principles behind agile software development can be fruitfully adapted to the practice of enterprise architecture.

The key principle is to start as small as possible and scale up from there.  Such an approach would enable architects to learn along the way, giving them a better chance of getting their heads around those messy but important details that get overlooked when one considers the entire enterprise upfront. Another benefit is that it would give them adequate time to develop an awareness of the factors that are prone to change frequently and those that will remain relatively static. Most important, though, is that it would offer several opportunities to correct errors and fine-tune things at little extra cost.

2. Understand the unique features of your organisation

Enterprise architects tend to be people with a wide experience in technology. As a result they are often treated (and see themselves) as experts. The danger of this attitude is that it can lead them to believe that they have privileged insights into the ills of their organizations.   A familiarity with frameworks tends to reinforce this view because frameworks, with their focus on standardization, tend to emphasise the common features of organisations.  However, I believe that instead of focusing on similarities an enterprise architect would be better served by focusing on differences – the things that make his or her organisation unique. Only when one develops an understanding of differences can one start to build an architecture that actually supports the organization.

To be sure, there are structures and processes that are common to many organisations (for example, functions such as 1st level support or expense reimbursement processes) and these could potentially benefit from the implementation of standardized designs/practices. However, the identification of such commonalities should be an outcome of research rather than an upfront assumption. And this requires ongoing, open dialogue with key stakeholders in the business (more on that later).

3. Form your own opinions about what works and what doesn’t

The surfeit of standardized “best practice” approaches to EA tends to make enterprise architects intellectually lazy.  This is actually a symptom of a wider malaise in IT: I have noticed that professionals in the upper echelons of IT are happy to “outsource their thinking” by following someone else’s advice or procedures without much thought.  The problem with this attitude, penalty clauses notwithstanding, is that one cannot “outsource the blame” when things go wrong.

I therefore believe that one of the key attributes of a good architect is a critical attitude towards frameworks, “best practices” and even consulting advice (especially if it comes from a Big 4 consultancy or guru).  If you’re really as experienced as you claim to be, you need to develop your own opinions of what works and what doesn’t and be willing to subject those opinions to scrutiny by others.

4. Understand your organisation’s constraints

Humans tend to be over-optimistic when it comes to planning: witness the developer who blithely tells his boss that it will take a week to do a certain task…and then takes a month because of problems he had not anticipated. Similarly, many architects create designs that look great on paper but are not realistic because they have overlooked potential organizational constraints.  Examples of such constraints include:

  1. Top-down reporting structures in which all decisions have to be made or approved by top management.
  2. Rigid organizational hierarchies that discourage cross-functional communication and collaboration.

Although constraints can also be technical (such as the limitations of a particular technology), organizational constraints such as those above are considerably harder to address, primarily because they involve changing the behaviour of influential people within the organization.

Architects lack the positional authority to do anything about such constraints directly. However, they need to develop an understanding of the main constraints so that they can bring them to the attention of those who can do something about them.

5. Focus on problem finding rather than problem solving

In his book, Management in Small Doses, Russell Ackoff wrote that in the real world problems are seldom given; they must be taken. This is good advice for enterprise architects to live by.  Indeed, one of the shortcomings of frameworks is that they tend to present the architect with ready-made problems – specifically, any process or technology that does not comply with the framework is seen as a problem. Consequently, many architects spend considerable effort in fixing such “deviations”.  However, non-conforming processes or technologies are seldom the most pressing problems that the organization faces.  Instead, an enterprise architect would be better served by finding problems that actually need fixing.

6. Understand the social implications of enterprise architecture

Enterprise architecture is seen as a primarily technical undertaking. As a result architects often overlook the social implications of their designs. Here are a couple of common ways in which this happens:

  1. Enterprise architecture invariably involves trade-offs. When evaluating these, architects typically focus on economic variables such as cost, productivity, efficiency, throughput etc., ignoring human factors such as ease of use and user preferences.
  2. Any design choice will (usually) benefit certain stakeholders while disadvantaging others.

Architects need to remember that their plans and designs are going to change the way people work. Nobody likes change without consultation, so it is critical to seek as wide a spectrum of opinions as possible before making important design decisions. One can never satisfy everyone, but one has a better chance of making sensible compromises if one knows where all the stakeholders stand.

In closing

To summarise: enterprise architects need to take an emergent approach to design. Such an approach proceeds in an incremental fashion, pays due attention to the unique features and constraints of an organization, focuses on real rather than imagined problems…and, above all, acknowledges that the success or failure of enterprise architecture rests neither on frameworks nor technology, but on people.  My (admittedly limited) experience suggests that such an approach would be more fruitful than the framework-driven approaches that have come to dominate the profession.

Written by K

October 8, 2014 at 9:04 pm

The cloud and the grass redux…on video


I’ve been playing with Videoscribe, a nice tool for  creating whiteboard videos. As an exercise I set myself the task of “video-fying” The Cloud and the Grass, a business fable I wrote a while ago:

 

 

I’d greatly appreciate your feedback. Please be gentle though, this is a first attempt :-)

PS: I’ve just realised this is the shortest post I’ve ever written.

Written by K

September 24, 2014 at 9:09 pm

Making sense of organizational change – a conversation with Neil Preston


In this instalment of my sensemakers series, I chat with Dr. Neil Preston, an Organisational Psychologist  based in Perth, about the very topical issue of organizational change. In a wide-ranging conversation, Neil draws interesting connections between myths that are deeply embedded in Western thought and the way we think about and implement change…and also how we could do it so much better.

KA: Hi Neil, thanks for being a guest on my ongoing series of interviews with sensemakers. You and I have corresponded for at least a year now via email, so it’s a real pleasure to finally meet you, albeit virtually. I’d like to kick things off by asking you to say a bit about yourself and your work.

NP: Well, I’m Dr. Neil Preston. I’m an organizational psychologist…what that means is that I’m specially registered in the area of organizational psychology, much like a clinical psychologist. My background professionally is that I originally worked in mental health, as a senior research psychologist. I’ve published 30 to 40 peer-reviewed papers in psychiatry, mental health and psychometrics,  so I know my way around empirical psychology.  My real love, however, has always been in organizational and industrial psychology, so in 2006 I decided to leave the Health Department of Western Australia and move into full time consulting.

Consulting work has led me mainly into infrastructure projects – these are very large, complex projects where organisations from both the private and public sector have to get together and create alliances in order to get the work done. My job on these projects – as I often put it to people – is to make the Addams Family look like the Brady Bunch [laughter]. The idea is to get different value systems and organizational cultures to align, with the aim of getting to a shared understanding of project goals and a shared commitment to achieving them.

My original approach was very diagnostic – which is the way psychologists are taught their trade – but as problems have become more complex, I’ve had to resort to dialogical (rather than diagnostic) approaches. As you well know, dialogue is more commensurate with complexity than diagnosis, so dialogical approaches are more appropriate for so-called wicked problems. This approach then led me to complex systems theory which in turn led to an area of work that Paul Culmsee, I and yourself are looking into: emergent design practices. (Editor’s note: This refers to a method of problem solving in which solutions are not imposed up front but emerge from dialogue between various stakeholders.)

KA: OK, so could you tell us a bit about the kinds of problems you get called in to tackle?

NP: Very broadly speaking, I’m generally called in when organisations have goals that are incommensurate with each other. For example: a billion dollar road that has to be on time and on budget…but, by the way, the alignment of the road also takes out a nesting site of a Carnaby White Tailed Cockatoo which triggers the environmental biodiversity protection act which in turn triggers issues with local councils and so on.

Complexity in projects often arises from situations like these,  where the issue is not just about delivering on time and on budget, but also creating a sustainable habitat and ensuring alignment with local governments etc.

KA:  So very broadly, I guess one could say that your work deals with the problems associated with change. The reason I put it in this way is that change is something that most people who work in organisations would have had to deal with – either as executives who initiate the change, managers who are charged with implementing it or employees who are on the receiving end of it.  The one thing I’ve noticed through experience – initially as a consultant and then working in big organisations – is that change is formulated and implemented in a very prescriptive way.  However, the end results are often less than satisfactory because there are many unintended consequences (loss of morale, drop in productivity etc.) – much like the unintended consequences of large infrastructure projects.  I’ve long wondered why this is so: why, after decades of research and experience, do we still get it so wrong?

NP:  Let me give you an answer from a psychologist’s perspective. There are a couple of sub-disciplines of psychology called depth and archetypal psychology that look at myth.  The kind of change management programs that we enact are driven by a (predominantly) Western myth of heroic intervention.

James Hillman, an archetypal psychologist, once said that a myth is what is real. This is somewhat contrary to the usual sense in which the word is used because we usually think of a myth as being something that is not real. However, Hillman is right because a myth is really an archetype – an overarching way of seeing the world in a way that we believe to be true. The myth of the hero – the good guy overcoming all adversity to slay the bad guy – is essentially an interventionist one. It is based on the Graeco-Roman notion of the exercise of individual will. Does that make sense so far?

KA: Yeah absolutely. Please go on.

NP: OK, so this myth is dominant in the Western imagination. For example, any movie that a kid might go to see like, say, Star Wars is really about the exercise of the individual will. In much the same way, the paradigm in which your typical change management program operates is very much (individual) action and intervention oriented. Even going back to Homeric times – the Iliad and Odyssey are essentially stories about individuals exercising authority, power…and excellence is another word that crops up often too. The objective of all this of course is to effect dramatic, full-frontal change.

However, there is a problem with this myth, and it is that it assumes that things are not complex. It assumes that simple linear, cause-effect explanations hold – that if you do A then B will happen (if you restructure you will save costs, for example). Such models are convenient because they seem rational on the surface, perhaps because they are easy to understand. However, they overlook the little details that often trip things up. As a result, such change often has unforeseen consequences.

Unfortunately, much of the stuff that comes out of the Big 4 consultancies is based on this myth.  The thing to note is that they do it not because it works but because it is in tune with the dominant myth of the Western business world.

KA: What you are saying definitely strikes a chord. What’s strange to me, however, is that there have been people challenging this for quite a while now. You mentioned the predominantly linear approach – A causes B sort of thinking – that change management practitioners tend to adopt. Now, as you well know, systems theorists and cyberneticists have proposed alternate approaches that are more cognizant of the multifaceted nature of change, and they did so over fifty years ago! What happened to all that? When I read some of the papers, I see that they really speak to the problems we face now, but they seem to have been all but forgotten (Editor’s note: see this post that draws on work by the prominent cyberneticist Heinz von Foerster, for example). One can’t help but wonder why that is so….

NP: Well that’s because myths are incredibly sticky. We are talking about  an ancient myth of the exercise of the individual human will.  And, by the way, it’s a very Western thing: I remember once hearing on the radio that the Western notion of the “squeaky wheel getting the grease” has an Eastern counterpart that goes something like, “the loudest goose is first to lose his head.”  The point is, the two cultures have a very different way of looking at the world.  That myth – the hero myth – is very much brought into the way we tell stories about organisations.

Now, why does that matter? Well, JR Hackman, an organizational psychologist, said it quite brilliantly. He called our fixation on the hero myth (in the context of change) the leadership attribution error – he argues that we tend to over-attribute the success of a change process to the salient things that we can see – which is (usually) the leader. As a result, we tend to overlook the hidden factors which give rise to the actual performance of the organization.  These factors usually relate to the latent conditions present in the organization rather than specific causes like a leader’s actions.

So there are two types of change: planned change and emergent change.  Planned change is the way organisations usually think about change. It is a causal view in which certain actions give rise to certain outcomes. But here is the problem: the causal approach focuses primarily on salient features, ignoring all the other things that might be going on.

Now, cybernetics and systems theory do a better job of taking into account features that are hidden. However, as you mentioned, they have not had much uptake.  I think the reason for this is that myths are incredibly sticky…that is the best answer I can give.

KA: Hmm that’s interesting…I’d never thought of it that way – the stickiness of myths as blinding us to other viewpoints.   Is there something in the nature of human thought or human minds that make us latch on to over-simplified explanations?

NP: Well, there’s this notion of cognitive bias – persistent biases in human perception or judgement (Editor’s Note: also see this post on the role of cognitive bias in project failure). The leadership attribution error is precisely such a bias. I should point out that these biases aren’t necessarily a problem; they just happen to be the way humans think. And there are good evolutionary reasons for the existence of biases: we can’t process every little bit of information that comes to us through our senses, and these biases offer a means to filter out what is unimportant. Unfortunately, sometimes they cause us to overlook what is important. They are heuristics and, like all heuristics, they don’t always work.

So in the case of leadership attribution bias – yes leadership does have an effect, but it is not as much as what people think. In fact, work done by Wageman (who worked with Hackman) shows that what is more important for team performance are the conditions in which the teams work rather than the qualities or abilities of the leader.

KA: From experience I would have to say that rings true: conditions trump causes any day as far as team performance is concerned.

NP: Yeah and there’s a good reason for it; and it is so simple that we often overlook it. Take the example of sending a rocket to the moon. If you set up the right conditions for the rocket – the right amount of fuel, the right load and so forth, then everything that is necessary for the performance of the rocket is already set up. The person who actually steers the rocket is not as critical to the performance as the conditions are. And the conditions are already present when the rocket is in flight.

Similarly, in the case of organizational change, we should not be looking for causes – be it leadership or planned actions or whatever – but for the conditions that might give rise to emergent change.

KA: Yeah, but conditions are causes too, aren’t they?

NP: Yes they are, but the point is that they aren’t salient ones – that is, they aren’t immediately obvious. Moreover, and this is the important point: you do not know the exact outcomes of those causes except that they will in general be positive if the conditions are right and negative if they aren’t.

KA: That makes sense. Now I’d like to ask you about a related matter. When dealing with change or anything else, organisations invariably seem to operate at the limits of their capacity.  Leaders always talk about “pushing ourselves” or “pushing the envelope” and so on.  On the other hand, there’s also a great deal of talk about flexibility and the capacity for change, but we never seem to build this into our organisations. Is there a way one can do this?

NP: Yes, you can actually build in resilience. Organisations generally like to keep their systems and processes tightly coupled – that is, highly dependent on each other. This tends to make them fragile or prone to breakdown. So, one of the things organisations can do to build resilience is to keep systems and processes loosely coupled. (Editor’s note: for example, devolve decision-making authority to the lowest possible level in the organization. This increases flexibility and responsiveness while having the added benefit of reducing management overhead).

Conditions also play a role here. One of the things that organisations like to talk about is innovation. The point is you can’t put in place processes for innovation but you can create conditions that might foster it.  You can’t ask people whether they “did their 15 minutes of innovation today” but you can give them the discretionary freedom to do things that have nothing to do with their work…and they just might do something that goes above and beyond their regular jobs. But of course what underpins all this is trust. Without trust you simply cannot build in flexibility or resilience.

KA: This really strikes a chord and let me tell you why.  I read Taleb’s book a while ago. As you probably know, the book is about antifragility, which he defines as the ability to benefit from uncertainty rather than just being resilient to it. After I read the book I wrote a post on what an antifragile IT strategy might look like…and in an uncanny resonance with what you just said, I made the claim that trust would be the single important element of the strategy [laughter].

NP: Yeah, and trust is not something you receive as much as you give. So as a psychologist I know why it is so damaging to people. You know, “Et tu Brutus” – Caesar’s famous line – it was the betrayal of trust that was so damaging. Once trust is gone there’s nothing left.

KA: Indeed, I sometimes feel that the key job of a manager is to develop trust-based relationships with his or her peers and subordinates. However, what I see in the workplace is often (though definitely not always) the opposite: people simply do not trust their managers because managers are quick to pass the blame down (or  even across) the hierarchy rather than absorbing it…which arguably, and ethically, is their job. They should be taking the heat so that people can get on with actual work. Unfortunately managers who do this are not as common as they should be.

NP: We’re getting into a complex area here, and it is one that I deal with at length in my masterclass on collaborative maturity and leadership. This is the old scapegoating mechanism at work,  and it is related to the leadership attribution error and the hero myth. If the attribution is back to the individual, then the blame must also be attributable to an individual. In fact, I have this slide in one of my presentations that goes, “a scapegoat is almost as useful as the solution to a problem.” [laughter]

Now, there are two questions here. “The scapegoat” is the answer to the question “Who is responsible?” However, it is more important to look at conditions rather than causes, so the real question is, “How did this situation come about?” When you look at “Who” questions, you are immediately going into questions of character. It elicits responses like “Yeah, it’s Kailash’s fault because he is that kind of a guy…he is an INTP or whatever.” What’s happening here is that the problem is explained away because it is attributed to Kailash’s character. You see what is going on…and why it is so dangerous?

KA: Yeah, that’s really interesting.

NP: And you see, then they’ll say something like, “…so let’s take Kailash out and put Neil in”…but the point is that if the conditions remain the same, Neil will fall down the same hole.

KA: It’s interesting the way you tie both things back to the individual – the individual as hero and the individual as scapegoat.

NP:  Yes, it’s two sides of the same coin. Followership acquiesces to leadership: Kailash will follow Neil, say, to the Promised Land. If we get there, Neil gets the credit but if we don’t, he gets the blame.

KA: Very interesting, but this brings up another question. Managers and leaders might turn around and say, “It’s all very well to criticize the way we operate, but the fact is that it is impossible to involve all stakeholders in determining, say, a strategy. So in a sense, we are forced to take on the role of “heroes,” as you put it.”

So my question is: are there ways in which organisations can address the difficulties associated with collective decision-making?

NP: Of course, it is often impossible to include all stakeholders in a decision-making process, particularly around matters such as organisational strategy. What you have to do first is figure out who needs to be involved so that all interests are fairly represented. Second, I’m attracted to the whole idea of divergent (open-ended) and convergent (decisive) thinking. For example, if a problem is wicked or complex, there is no point attempting to use expert knowledge or analysis exclusively (Editor’s note: because no single expert holds the answers and there isn’t enough information for a sensible, unbiased analysis). Instead, one has to use collective intelligence or the wisdom of the crowd by seeking opinions from all groups of stakeholders who have a stake in the problem. This is divergent thinking.

However, there comes a time when one has to “make an incision in reality” – i.e. stop consultation and make the best possible decision based on data and ethics; one has to use both IQ and EQ.  This is the convergent side of the coin.

Another problem is that one often has the data one needs to make the right decision, but the decision does not get made for reasons of ideology. Then it becomes a question of power rather than collective intelligence: a solution is imposed rather than allowed to emerge.

KA: Well that happens often enough – this “short-circuiting” of the decision-making process by those in positions of power.

NP:  Yes, and it is why I think deliberative decision-making – which comes from the Western notion of deliberative democracy, i.e. decision-making based on dialogue and consultation – is the best way forward, but it can be a challenge to implement. Democracy is slow, but it is generally more accurate…

KA: Yes, that’s true, but it can also meander.

NP: Sure, everything is bound by certain limitations (like time) and that’s why you have to know when to intervene. One of the important things for a leader to have in this connection is negative capability – which is not “negative” in the usual sense of the word, but rather the ability to be comfortable with ambiguity and to intervene in ambiguous situations in a way that gets some kind of useful outcome.

Of course, acting in such situations also means that one has to have good feedback mechanisms in place; one must know how things are actually working on the ground so that one can take corrective actions if needed.  But, in the end, the success of this way of working depends critically on having the right conditions in place.  If you don’t set up the right conditions, any intervention can have catastrophic consequences.

If I may talk politically for a minute – the current situation in the Middle East is a classic example of a planned intervention:  direct, frontal, dramatic, causal, linear and supposedly rational.  However, if the right conditions are not in place, such interventions can have unforeseen consequences that completely overshadow the alleged benefits. And that is exactly what we have seen.

In general I would say that emergent change is more likely to succeed than large-scale, direct, planned change. The example one hears all the time is that of continuous improvement – where small changes are put in place and then adjusted based on feedback on how they are working.

KA: This is a matter of some frustration for me: in general people will agree that collaboration and collective decision-making are good, but when the time comes, they revert to their old, top-down ways of working.

NP: Yes, well when I go into a consulting engagement on collaborative maturity, one of the first things I ask people is whether they want to use the collaborative process to inform people or to influence them.  Often I find that they only want to use it to inform people. There is a big difference between the two: influencing is emergent, informing isn’t.

KA: This begs a question: say you walk into an organization where people say that they want to use collaborative processes to influence rather than inform, but you see that the culture is all wrong and it isn’t going to work. Do you actually tell them, “hey, this is not going to work in your organization?”

NP: Well if people don’t feel safe to speak their truth then it isn’t going to work. That’s why I’m so interested in Hackman’s work on conditions over causes. Coming to your question I don’t necessarily tell people that it’s not going to work because I believe it is more productive to invite them to explore the implications of doing things in a certain way. That way, they get to see for themselves how some of the things they are doing might actually be improved. One doesn’t preach but one hands things back to them.

In psychology there are these terms, transference and countertransference. In this context transference would be where a consultant thinks, “I’m a consultant so I’m going to assume a consultant persona  by acting and behaving like I have all the answers”, and countertransference would be where the client reinforces this by saying something like, “you are the expert and you have all the answers.”  Handing back stops this transference-countertransference cycle. So what we do is to get people to explore the consequences of their actions and thus see things that might have been hidden from their view. It is not to say “I told you so,” but rather “what are the implications of going down this path.” The idea is to appeal to the ethical or good side in human beings…and I believe that human beings are fundamentally good rather than not.

KA: I like your use of the word “ethical” here. I think that is really important and is what is often missing. One hears a lot about ethics in business these days, but it is most often taught and talked about in a very superficial way. The reality, however, is that the resolution of most wicked problems involves ethical considerations rather than logic and rationality…and this is something that many people do not understand. It isn’t about doing things right, rather it is about doing the right things.

NP: Yes, and this is related to what I call “meaning over motivation” – the idea being that instead of attempting to motivate people to do something, try providing them with meaning. When you do this you will often find that change comes for free.  And it is worth noting that meaning has both an emotional and rational component – or, put a little bit differently, an ethical and logical one. In one of his books, Daniel Pink makes the point that uncoupling ethics from profit can have catastrophic consequences…and we have good examples of that in recent history.

The broad lesson here is that if the conditions aren’t right then it is inevitable that unethical behavior will dominate.

KA: Yeah well human nature will ensure that won’t it?

NP: [laughs] Yeah, and you don’t need a psychologist to tell you that.

KA: [laughs] Indeed…and I think that would be a good note on which to bring this conversation to a close. Neil, thanks so much for your time and insights. It’s been a pleasure to chat with you  and I look forward to catching up with you again…hopefully in person, in the not too distant future.

NP: Yeah, Singapore and Perth are not that far apart…

Written by K

September 9, 2014 at 9:52 pm

Ironies of enterprise information technology


Introduction

On one of my random walks through Google Scholar, I stumbled on an interesting paper entitled, Ironies of Automation.  The main message of the paper is nicely summarized in its first few lines:

The classic aim of automation is to replace human manual control, planning and problem solving by automatic devices and computers.  However… even highly automated systems, such as electric power networks, need human beings for supervision, adjustment, maintenance, expansion and improvement. Therefore one can draw the paradoxical conclusion that automated systems still are man-machine systems, for which both technical and human factors are important. This paper suggests that the increased interest in human factors among engineers reflects the irony that the more advanced a control system is, so the more crucial may be the contribution of the human operator.

These lines were written over thirty years ago, but are ever more apt today – such paradoxes are rife, not only in automation, but in any field in which technology plays an important part. To illustrate my point, I highlight a couple of ironies drawn from a domain that is likely to be familiar to many readers of this blog: the world of enterprise IT. I also present a brief discussion of how these ironies of enterprise IT can be avoided.

Ironies of enterprise IT

In the last few decades information technology has found its way into diverse organisational functions. This trend has been accompanied by an explosive growth in new technologies. As a result of this, corporate IT infrastructures have become  ever more complex and the costs of maintaining them have burgeoned.  Quite naturally, the focus has thus turned to taming both complexity and cost. The favoured approaches to tackling this problem are standardisation and/or outsourcing. However, as I discuss below, both   often lead to ironic outcomes.

An irony of standardisation

Enterprise IT environments tend to evolve rapidly, reflecting the many demands made on them by the organisational functions they support. This is good because it means that IT is doing what it should be doing: supporting the work of the parent organisation.  On the other hand, this can result in unwieldy environments that are difficult (not to mention expensive) to maintain. One way to address this is to impose standards relating to processes (such as ITIL) and infrastructure (such as SAP or any enterprise application).

The question is, how well does such standardisation work in practice?

In his book From Control to Drift, Claudio Ciborra pointed out that IT infrastructures in organisations tend to drift – i.e. they escape processes, plans and standards, and take on a life of their own. The reason they drift is that they are subject to unpredictable forces within and outside the hosting organisation. The imposition of standards may slow the drift but cannot arrest it entirely. Infrastructures are therefore best seen as ever-evolving constructs consisting of systems, people and processes that interact with each other in often unforeseen ways.  As he put it:

Corporate information infrastructures are puzzles, or better collages, and so are the design and implementation processes that lead to their construction and operation. They are embedded in larger, contextual puzzles and collages. Interdependence, intricacy, and interweaving of people, systems, and processes are the culture bed of infrastructure. Patching, alignment of heterogeneous actors and making do are the most frequent approaches…irrespective of whether management [is] planning or strategy oriented, or inclined to react to contingencies.

The essential message here is that standards and processes overlook the fact that enterprises are complex social systems that are subject to internal and external influences which cannot always be foreseen.   Dealing with these, more often than not, entails the implementation of hacks and workarounds that violate the imposed standards and thus nullify the benefits of standardisation.

In summary, “standardised” IT environments often end up with a plethora of non-standard hacks and workarounds that are necessary, but are generally messy and expensive to maintain.

An irony of outsourcing

One of the main reasons for outsourcing IT is to reduce costs. Yes, I am aware that many decision-makers claim that their primary reason is to reduce complexity rather than cost, but the choices they make often belie their claims.  The irony is that in their eagerness to control costs, they often end up increasing them because they overlook hidden factors.  I explain this in brief below, drawing on my post on the transaction costs of outsourcing.

The basic idea is simple – it is that the upfront fee quoted by the vendor is but a fraction of the total cost that will be incurred by the customer.  Some of the costs that are generally not included in the upfront fee are:

  1. Search /selection costs: these are the costs associated with searching for and shortlisting vendors.
  2. Bargaining costs: these are costs associated with negotiations for a mutually acceptable contract.
  3. Costs of coordinating work: these are costs associated with coordinating external and internal work. This is particularly important in the case of software-as-a-service because the effort required to interface cloud applications with in-house systems is often underestimated.
  4. Costs of enforcement and change: these are costs associated with enforcing the terms of the contract and those associated with change.

The point to note is that these costs are rarely, if ever, mentioned by the vendor, but they almost always show up in one form or another. It is therefore important for the customer to try and get a handle on these before entering into any commercial agreements.  The problem is, some of these costs (particularly 3 and 4 above) are hard if not impossible to figure out upfront. For example, if the relationship turns sour the only solution might be to switch vendors. The cost associated with this is often significant and is borne entirely by the customer.  A lack of awareness of such costs associated with outsourcing will invariably result in ironical outcomes.
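To make this concrete, here is a minimal back-of-the-envelope sketch in Python (my own illustration, not from the original discussion). All the figures are invented assumptions; the point is simply that once the transaction costs listed above are tallied, the real cost of outsourcing can sit well above the vendor’s quote.

```python
# A rough, hypothetical comparison of quoted vs. total outsourcing cost.
# All figures are illustrative assumptions, not data from any real engagement.

quoted_annual_fee = 500_000  # the vendor's upfront quote (assumed)

hidden_costs = {
    "search_and_selection": 40_000,    # market scans, RFPs, shortlisting
    "bargaining": 30_000,              # legal and negotiation effort
    "coordination": 120_000,           # interfacing external and internal work
    "enforcement_and_change": 80_000,  # contract policing, variations, possible switching
}

total_cost = quoted_annual_fee + sum(hidden_costs.values())

print(f"Quoted fee:              {quoted_annual_fee:>10,}")
for name, cost in hidden_costs.items():
    print(f"{name:<25}{cost:>10,}")
print(f"Total cost:              {total_cost:>10,}  "
      f"({(total_cost / quoted_annual_fee - 1) * 100:.0f}% above the quote)")
```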

In summary: attempts to control costs by outsourcing IT can have the contrary effect of increasing them.

Avoiding ironical outcomes

So how does one avoid ironical outcomes?

I have only one piece of advice to offer here: when planning IT architectures or outsourcing initiatives, use an incremental or emergent approach that avoids big designs or commitments upfront. Using an emergent approach not only limits risk, it also provides opportunities for learning.  Most important, it enables one to verify that the envisaged benefits are real rather than the wishful thinking of architects or managers.

Below I outline what such an approach might entail for the two ironies discussed earlier:

  1. For infrastructures/systems: avoid grandiose system designs that attempt to span the “enterprise” – remember that one size will not fit all of your users. Consequently, enterprise architectures and governance systems should provide guidelines rather than detailed prescriptions.  As Anders Jensen-Waud puts it in this post: they should foster resilience and adaptability rather than conformance.
  2. For outsourcing:  start small, possibly with a small project or system. This will help you get a sense for how outsourcing would work in your environment and help you figure out whether the vendor you have selected is really right for you.  Remember, no two environments are identical so others’ lessons learned may be considerably less useful than you think. Finally, if you’re going to the cloud, be sure to factor in costs and technical challenges associated with interfacing external apps with in-house ones.

Yes, there’s nothing particularly profound here; it is just common sense…but you know what they say about the commonality of common sense.

Conclusion

In this post I have highlighted some ironies of enterprise information systems and have briefly outlined an emergent approach to avoiding them. I believe but cannot prove that ironical outcomes are almost guaranteed if one takes a monolithic, enterprise-style approach or a let’s-outsource-it-all attitude to enterprise information technology. Such a view overlooks the messy little details and differences that trip up big designs and grandiose plans. In the end, the only way to avoid ironical outcomes is to start small, learn from experience and incorporate that learning in an incremental manner in whatever you’re building or doing. Yes, you might end up with something you did not envisage at the start, but you will have learnt much along the way. More important, perhaps, is that you will be able to rest assured that it works.

Written by K

August 26, 2014 at 8:40 pm

Heraclitus and Parmenides – a metalogue about organizational change


“Organizations are Heraclitian, but Parmenides is invariably in charge.” – Stafford Beer (paraphrased)

Heraclitus: Hello Parmenides, it’s been a while!  What have you been up to since we last met?

Parmenides: Heraclitus, it is good to see you my old friend. You’re not going to believe it, but I’ve been doing some consulting work on managing change in organizations.

Heraclitus:  [laughs] You’re right, that is beyond belief, particularly in view of your philosophical position on change. So, have you recanted? Have you now come around to the truth that everything changes and nothing stands still?

Parmenides: Ah, yes I am familiar with your views on change my friend, but I hate to disappoint you.  My position remains the same as before:  I still believe that the world is essentially unchanging. The key word here is “essentially” – by which I mean that the changes we see around us are superficial and that the essential properties of the world do not change. Indeed, as paradoxical as it may sound, understanding this unchanging essence enables us to manage superficial changes such as those that happen in organizations.

Heraclitus:  I’m not sure I understand what you mean by unchanging essence and superficial change...

Parmenides:  OK, let me try explaining this using an example. Let us consider the case of a physical law and a real world situation to which it applies. A concrete instance of this would be Newton’s Law of Gravitation and the motion of a spacecraft.  The former represents the unchanging essence while the latter represents one of its manifestations. The point is this:  the real world (as represented by a moving spacecraft) appears to be ever changing, but the underlying unity of the world (as represented by Newton’s law) does not change. If one understands the underlying unchanging laws then one has the power to predict or control the superficial changes.

Heraclitus:  Hmm….I don’t see how it relates to organizations.  Can you give me a more down to earth illustration from your work? For example: what is the “unchanging essence” in organizational change?

Parmenides:  That’s easy: the unchanging essence is the concept of an organization and the principles by which organizations evolve.  Consultants like me help organizations improve performance by influencing or adjusting certain aspects of their structure and interactions. However, the changes we facilitate do not affect the essence of the entities we work with. Organizations remain organizations, and they evolve according to universal laws despite the changes we wreak within them.

Heraclitus: Ah Parmenides, you are mistaken: concepts and principles evolve in time; they do not remain constant. Perhaps I can convince you of this by another means.  Tell me, when you go into an organization to do your thing, how do you know what to change?

Parmenides:  Well, we carry out a detailed study by talking to key stakeholders and then determine what needs to be done.  There are a host of change models that have come out of painstaking research and practice.  We use these to guide our actions.

Heraclitus: Are these models  akin to the physical laws you mentioned earlier?

Parmenides:  Yes, they are.

Heraclitus: But all such models are tentative; they are always being revised in the light of new knowledge. Theory building in organizational research (or any other area) is an ongoing process. Indeed, even physics, the most exact of sciences, has evolved dramatically over the last two millennia – consider how  our conception of the solar system has changed from Ptolemy to Copernicus. For that matter, even our understanding of gravity is no longer the same as it was in Newton’s time. The “unchanging essence” – as you call it – is but a figment of your imagination.

Parmenides:  I concede that our knowledge of the universe evolves over time. However, the principles that underlie its functioning don’t change.  Indeed, the primary rationale behind all scientific inquiry is to find those eternal principles or truths.

Heraclitus: It is far from clear that the principles are unchanging, even in a so-called exact discipline like physics.  For example, a recent proposal suggests that the laws of physics evolve in time.  This seems even more likely for social systems: the theory and practice of management in the early twentieth century is very different from what it is now, and with good reason too – contemporary organizations are nothing like those of a century ago.  In other words, the “laws” that were valid then (if one can call them that) are different from the ones in operation now.

Parmenides:   You’re seduced by superficial change – you must look beneath surface appearances!  As for the proposal that the laws of physics evolve in time, I must categorically state that it is a minority view that many physicists disagree with (Editor’s note: see this rebuttal for example).

Heraclitus: I take your point about the laws of physics…but I should mention that history is replete with “minority views” that were later proven to be right.  However, I cannot agree with your argument about superficial change because it is beyond logic. You can always deem any change as being superficial, however deep it may be. So let me try to get my point across in yet another way. You had mentioned that you use management principles and models to guide your actions. Could you tell me a bit more about how this works in practice?

Parmenides:  Sure, let me tell you about an engagement that we recently did for a large organization. The problem they came to us with was that their manufacturing department was simply not delivering what their customers expected.  We did a series of interviews with senior and mid-level managers from the organisation as well as a wide spectrum of staff and customers and found that the problem was a systemic one – it had  more to do with the lack of proper communication channels across the organisation  rather than an issue with a specific department. Based on this we made some recommendations to restructure the organisation according to best practices drawn from organisational theory.  We then helped them implement our recommendations.

Heraclitus: So you determined the change that needed to be made and then implemented the change over a period of time. Is that right?

Parmenides: Well, yes…

Heraclitus: And would I be right in assuming that the change took many months to implement?

Parmenides: Yes, about a year actually…but why does that matter?

Heraclitus:  Bear with me for a minute. Were there any significant surprises along the way? There must have been things that happened that you did not anticipate.

Parmenides: Of course, that goes with the territory; one cannot foresee everything.

Heraclitus: Yet you persisted in implementing the changes you had originally envisioned.

Parmenides: Naturally! We had determined what needed to be done, so we went ahead and did it. But what are you getting at?

Heraclitus: It’s quite simple really. The answer lies in a paradox formulated by your friend Zeno: you assumed that the organization would remain static over the entire period during which you implemented your recommendations.

Parmenides:  I did not say that!

Heraclitus:  You did not say it, but you assumed it.  Your recommendations for restructuring were based on information that was gathered at a particular point in time – a snapshot so to speak. Such an approach completely overlooks the fact that organisations are dynamic entities that change in unforeseen ways that models and theories cannot predict. Indeed, by your own admission, there were significant but unanticipated events and changes that occurred along the way.  Now you might claim that those changes were superficial, but that won’t wash because you did not foresee those changes at the start and therefore could not have known whether they would be superficial or not.

Parmenides:   Well, I’m not sure I agree with your logic my dear Heraclitus. And in any case, my approach has the advantage of being easy to understand. I don’t think decision-makers would trust a consultant who refuses to take action because every little detail about the future cannot be predicted.

Heraclitus: Admitting ignorance about the future is the first step towards doing something about it.

Parmenides: Yes, but you need to have a coherent plan, despite an uncertain future.

Heraclitus: True, but a coherent plan can be incremental…or better, emergent –  where planned actions are adjusted in response to unexpected events that occur as one goes along. Such an approach is better than one based on a snapshot of an organisation at a particular point in time.

Parmenides:  Try selling that approach to a CEO, my friend!

Heraclitus: I know, organizations are ever-changing, but those who run them are intent on maintaining a certain status quo. So they preach change, but do not change the one thing that needs changing the most – themselves.

Parmenides: [shakes his head] Ah, Heraclitus, I do not wish to convert you to my way of thinking, but I should mention that our differences are not of theoretical interest alone:  they spell the difference between being a cashed-up consultant and a penurious philosopher.

Heraclitus: [laughs] At last we have something we can agree on.

Further reading:

Beer, Stafford (1997), “The culpabliss error: A calculus of ethics for a systemic world,” Systems Practice, Vol. 10, No. 4, pp. 365-380. Available online at: http://rd.springer.com/article/10.1007/BF02557886

Note: the quote at the start of this piece is a paraphrasing of the following line from the paper: “Society is Heraclitian; but Parmenides is in charge.”

Written by K

August 14, 2014 at 7:52 pm

Scapegoats and systems: contrasting approaches to managing human error in organisations


Much can be learnt about an organization by observing what management does when things go wrong.  One reaction is to hunt for a scapegoat, someone who can be held responsible for the mess.  The other is to take a systemic view that focuses on finding the root cause of the issue and figuring out what can be done in order to prevent it from recurring.  In a highly cited paper published in 2000, James Reason compared and contrasted the two approaches to error management in organisations. This post is an extensive summary of the paper.

The author gets to the point in the very first paragraph:

The human error problem can be viewed in two ways: the person approach and the system approach. Each has its model of error causation and each model gives rise to quite different philosophies of error management. Understanding these differences has important practical implications for coping with the ever present risk of mishaps in clinical practice.

Reason’s paper was published in the British Medical Journal and hence his focus on the practice of medicine. His arguments and conclusions, however, have a much wider relevance as evidenced by the diverse areas in which his paper has been cited.

The person approach – which, I think is more accurately called the scapegoat approach – is based on the belief that any errors can and should be traced back to an individual or a group, and that the party responsible should then be held to account for the error. This is the approach taken in organisations that are colloquially referred to as having a “blame culture.”

To an extent, looking around for a scapegoat is a natural emotional reaction to an error. The oft-unstated reason behind scapegoating, however, is to avoid management responsibility.  As the author tells us:

People are viewed as free agents capable of choosing between safe and unsafe modes of behaviour.  If something goes wrong, it seems obvious that an individual (or group of individuals) must have been responsible. Seeking as far as possible to uncouple a person’s unsafe acts from any institutional responsibility is clearly in the interests of managers. It is also legally more convenient

However, the scapegoat approach has a couple of serious problems that hinder effective risk management.

Firstly, an organization depends on its frontline staff to report any problems or lapses. Clearly, staff will do so only if they feel that it is safe to do so – something that is simply not possible in an organization that takes a scapegoat approach. The author suggests that the Chernobyl disaster can be attributed to the lack of a “reporting culture” within the erstwhile Soviet Union.

Secondly, and perhaps more important, is that the focus on a scapegoat leaves the underlying cause of the error unaddressed. As the author puts it, “by focusing on the individual origins of error it [the scapegoat approach] isolates unsafe acts from their system context.” As a consequence, the scapegoat approach overlooks systemic features of errors – for example, the empirical fact that the same kinds of errors tend to recur within a given system.

The system approach accepts that human errors will happen. However, in contrast to the scapegoat approach, it views these errors as being triggered by factors that are built into the system. So, when something goes wrong, the system approach focuses on the procedures that were used rather than the people who were executing them. This difference from the scapegoat approach makes a world of difference.

The system approach looks for generic reasons why errors or accidents occur. Organisations usually have a series of measures in place to prevent errors – e.g. alarms, procedures, checklists, trained staff etc. Each of these measures can be looked upon as a “defensive layer” against error. However, as the author notes, each defensive layer has holes which can let errors “pass through” (more on how the holes arise a bit later).  A good way to visualize this is as a series of slices of Swiss Cheese (see Figure 1).

The important point is that the holes on a given slice are not at a fixed position; they keep opening, closing and even shifting around, depending on the state of the organization.  An error occurs when the ephemeral holes on different layers temporarily line up to “let an error through”.
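To make the “lining up” idea concrete, here is a minimal Monte Carlo sketch in Python (my own illustration, not from Reason’s paper). Each defensive layer is assumed to fail independently with an illustrative probability; an error gets through only when every layer fails at once.

```python
import random

# A minimal sketch of the Swiss Cheese model described above.
# Each defensive layer (alarm, procedure, checklist, trained operator) is
# modelled as failing independently with some probability on a given occasion;
# an error "gets through" only when the holes in all layers line up.
# The probabilities below are illustrative assumptions, not empirical values.

def error_gets_through(layer_failure_probs):
    """Return True if the holes in every layer happen to line up."""
    return all(random.random() < p for p in layer_failure_probs)

def simulate(layer_failure_probs, trials=100_000):
    breaches = sum(error_gets_through(layer_failure_probs) for _ in range(trials))
    return breaches / trials

random.seed(42)
three_layers = [0.1, 0.2, 0.15]      # e.g. alarm, procedure, operator
four_layers = three_layers + [0.1]   # add one more defensive layer

print(f"3 layers: ~{simulate(three_layers):.4%} of errors get through")
print(f"4 layers: ~{simulate(four_layers):.4%} of errors get through")
```

The independence assumption in this sketch is, of course, exactly what Reason warns against: latent conditions tend to open holes in several layers at once, so real breach rates can be considerably higher than a naive calculation like this one suggests.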

There are two reasons why holes arise in defensive layers:

  1. Active errors: These are unsafe acts committed by individuals. Active errors could be violations of set procedures or momentary lapses. The scapegoat approach focuses on identifying the active error and the person responsible for it. However, as the author points out, active errors are almost always caused by conditions built into the system, which brings us to…
  2. Latent conditions: These are flaws that are built into the system. The author uses the term resident pathogens to describe these – a nice metaphor that I have explored in a paper review I wrote some years ago. These “pathogens” are usually baked into the system by poor design decisions and flawed procedures on the one hand, and ill-thought-out management decisions on the other. Manifestations of the former include faulty alarms, unrealistic or inconsistent procedures or poorly designed equipment; manifestations of the latter include things such as unrealistic targets, overworked staff and the lack of  funding for appropriate equipment.

The important thing to note is that latent conditions can lie dormant for a long period before they are noticed. Typically a latent condition comes to light only when an error caused by it occurs…and only if the organization does a root cause analysis of the error – something that is simply not done in an organization that takes a scapegoat approach.

The author draws a nice analogy that clarifies the link between active errors and latent conditions:

…active failures are like mosquitoes. They can be swatted one by one, but they still keep coming. The best remedies are to create more effective defences and to drain the swamps in which they breed. The swamps, in this case, are the ever present latent conditions.

“Draining the swamp” is not a simple task.  The author draws upon studies of high performance organisations (combat units, nuclear power plants and air traffic control centres) to understand how they minimised active errors by reducing system flaws. He notes that these organisations:

  1. Accept that errors will occur despite standardised procedures, and train their staff to deal with and learn from them.
  2. Practice responses to known error scenarios and try to imagine new ones on a regular basis.
  3. Delegate responsibility and authority, especially in crisis situations.
  4. Do a root cause analysis of any error that occurs and address the underlying problem by changing the system if needed.

In contrast, an organisation that takes a scapegoat approach assumes that standardisation will eliminate errors, ignores the possibility of novel errors occurring, centralises control and, above all, focuses on finding scapegoats instead of fixing the system.

Acknowledgement:

Figure 1 was taken from the Patient Safety Education website of Duke University Hospital.

Further reading:

The Swiss Cheese model was first proposed in 1991. It has since been applied in many areas. Here are a couple of recent applications and extensions of the model to project management:

  1. Stephen Duffield and Jon Whitty use the Swiss Cheese model as a basis for their model of Systemic Lessons Learned and Knowledge Captured (SLLKC model) in projects.
  2. In this post, Paul Culmsee extends the SLLKC model to incorporate aspects relating to teams and collaboration.

Written by K

July 29, 2014 at 8:43 pm

The dilemmas of enterprise IT

with 2 comments

Information technology (IT) is an integral part of any modern-day business. Indeed, as Bill Gates once put it, “Information technology and business are becoming inextricably interwoven. I don’t think anybody can talk meaningfully about one without talking about the other.” Although this is true, decision makers often display ambivalent, even contradictory, attitudes towards enterprise IT. For example, depending on the context, an executive might view IT as a cost of doing business or as a strategic advantage: the former view is common when budgets are being drawn up whereas the latter may come to the fore when a bold new e-marketing initiative is being discussed.

In this post I discuss some of these dilemmas of IT and show how the opposing viewpoints embodied in them need to be managed rather than resolved.  I illustrate my point by describing one way in which this can be done.

The dilemmas in brief

Many of the dilemmas of IT are consequences of conflicting views of what IT is and/or how it should be managed. I’ll describe some of these in brief below, leaving a discussion of their implications to the next section:

  1. IT as a cost of doing business versus IT as strategic asset: This distinction highlights the ambivalent attitudes that senior executives have towards IT. On the one hand, IT is seen as offering strategic advantages to the organization (for example a custom built application for customer segmentation). On the other, it is seen as an operational necessity (for example, core banking systems in the financial industry).
  2. Centralised IT versus Autonomous IT:  This refers to the debate about whether an organisation’s IT environment should be tightly controlled from head office or whether subsidiaries should be given a degree of autonomy.  This is essentially a debate between top-down versus bottom-up approaches to IT planning.
  3. Planning versus Improvisation: This refers to the tension between the structure offered by a plan and process-driven approach to IT and the necessity to step outside of plans and processes in order to come up with improvised solutions suited to the situation at hand. I have written about this paradox in a post on planning and improvisation.

There are other dilemmas – for example, technology driven IT versus business driven IT. However, for the purpose of this discussion the three listed above will suffice.

The poles of a dilemma

In his book entitled Polarity Management, Barry Johnson described how complex organizational issues can often be analysed in terms of their mutually contradictory facets. He termed these facets poles or polarities.  In this and the next section, I elaborate on Johnson’s notion of polarity and show how it offers a means to understand and manage the dilemmas of enterprise IT.

The key features of poles are as follows:

Each pole has associated positives and negatives. For example, the upside of viewing IT as a cost is that the organisation focuses on IT efficiency and value for money; the downside is that the exploration and experimentation necessary for IT innovation will likely be seen as risky. On the other hand, the positive side of viewing IT as a strategic asset is that it is seen as a means to enable the organisation’s growth and development; the negative is that it can encourage the adoption of unproven technologies (since new technologies are more likely to offer competitive advantages) and uncontrolled experimentation, along with their attendant costs.

Most organisations oscillate between poles. At any given time the organisation will be “living” in one pole. In such situations, some stakeholders will perceive the negatives of that pole strongly and will thus see the other pole as being more desirable (the “grass is greener on the other side” syndrome). Johnson labels such stakeholders “crusaders” – those who want to rush off into the new world. On the other hand, there are “tradition bearers” – those who want to stay put. When an organisation has spent a fair bit of time in one pole, the influence of the crusaders tends to wax while that of the tradition bearers wanes, because the negatives of that pole become apparent to more and more people.

A concrete example may help clarify this point:

Consider a situation where all subsidiaries of a multinational have autonomous IT units (and have had these for a while). The main benefits of such a model are responsiveness and relevance: local IT units will be able to respond quickly to local needs and deliver solutions tailored to the specific needs of the local business. However, the model also has many negative aspects: for example, high costs, duplication of effort, a sprawling software portfolio with its attendant costs, the high cost of interfacing between subsidiaries etc.

When the model has been in operation for a while, it is quite likely that IT decision makers will perceive the negatives of this pole more clearly than they see the positives. They will then initiate a reform to centralize IT because they perceive the positives of that pole – i.e. low costs, centralization of services etc. – as being worth striving for. However, when the new world is in place and has been operating for a while, the organisation will begin to see its downside: bureaucracy, lack of flexibility, applications that don’t meet specific local business needs etc. They will then start to delegate responsibility back to the subsidiaries…and thus goes the polarity merry-go-round.

Managing enterprise IT dilemmas

As discussed above, any option will have its supporters and detractors. For example, finance folks may see IT as a cost of doing business whereas those in IT will consider it a strategic asset. The trouble, however, is that most organisations “resolve” such contradictions by taking sides. That is, one side “wins” and its point of view gets implemented as a “solution,” while the concerns of the “losing” side are overlooked entirely.

Although such a “solution” appears to settle the matter, it does not take long for the negative aspects of the chosen pole to manifest themselves; the rumbles of discontent from those whose concerns have been ignored grow louder with time. In this sense, issues that can be defined in terms of polarities are wicked problems – they are perceived in different ways by different stakeholders and so are difficult to define, let alone solve.

As we have seen above, however, the poles of a dilemma are but different facets of a single reality.  Hence, the first step towards managing a dilemma lies in realizing that it cannot be resolved definitively; regardless of the path chosen, there will always be a group whose concerns remain unaddressed. The best one can do is to be aware of the positives and negatives of each pole and ensure that the entire spectrum of stakeholders is aware of these. A shared awareness can help the group in figuring out ways to mitigate the worst effects of the negatives.

One way in which this can be done is via a facilitated session involving people who represent the two sides of the issue. To begin with, the facilitator helps the group identify the poles. She then helps the group create a polarity map which shows the contradictory aspects of the issue along with their positives and negatives. A rudimentary polarity map for the autonomous/centralised IT dilemma is shown in Figure 1 below.

Figure 1: Polarity map for centralised / autonomous IT dilemma

To ensure completeness of the map, the group must include stakeholders who represent both sides of the dilemma (and also those who hold views that lie between).
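For readers who prefer something more concrete than a diagram, here is a minimal sketch – in Python, and entirely my own illustration – of how the polarity map of Figure 1 might be captured as a simple data structure. The quadrant entries are drawn from the examples discussed earlier in this post, not from Johnson’s book.

    # A rudimentary polarity map for the centralised / autonomous IT dilemma.
    # L/R denote the two poles; +/- denote their positives and negatives.
    polarity_map = {
        "poles": ("Centralised IT (L)", "Autonomous IT (R)"),
        "L+": ["lower costs", "less duplication of effort", "shared services"],
        "L-": ["bureaucracy", "inflexibility", "solutions that miss local needs"],
        "R+": ["responsiveness to local needs", "locally tailored solutions"],
        "R-": ["high costs", "duplicated effort", "sprawling software portfolio",
               "costly interfacing between subsidiaries"],
    }

    for quadrant, entries in polarity_map.items():
        print(quadrant, "->", entries)

Recording the map in a form like this makes it easy to revisit and update as the group’s understanding evolves, though the real value lies in the conversation that produces it.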

As mentioned in the previous section, organisations are not static; they oscillate between poles. Moreover, Johnson claimed that they follow a specific path around the map. Quoting from the book I wrote with Paul Culmsee:

According to Johnson, organisations tended to oscillate between poles. If you accept the notion of a wicked problem as a polarity, the overall pattern traced as one moves between these poles resembles an infinity symbol. The typical path is L- to R+, to R-, across to L+ and Johnson argued that the trajectory could not be avoided. All we can do is focus on minimizing our time spent in the lower quadrants.

Again, it is worth emphasizing that the conflict between the two groups of stakeholders cannot be resolved definitively. The best one can do is to get the two sides to understand each other’s point of view and thereby attempt to minimize the downsides of each option.

Finally, polarity management is but one way to manage the dilemmas associated with enterprise IT or any other organizational decision. There are many others – and <advertisement> I highly recommend my book if you’re interested in finding out more about these </advertisement>. In the end, though, the point I wished to make in this post is less about any particular technique and more about the need to air and acknowledge differing perspectives on issues pertaining to enterprise IT or any other decision with organization-wide implications.

Wrapping up

The dilemmas of enterprise IT are essentially consequences of mutually contradictory, yet equally valid, perspectives. Is IT a cost of doing business or is it a strategic asset? The answer depends on the perspective one takes…and there is no objectively right or wrong answer. Given this, it is important to be aware of both the upside and the downside of each perspective (or pole) before making a decision. Unfortunately, decisions are most often made on the basis of the upside of one option and the downside of the other. As should be evident by now, a decision based on such a selective consideration of viewpoints invariably invites conflict and leads to undesirable outcomes.

Written by K

July 2, 2014 at 9:52 pm
