Eight to Late

Sensemaking and Analytics for Organizations


The dark side of data science


Data scientists are sometimes blind to the possibility that the predictions of their algorithms can have unforeseen negative effects on people. Ethical or social implications are easy to overlook when one finds interesting new patterns in data, especially if they promise significant financial gains. The Centrelink debt recovery debacle, recently reported in the Australian media, is a case in point.

Here is the story in brief:

Centrelink is an Australian Government organisation responsible for administering welfare services and payments to those in need. A major challenge such organisations face is ensuring that their clients are paid no less and no more than what is due to them. This is difficult because it involves crosschecking client income details across multiple systems owned by different government departments, a process that necessarily involves many assumptions. In July 2016, Centrelink unveiled an automated compliance system that compares income self-reported by clients to information held by the taxation office.

The problem is that the algorithm is flawed: it makes strong (and incorrect!) assumptions regarding the distribution of income across a financial year and, as a consequence, unfairly penalizes a number of legitimate benefit recipients.  It is very likely that the designers and implementers of the algorithm did not fully understand the implications of their assumptions. Worse, from the errors made by the system, it appears they may not have adequately tested it either.  But this did not stop them (or, quite possibly, their managers) from unleashing their algorithm on an unsuspecting public, causing widespread stress and distress.  More on this a bit later.

Algorithms like the one described above are the subject of Cathy O’Neil’s aptly titled book, Weapons of Math Destruction.  In the remainder of this article I discuss the main themes of the book.  Just to be clear, this post is more riff than review. However, for those seeking an opinion, here’s my one-line version: I think the book should be read not only by data science practitioners, but also by those who use or are affected by their algorithms (which means pretty much everyone!).

Abstractions and assumptions

O’Neil begins with the observation that data algorithms are mathematical models of reality, and are necessarily incomplete because several simplifying assumptions are invariably baked into them. This point is important and often overlooked, so it is worth illustrating with an example.

When assessing a person’s suitability for a loan, a bank will want to know whether the person is a good risk. It is impossible to model creditworthiness completely because we do not know all the relevant variables and those that are known may be hard to measure. To make up for their ignorance, data scientists typically use proxy variables, i.e. variables that are believed to be correlated with the variable of interest and are also easily measurable. In the case of creditworthiness, proxy variables might be things like gender, age, employment status, residential postcode etc. Unfortunately, many of these can be misleading, discriminatory or, worse, both.

The Centrelink algorithm provides a good example of such a “double-whammy” proxy. The key variable it uses is the difference between the client’s annual income as reported by the taxation office and the annual income self-reported by the client. A large difference is taken to be indicative of an incorrect payment and hence an outstanding debt. This simplistic assumption overlooks the fact that most affected people are not in steady jobs and therefore do not earn regular incomes over the course of a financial year (see this article by Michael Griffin for a detailed example). Worse, this crude proxy places an unfair burden on vulnerable individuals for whom casual and part time work is a fact of life.
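To see how the averaging assumption misfires, here is a minimal sketch in Python (the figures and the threshold rule are invented for illustration; this is not a reconstruction of Centrelink’s actual system):

```python
# A toy illustration of the income-averaging flaw. All figures are
# invented; real benefit rules are considerably more complex.

FORTNIGHTS = 26  # fortnightly reporting periods in a financial year

# A casual worker earns $1,300 in each of 10 fortnights and nothing in
# the other 16, during which they legitimately receive benefits.
actual_income = [1300] * 10 + [0] * 16
annual_income = sum(actual_income)  # 13,000 -- the taxation office figure

# The flawed assumption: annual income was earned evenly across the year.
averaged = annual_income / FORTNIGHTS  # 500 per fortnight

# Under the even-spread assumption the client appears to have earned $500
# in every fortnight in which they truthfully reported zero income, so
# each of those fortnights looks like an overpayment.
phantom_fortnights = sum(1 for earned in actual_income if earned < averaged)
print(phantom_fortnights)  # 16 -- a "debt" raised on zero-income fortnights
```

Note that the worker’s self-reports and the annual tax figure are both accurate; the phantom debt arises purely from the assumption that income is spread evenly across the year.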

Worse still, for those wrongly targeted with a recovery notice, getting the errors sorted out is not a straightforward process. This is typical of a WMD. As O’Neil states in her book, “The human victims of WMDs…are held to a far higher standard of evidence than the algorithms themselves.” Perhaps this is because the algorithms are often opaque. But that’s a poor excuse. This is the only technical field where practitioners are held to a lower standard of accountability than those affected by their products.

O’Neil sums it up rather nicely when she calls algorithms like the Centrelink one weapons of math destruction (WMDs).

Self-fulfilling prophecies and feedback loops

A characteristic of WMDs is that their predictions often become self-fulfilling prophecies. For example, a person denied a loan by a faulty risk model is more likely to be denied again when he or she applies elsewhere, simply because the earlier refusal is now on their record. This kind of destructive feedback loop is typical of a WMD.
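The dynamic is easy to simulate. Here is a minimal sketch (the model and its numbers are invented purely to illustrate the self-fulfilling loop, not drawn from any real credit scoring system):

```python
# A toy model of a destructive feedback loop: each recorded denial makes
# the applicant look riskier, even though their true risk never changes.

def apparent_risk(true_risk, prior_denials):
    # Hypothetical scoring rule: every denial on record adds a penalty.
    return min(1.0, true_risk + 0.15 * prior_denials)

true_risk = 0.3  # the applicant's underlying risk, held constant
denials = 0
for attempt in range(1, 6):
    risk = apparent_risk(true_risk, denials)
    print(f"application {attempt}: apparent risk = {risk:.2f}")
    denials += 1  # suppose the lender denies, and the record grows

# The printed risk climbs from 0.30 to 0.90: the model's own past
# decisions, not the applicant's behaviour, drive the score.
```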

An example that O’Neil dwells on at length is a popular predictive policing program. Designed for efficiency rather than nuanced judgment, such algorithms measure what can easily be measured and act on it, ignoring the subtle contextual factors that inform the actions of experienced officers on the beat. Worse, they can lead to actions that exacerbate the problem. For example, targeting young people of a certain demographic for stop and frisk actions can alienate them to a point where they might well turn to crime out of anger and exasperation.

As Goldratt famously said, “Tell me how you measure me and I’ll tell you how I’ll behave.”

This is not news: savvy managers have known about the dangers of managing by metrics for years. The problem is now exacerbated manyfold by our ability to implement and act on such metrics on an industrial scale, a trend that leads to a dangerous devaluation of human judgement in areas where it is most needed.

A related problem – briefly mentioned earlier – is that some of the important variables are known but hard to quantify in algorithmic terms. For example, it is known that community-oriented policing, where officers on the beat develop relationships with people in the community, leads to greater trust. The degree of trust is hard to quantify, but it is known that communities that have strong relationships with their police departments tend to have lower crime rates than similar communities that do not.  Such important but hard-to-quantify factors are typically missed by predictive policing programs.

Blackballed!

Ironically, although WMDs can cause destructive feedback loops, they are often not subjected to feedback themselves. O’Neil gives the example of algorithms that gauge the suitability of potential hires. These programs often use proxy variables such as IQ test results, personality tests etc. to predict employability. Candidates who are rejected often do not realise that they have been screened out by an algorithm. Further, it often happens that candidates who are thus rejected go on to successful careers elsewhere. However, this post-rejection information is never fed back to the algorithm because it is impossible to do so.

In such cases, the only way to avoid being blackballed is to understand the rules set by the algorithm and play according to them. As O’Neil so poignantly puts it, “our lives increasingly depend on our ability to make our case to machines.” However, this can be difficult because it assumes that a) people know they are being assessed by an algorithm and b) they know how the algorithm works. In most hiring scenarios, neither of these holds.

Just to be clear, not all data science models ignore feedback. For example, sabermetric algorithms used to assess player performance in Major League Baseball are continually revised based on the latest player stats, thereby taking into account changes in performance.

Driven by data

In recent years, many workplaces have seen the gradual introduction of data-driven efficiency initiatives. Automated rostering, based on scheduling algorithms, is an example. These algorithms are based on operations research techniques that were developed for scheduling complex manufacturing processes. Although appropriate for driving efficiency in manufacturing, these techniques are inappropriate for optimising shift work because of the effect they have on people. As O’Neil states:

Scheduling software can be seen as an extension of just-in-time economy. But instead of lawn mower blades or cell phone screens showing up right on cue, it’s people, usually people who badly need money. And because they need money so desperately, the companies can bend their lives to the dictates of a mathematical model.

She correctly observes that an “oversupply of low wage labour” is the problem. Employers know they can get away with treating people like machine parts because they have a large captive workforce. What makes this seriously scary is that vested interests can make it difficult to outlaw such exploitative practices. As O’Neil mentions:

Following [a] New York Times report on Starbucks’ scheduling practices, Democrats in Congress promptly drew up bills to rein in scheduling software. But facing a Republican majority fiercely opposed to government regulations, the chances that their bill would become law were nil. The legislation died.

Commercial interests invariably trump social and ethical issues, so it is highly unlikely that industry or government will take steps to curb the worst excesses of such algorithms without significant pressure from the general public. A first step towards this is to educate ourselves on how these algorithms work and the downstream social effects of their predictions.

Messing with your mind

There is an even more insidious way that algorithms mess with us. Hot on the heels of the recent US presidential election, there were suggestions that fake news items on Facebook may have influenced the results. Mark Zuckerberg denied this, but as Casey Newton noted in a trenchant tweet, the denial leaves Facebook in “the awkward position of having to explain why they think they drive purchase decisions but not voting decisions.”

Be that as it may, the fact is that Facebook’s own researchers have been conducting experiments to fine-tune a tool they call the “voter megaphone”. Here’s what O’Neil says about it:

The idea was to encourage people to spread the word that they had voted. This seemed reasonable enough. By sprinkling people’s news feeds with “I voted” updates, Facebook was encouraging Americans – more than sixty-one million of them – to carry out their civic duty….by posting about people’s voting behaviour, the site was stoking peer pressure to vote. Studies have shown that the quiet satisfaction of carrying out a civic duty is less likely to move people than the possible judgement of friends and neighbours…The Facebook campaign started out with a constructive and seemingly innocent goal to encourage people to vote. And it succeeded…researchers estimated that their campaign had increased turnout by 340,000 people. That’s a big enough crowd to swing entire states, and even national elections.

And if that’s not scary enough, try this:

For three months leading up to the election between President Obama and Mitt Romney, a researcher at the company….altered the news feed algorithm for about two million people, all of them politically engaged. The people got a higher proportion of hard news, as opposed to the usual cat videos, graduation announcements, or photos from Disney World….[the researcher] wanted to see if getting more [political] news from friends changed people’s political behaviour. Following the election [he] sent out surveys. The self-reported results showed that voter participation in this group inched up from 64 to 67 percent.

This might not sound like much, but considering the thin margins of recent presidential elections, it could be enough to change a result.

But it’s even more insidious.  In a paper published in 2014, Facebook researchers showed that users’ moods can be influenced by the emotional content of their newsfeeds. Here’s a snippet from the abstract of the paper:

In an experiment with people who use Facebook, we test whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed. When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks.

As you might imagine, there was a media uproar, following which the lead researcher issued a clarification and Facebook officials duly expressed regret (but, as far as I know, not an apology). To be sure, advertisers have been exploiting this kind of “mind control” for years, but a public social media platform should (expect to) be held to a higher standard of ethics. Facebook has since reviewed its internal research practices, but the recent fake news affair shows that the story is to be continued.

Disarming weapons of math destruction

The Centrelink debt debacle, the Facebook mood contagion experiments and the other case studies mentioned in the book illustrate the myriad ways in which Big Data algorithms have a pernicious effect on our day-to-day lives. Quite often people remain unaware of their influence, wondering why a loan was denied or a job application didn’t go their way. Just as often, they are aware of what is happening, but are powerless to change it – shift scheduling algorithms being a case in point.

This is not how it was meant to be. Technology was supposed to make life better for all, not just the few who wield it.

So what can be done? Here are some suggestions:

  • To begin with, education is key. We must work to demystify data science and create a general awareness of data science algorithms and how they work. O’Neil’s book is an excellent first step in this direction (although it is very thin on details of how the algorithms work)
  • Develop a code of ethics for data science practitioners. It is heartening to see that IEEE has recently come up with a discussion paper on ethical considerations for artificial intelligence and autonomous systems and ACM has proposed a set of principles for algorithmic transparency and accountability. However, I should also tag this suggestion with the warning that codes of ethics are not very effective because they can be easily violated. One has to – somehow – embed ethics in the DNA of data scientists. I believe one way to do this is through practice-oriented education in which data scientists-in-training grapple with ethical issues through data challenges and hackathons. As Wittgenstein famously said, “it is clear that ethics cannot be articulated.” Ethics must be practiced.
  • Put in place a system of reliable algorithmic audits within data science departments, particularly those that do work with significant social impact.
  • Increase transparency a) by publishing information on how algorithms predict what they predict and b) by making it possible for those affected by the algorithm to access the data used to classify them as well as their classification, how it will be used and by whom.
  • Encourage the development of algorithms that detect bias in other algorithms and correct it.
  • Inspire aspiring data scientists to build models for the good.

It is only right that the last word in this long riff should go to O’Neil, whose work inspired it. Towards the end of her book she writes:

Big Data processes codify the past. They do not invent the future. Doing that requires moral imagination, and that’s something that only humans can provide. We have to explicitly embed better values into our algorithms, creating Big Data models that follow our ethical lead. Sometimes that will mean putting fairness ahead of profit.

Excellent words for data scientists to live by.

Written by K

January 17, 2017 at 8:38 pm

The Heretic’s Guide to Management – understanding ambiguity in the corporate world


I am delighted to announce that my new business book, The Heretic’s Guide to Management: The Art of Harnessing Ambiguity, is now available in e-book and print formats. The book, co-written with Paul Culmsee, is a loose sequel to our previous tome, The Heretic’s Guide to Best Practices.

Many reviewers liked the writing style of our first book, which combined rigour with humour. This book continues in the same vein, so if you enjoyed the first one we hope you might like this one too. The new book is half the size of the first one and considerably less idealistic too. In terms of subject matter, I could say “Ambiguity, Teddy Bears and Fetishes” and leave it at that…but that might leave you thinking that it’s not the kind of book you would want anyone to see on your desk!

Rest assured, The Heretic’s Guide to Management is not a corporate version of Fifty Shades of Grey. Instead, it aims to delve into the complex but fascinating ways in which ambiguity affects human behaviour. More importantly, it discusses how ambiguity can be harnessed in ways that achieve positive outcomes.  Most management techniques (ranging from strategic planning to operational budgeting) attempt to reduce ambiguity and thereby provide clarity. It is a profound irony of modern corporate life that they often end up doing the opposite: increasing ambiguity rather than reducing it.

On the surface, it is easy enough to understand why: organizations are complex entities, so it is unreasonable to expect management models, such as those that fit neatly into a 2×2 matrix or a predetermined checklist, to work in the real world. In fact, expecting them to work as advertised is like colouring in a paint-by-numbers Mona Lisa and expecting to recreate Da Vinci’s masterpiece. Ambiguity therefore invariably remains untamed, and reality reimposes itself no matter how alluring the model is.

It turns out that most of us have a deep aversion to situations that involve even a hint of ambiguity. Recent research in neuroscience has revealed the reason for this: ambiguity is processed in the parts of the brain which regulate our emotional responses. As a result, many people associate it with feelings of anxiety. When kids feel anxious, they turn to transitional objects such as teddy bears or security blankets. These objects provide them with a sense of stability when situations or events seem overwhelming. In this book, we show that as grown-ups we don’t stop using teddy bears – it is just that the teddies we use take a different, more corporate, form. Drawing on research, we discuss how management models, fads and frameworks are actually akin to teddy bears. They provide the same sense of comfort and certainty to corporate managers and minions as real teddies do to distressed kids.

A Plain Teddy

Most children usually outgrow their need for teddies as they mature and learn to cope with their childhood fears. However, if development is disrupted or arrested in some way, the transitional object can become a fetish – an object that is held on to with a pathological intensity, simply for the comfort that it offers in the face of ambiguity. The corporate reliance on simplistic solutions for the complex challenges faced is akin to little Johnny believing that everything will be OK provided he clings on to Teddy.

When this happens, the trick is finding ways to help Johnny overcome his fear of ambiguity.

Ambiguity is a primal force that drives much of our behaviour. It is typically viewed negatively, something to be avoided or to be controlled.

A Sith Teddy

The truth, however, is that ambiguity is a force that can be used in positive ways too. The Force that gave the Dark Side their power in the Star Wars movies was harnessed by the Jedi in positive ways.

A Jedi Teddy

Our book shows you how ambiguity, so common in the corporate world, can be harnessed to achieve the results you want.

The e-book is available via popular online outlets. Here are links to some:

Amazon Kindle

Google Play

Kobo

For those who prefer paperbacks, the print version is available here.

Thanks for your support 🙂

Written by K

July 12, 2016 at 10:30 pm

Three types of uncertainty you (probably) overlook


Introduction – uncertainty and decision-making

Managing uncertainty – deciding what to do in the absence of reliable information – is a significant part of project management and many other managerial roles. When put this way, it is clear that managing uncertainty is primarily a decision-making problem. Indeed, as I will discuss shortly, the main difficulties associated with decision-making are related to specific types of uncertainties that we tend to overlook.

Let’s begin by looking at the standard approach to decision-making, which goes as follows:

  1. Define the decision problem.
  2. Identify options.
  3. Develop criteria for rating options.
  4. Evaluate options against criteria.
  5. Select the top rated option.

As I have pointed out in this post, the above process is too simplistic for some of the complex, multifaceted decisions that we face in life and at work (switching jobs, buying a house or starting a business venture, for example). In such cases:

  1. It may be difficult to identify all options.
  2. It is often impossible to rate options meaningfully because of information asymmetry – we know more about some options than others. For example, when choosing whether or not to switch jobs, we know more about our current situation than the new one.
  3. Even when ratings are possible, different people will rate options differently – i.e. different people invariably have different preferences for a given outcome. This makes it difficult to reach a consensus.

Regular readers of this blog will know that the points listed above are characteristics of wicked problems.  It is fair to say that in recent years, a general awareness of the ubiquity of wicked problems has led to an appreciation of the limits of classical decision theory. (That said,  it should be noted that academics have been aware of this for a long time: Horst Rittel’s classic paper on the dilemmas of planning, written in 1973, is a good example. And there are many others that predate it.)

In this post  I look into some hard-to-tackle aspects of uncertainty by focusing on the aforementioned shortcomings of classical decision theory. My discussion draws on a paper by Richard Bradley and Mareile Drechsler.

This article is organised as follows: I first present an overview of the standard approach to dealing with uncertainty and discuss its limitations. Following this, I elaborate on three types of uncertainty that are discussed in the paper.

Background – the standard view of uncertainty

The standard approach to tackling uncertainty was articulated by Leonard Savage in his classic text, The Foundations of Statistics. Savage’s approach can be summarized as follows:

  1. Figure out all possible states (outcomes).
  2. Enumerate all possible actions.
  3. Figure out the consequences of actions for all possible states.
  4. Attach a value (aka preference) to each consequence.
  5. Select the course of action that maximizes value (based on an appropriately defined measure, making sure to factor in the likelihood of achieving the desired consequence).

(Note the close parallels between this process and the standard approach to decision-making outlined earlier.)

To keep things concrete it is useful to see how this process would work in a simple real-life example. Bradley and Drechsler quote the following example from Savage’s book that does just that:

…[consider] someone who is cooking an omelet and has already broken five good eggs into a bowl, but is uncertain whether the sixth egg is good or rotten. In deciding whether to break the sixth egg into the bowl containing the first five eggs, to break it into a separate saucer, or to throw it away, the only question this agent has to grapple with is whether the last egg is good or rotten, for she knows both what the consequence of breaking the egg is in each eventuality and how desirable each consequence is. And in general it would seem that for Savage once the agent has settled the question of how probable each state of the world is, she can determine what to do simply by averaging the utilities (Note: utility is basically a mathematical expression of preference or value) of each action’s consequences by the probabilities of the states of the world in which they are realised…

In this example there are two states (egg is good, egg is rotten), three actions (break egg into bowl; break egg into separate saucer to check if it is rotten; throw egg away without checking) and three consequences (spoil all eggs; save eggs in bowl and save all eggs if last egg is not rotten; save eggs in bowl and potentially waste last egg). The problem then boils down to figuring out our preferences for the options (in some quantitative way) and the probability of the two states. At first sight, Savage’s approach seems like a reasonable way to deal with uncertainty. However, a closer look reveals major problems.
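Before turning to those problems, it is worth making the “averaging” step concrete. Here is a minimal sketch of Savage’s recipe applied to the omelet problem (the utilities and the probability are invented for illustration; Savage’s text does not prescribe particular numbers):

```python
# Savage's recipe for the omelet: compute the expected utility of each
# action by averaging the utility of its consequences over the states.

p_rotten = 0.1  # assumed probability that the sixth egg is rotten

# utilities[action][state]: made-up valuations of each consequence
utilities = {
    "break into bowl":   {"good": 10, "rotten": -50},  # risk spoiling all six
    "break into saucer": {"good": 8,  "rotten": 5},    # extra washing up
    "throw away":        {"good": 0,  "rotten": 6},    # may waste a good egg
}

def expected_utility(action):
    u = utilities[action]
    return (1 - p_rotten) * u["good"] + p_rotten * u["rotten"]

for action in utilities:
    print(f"{action}: {expected_utility(action):+.2f}")
print("recommended:", max(utilities, key=expected_utility))
# With these numbers the saucer wins (7.70 vs 4.00 and 0.60), matching
# the intuition that checking the egg is cheap insurance.
```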

Problems with the standard approach

Unlike the omelet example, in real-life situations it is often difficult to enumerate all possible states or foresee all consequences of an action. Further, even if states and consequences are known, we may not know what value to attach to them – that is, we may not be able to determine our preferences for those consequences unambiguously. Even in situations where we can, our preferences may be subject to change – witness the not uncommon situation in which lottery winners end up wishing they’d never won. The standard prescription therefore works only in situations where all states, actions and consequences are known – i.e. tame situations, as opposed to wicked ones.

Before going any further, I should mention that Savage was cognisant of the limitations of his approach. He pointed out that it works only in what he called small world situations – i.e. situations in which it is possible to enumerate and evaluate all options. As Bradley and Drechsler put it,

Savage was well aware that not all decision problems could be represented in a small world decision matrix. In Savage’s words, you are in a small world if you can “look before you leap”; that is, it is feasible to enumerate all contingencies and you know what the consequences of actions are. You are in a grand world when you must “cross the bridge when you come to it”, either because you are not sure what the possible states of the world, actions and/or consequences are…

In the following three sections I elaborate on these complications, emphasizing once again that many real-life situations are prone to them.

State space uncertainty

The standard view of uncertainty assumes that all possible states are given as a part of the problem definition – as in the omelet example discussed earlier.  In real life, however, this is often not the case.

Bradley and Drechsler identify two distinct cases of state space uncertainty. The first one is when we are unaware that we’re missing states and/or consequences. For example, organisations that embark on a restructuring program are so focused on the cost-related consequences that they may overlook factors such as loss of morale and/or loss of talent (and the consequent loss of productivity). The second, somewhat rarer, case is when we are aware that we might be missing something but we don’t quite know what it is. All one can do here is make appropriate contingency plans based on guesses regarding possible consequences.

Figuring out possible states and consequences is largely a matter of scenario envisioning based on knowledge and practical experience. It stands to reason that this is best done by leveraging the collective experience and wisdom of people from diverse backgrounds. This is pretty much the rationale behind collective decision-making techniques such as Dialogue Mapping.

Option uncertainty

The standard approach to tackling uncertainty assumes that the connection between actions and consequences is well defined. This is often not the case, particularly for wicked problems.  For example, as I have discussed in this post, enterprise transformation programs with well-defined and articulated objectives often end up having a host of unintended consequences. At an even more basic level, in some situations it can be difficult to identify sensible options.

Option uncertainty is a fairly common feature in real-life decisions. As Bradley and Drechsler put it:

Option uncertainty is an endemic feature of decision making, for it is rarely the case that we can predict consequences of our actions in every detail (alternatively, be sure what our options are). And although in many decision situations, it won’t matter too much what the precise consequence of each action is, in some the details will matter very much.

…and unfortunately, the cases in which the details matter are precisely those problems in which they are the hardest to figure out – i.e. in wicked problems.

Preference uncertainty

An implicit assumption in the standard approach is that once states and consequences are known, people will be able to figure out their relative preferences for these unambiguously. This assumption is incorrect, as there are at least two situations in which people will not be able to determine their preferences. Firstly, there may be a lack of factual information about one or more of the states. Secondly, even when one is able to get the required facts, it can be hard to figure out how one would value the consequences.

A common example of the aforementioned situation is the job switch dilemma. In many (most?) cases in which one is debating whether or not to switch jobs, one lacks enough factual information about the new job – for example, the new boss’ temperament, the work environment etc. Further, even if one is able to get the required information, it is impossible to know how it would be to actually work there.  Most people would have struggled with this kind of uncertainty at some point in their lives. Bradley and Drechsler term this ethical uncertainty. I prefer the term preference uncertainty, as it has more to do with preferences than ethics.

Some general remarks

The first point to note is that the three types of uncertainty noted above map exactly on to the three shortcomings of classical decision theory discussed in the introduction.  This suggests a connection between the types of uncertainty and wicked problems. Indeed, most wicked problems are exemplars of one or more of the above uncertainty types.  For example, the paradigm-defining super-wicked problem of climate change displays all three types of uncertainty.

The three types of uncertainty discussed above are overlooked by the standard approach to managing uncertainty.  This happens in a number of ways. Here are two common ones:

  1. The standard approach assumes that all uncertainties can somehow be incorporated into a single probability function describing all possible states and/or consequences. This is clearly false for state space and option uncertainty: it is impossible to define a sensible probability function when one is uncertain about the possible states and/or outcomes.
  2. The standard approach assumes that preferences for different consequences are known. This is clearly not true in the case of preference uncertainty…and even for state space and option uncertainty for that matter.

In their paper, Bradley and Drechsler arrive at these three types of uncertainty from considerations different from the ones I have used above. Their approach, while more general, is considerably more involved. Nevertheless, I would recommend that interested readers take a look at it, because the authors cover a lot of things that I have glossed over or ignored altogether.

Just as an example, they show how the aforementioned uncertainties can be reduced. There is a price to be paid, however: any reduction in uncertainty results in an increase in its severity. An example might help illustrate how this comes about. Consider a situation of state space uncertainty. One can reduce – or even remove – this by defining a catch-all state (labelled, say, “all other outcomes”). It is easy to see that although one has formally reduced state space uncertainty to zero, one has increased the severity of the uncertainty because the catch-all state is but a reflection of our ignorance and our refusal to do anything about it!

There are many more implications of the above. However, I’ll point out just one more that serves to illustrate the very practical implications of these uncertainties. In a post on the shortcomings of enterprise risk management, I pointed out that the notion of an organisation-wide risk appetite is problematic because it is impossible to capture the diversity of viewpoints through such a construct. Moreover, rule- or process-based approaches to risk management tend to focus only on those uncertainties that can be quantified, or conversely they assume that all uncertainties can somehow be clumped into a single probability distribution as prescribed by the standard approach to managing uncertainty. The three types of uncertainty discussed above highlight the limitations of such an approach to enterprise risk.

Conclusion

The standard approach to managing uncertainty assumes that all possible states, actions and consequences are known or can be determined. In this post I have discussed why this is not always so.  In particular, it often happens that we do not know all possible outcomes (state space uncertainty), consequences (option uncertainty) and/or our preferences for consequences (preference or ethical uncertainty).

As I was reading the paper, I felt the authors were articulating issues that I had often felt uneasy about but chose to overlook (suppress?). Generalising from one’s own experience is always a fraught affair, but I reckon we tend to deny these uncertainties because they are inconvenient – that is, they are difficult if not impossible to deal with within the procrustean framework of the standard approach. What is needed as a corrective is a recognition that the pseudo-quantitative approach commonly used to manage uncertainty may not be the panacea it is claimed to be. The first step towards doing this is to acknowledge the existence of the uncertainties that we (probably) overlook.

Written by K

February 25, 2015 at 9:08 pm

From information to knowledge: the what and whence of issue based information systems


Issue Based Information System (IBIS) is a notation invented by Horst Rittel and Werner Kunz in the early 1970s.  IBIS is best known for its use in dialogue mapping, a collaborative approach to tackling wicked problems (i.e. contentious issues) in organisations. It has a range of other applications as well – capturing knowledge is a good example, and I’ll have much more to say about that later in this post.

Over the last five years or so, I have written a fair bit about IBIS on this blog and in a book that I co-authored with the dialogue mapping expert, Paul Culmsee. The present post reprises an article I wrote five years ago on the “what” and “whence” of the notation: its practical aspects – notation, grammar etc. – as well as its origins, advantages and limitations. My motivations for revisiting the piece are to revise and update the original discussion and, more importantly, to cover some recent developments in IBIS technology that open up interesting possibilities in the area of knowledge management.

To appreciate the power of IBIS, it is best to begin by understanding the context in which the notation was invented. I’ll therefore start with a discussion of the origins of the notation, followed by an introduction to it. Finally, I’ll cover its development over the last 40-odd years, focusing on the recent developments that I mentioned above.

Wicked origins

A good place to start is where it all started. IBIS was first described in a paper entitled Issues as Elements of Information Systems, written by Horst Rittel (the man who coined the term wicked problem) and Werner Kunz in July 1970. They state the intent behind IBIS in the very first line of the abstract of their paper:

 Issue-Based Information Systems (IBIS) are meant to support coordination and planning of political decision processes. IBIS guides the identification, structuring, and settling of issues raised by problem-solving groups, and provides information pertinent to the discourse.

Rittel’s preoccupation was the area of public policy and planning – which is also the context in which he originally defined the term wicked problem. Given this background, it is no surprise that Rittel and Kunz saw IBIS as the:

…type of information system meant to support the work of cooperatives like governmental or administrative agencies or committees, planning groups, etc., that are confronted with a problem complex in order to arrive at a plan for decision…

The problems tackled by such cooperatives are paradigm-defining examples of wicked problems. From the start, then, IBIS was intended as a tool to facilitate a collaborative approach to solving…or better, managing a wicked problem by helping develop a shared perspective on it.

A brief introduction to IBIS

The IBIS notation consists of the following three elements:

  1. Issues (or questions): these are the issues being debated. Typically, issues are framed as questions along the lines of “What should we do about X?” where X is the issue that is of interest to a group. For example, in the case of a group of executives, X might be a rapidly changing market condition, whereas in the case of a group of IT people, X could be an ageing system that is hard to replace.
  2. Ideas (or positions): these are responses to questions. For example, one of the ideas offered by the IT group above might be to replace the said system with a newer one. Typically, the whole set of ideas that respond to an issue in a discussion represents the spectrum of participant perspectives on the issue.
  3. Arguments: these can be Pros (arguments for) or Cons (arguments against) an idea. The complete set of arguments that respond to an idea represents the multiplicity of viewpoints on it.

Compendium is a freeware tool that can be used to create IBIS maps – it can be downloaded here.

In Compendium, the IBIS elements described above are represented as nodes as shown in Figure 1: issues are represented by blue-green question marks; positions by yellow light bulbs; pros by green + signs and cons by red – signs.  Compendium supports a few other node types, but these are not part of the core IBIS notation. Nodes can be linked only in ways specified by the IBIS grammar as I discuss next.

Figure 1: IBIS elements

The IBIS grammar can be summarized in three simple rules:

  1. Issues can be raised anew or can arise from other issues, positions or arguments. In other words, any IBIS element can be questioned. In Compendium notation, a question node can connect to any other IBIS node.
  2. Ideas can only respond to questions – i.e. in Compendium, “light bulb” nodes can only link to question nodes. The arrow pointing from the idea to the question depicts the “responds to” relationship.
  3. Arguments can only be associated with ideas – i.e. in Compendium, “+” and “–” nodes can only link to “light bulb” nodes (with arrows pointing to the latter).

The legal links are summarized in Figure 2 below.

Figure 2: Legal links in IBIS

Yes, it’s as simple as that.
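In fact, the grammar is small enough to encode in a few lines. The sketch below is my own toy encoding of the three rules as a link validator (it is not Compendium’s implementation):

```python
# The IBIS grammar as a table of legal links: source type -> allowed targets.

LEGAL_LINKS = {
    "issue":    {"issue", "idea", "argument"},  # any element can be questioned
    "idea":     {"issue"},                      # ideas respond only to questions
    "argument": {"idea"},                       # pros/cons attach only to ideas
}

def link_is_legal(source_type, target_type):
    """True if a node of source_type may point at a node of target_type."""
    return target_type in LEGAL_LINKS[source_type]

assert link_is_legal("idea", "issue")          # an idea responding to a question
assert link_is_legal("issue", "argument")      # questioning an argument
assert not link_is_legal("argument", "issue")  # arguments cannot attach to issues
```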

The rules are best illustrated by example – follow the links below to see some illustrations of IBIS in action:

  1. See this post for a simple example of dialogue mapping.
  2. See this post or this one for examples of argument visualisation. (Note: using IBIS to map out the structure of written arguments is called issue mapping.)
  3. See this one for an example Paul did with his children. This example also features in our book.

Now that we know how IBIS works and have seen a few examples of it in action, it’s time to trace the history of the notation from its early days to the present.

Operation of early systems

When Rittel and Kunz wrote their paper, there were three IBIS-type systems in operation: two in government agencies (in the US, one presumes) and one in a university environment (quite possibly Berkeley, where Rittel worked). Although it seems quaint and old-fashioned now, it is no surprise that these were manual, paper-based systems; the effort and expense involved in computerizing such systems in the early 70s would have been prohibitive and the pay-off questionable.

The Rittel-Kunz paper introduced earlier also offers a short description of how these early IBIS systems operated:

An initially unstructured problem area or topic denotes the task named by a “trigger phrase” (“Urban Renewal in Baltimore,” “The War,” “Tax Reform”). About this topic and its subtopics a discourse develops. Issues are brought up and disputed because different positions (Rittel’s word for ideas or responses) are assumed. Arguments are constructed in defense of or against the different positions until the issue is settled by convincing the opponents or decided by a formal decision procedure. Frequently questions of fact are directed to experts or fed into a documentation system. Answers obtained can be questioned and turned into issues. Through this counterplay of questioning and arguing, the participants form and exert their judgments incessantly, developing more structured pictures of the problem and its solutions. It is not possible to separate “understanding the problem” as a phase from “information” or “solution” since every formulation of the problem is also a statement about a potential solution.

 Even today, forty years later, this is an excellent description of how IBIS is used to facilitate a common understanding of complex problems.  Moreover, the process of reaching a shared understanding (whether using IBIS or not) is one of the key ways in which knowledge is created within organizations. To foreshadow a point I will elaborate on later, using IBIS to capture the key issues, ideas and arguments, and the connections between them, results in a navigable map of the knowledge that is generated in a discussion.

Fast forward a couple of decades (and more!)

In a paper published in 1988, entitled gIBIS: A hypertext tool for exploratory policy discussion, Conklin and Begeman describe a prototype of a graphical, hypertext-based IBIS-type system (called gIBIS) and its use in capturing design rationale (yes, despite the title of the paper, it is more about capturing design rationale than policy discussions). The development of gIBIS represents a key step between the original Rittel-Kunz version of IBIS and its more recent version as implemented in Compendium. Amongst other things, IBIS was finally off paper and on to disk, opening up a world of new possibilities.

gIBIS aimed to offer users:

  1. The ability to capture design rationale – the options discussed (including the ones rejected) and the discussion around the pros and cons of each.
  2. A platform for promoting computer-mediated collaborative design work – ideally in situations where participants were located at sites remote from each other.
  3. The ability to store a large amount of information and to be able to navigate through it in an intuitive way.

The gIBIS prototype proved successful enough to catalyse the development of Questmap, a commercially available software tool that supported IBIS. In a recent conversation, Jeff Conklin mentioned to me that Questmap was one of the earliest Windows-based groupware tools available on the market…and it won a best-of-show award in that category. It is interesting to note that in contrast to Questmap (which no longer exists), Compendium is a single-user desktop tool.

The primary application of Questmap was in the area of sensemaking, which is all about helping groups reach a collective understanding of complex situations that might otherwise lead them into tense or adversarial conditions. Indeed, that is precisely how Rittel and Kunz intended IBIS to be used. The key advantage offered by computerized IBIS systems was that one could map dialogues in real time, with the map representing the points raised in the conversation along with their logical connections. This proved to be a big step forward in the use of IBIS to help groups achieve a shared understanding of complex issues.

That said, although there were some notable early successes in the real-time use of IBIS in industry environments (see this paper, for example), these were not accompanied by widespread adoption of the technique. It is worth exploring the reasons for this briefly.

The tacitness of IBIS mastery

The reasons for the lack of traction of IBIS-type techniques for real-time knowledge capture are discussed in a paper by Shum et al. entitled Hypermedia Support for Argumentation-Based Rationale: 15 Years on from gIBIS and QOC. The reasons they give are:

  1. For acceptance, any system must offer immediate value to the person who is using it. Quoting from the paper, “No designer can be expected to altruistically enter quality design rationale solely for the possible benefit of a possibly unknown person at an unknown point in the future for an unknown task. There must be immediate value.” Such immediate value is not obvious to novice users of IBIS-type systems.
  2. There is some effort involved in gaining fluency in the use of IBIS-based software tools. It is only after this that users can gain an appreciation of the value of such tools in overcoming the limitations of mapping design arguments on paper, whiteboards etc.

While the rules of IBIS are simple to grasp, the intellectual effort – or cognitive overhead – of using IBIS in real time involves:

  1. Teasing out issues, ideas and arguments from the dialogue.
  2. Classifying points raised into issues, ideas and arguments.
  3. Naming (or describing) the point succinctly.
  4. Relating (or linking) the point to the existing map (or anticipating how it will fit in later)
  5. Developing a sense for conversational patterns.

Expertise in these skills can only be developed through sustained practice, so it is no surprise that beginners find it hard to use IBIS to map dialogues.   Indeed, the use of IBIS for real-time conversation mapping is a tacit skill, much like riding a bike or swimming – it can only be mastered by doing.

Making sense through IBIS 

Despite the difficulties of mastering IBIS, it is easy to see that it offers considerable advantages over conventional methods of documenting discussions. Indeed, Rittel and Kunz were well aware of this. Their paper contains a nice summary of the advantages, which I paraphrase below:

  1. IBIS can bridge the gap between discussions and records of discussions (minutes, audio/video transcriptions etc.). IBIS sits between the two, acting as a short-term memory. The paper thus foreshadows the use of issue-based systems as an aid to organizational or project memory.
  2. Many elements (issues, ideas or arguments) that come up in a discussion have contextual meanings that differ from any pre-existing definitions. That is, the interpretation of points made or questions raised depends on the circumstances surrounding the discussion. Indeed, contextual meaning is often more important than formal meaning, and IBIS captures it in a very clear way – for example, a response to the question “What do we mean by X?” elicits the meaning of X in the context of the discussion, which is then captured as an idea (position). I’ll have much more to say about this towards the end of this article.
  3. The reasoning used in discussions is made transparent, as is the supporting (or opposing) evidence.
  4. The state of the argument (discussion) at any time can be inferred at a glance (unlike the case in written records). See this post for more on the advantages of visual documentation over prose.
  5. Often, the commonality of an issue with other, similar issues may be more important than its precise meaning. To quote from the paper, “…the description of the subject matter in terms of librarians or documentalists (sic) may be less significant than the similarity of an issue with issues dealt with previously and the information used in their treatment…” This is less of a problem now because of search technologies. However, search technologies are still largely based on keywords rather than context. A properly structured, context-searchable IBIS-based archive would be more useful than a conventional document-based system. As I’ll discuss in the next section, the technology for this is now available.

To sum up, then: although IBIS offers a means to map out arguments, what is lacking is the ability to make these maps available and searchable across an organization.

IBIS in the enterprise

It is interesting to note that Compendium, unlike its predecessor Questmap, is a single-user desktop tool – it does not, by itself, enable the sharing of maps across the enterprise. To be sure, it is possible to work around this limitation, but the workarounds are somewhat clunky. A recent advance in IBIS technology addresses this issue rather elegantly: Seven Sigma, an Australian consultancy founded by Paul Culmsee, Chris Tomich and Peter Chow, has developed Glyma (pronounced “glimmer”): a product that makes IBIS available on collaboration platforms like Microsoft SharePoint. This is a game-changer because it enables sharing and searching of IBIS maps across the enterprise. Moreover, as we shall see below, the implications of this go beyond sharing and search.

Full Disclosure: As regular readers of this blog know, Paul is a good friend and we have jointly written a book and a few papers. However, at the time of writing, I have no commercial association with Seven Sigma. My comments below are based on playing with a beta version of the product that Paul was kind enough to give me access to, as well as on some discussions I have had with him.

Figure 3: IBIS in Glyma

The look and feel of Glyma is much the same as Compendium’s (see Figure 3 above) – and the keystrokes and shortcuts are quite similar. I have trialled Glyma for a few weeks and feel that the overall experience is actually a bit better than in Compendium. For example, one can navigate through a series of maps and sub-maps using a breadcrumb trail. Another example: documents and videos are embedded within the map, so one does not need to leave the map in order to view tagged media (unless of course one wants to see it at a higher resolution).

I won’t go into any detail about product features etc. since that kind of information is best accessed at source – i.e. the product website and documentation. Instead, I will now focus on how Glyma addresses a longstanding gap in knowledge management systems.

Revisiting the problem of context

In most organisations, knowledge artefacts (such as documents and audio-visual media) are stored in hierarchical or relational structures (for example, a folder within a directory structure or a relational database). To be sure, stored artefacts are often tagged with metadata indicating when, where and by whom they were created, but experience tells me that such metadata is not as useful as it should be.  The problem is that the context in which an artefact was created is typically not captured. Anyone who has read a document and wondered, “What on earth were the authors thinking when they wrote this?” has encountered the problem of missing context.

Context, though hard to capture, is critically important in understanding the content of a knowledge artefact. Any artefact, when accessed without an appreciation of the context in which it was created is liable to be misinterpreted or only partially understood.

Capturing context in the enterprise

Glyma addresses the issue of context rather elegantly at the level of the enterprise. I’ll illustrate this point with an inspiring case study on the innovative use of SharePoint in education that Paul wrote about some time ago.

The case study

Here is the backstory in Paul’s words:

Earlier this year, I met Louis Zulli Jnr – a teacher out of Florida who is part of a program called the Centre of Advanced Technologies. We were co-keynoting at a conference and he came on after I had droned on about common SharePoint governance mistakes…The majority of Lou’s presentation showcased a whole bunch of SharePoint powered solutions that his students had written. The solutions were very impressive…We were treated to examples like:

  • IOS, Android and Windows Phone apps that leveraged SharePoint to display teacher’s assignments, school events and class times;
  • Silverlight based application providing a virtual tour of the campus;
  • Integration of SharePoint with Moodle (an open source learning platform)
  • An Academic Planner web application allowing students to plan their classes, submit a schedule, have them reviewed, keep track of the credits of the classes selected and whether a student’s selections meet graduation requirements;

All of this and more was developed by 16 to 18 year olds and all at a level of quality that I know most SharePoint consultancies would be jealous of…

Although the examples highlighted by Louis were very impressive, what Paul found more interesting were the anecdotes that Lou related about the dedication and persistence that students displayed in their work. Quoting again from Paul,

So the demos themselves were impressive enough, but that is actually not what impressed me the most. In fact, what had me hooked was not on the slide deck. It was the anecdotes that Lou told about the dedication of his students to the task and how they went about getting things done. He spoke of students working during their various school breaks to get projects completed and how they leveraged each other’s various skills and other strengths. Lou’s final slide summed his talk up brilliantly:

  • Students want to make a difference! Give them the right project and they do incredible things.
  • Make the project meaningful. Let it serve a purpose for the campus community.
  • Learn to listen. If your students have a better way, do it. If they have an idea, let them explore it.
  • Invest in success early. Make sure you have the infrastructure to guarantee uptime and have a development farm.
  • Every situation is different but there is no harm in failure. “I have not failed. I’ve found 10,000 ways that won’t work” – Thomas A. Edison

In brief:  these points highlight the fact that Lou’s primary role as director of the center is to create the conditions that make it possible for students to do great work.  The commercial-level quality of work turned out by students suggests that Lou’s knowledge on how to build high-performing teams is definitely worth capturing.

The question is: what’s the best way to do this (short of getting him to visit you and talk about his experiences)?

Seeing the forest for the trees

Paul recently interviewed Lou with the intent of documenting Lou’s experiences. The conversation was recorded on video and then “Glymafied” – i.e. the video was mapped using IBIS (see Figure 4 below).

Figure 4: Knowledge capture via Glyma

There are a few points worth noting here:

  1. The content of the entire conversation is mapped out so one can “see” the conversation at a glance.
  2. The context in which a particular point (i.e. the content of a node) is made is clarified by the connections between a node and its neighbours. Moving left from a node gives a higher level picture, moving right drills down into details.

Of course, the reader will have noted that these are core IBIS capabilities that are available in Compendium (or any other IBIS tool).  Glyma offers much more. Consider the following:

  1. Relevant documents or audio-visual media can be tagged to specific nodes to provide supplementary material. In this case, the video recording was tagged to the specific nodes that relate to points made in the video. Clicking on the play icon attached to such a node plays the segment in which the content of the node is being discussed. This is a really nice feature as it saves the user from having to watch the whole video (or play an extended game of ffwd-rew to get to the point of interest). Moreover, this provides additional context that cannot be (or is not) captured in the map. For example, one can attach papers, links to web pages, Slideshare presentations etc. to fill in background and context.
  2. Glyma is integrated with an enterprise content management system by design. One can therefore link map and video content to the powerful built-in search and content aggregation features of these systems. For example, users would be able enter a search from their intranet home page and retrieve not only traditional content such as documents, but also stories, reflections and anecdotes from experts such as Lou.
  3. Another critical aspect to intranet integration is the ability to provide maps as contextual navigation. Amazon’s ability to sell books that people never intended to buy is an example of the power of such navigation. The ability to execute the kinds of queries outlined in the previous point, along with contextual information such as user profile details, previous activity on the intranet and the area of an intranet the user is browsing, makes it possible to present recommendations of nodes or maps that may be of potential interest to users. Such targeted recommendations might encourage users to explore video (and other rich media) content.

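As an aside, it is worth noting how little is needed to represent the media tagging described in point 1 above: in essence, a tag is just a pointer to the recording plus the start and end of the relevant segment. The sketch below is a guess at the general shape of such a tag (the field names and URL are made up, and Glyma’s real implementation is doubtless far more sophisticated):

```python
from dataclasses import dataclass

@dataclass
class MediaTag:
    media_url: str        # hypothetical location of the recording
    start_seconds: float  # where the tagged segment begins...
    end_seconds: float    # ...and where it ends

# Tag a node with the slice of the interview in which a point is discussed;
# a player handed this tag can jump straight to the segment instead of
# forcing the user to scrub through the whole video.
tag = MediaTag("https://example.org/lou-interview.mp4", 754.0, 802.0)
print(f"Play {tag.media_url} from {tag.start_seconds}s to {tag.end_seconds}s")
```
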
Technical Aside: An interesting under-the-hood feature of Glyma is that it uses an implementation of a hypergraph database to store maps. A hypergraph database stores representations of graphs in which a single edge can connect more than two vertices, and can therefore hold very general graph structures. What this means is that Glyma can be extended to store any kind of map (Mind Maps, Concept Maps, Argument Maps or whatever)…and nodes can be shared across maps. This capability has not been developed as yet, but I mention it because it offers some exciting possibilities in the future (a toy illustration follows).
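
To make the hypergraph idea a little more tangible, here is a deliberately naive sketch in which each map is modelled as a hyperedge (a set of member nodes of any size), so a single node can sit in several maps at once. Once again, everything here is invented for illustration and bears no relation to Glyma’s actual storage engine:

```python
from collections import defaultdict

class ToyHypergraphStore:
    """A map is modelled as a hyperedge: a set of node ids of any size."""

    def __init__(self):
        self.nodes = {}               # node id -> node text
        self.maps = defaultdict(set)  # map name -> ids of member nodes

    def add_node(self, node_id, text, *map_names):
        self.nodes[node_id] = text
        for name in map_names:        # the same node can live in many maps
            self.maps[name].add(node_id)

    def maps_containing(self, keyword):
        """Find nodes by keyword, then return the map(s) each hit appears in."""
        hits = {nid for nid, text in self.nodes.items()
                if keyword.lower() in text.lower()}
        return {name for name, members in self.maps.items() if members & hits}

store = ToyHypergraphStore()
store.add_node("n1", "Students want to make a difference", "lou-interview", "team-building")
store.add_node("n2", "Invest in success early", "lou-interview")
print(store.maps_containing("students"))  # {'team-building', 'lou-interview'} (order may vary)
```

Note that maps_containing is a crude stand-in for the context-aware search summarised below: locate nodes by keyword, then surface the map(s), and hence the context(s), in which they appear.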

To summarise: since Glyma stores all its data in an enterprise-class database, maps can be made available across an organization.  It offers the ability to tag nodes with pretty much any kind of media (documents, audio/video clips etc.), and one can tag specific parts of the media that are relevant to the content of the node (a snippet of a video, for example). Moreover, the sophisticated search capabilities of the platform enable context-aware search.  Specifically, we can search for nodes by keywords or tags, and once a node of interest is located, we can also view the map(s) in which it appears. The map(s) inform us of the context(s) relating to the node. The ability to display the "contextual environment" of a piece of information is what makes Glyma really interesting.

In metaphorical terms, Glyma enables us to see the forest for the trees.

…and so, to conclude

My aim in this post has been to introduce readers to the IBIS notation and trace its history from its origins in issue mapping to recent developments in knowledge management.  The history of a technique is valuable because it gives insight into the rationale behind its creation, which leads to a better understanding of the different ways in which it can be used. Indeed, it would not be an exaggeration to say that the knowledge management applications discussed above are but an extension of Rittel’s original reasons for inventing IBIS.

I would like to close this piece with a couple of observations from my experience with IBIS:

Firstly, the real surprise for me has been that the technique can capture most written arguments and conversations, despite having only three distinct elements and a very simple grammar. Yes, it does require some thought to do this, particularly when mapping discussions in real time. However, this cognitive overhead is good because it forces the mapper to think about what’s being said instead of just writing it down blind. Thoughtful transcription is the aim of the game. When done right, this results in a map that truly reflects an understanding of a complex issue.

Secondly, although most current discussions of IBIS focus on its applications in dialogue mapping, it has a more important role to play in mapping organizational knowledge. Maps offer a powerful means to navigate the complex network of knowledge within an organisation. The (aspirational) end-goal of such an effort would be a "global" knowledge map that highlights interconnections between the different kinds of knowledge that exist within an organization. To be sure, such a map will make little sense to the human eye, but powerful search capabilities could make it navigable. To the extent that this is a feasible direction, I foresee IBIS becoming an important skill in the repertoire of knowledge management professionals.

Emergent design in enterprise IT

with 7 comments

Introduction

Over the last few months, I’ve published a number of posts in which the term emergent design makes a cameo appearance (see this article or this interview for example). Some readers may have noticed that although the term is used in various contexts in the articles/interviews, it is not explicitly defined anywhere. This is deliberate. Emergent design is…well, emergent, so a detailed definition is neither necessary nor useful – provided one can describe a set of guidelines for its practice.  My main aim in this post is to do just that. To keep things concrete I will discuss the guidelines in the context of the often bizarre world of enterprise IT, a domain that epitomizes top-down, plan-based design.

(Note: Before going any further a couple of clarifications are in order. Firstly, the word emergent as used here has nothing to do with emergence in complex systems. Secondly, the guidelines provided here are a starting point, not a comprehensive list.)

The wickedness of enterprise IT

Most IT initiatives in large organisations are planned, designed and executed in a top-down manner, with little or no attempt to understand the existing culture and/or on-the-ground realities. This observation applies not only to enterprise software projects, such as those involving Collaboration or Customer Relationship Management platforms, but also to design and process-driven IT functions like architecture and service management.

Top-down approaches are liable to fail because enterprise IT displays many of the characteristics of wicked problems.  In particular, organization-wide IT initiatives:

  1. Are one-shot operations – for example, an ERP system is simply too expensive to implement over and over again.
  2. Have no stopping rule – enterprise IT systems are never completely done; there are always things to be fixed and additional features to be implemented.
  3. Are highly contentious – whether or not an initiative is good, or even necessary, depends on who you ask.
  4. Could be done in other, possibly “better”, ways – and the problem is that one person’s “better” is another one’s “worse”!
  5. Are essentially unique – and don’t let vendors or Big $$$ consultants tell you otherwise!

These characteristics make enterprise IT a socially complex problem – that is, different stakeholder groups have different perceptions of the problem that the initiative is intended to address.   The most important implication of social complexity is that it cannot be tackled using rational methods of planning, design and implementation that are taught in schools, propagated in books, or evangelized by standards authorities and assorted snake oil salespersons.

Enter emergent design

The term emergent design was coined by David Cavallo in the context of technology-driven education reforms in indigenous cultures (the original paper by David Cavallo can be accessed here). Cavallo observed that traditional systems engineering approaches that attempt to change an educational system in a top-down manner fail primarily because they do not take into account the unique features of local cultures. Instead, he found that using the existing culture as a starting point from which to work towards systemic change offered a much better chance of the new ways taking root. In his words, “[the] adoption and implementation of new methodologies needs to be based in, and grow from, the existing culture.”

Cavallo’s words hold the key to understanding emergent design in the context of enterprise IT. The point is that any enterprise IT initiative necessarily affects many stakeholders, and should therefore start by taking their concerns seriously.

“Ah, so it’s like Agile software development,” concludes the Agilista.

Not quite. As I will discuss in the remainder of this post, although emergent design shares a number of features with Agile methods, there’s considerably more to it than that. That said, chances are that good Agile coaches are emergent design practitioners without knowing it. This is something that will become apparent as we go on.

Guidelines for emergent design

I have, for a while, been thinking about what emergent design means in the context of enterprise IT. Among other things, I have been looking at how it might be applied to a wide variety of initiatives that are traditionally planned upfront – things such as offshoring, and enterprise-wide projects such as data warehouse or enterprise resource planning initiatives.

In one of those serendipitous occurrences, last week I happened to re-read an old series of articles entitled Confessions of a post-SharePoint Architect written by my friend, the ace sensemaker and emergent entrepreneur, Paul Culmsee. Although the series focuses on emergent design principles in the context of the Microsoft SharePoint platform, many of the points that Paul makes apply to enterprise IT in general.  In addition to material drawn from Paul’s blog, I also borrow from a few posts on my own blog. In the interests of space I have provided only a brief overview of the points because they have been elaborated elsewhere. The original pieces fill in a lot of relevant detail, so please do follow the links if you have the inclination and the time.

With that said, let’s get to it.

Be a midwife rather than an expert

In a paper entitled, On the planning crisis, Horst Rittel  (the man who coined the term wicked problem) wrote:

You do not learn in school how to deal with wicked problems…expertise and ignorance is distributed over all participants in a wicked problem. There is a symmetry of ignorance among those who participate because nobody knows better by virtue of his degrees or his status. There are no experts (which is irritating for experts), and if experts there are, they are only experts in guiding the process of dealing with a wicked problem, but not for the subject matter of the problem.

The first guideline of emergent design is to realize that no one is an expert – not you nor your Big $$$ consultant. The only way to build a robust and lasting system or process is for everyone to put their heads together and work towards it in a collaborative fashion, dispensing with the pretense that one can outsource one’s thinking to the “expert”. Indeed, the role of the “expert” is to create and foster the conditions for such collaboration to occur. Paul and I elaborate on this point at length in our book and this paper (summarized in this post).

In brief, the knowledge required to successfully implement an enterprise system is distributed across all stakeholders (analysts, consultants, architects and, above all, users). Pulling all this together into a coherent whole has more to do with facilitation and people skills than technology.

Ensure that governance is about enablement rather than control

Most organisations have onerous procedures to ensure that people do the right thing – the poor system lead drowns in a ream of documentation that she is required to “read and understand”; things have to be documented according to certain standards etc. etc. All these procedures are aimed at keeping people on the straight and narrow path of IT righteousness.

I submit that most governance regimes within organisations encourage a checkbox-based compliance mentality aimed at ensuring that people comply in letter, but not necessarily in spirit (actually, never in spirit).  As Paul mentions in this post, governance ought to be about enablement rather than compliance or control.

There’s a very simple test to tell one from the other: when you come across a procedure such as an SOP or a methodology that you are required to follow, ask yourself this question: does this help me do my job?

If the answer is positive, the procedure is an enabler; if not, it is likely a control that is primarily intended as a CYA mechanism.

Do not penalize people for learning

The main rationale behind iterative and incremental approaches to software development is that they encourage (and take advantage of) continuous learning. Incremental increases in functionality are easier to test exhaustively, and errors are also easier to correct. Reviews and retrospectives also tend to be more focused, leading to a better chance of lessons actually being learnt.  Thanks to the Agile movement, this is now well known and understood in mainstream IT.

However, learning is not just a matter of using iterative/incremental methodologies; one also needs to build an environment that encourages it. This is a trickier matter because it depends on things that are outside an individual manager’s control; indeed, it has more to do with the entire IT function or even the organization. In an organisation with a strong blame culture, the culture tends to win against pretty much any methodology, agile or otherwise. Blame cultures preclude learning because mistakes are punished and people are scapegoated as a result. Check out this article on learning organizations for more on this topic, and this post for a more nuanced (realistic?) view.

Having said that about the importance of learning, it is also important to note that there are some situations where learning matters less: namely, work that can be planned and scripted in detail up front.  It is important to be able to distinguish between the two types of situations…which brings us to the next point.

Understand the difference between complicated and complex initiatives

Requirements analysis is one of the first activities in traditional system development. Most enterprise IT initiatives that are driven by a vendor or consultant will have many sessions for this at the front-end of an engagement. Enterprise wisdom tells us that things need to be specified in detail at the start. The rationale behind this is to set requirements in stone so that the entire project can be planned in detail before actual implementation begins.   Such an approach is fine if one knows for sure:

  1. How the future is going to unfold (and that appropriate contingencies are in place for adverse events);
  2. That users have a clear idea of what they want, and
  3. That requirements will not change (or will change minimally).

It is obvious that this approach will be disastrous if any of the above assumptions is incorrect. Unfortunately, it is more often the case that the assumptions do not hold, as evidenced by the innumerable IT projects that have failed due to inadequate risk management, poor scope clarity and/or uncontrolled change.

So how does one distinguish between initiatives that can be planned in detail upfront and those that can’t?

The distinction is best illustrated via an example: consider a project to replace a fixed-line phone system with VoIP versus an ERP project. The first project has a fixed set of requirements across different groups. The second one, in contrast, involves diverse stakeholder groups, each with their own unique expectations of the system. Both projects are complicated from the technology point of view, but the second one has elements of wickedness arising from social complexity.  Consequently, the two projects cannot be run in the same way.  In particular, the first one can be planned in detail upfront while the second one cannot.  Borrowing from David Snowden’s Cynefin framework, we call the first type of project complicated and the second one complex. You need to understand which kind of initiative you are dealing with before deciding which project management approach would be appropriate.

Beware of platitudinous goals

The enterprise IT marketplace is one that is largely buzzword driven. The in-vogue buzzwords at the time this piece was written were the cloud and big data. Buzzwords, while sounding "right", are actually platitudes – words that are devoid of meaning because different people interpret and use them differently.  The use of platitudes, therefore, results in confusion rather than clarity. For example, your information security guy may be wary of the cloud because he sees it as a potential security risk whereas a business user may view it positively because it liberates her from the clutches of a ponderous IT department. (Check out this video for a cautionary fable regarding a poorly thought out cloud strategy.)

People tend to use platitudes as mental shortcuts to avoid thinking things through and coming up with their own opinions. It is therefore pointless to ask a person who uses a platitude to clarify what he or she means: they have not thought it through and will therefore be unable to give you a good answer.

The best way to deconstruct a platitude is via an oblique approach that is best illustrated through an example.

Say someone tells you that they want to improve efficiency (a rather common platitude). Asking them to define efficiency is not a good way to go because the answer you get is likely to be couched in generalities such as higher productivity and performance, words that are world class platitudes in their own right!  Instead, it is better to ask them what difference would be apparent if efficiency were to improve. To answer this question, they would have to come down from platitude-land and start thinking about concrete, measurable outcomes.

Use open questions to broaden perspectives

A more general point to note from the foregoing is that the framing of questions matters, particularly when one wants people to come up with ideas. For example, instead of asking people what they like (or dislike) about a particular approach, it is generally better to ask them what aspects of the approach are important to them. The latter question is neutrally framed, so it does not impose any constraints on their thinking.

Another good way to get people thinking about possibilities is to ask them what they would like to do if there were no constraints (such as budget or time, say).  Conversely, if you encounter a constraining factor (like a company policy), it is sometimes helpful to ask what the intent behind the policy is.

If posed in the right way and in the right environment, answers to such questions get people to think beyond their immediate concerns and focus on purposes and outcomes instead.

Check out Paul’s posts on powerful questions to find out more about these perspective-expanding questions.

Understand the need for different types of thinking

One of the ironies of enterprise IT initiatives is that the most important decisions on projects have to be made when the information available is the least. As I wrote in the introduction to this paper,

The early stages of projects are fraught with ambiguity. Yet, it is at this “front end” of projects that the most important decisions have to be made. Front-end decisions are hard because there is:

  • uncertainty about scope, i.e. about what needs to be done;
  • uncertainty about rationale, i.e. why it needs to be done; and
  • uncertainty about approach, i.e. how it should be done.

Arguably, the lack of clarity regarding any of these can sow the seeds of failure in the early stages of a project.

The standard approach is to treat uncertainty as a problem that can be solved through convergent thinking – i.e. the kind of thinking that assumes a problem has a single “correct answer.” However, project uncertainty has a social dimension; different people have different perceptions of things like scope and rationale, as well as different coping mechanisms for ambiguity. So, project uncertainty is a wicked problem that has no single “right answer.” This can cause anxiety for some. One therefore needs to begin with divergent thinking, which is largely about generating ideas and options and move to convergent thinking only when:

  1. The group has a shared understanding of the main issues.
  2. An adequate set of options has been generated.

As I alluded to above, people tend to show a preference for one type of thinking over the other. The strength of collaborative problem solving lies precisely in the fact that a group is likely to have a mix of the two types of thinkers.

It is perhaps obvious, but still worth mentioning that the other standard way to deal with uncertainty is to impose a solution by diktat or governance. Clearly such approaches are doomed to fail because they suppress the problem instead of solving it.

Consider long term consequences

It is an unfortunate fact of life that cost tends to be the ultimate arbiter on IT decisions. Vendors know this, and take advantage of it when crafting their proposals. The contract goes to the lowest bidder and the rest, as they say, is tragedy. Although cost is an important criterion in IT decisions, making it the overriding concern is a recipe for future disappointment.

The general lesson to draw from this is that one must consider the longer-term consequences of one’s choices. This can be hard to do because the distant future is less salient than the present or the immediate future. A good way to look beyond immediate concerns (such as cost) is to use the solution after next principle proposed by Gerald Nadler and Shozo Hibino in their book, Breakthrough Thinking. The basic idea behind the principle is to get people to focus on the goals that lie beyond the immediate goal. The process of thinking about and articulating longer-term goals can often provide insights into potential problems with the current goals and/or how they are being achieved.

Build in spare capacity

In his book on antifragility, Nassim Nicholas Taleb points out that the opposite of fragility is not robustness or resilience; rather, it is the ability to thrive on or benefit from uncertainty. There is no word in the English language to describe such behavior, and that is what led him to coin the term antifragile.

In a post inspired by the book, I outlined the elements of an antifragile IT strategy.  One of the key points of such a strategy is the assumption that, despite our best laid plans, it is inevitable that something important will have been overlooked. It is therefore important to build in some spare capacity to deal with unexpected events and demands. Unfortunately, experience tells me that many enterprise IT systems operate at the limits of their capacity, with little or nothing in reserve.  This is a disaster waiting to happen.

Design so as to increase your future choices

This is perhaps the most important point in my list because it encapsulates all the others. I have adapted it from Heinz von Foerster’s ethical imperative, which states that one should always act so as to increase the number of choices in the future.  This principle is useful as a tiebreaker between two designs that are close in all other respects. However, there is more to it than just that.  I have found it particularly useful in making decisions regarding IT outsourcing and software as a service. There is very little critical scrutiny of the benefits of these as claimed by vendors and advisories, and this principle can help you see through the fog of marketing rhetoric and advisory hype.

Parting thoughts

One of the paradoxes of life is that the harder we strive for something – money, happiness or whatever – the more unattainable it seems to become. Indeed, some of the most financially successful people (Bill Gates and Warren Buffett, for example) became rich by doing what they loved. Their financial success was a happy byproduct of their engagement in their work. The economist John Kay formalized this paradoxical notion in his concept of obliquity – that certain goals are best attained via indirect means.

If you have been patient enough to read through this piece, you will have noted that some of the guidelines listed above have a hint of obliquity about them. This is no surprise; indeed it is inevitable in a design approach that values people over processes and improvisation (or even serendipity) over planning.

I usually conclude my posts with a summary of the main message. For reasons that should be obvious I will not do that here. Instead, I will end by pointing out yet another feature of emergent design that you have likely figured out already: the guidelines listed above are actually domain neutral; they can be applied to pretty much any area of endeavour. This is no surprise: wicked problems aren’t domain specific and therefore neither are the techniques to deal with them.  For example, see my interview with Neil Preston for a perspective on emergent design in organizational change and <advertisement> my book co-authored with Paul </advertisement> for ways in which it can be applied to domains as diverse as town planning and knowledge management.

…and now I really must stop for I have gone on too long.  Many thanks for your patience!

Acknowledgement

My thanks go out to Paul Culmsee for his feedback on a draft version of this post.

Written by K

October 28, 2014 at 9:04 pm

Making sense of organizational change – a conversation with Neil Preston

with 6 comments

In this instalment of my sensemakers series, I chat with Dr. Neil Preston, an Organisational Psychologist  based in Perth, about the very topical issue of organizational change. In a wide-ranging conversation, Neil draws interesting connections between myths that are deeply embedded in Western thought and the way we think about and implement change…and also how we could do it so much better.

KA: Hi Neil, thanks for being a guest on my ongoing series of interviews with sensemakers. You and I have corresponded for at least a year now via email, so it’s a real pleasure to finally meet you, albeit virtually. I’d like to kick things off by asking you to say a bit about yourself and your work.

NP: Well, I’m Dr. Neil Preston. I’m an organizational psychologist…what that means is that I’m specially registered in the area of organizational psychology, much like a clinical psychologist. My background professionally is that I originally worked in mental health, as a senior research psychologist. I’ve published 30 to 40 peer-reviewed papers in psychiatry, mental health and psychometrics,  so I know my way around empirical psychology.  My real love, however, has always been in organizational and industrial psychology, so in 2006 I decided to leave the Health Department of Western Australia and move into full time consulting.

Consulting work has led me mainly into infrastructure projects – these are very large, complex projects where organisations from both the private and public sector have to get together and create alliances in order to get the work done. My job on these projects – as I often put it to people – is to make the Addams Family look like the Brady Bunch [laughter]. The idea is to get different value systems and organizational cultures to align, with the aim of getting to a shared understanding of project goals and a shared commitment to achieving them.

My original approach was very diagnostic – which is the way psychologists are taught their trade – but as problems have become more complex, I’ve had to resort to dialogical (rather than diagnostic) approaches. As you well know, dialogue is more commensurate with complexity than diagnosis, so dialogical approaches are more appropriate for so-called wicked problems. This approach then led me to complex systems theory which in turn led to an area of work that Paul Culmsee, you and I are looking into: emergent design practices. (Editor’s note: This refers to a method of problem solving in which solutions are not imposed up front but emerge from dialogue between various stakeholders.)

KA: OK, so could you tell us a bit about the kinds of problems you get called in to tackle?

NP: Very broadly speaking, I’m generally called in when organisations have goals that are incommensurate with each other. For example: a billion dollar road that has to be on time and on budget…but, by the way, the alignment of the road also takes out a nesting site of a Carnaby White Tailed Cockatoo which triggers the environmental biodiversity protection act which in turn triggers issues with local councils and so on.

Complexity in projects often arises from situations like these, where the issue is not just about delivering on time and on budget, but also about creating a sustainable habitat and ensuring alignment with local governments etc.

KA:  So very broadly, I guess one could say that your work deals with the problems associated with change. The reason I put it in this way is that change is something that most people who work in organisations would have had to deal with – either as executives who initiate the change, managers who are charged with implementing it or employees who are on the receiving end of it.  The one thing I’ve noticed through experience – initially as a consultant and then working in big organisations – is that change is formulated and implemented in a very prescriptive way.  However, the end results are often less than satisfactory because there are many unintended consequences (loss of morale, drop in productivity etc.) – much like the unintended consequences of large infrastructure projects.  I’ve long wondered why this is so: why, after decades of research and experience, do we still get it so wrong?

NP:  Let me give you an answer from a psychologist’s perspective. There are a couple of sub-disciplines of psychology called depth and archetypal psychology that look at myth.  The kind of change management programs that we enact are driven by a (predominantly) Western myth of heroic intervention.

James Hillman, an archetypal psychologist, once said that a myth is what is real. This is somewhat contrary to the usual sense in which the word is used, because we usually think of a myth as being something that is not real. However, Hillman is right because a myth is really an archetype – an overarching way of seeing the world that we believe to be true. The myth of the hero – the good guy overcoming all adversity to slay the bad guy – is essentially an interventionist one. It is based on the Graeco-Roman notion of the exercise of individual will. Does that make sense so far?

KA: Yeah absolutely. Please go on.

NP: OK, so this myth is dominant in the Western imagination. For example, any movie that a kid might go to see like, say, Star Wars is really about the exercise of the individual will. In much the same way, the paradigm in which your typical change management program operates is very much (individual) action and intervention oriented. Even going back to Homeric times – the Iliad and Odyssey are essentially stories about individuals exercising authority, power…and excellence is another word that crops up often too. The objective of all this of course is to effect dramatic, full-frontal change.

However, there is a problem with this myth, and it is that it assumes that things are not complex. It assumes that simple linear, cause-effect explanations hold – that if you do A then B will happen (if you restructure you will save costs, for example). Such models are convenient because they seem rational on the surface, perhaps because they are easy to understand. However, they overlook the little details that often trip things up. As a result, such change often has unforeseen consequences.

Unfortunately, much of the stuff that comes out of the Big 4 consultancies is based on this myth.  The thing to note is that they do it not because it works but because it is in tune with the dominant myth of the Western business world.

KA: What you are saying definitely strikes a chord. What’s strange to me, however, is that there have been people challenging this for quite a while now. You mentioned the predominantly linear approach – A causes B sort of thinking – that change management practitioners tend to adopt. Now, as you well know, systems theorists and cyberneticists have proposed alternate approaches that are more cognizant of the multifaceted nature of change – and they did so over fifty years ago! What happened to all that? When I read some of the papers, I see that they really speak to the problems we face now, but they seem to have been all but forgotten (Editor’s note: see this post that draws on work by the prominent cyberneticist, Heinz von Foerster, for example). One can’t help but wonder why that is so….

NP: Well that’s because myths are incredibly sticky. We are talking about  an ancient myth of the exercise of the individual human will.  And, by the way, it’s a very Western thing: I remember once hearing on the radio that the Western notion of the “squeaky wheel getting the grease” has an Eastern counterpart that goes something like, “the loudest goose is first to lose his head.”  The point is, the two cultures have a very different way of looking at the world.  That myth – the hero myth – is very much brought into the way we tell stories about organisations.

Now, why does that matter? Well, JR Hackman, an organizational psychologist, said it quite brilliantly. He called our fixation on the hero myth (in the context of change) the leadership attribution error – he argues that we tend to over-attribute the success of a change process to the salient things that we can see, which is (usually) the leader. As a result we tend to overlook the hidden factors which give rise to the actual performance of the organization.  These factors usually relate to the latent conditions present in the organization rather than specific causes like a leader’s actions.

So there are two types of change: planned change and emergent change.  Planned change is the way organisations usually think about change. It is a causal view in which certain actions give rise to certain outcomes. But here is the problem: the causal approach focuses primarily on salient features, ignoring all the other things that might be going on.

Now, cybernetics and systems theory do a better job of taking into account features that are hidden. However, as you mentioned, they have not had much uptake.  I think the reason for this is that myths are incredibly sticky…that is the best answer I can give.

KA: Hmm that’s interesting…I’d never thought of it that way – the stickiness of myths as blinding us to other viewpoints.   Is there something in the nature of human thought or human minds that make us latch on to over-simplified explanations?

NP: Well, there’s this notion of cognitive bias – persistent biases in human perception or judgement (Editor’s Note: also see this post on the role of cognitive bias in project failure). The leadership attribution error is precisely such a bias. I should point out that these biases aren’t necessarily a problem; they just happen to be the way humans think. And there are good evolutionary reasons for the existence of biases: we can’t process every little bit of information that comes to us through our senses, and these biases offer a means to filter out what is unimportant. Unfortunately, sometimes they cause us to overlook what is important. They are heuristics and, like all heuristics, they don’t always work.

So in the case of the leadership attribution bias – yes, leadership does have an effect, but it is not as great as people think. In fact, work done by Wageman (who worked with Hackman) shows that what is more important for team performance are the conditions in which teams work rather than the qualities or abilities of the leader.

KA: From experience I would have to say that rings true: conditions trump causes any day as far as team performance is concerned.

NP: Yeah, and there’s a good reason for it; it is so simple that we often overlook it. Take the example of sending a rocket to the moon. If you set up the right conditions for the rocket – the right amount of fuel, the right load and so forth – then everything that is necessary for the performance of the rocket is already set up. The person who actually steers the rocket is not as critical to the performance as the conditions are. And the conditions are already present when the rocket is in flight.

Similarly, in the case of organizational change, we should not be looking for causes – be it leadership or planned actions or whatever – but for the conditions that might give rise to emergent change.

KA: Yeah, but conditions are causes too, aren’t they?

NP: Yes they are, but the point is that they aren’t salient ones – that is, they aren’t immediately obvious. Moreover, and this is the important point: you do not know the exact outcomes of those causes except that they will in general be positive if the conditions are right and negative if they aren’t.

KA: That makes sense. Now I’d like to ask you about a related matter. When dealing with change or anything else, organisations invariably seem to operate at the limits of their capacity.  Leaders always talk about “pushing ourselves” or “pushing the envelope” and so on.  On the other hand, there’s also a great deal of talk about flexibility and the capacity for change, but we never seem to build this into our organisations. Is there a way one can do this?

NP: Yes, you can actually build in resilience. Organisations generally like to keep their systems and processes tightly coupled – that is, highly dependent on each other. This tends to make them fragile or prone to breakdown. So, one of the things organisations can do to build resilience is to keep systems and processes loosely coupled. (Editor’s note: for example, devolve decision-making authority to the lowest possible level in the organization. This increases flexibility and responsiveness while having the added benefit of reducing management overhead).

Conditions also play a role here. One of the things that organisations like to talk about is innovation. The point is you can’t put in place processes for innovation but you can create conditions that might foster it.  You can’t ask people whether they “did their 15 minutes of innovation today” but you can give them the discretionary freedom to do things that have nothing to do with their work…and they just might do something that goes above and beyond their regular jobs. But of course what underpins all this is trust. Without trust you simply cannot build in flexibility or resilience.

KA: This really strikes a chord and let me tell you why.  I read Taleb’s book a while ago. As you probably know, the book is about antifragility, which he defines as the ability to benefit from uncertainty rather than just being resilient to it. After I read the book I wrote a post on what an antifragile IT strategy might look like…and in an uncanny resonance with what you just said, I made the claim that trust would be the single most important element of the strategy [laughter].

NP: Yeah, and trust is not something you receive as much as you give. So as a psychologist I know why its betrayal is so damaging to people. You know, "Et tu, Brute?" – Caesar’s famous line – it was the betrayal of trust that was so damaging. Once trust is gone there’s nothing left.

KA: Indeed, I sometimes feel that the key job of a manager is to develop trust-based relationships with his or her peers and subordinates. However, what I see in the workplace is often (though definitely not always) the opposite: people simply do not trust their managers because managers are quick to pass the blame down (or  even across) the hierarchy rather than absorbing it…which arguably, and ethically, is their job. They should be taking the heat so that people can get on with actual work. Unfortunately managers who do this are not as common as they should be.

NP: We’re getting into a complex area here, and it is one that I deal with at length in my masterclass on collaborative maturity and leadership. This is the old scapegoating mechanism at work,  and it is related to the leadership attribution error and the hero myth. If the attribution is back to the individual, then the blame must also be attributable to an individual. In fact, I have this slide in one of my presentations that goes, “a scapegoat is almost as useful as the solution to a problem.” [laughter]

Now, there are two questions here. “The scapegoat” is the answer to the question “Who is responsible?” However, it is more important to look at conditions rather than causes, so the real question is, “How did this situation come about?” When you look at “Who” questions, you are immediately going into questions of character. It elicits responses like “Yeah, it’s Kailash’s fault because he is that kind of a guy…he is an INTP or whatever.” What’s happening here is that the problem is explained away because it is attributed to Kailash’s character. You see what is going on…and why it is so dangerous?

KA: Yeah, that’s really interesting.

NP: And you see, then they’ll say something like, “…so let’s take Kailash out and put Neil in”…but the point is that if the conditions remain the same, Neil will fall down the same hole.

KA: It’s interesting the way you tie both things back to the individual – the individual as hero and the individual as scapegoat.

NP:  Yes, it’s two sides of the same coin. Followership acquiesces to leadership: Kailash will follow Neil, say, to the Promised Land. If we get there, Neil gets the credit but if we don’t, he gets the blame.

KA: Very interesting, but this brings up another question. Managers and leaders might turn around and say, “It’s all very well to criticize the way we operate, but the fact is that it is impossible to involve all stakeholders in determining, say, a strategy. So in a sense, we are forced to take on the role of “heroes,” as you put it.”

So my question is: what are some of the ways in which organisations can address the difficulties associated with collective decision-making?

NP: Of course, it is often impossible to include all stakeholders in a decision-making process, particularly around matters such as organisational strategy. What you have to do first is figure out who needs to be involved so that all interests are fairly represented. Second, I’m attracted to the whole idea of divergent (open-ended) and convergent (decisive) thinking. For example, if a problem is wicked or complex, there is no point attempting to use expert knowledge or analysis exclusively (Editor’s note: because no single expert holds the answers and there isn’t enough information for a sensible, unbiased analysis). Instead, one has to use collective intelligence or the wisdom of the crowd by seeking opinions from all groups of stakeholders who have a stake in the problem. This is divergent thinking.

However, there comes a time when one has to "make an incision in reality" – i.e. stop consultation and make the best possible decision based on data and ethics – one has to use both IQ and EQ.  This is the convergent side of the coin.

Another problem is that one often has the data one needs to make the right decision, but the decision does not get made for reasons of ideology. Then it becomes a question of power rather than collective intelligence: a solution is imposed rather than allowed to emerge.

KA: Well that happens often enough – this “short-circuiting” of the decision-making process by those in positions of power.

NP:  Yes, and it is why I think deliberative decision-making – which comes from the Western notion of deliberative democracy, i.e. decision-making based on dialogue and consultation – is the best way forward, but it can be a challenge to implement. Democracy is slow, but it is generally more accurate…

KA: Yes, that’s true, but it can also meander.

NP: Sure, everything is bound by certain limitations (like time) and that’s why you have to know when to intervene. One of the important things for a leader to have in this connection is negative capability – which is not "negative" in the usual sense of the word, but rather the ability to be comfortable with ambiguity and to intervene in ambiguous situations in a way that gets some kind of useful outcome.

Of course, acting in such situations also means that one has to have good feedback mechanisms in place; one must know how things are actually working on the ground so that one can take corrective actions if needed.  But, in the end, the success of this way of working depends critically on having the right conditions in place.  If you don’t set up the right conditions, any intervention can have catastrophic consequences.

If I may talk politically for a minute – the current situation in the Middle East is a classic example of a planned intervention:  direct, frontal, dramatic, causal, linear and supposedly rational.  However, if the right conditions are not in place, such interventions can have unforeseen consequences that completely overshadow the alleged benefits. And that is exactly what we have seen.

In general I would say that emergent change is more likely to succeed than large-scale, direct, planned change. The example one hears all the time is that of continuous improvement – where small changes are put in place and then adjusted based on feedback on how they are working.

KA: This is a matter of some frustration for me: in general people will agree that collaboration and collective decision-making are good, but when the time comes, they revert to their old, top-down ways of working.

NP: Yes, well when I go into a consulting engagement on collaborative maturity, one of the first things I ask people is whether they want to use the collaborative process to inform people or to influence them.  Often I find that they only want to use it to inform people. There is a big difference between the two: influencing is emergent, informing isn’t.

KA: This begs a question: say you walk into an organization where people say that they want to use collaborative processes to influence rather than inform, but you see that the culture is all wrong and it isn’t going to work. Do you actually tell them, “hey, this is not going to work in your organization?”

NP: Well, if people don’t feel safe to speak their truth then it isn’t going to work. That’s why I’m so interested in Hackman’s work on conditions over causes. Coming to your question: I don’t necessarily tell people that it’s not going to work, because I believe it is more productive to invite them to explore the implications of doing things in a certain way. That way, they get to see for themselves how some of the things they are doing might actually be improved. One doesn’t preach, but one hands things back to them.

In psychology there are these terms, transference and countertransference. In this context transference would be where a consultant thinks, “I’m a consultant so I’m going to assume a consultant persona  by acting and behaving like I have all the answers”, and countertransference would be where the client reinforces this by saying something like, “you are the expert and you have all the answers.”  Handing back stops this transference-countertransference cycle. So what we do is to get people to explore the consequences of their actions and thus see things that might have been hidden from their view. It is not to say “I told you so,” but rather “what are the implications of going down this path.” The idea is to appeal to the ethical or good side in human beings…and I believe that human beings are fundamentally good rather than not.

KA: I like your use of the word “ethical” here. I think that is really important and is what is often missing. One hears a lot about ethics in business these days, but it is most often taught and talked about in a very superficial way. The reality, however, is that the resolution of most wicked problems involves ethical considerations rather than logic and rationality…and this is something that many people do not understand. It isn’t about doing things right, rather it is about doing the right things.

NP: Yes, and this is related to what I call "meaning over motivation" – the idea being that instead of attempting to motivate people to do something, you should try providing them with meaning. When you do this you will often find that change comes for free.  And it is worth noting that meaning has both an emotional and a rational component – or, put a little differently, an ethical and a logical one. In one of his books, Daniel Pink makes the point that uncoupling ethics from profit can have catastrophic consequences…and we have good examples of that in recent history.

The broad lesson here is that if the conditions aren’t right then it is inevitable that unethical behavior will dominate.

KA: Yeah well human nature will ensure that won’t it?

NP: [laughs] Yeah, and you don’t need a psychologist to tell you that.

KA: [laughs] Indeed…and I think that would be a good note on which to bring this conversation to a close. Neil, thanks so much for your time and insights. It’s been a pleasure to chat with you  and I look forward to catching up with you again…hopefully in person, in the not too distant future.

NP: Yeah, Singapore and Perth are not that far apart…

Written by K

September 9, 2014 at 9:52 pm

The consultant’s dilemma – a business fable

with 14 comments

It felt like a homecoming. That characteristic university smell  (books,  spearmint gum and a hint of cologne) permeated the hallway. It brought back memories of his student days: the cut and thrust of classroom debates, all-nighters before exams and near-all-nighters at Harry’s Bar on the weekends. He was amazed at how evocative that smell was.

Rich checked the directory near the noticeboard and found that the prof was still in the same shoe-box office he had occupied ten years ago. He headed down the hallway wondering why the best teachers seemed to get the least desirable offices. Perhaps it was inevitable in a university system that rated grantsmanship over teaching.

It was good of the prof to see him at short notice. He had taken a chance really, calling on impulse because he had a few hours to kill before his flight home. There was too much travel in this job, but he couldn’t complain: he knew what he was getting into when he signed up.  No, his problem was deeper. He no longer believed in what he did. The advice he gave and the impressive, highly polished reports  he wrote for clients were useless…no, worse, they were dangerous.

He knew he was at a crossroads. Maybe, just maybe, the prof would be able to point him in the right direction.

Nevertheless, he was assailed by doubt as he approached the prof’s office. He didn’t have any right to burden the prof with his problems …he could still call and make an excuse for not showing up. Should he leave?

He shook his head. No, now that he was here he might as well at least say hello. He knocked on the door.

“Come in,” said the familiar voice.

He went in.

“Ah, Rich, it is good to see you after all these years. You’re looking well,” said the prof, getting up and shaking his hand warmly.

After a brief exchange of pleasantries, he asked Rich to take a seat.

“Just give me a minute, I’m down to the last paper in this pile,” said the prof, gesturing at a heap of term papers in front of him. “If I don’t do it now, I never will.”

“Take your time prof,” said Rich, as he sat down.

Rich cast his eye over the bookshelf behind the prof’s desk.  The titles on the shelf reflected the prof’s main interest: twentieth century philosophy. A title by  Habermas caught his eye.

Habermas!

Rich recalled a class in which the prof had talked about Habermas’ work on communicative rationality and its utility in making sense of ambiguous issues in management. It was in that lecture that the prof had introduced them to the evocative term that captured ambiguity in management (and other fields) so well, wicked problems.

There were many things the prof spoke of, but ambiguity and uncertainty were his overarching themes.  His lectures stood in stark contrast to those of his more illustrious peers: the prof dealt with reality in all its messiness, the other guys lived in a fantasy  world in which their neat models worked and  things went according to plan.

Rich had learnt  from the prof that philosophy was not an arcane subject, but one that held important lessons for everyone (including hotshot managers!). Much of what he learnt in that single term of philosophy had stayed with him. Indeed, it was what had brought him back to the prof’s door after all these years.

“All done,” said the prof, putting his pen down and flicking the marked paper into the pile in front of him.  He looked up at Rich: “Tell you what, let’s go to the café. The air-conditioning there is so much better,” he added, somewhat apologetically.

As they walked out of the prof’s office, Rich couldn’t help but wonder why the prof stuck around in a place where he remained unrecognized and unappreciated.

The café was busy. Though it was only mid-afternoon, the crowd was  already in Friday evening mode. Rich and the prof ordered their coffees and found a spot at the quieter end of the cafe.

After some small talk, the prof looked at him and said, "Pardon my saying so, Rich, but you seem preoccupied. Is there something you want to talk about?"

“Yes, there is…well, there was, but I’m not so sure now.”

“You might as well ask,” said the prof. “My time is not billable…unlike yours.” His face crinkled into a smile that said, no offence intended.

“Well, as I mentioned when I called you this morning, I’m a management consultant with Big Consulting. By all measures, I’m doing quite well: excellent pay, good ratings from my managers and clients, promotions etc. The problem is, over the last month or so I’ve been feeling like a faker who plays on clients’ insecurities, selling them advice and solutions that are simplistic and cause more problems than they solve,” said Rich.

“Hmmm,” said the prof, “I’m curious. What triggered these thoughts after a decade in the game?”

“Well, I reckon it was an engagement that I completed a couple of months ago. I was the principal consultant for a big change management initiative at a multinational.  It was my first gig as a lead consultant for a change program this size. I was responsible for managing all aspects of the engagement – right from the initial discussions with the client,  to advising them on the change process and finally implementing it.” He folded his hands behind his head and leaned back in his chair as he continued,  “In theory I’m supposed to offer independent advice. In reality, though, there is considerable pressure to use our standard, trademarked solutions. Have you heard of our 5 X Model of Change Management?”

“Yes, I have,” nodded the prof.

“Well, I could see that the prescriptions of 5 X would not work for that organization. But, as I said, I had no choice in the matter.”

“Uh-huh, and then?”

“As I had foreseen,” said Rich, “the change was a painful, messy one for the organization. It even hit their bottom line significantly.  They are trying to cover it up, but everyone in the organization knows that the change is the real reason for the drop in earnings.  Despite this, Big Consulting has emerged unscathed. A bunch of middle managers on the client’s side have taken the rap.” He shook his head ruefully. “They were asked to leave,” he said.

“That’s terrible,” said the prof, “I can well understand how you feel.”

“Yes, I should not have prescribed 5 X. It is a lemon. The question is: what should I do now?” queried Rich.

“That’s for you to decide. You can’t change the past, but you might be able to influence the future,” said the prof with a smile.

“I was hoping you could advise me.”

“I have no doubt that you have reflected on the experience. What did you conclude?”

“That I should get out of this line of work,” said Rich vehemently.

“What would that achieve?” asked the prof gently.

“Well, at least I won’t be put into such situations again. I’m not worried about finding work, I’m sure I can find a job with the Big Consulting name on my resume,” said Rich.

“That’s true,” said the prof, “but is that all there is to it? There are other things to consider. For instance, Big Consulting will continue selling snake oil. How would you feel about that?”

“Yeah, that is a problem – damned if I do, damned if I don’t,” replied Rich. “You know, when I was sitting in your office, I recalled that you had spoken about such dilemmas in one of your classes. You said that the difficulty with such wicked issues is that they cannot be decided based on facts alone, because the facts themselves are either scarce or contested…or both!”

“That’s right,” said the prof, “and this is a wicked problem of a kind that is very common, not just in professional work but also in life.  Even relatively mundane issues, such as whether or not to switch jobs, have wicked elements. What we forget sometimes, though, is that our decisions on such matters – or rather, our consequent actions – might also affect others.”

“So you’re saying I’m not the only stakeholder (if I can use that term) in my problem. Is that right?”

“That’s right, there are other people to consider,” said the prof, “but the problem is you don’t know who they are. They are all the people who will be affected in the future by the decision you make now. If you quit, Big Consulting will go on selling this solution and many more people might be adversely affected. On the other hand, if you stay, you could try to influence the future direction of Big Consulting, but that might involve some discomfort for yourself. This makes your wicked problem an ethical one.  I suspect this is why you’re having a hard time going with the “quit” option.”

There was a brief silence. The prof could see that Rich was thinking things through.

“Prof, I’ve got to hand it to you,” said Rich shaking his head with a smile, “I was so absorbed by the quit/don’t quit dilemma from my personal perspective that I didn’t realize there are other angles to consider.  Thanks, you’ve helped immensely. I’m not sure what I will do, but I do know that what you have just said will help me make a more considered choice.  Thank you!”

“You’re welcome, Rich.”

…And as he boarded his flight later that evening, Rich finally understood why the prof continued to teach at a place where he remained unrecognized and unappreciated.

Written by K

February 13, 2014 at 10:31 pm
