Eight to Late

Sensemaking and Analytics for Organizations

Archive for the ‘Decision Making’ Category

The improbability of success


Anyone who has tidied up after a toddler intuitively understands that making a mess is far easier than creating order. The fundamental reason for this is that the number of messy states in the universe (or a toddler’s room) far outnumbers the ordered ones.  As this point might not be obvious, I’ll demonstrate it via a simple thought experiment involving marbles:

Throw three marbles onto a flat surface.  When the marbles come to rest, you are most likely to end up with a random configuration  as in Figure 1.

Figure 1: A random configuration of 3 marbles

Indeed, you’d be extremely surprised if the three ended up being collinear as in Figure 2.   Note that Figure 2 is just one example of many collinear possibilities, but the point I’m making is that if the marbles are thrown randomly, they are more likely to end up in a random state than a lined-up one.

Figure 2: an unlikely (ordered) configuration

This raises a couple of questions:

Question: On what basis can one claim that the collinear configuration is tidier or more ordered than the non-collinear one?

Naive answer:  It looks more ordered. Yes, tidiness is in the eye of the beholder so it is necessarily subjective. However, I’ll wager that if one took a poll, an overwhelming number of people would say that the configuration in Figure 2 is more ordered than the one in Figure 1.

More sophisticated answer: The “state” of collinear marbles can be described using 2 parameters, the slope and intercept of the straight line that the three marbles lie on (in any coordinate system), whereas the description of the non-collinear state requires 3 parameters. The first state is tidier because it requires fewer parameters. Another way to think about it is that the line can be described by two marbles; the third one is redundant as far as the description of the state is concerned.

Question: Why is a tidier configuration less likely than a messy one?

Answer:  Maybe you see this intuitively and need no proof, but here’s one just in case. Imagine rolling the three marbles one after the other. The first two, regardless of where they end up, will necessarily lie along a line (two points lie on the straight line joining them). Now, I think it is easy to see that if we throw the third marble randomly, it is highly unlikely to end up on that line. Indeed, for the third marble to end up exactly on the same straight line requires a coincidence of near cosmic proportions.

I know, I know, this is not a proof, but I trust it makes the point.
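For readers who’d like a computational check, here is a minimal simulation sketch in Python (the tolerance is an arbitrary choice of mine, made purely for illustration) that estimates how often a randomly thrown third marble lands close to the line defined by the first two:

```python
import random

def random_point():
    """A marble landing at a uniformly random spot on a 1 x 1 surface."""
    return (random.random(), random.random())

def distance_to_line(p1, p2, p3):
    """Perpendicular distance of p3 from the straight line through p1 and p2."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    dx, dy = x2 - x1, y2 - y1
    return abs(dx * (y3 - y1) - dy * (x3 - x1)) / (dx * dx + dy * dy) ** 0.5

trials = 100_000
tolerance = 0.001  # how close to the line counts as "ordered" - an arbitrary choice
ordered = sum(
    distance_to_line(random_point(), random_point(), random_point()) < tolerance
    for _ in range(trials)
)
print(f"'Ordered' outcomes: {ordered} of {trials} random throws")
```

Run this and the ordered outcomes turn out to be a tiny fraction of the total; tighten the tolerance towards exact collinearity and the fraction heads to zero, which is the point of the thought experiment.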

Now, although it is near impossible to get to a collinear end state via random throws, it is possible to approximate it by changing the way we throw the marbles. Here’s how:

  1. Throw the marbles consecutively rather than in one go.
  2. When throwing the third marble, adjust its initial speed and direction in a way that takes into account the positions of the two marbles that are already on the surface. Remember these two already define a straight line.

The third throw is no longer random because it is designed to maximise the chance that the last marble will get as close as possible to the straight line defined by the first two. Done right, you’ll end up with something closer to the configuration in Figure 3 rather than the one in Figure 2.

Figure 3: an “approximately ordered” state
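Continuing in the same vein, here’s a small sketch of the “aimed” third throw (again Python, and again a toy illustration rather than a physical simulation – the wobble parameter is my own stand-in for the residual error in a deliberate throw):

```python
import random

def random_point():
    """A marble landing at a uniformly random spot on a 1 x 1 surface."""
    return (random.random(), random.random())

def distance_to_line(p1, p2, p3):
    """Perpendicular distance of p3 from the straight line through p1 and p2."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    dx, dy = x2 - x1, y2 - y1
    return abs(dx * (y3 - y1) - dy * (x3 - x1)) / (dx * dx + dy * dy) ** 0.5

def aimed_throw(p1, p2, wobble=0.02):
    """Aim the third marble at a point on the line through p1 and p2,
    allowing for a small random error in the throw."""
    (x1, y1), (x2, y2) = p1, p2
    t = random.random()  # pick a spot somewhere along the line
    intended = (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return (intended[0] + random.uniform(-wobble, wobble),
            intended[1] + random.uniform(-wobble, wobble))

p1, p2 = random_point(), random_point()
print("random third throw, distance from line:", distance_to_line(p1, p2, random_point()))
print("aimed third throw, distance from line :", distance_to_line(p1, p2, aimed_throw(p1, p2)))
```

The aimed throw typically lands within the wobble of the line, whereas the random throw usually lands an order of magnitude further away – an approximately ordered state rather than an exactly ordered one.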

Now you’re probably wondering what this has to do with success. I’ll make the connection via an example that will be familiar to many readers of this blog: an organisation’s strategy. However, as I will reiterate later, the arguments I present are very general and can be applied to just about any initiative or situation.

Typically, a strategy sets out goals for an organisation and a plan to achieve them in a specified timeframe. The goals define a number of desirable outcomes, or states, which, by design, are constrained to belong to a (very) small subset of all possible states the organisation can end up in. In direct analogy with the simple model discussed above it is clear that, left to its own devices, the organisation is more likely to end up in one of the overwhelmingly larger number of “failed states” than one of the successful ones. Notwithstanding the popular quote about there being many roads to success, in reality there are a great many more roads to failure.

Of course, that’s precisely why organisations are never “left to their own devices.” Indeed, a strategic plan specifies actions that are intended to make a successful state more likely than an unsuccessful one. However, no plan can guarantee success; it can, at best, make it more likely. As in the marble game, success is ultimately a matter of chance, even when we take actions to make it more likely.

If we accept this, the key question becomes: how can one design a strategy that improves the odds of success?  The marble analogy suggests a way to do this is to:

  1. Define success in terms of an end state that is a natural extension of your current state.
  2. Devise a plan to (approximately) achieve that end state. Such a plan will necessarily build on the current state rather than change it wholesale. Successful change is an evolutionary process rather than a revolutionary one.

My contention is that these points are often ignored by management strategists. More often than not, they will define an end state based on a textbook idealisation, consulting model or (horror!) best practice. The marble analogy shows why copying others is unlikely to succeed.

Figure 4 shows a variant of the marble game in which we have two sets of marbles (or organisations!), one blue, as before, and the other red.


Figure 4: Two distinct configurations of marbles (or organisations)

Now, it is considerably harder to align an additional marble with both sets of marbles than with the blue set alone. Here’s why…

To align with both sets, the new marble has to end up close to the point that lies at the intersection of the blue and red lines in Figure 5. In contrast, to align with the blue set alone, all that’s needed is for it to get close to any point on the blue line.

QED!

Figure 5: Why copying others is not a good idea (see text for explanation)
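A rough back-of-the-envelope comparison makes the difference concrete (the numbers below are mine and purely illustrative): landing within a small distance of a line means hitting a thin strip, whereas landing within the same distance of the intersection point means hitting a small disc.

```python
import math

d = 0.01   # how close counts as "aligned" - an arbitrary choice
L = 1.0    # rough length of the blue line across the surface - also arbitrary

strip_area = 2 * d * L         # target region for aligning with the blue line alone
disc_area = math.pi * d ** 2   # target region for aligning with both lines (near their intersection)

print(f"Target area, blue line only: {strip_area:.6f}")
print(f"Target area, both lines    : {disc_area:.6f}")
print(f"Aligning with both is roughly {strip_area / disc_area:.0f} times harder")
```

Shrink d and the ratio grows without bound, which is why imitating someone else’s configuration while staying true to your own is so much harder than simply building on your own.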

Finally, on a broader note, it should be clear that the arguments made above go beyond organisational strategies. They apply to pretty much any planned action, whether at work or in one’s personal life.

So, to sum up: when developing an organisational (or personal) strategy, the first step is to understand where you are and then identify the minimal actions you need to take in order to get to an “improved” state that is consistent with  your current one. Yes, this is akin to the incremental and evolutionary approach that Agilistas and Leaners have been banging on about for years. However, their prescriptions focus on specific areas: software development and process improvement.  My point is that the basic principles are way broader because they are a direct consequence of a fundamental fact regarding the relative likelihood of order and disorder in a toddler’s room, an organisation, or even the universe at large.

Written by K

April 4, 2017 at 9:16 pm

Uncertainty, ambiguity and the art of decision making


A common myth about decision making in organisations is that it is, by and large, a rational process.   The term rational refers to decision-making methods that are based on the following broad steps:

  1. Identify available options.
  2. Develop criteria for rating options.
  3. Rate options according to criteria developed.
  4. Select the top-ranked option.

Although this appears to be a logical way to proceed it is often difficult to put into practice, primarily because of uncertainty about matters relating to the decision.

Uncertainty can manifest itself in a variety of ways: one could be uncertain about facts, the available options, decision criteria or even one’s own preferences for options.

In this post, I discuss the role of uncertainty in decision making and, more importantly, how one can make well-informed decisions in such situations.

A bit about uncertainty

It is ironic that the term uncertainty is itself vague when used in the context of decision making. There are at least five distinct senses in which it is used:

  1. Uncertainty about decision options.
  2. Uncertainty about one’s preferences for options.
  3. Uncertainty about what criteria are relevant to evaluating the options.
  4. Uncertainty about what data is needed (data relevance).
  5. Uncertainty about the data itself (data accuracy).

Each of these is qualitatively different: uncertainty about data accuracy (item 5 above) is very different from uncertainty regarding decision options (item 1). The former can potentially be dealt with using statistics whereas the latter entails learning more about the decision problem and its context, ideally from different perspectives. Put another way, item 5 is essentially a technical matter whereas item 1 is a deeper issue that may have social, political and – as we shall see – even behavioural dimensions. It is therefore reasonable to expect that the two situations call for vastly different approaches.

Quantifiable uncertainty

A common problem in project management is the estimation of task durations. What’s requested is a “best guess” of the time (in hours or days) it will take to complete a task. Many project schedules represent task durations by point estimates, i.e. by single numbers. The Gantt Chart shown in Figure 1 is a common example: each task is represented by a single expected duration. This is misleading because a single number conveys a sense of certainty that is unwarranted. It is far more accurate, not to mention safer, to quote a range of possible durations.

Figure 1: Gantt Chart (courtesy Wikimedia)


In general, quantifiable uncertainties, such as those conveyed in estimates, should always be quoted as ranges – something along the following lines: task A may take anywhere between 2 and 8 days, with a most likely completion time of 4 days (Figure 2).

Figure 2: Task completion likelihood (3 point estimates)


In this example, aside from stating that the task will finish sometime between 2 and 8 days, the estimator implicitly asserts that the likelihood of finishing before 2 days or after 8 days is zero.  Moreover, she also implies that some completion times are more likely than others. Although it may be difficult to quantify the likelihood exactly, one can begin by making simple (linear!) approximations as shown in Figure 3.

Figure 3: Simple probability distribution based on the estimates in Figure 2


The key takeaway from the above is that quantifiable uncertainties are shapes rather than single numbers.  See this post and this one for details of how far this kind of reasoning can take you. That said, one should always be aware of the assumptions underlying the approximations. Failure to do so can be hazardous to the credibility of estimators!
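For the quantitatively inclined, the “shape” in Figure 3 is essentially a triangular distribution, and it takes only a few lines of code to put it to work. Here’s a minimal sketch in Python using the 2/4/8-day figures from the example above (the percentiles reported are my choice, for illustration):

```python
import random

# Three-point estimate for the task: optimistic, most likely and pessimistic durations (days)
optimistic, most_likely, pessimistic = 2.0, 4.0, 8.0

# random.triangular samples from the simple "linear" distribution sketched in Figure 3
samples = sorted(random.triangular(optimistic, pessimistic, most_likely) for _ in range(100_000))

print("Median completion time:", round(samples[len(samples) // 2], 1), "days")
print("80th percentile       :", round(samples[int(0.8 * len(samples))], 1), "days")
print("95th percentile       :", round(samples[int(0.95 * len(samples))], 1), "days")
```

Quoting, say, the 80th or 95th percentile alongside the most likely value conveys the uncertainty honestly – which is precisely what a single number on a Gantt chart fails to do.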

Although I haven’t explicitly said so, estimation as described above has a subjective element. Among other things, the quality of an estimate depends on the judgement and experience of the estimator. As such, it is prone to being affected by errors of judgement and cognitive biases.  However, provided one keeps those caveats in mind, the probability-based approach described above is suited to situations in which uncertainties are quantifiable, at least in principle. That said, let’s move on to more complex situations in which uncertainties defy quantification.

Introducing ambiguity

The economist Frank Knight was possibly the first person to draw the distinction between quantifiable and unquantifiable uncertainties.  To make things really confusing, he called the former risk and the latter uncertainty. In his doctoral thesis, published in 1921, he wrote:

…it will appear that a measurable uncertainty, or “risk” proper, as we shall call the term, is so far different from an unmeasurable one that it is not in effect an uncertainty at all. We shall accordingly restrict the term “uncertainty” to cases of the non-quantitative type (p.20)

Terminology has moved on since Knight’s time: the term uncertainty now means different things depending on context. In this piece, we’ll use the term uncertainty to refer to quantifiable uncertainty (as in the task estimate of the previous section) and ambiguity to refer to nonquantifiable uncertainty. In essence, then, we’ll use the term uncertainty for situations where we know what we’re measuring (i.e. the facts) but are uncertain about its numerical or categorical values, whereas we’ll use the word ambiguity for situations in which we are uncertain about what the facts are or which facts are relevant.

As a test of understanding, you may want to classify each of the five points made in the second section of this post as either uncertain or ambiguous (Answers below)

Answer: 1 through 4 are ambiguous and 5 is uncertain.

How ambiguity manifests itself in decision problems

The distinction between uncertainty and ambiguity points to a problem with quantitative decision-making techniques such as cost-benefit analysis, multicriteria decision making methods or analytic hierarchy process. All these methods assume that decision makers are aware of all the available options, their preferences for them, the relevant evaluation criteria and the data needed. This is almost never the case for consequential decisions. To see why, let’s take a closer look at the different ways in which ambiguity can play out in the rational decision making process mentioned at the start of this article.

  1. The first step in the process is to identify available options. In the real world, however, options often cannot be enumerated or articulated fully. Furthermore, as options are articulated and explored, new options and sub-options tend to emerge. This is particularly true if the options depend on how future events unfold.
  2. The second step is to develop criteria for rating options. As anyone who has been involved in deciding on a contentious issue will confirm, it is extremely difficult to agree on a set of decision criteria for issues that affect different stakeholders in different ways.  Building a new road might improve commute times for one set of stakeholders but result in increased traffic in a residential area for others. The two criteria will be seen very differently by the two groups. In this case, it is very difficult for the two groups to agree on the relative importance of the criteria or even their legitimacy. Indeed, what constitutes a legitimate criterion is a matter of opinion.
  3. The third step is to rate options. The problem here is that real-world options often cannot be quantified or rated in a meaningful way. Many of life’s dilemmas fall into this category. For example, a decision to accept or decline a job offer is rarely made on the basis of material gain alone. Moreover, even where ratings are possible, they can be highly subjective. For example, when considering a job offer, one candidate may give more importance to financial matters whereas another might consider lifestyle-related matters (flexi-hours, commuting distance etc.) to be paramount. Another complication here is that there may not be enough information to settle the matter conclusively. As an example, investment decisions are often made on the basis of quantitative information that is based on questionable assumptions.

A key consequence of the above is that such ambiguous decision problems are socially complex – i.e. different stakeholders could have wildly different perspectives on the problem itself.   One could say the ambiguity experienced by an individual is compounded by the group.

Before going on I should point out that acute versions of such ambiguous decision problems go by many different names in the management literature.

All these terms are more or less synonymous:  the root cause of the difficulty in every case is ambiguity (or unquantifiable uncertainty), which prevents a clear formulation of the problem.

Social complexity is hard enough to tackle as it is, but there’s another issue that makes things even harder: ambiguity invariably triggers negative emotions such as fear and anxiety in individuals who make up the group.  Studies in neuroscience have shown that in contrast to uncertainty, which evokes logical responses in people, ambiguity tends to stir up negative emotions while simultaneously suppressing the ability to think logically.  One can see this playing out in a group that is debating a contentious decision: stakeholders tend to get worked up over issues that touch on their values and identities, and this seems to limit their ability to look at the situation objectively.

Tackling ambiguity

Summarising the discussion thus far: rational decision making approaches are based on the assumption that stakeholders have a shared understanding of the decision problem as well as the facts and assumptions around it. These conditions are clearly violated in the case of ambiguous decision problems. Therefore, when confronted with a decision problem that has even a hint of ambiguity, the first order of the day is to help the group reach a shared understanding of the problem.  This is essentially an exercise in sensemaking, the art of collaborative problem formulation. However, this is far from straightforward because ambiguity tends to evoke negative emotions and attendant defensive behaviours.

The upshot of all this is that any approach to tackle ambiguity must begin by taking the concerns of individual stakeholders seriously.  Unless this is done, it will be impossible for the group to coalesce around a consensus decision. Indeed, ambiguity-laden decisions in organisations invariably fail when they overlook the concerns of specific stakeholder groups.  The high failure rate of organisational change initiatives (60-70% according to this Deloitte report) is largely attributable to this point.

There are a number of techniques that one can use to gather and synthesise diverse stakeholder viewpoints and thus reach a shared understanding of a complex or ambiguous problem. These techniques are often referred to as problem structuring methods (PSMs). I won’t go into these in detail here; for examples, check out Paul Culmsee’s articles on dialogue mapping and Barry Johnson’s introduction to polarity management. There are many more techniques in the PSM stable. All of them are intended to help a group reconcile different viewpoints and thus reach a common basis from which one can proceed to the next step (i.e., make a decision on what should be done). In other words, these techniques help reduce ambiguity.

But there’s more to it than a bunch of techniques.  The main challenge is to create a holding environment that enables such techniques to work. I am sure readers have been involved in meetings or situations where the outcome seemed predetermined by management or was undermined by self-interest. When stakeholders sense this, no amount of problem structuring is going to help.  In such situations one needs to first create the conditions for open dialogue to occur. This is precisely what a holding environment provides.

Creating such a holding environment is difficult in today’s corporate world, but not impossible. Note that this is not an idealist’s call for an organisational utopia. Rather, it involves the application of a practical set of tools that address the diverse, emotion-laden reactions that people often have when confronted with ambiguity.   It would take me too far afield to discuss PSMs and holding environments any further here. To find out more, check out my papers on holding environments and dialogue mapping in enterprise IT projects, and (for a lot more) the Heretic’s Guide series of books that I co-wrote with Paul Culmsee.

The point is simply this: in an ambiguous situation, a good decision – whatever it might be – is most likely to be reached by a consultative process that synthesises diverse viewpoints rather than by an individual or a clique.  However, genuine participation (the hallmark of a holding environment) in such a process will occur only after participants’ fears have been addressed.

Wrapping up

Standard approaches to decision making exhort managers and executives to begin with facts, and if none are available, to gather them diligently prior to making a decision. However, most real-life decisions are fraught with uncertainty, so it may be best to begin with what one doesn’t know and figure out how to make the best possible decision under those “constraints of ignorance.” In this post I’ve attempted to outline what such an approach would entail. The key point is to figure out the kind of uncertainty one is dealing with and choose an approach that works for it. I’d argue that most decision-making debacles stem from a failure to appreciate this point.

Of course, there’s a lot more to this approach than I can cover in the span of a post, but that’s a story for another time.

Note: This post is written as an introduction to the Data and Decision Making subject that is part of the core curriculum of the Master of Data Science and Innovation program, run by the Connected Intelligence Centre at UTS. I’m coordinating the subject this semester, and am honoured to be co-teaching it with my erstwhile colleague Sean Heffernan and my longtime collaborator Paul Culmsee.

Written by K

March 9, 2017 at 10:04 am

The dark side of data science


Data scientists are sometimes blind to the possibility that the predictions of their algorithms can have unforeseen negative effects on people. Ethical or social implications are easy to overlook when one finds interesting new patterns in data, especially if they promise significant financial gains. The Centrelink debt recovery debacle, recently reported in the Australian media, is a case in point.

Here is the story in brief:

Centrelink is an Australian Government organisation responsible for administering welfare services and payments to those in need. A major challenge such organisations face is ensuring that their clients are paid no less and no more than what is due to them. This is difficult because it involves crosschecking client income details across multiple systems owned by different government departments, a process that necessarily involves many assumptions. In July 2016, Centrelink unveiled an automated compliance system that compares income self-reported by clients to information held by the taxation office.

The problem is that the algorithm is flawed: it makes strong (and incorrect!) assumptions regarding the distribution of income across a financial year and, as a consequence, unfairly penalizes a number of legitimate benefit recipients.  It is very likely that the designers and implementers of the algorithm did not fully understand the implications of their assumptions. Worse, from the errors made by the system, it appears they may not have adequately tested it either.  But this did not stop them (or, quite possibly, their managers) from unleashing their algorithm on an unsuspecting public, causing widespread stress and distress.  More on this a bit later.

Algorithms like the one described above are the subject of Cathy O’Neil’s aptly titled book, Weapons of Math Destruction. In the remainder of this article I discuss the main themes of the book. Just to be clear, this post is more riff than review. However, for those seeking an opinion, here’s my one-line version: I think the book should be read not only by data science practitioners, but also by those who use or are affected by their algorithms (which means pretty much everyone!).

Abstractions and assumptions

O’Neil begins with the observation that data algorithms are mathematical models of reality, and are necessarily incomplete because several simplifying assumptions are invariably baked into them. This point is important and often overlooked, so it is worth illustrating via an example.

When assessing a person’s suitability for a loan, a bank will want to know whether the person is a good risk. It is impossible to model creditworthiness completely because we do not know all the relevant variables and those that are known may be hard to measure. To make up for their ignorance, data scientists typically use proxy variables, i.e. variables that are believed to be correlated with the variable of interest and are also easily measurable. In the case of creditworthiness, proxy variables might be things like gender, age, employment status, residential postcode etc.  Unfortunately many of these can be misleading, discriminatory or worse, both.

The Centrelink algorithm provides a good example of such a “double-whammy” proxy. The key variable it uses is the difference between the client’s annual income as reported by the taxation office and the annual income self-reported by the client. A large difference is taken to be indicative of an incorrect payment and hence an outstanding debt. This simplistic assumption overlooks the fact that most affected people are not in steady jobs and therefore do not earn regular incomes over the course of a financial year (see this article by Michael Griffin for a detailed example).  Worse, this crude proxy places an unfair burden on vulnerable individuals for whom casual and part time work is a fact of life.
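To see how an income-averaging proxy goes wrong, here’s a toy illustration in Python with made-up numbers (this is not the actual Centrelink calculation, whose details are more involved): a casual worker earns all their income in the first few months of the year, but annual averaging makes it look as though they were earning throughout the period in which they legitimately received benefits.

```python
# Made-up fortnightly income for a casual worker: paid work for 10 fortnights,
# nothing for the remaining 16 (during which benefits were legitimately received).
fortnightly_income = [2000] * 10 + [0] * 16

annual_income = sum(fortnightly_income)   # what the tax office sees: 20,000
averaged_income = annual_income / 26      # the crude proxy: ~769 per fortnight

income_threshold = 500  # illustrative threshold above which a payment is reduced

# The proxy flags every fortnight, including the 16 in which nothing was earned.
flagged = sum(1 for _ in fortnightly_income if averaged_income > income_threshold)
actually_over = sum(1 for income in fortnightly_income if income > income_threshold)

print(f"Fortnights flagged by the averaging proxy: {flagged}")
print(f"Fortnights actually over the threshold   : {actually_over}")
```

The gap between the two numbers is the phantom “debt” that ends up in a recovery notice.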

Worse still, for those wrongly targeted with a recovery notice, getting the errors sorted out is not a straightforward process. This is typical of a WMD. As O’Neil states in her book, “The human victims of WMDs…are held to a far higher standard of evidence than the algorithms themselves.”  Perhaps this is because the algorithms are often opaque. But that’s a poor excuse.  This is the only technical field where practitioners are held to a lower standard of accountability than those affected by their products.

O’Neil sums it up rather nicely when she calls algorithms like the Centrelink one weapons of math destruction (WMDs).

Self-fulfilling prophecies and feedback loops

A characteristic of WMDs is that their predictions often become self-fulfilling prophecies. For example, a person denied a loan by a faulty risk model is more likely to be denied again when he or she applies elsewhere, simply because it is on their record that they have been refused credit before. This kind of destructive feedback loop is typical of a WMD.
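Here’s a stylised sketch of such a loop (the scoring rule and numbers are entirely hypothetical, not drawn from any real credit model): if a recorded refusal is itself used as a predictor, one early refusal by a faulty model drags the applicant’s score down on every subsequent application.

```python
def credit_score(base_score, prior_refusals, penalty_per_refusal=50):
    """A toy scoring rule in which every recorded refusal drags the score down."""
    return base_score - penalty_per_refusal * prior_refusals

APPROVAL_CUTOFF = 600
base_score = 620   # a borderline but creditworthy applicant
refusals = 1       # one early refusal caused by a faulty model

for application in range(1, 6):
    score = credit_score(base_score, refusals)
    approved = score >= APPROVAL_CUTOFF
    print(f"Application {application}: score = {score}, approved = {approved}")
    if not approved:
        refusals += 1  # the refusal goes on the record and feeds the next decision
```

Each refusal becomes evidence for the next one – the prophecy fulfils itself.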

An example that O’Neil dwells on at length is a popular predictive policing program. Designed for efficiency rather than nuanced judgment, such algorithms measure what can easily be measured and act on it, ignoring the subtle contextual factors that inform the actions of experienced officers on the beat. Worse, they can lead to actions that exacerbate the problem. For example, targeting young people of a certain demographic for stop and frisk actions can alienate them to the point where they might well turn to crime out of anger and exasperation.

As Goldratt famously said, “Tell me how you measure me and I’ll tell you how I’ll behave.”

This is not news: savvy managers have known about the dangers of managing by metrics for years. The problem is now exacerbated manyfold by our ability to implement and act on such metrics on an industrial scale, a trend that leads to a dangerous devaluation of human judgement in areas where it is most needed.

A related problem – briefly mentioned earlier – is that some of the important variables are known but hard to quantify in algorithmic terms. For example, it is known that community-oriented policing, where officers on the beat develop relationships with people in the community, leads to greater trust. The degree of trust is hard to quantify, but it is known that communities that have strong relationships with their police departments tend to have lower crime rates than similar communities that do not.  Such important but hard-to-quantify factors are typically missed by predictive policing programs.

Blackballed!

Ironically, although WMDs can cause destructive feedback loops, they are often not subjected to feedback themselves. O’Neil gives the example of algorithms that gauge the suitability of potential hires.  These programs often use proxy variables such as IQ test results, personality tests etc. to predict employability.  Candidates who are rejected often do not realise that they have been screened out by an algorithm. Further, it often happens that candidates who are thus rejected go on to successful careers elsewhere. However, this post-rejection information is never fed back to the algorithm because it is impossible to do so.

In such cases, the only way to avoid being blackballed is to understand the rules set by the algorithm and play according to them. As O’Neil so poignantly puts it, “our lives increasingly depend on our ability to make our case to machines.” However, this can be difficult because it assumes that a) people know they are being assessed by an algorithm and b) they have knowledge of how the algorithm works. In most hiring scenarios neither of these holds.

Just to be clear, not all data science models ignore feedback. For example, sabermetric algorithms used to assess player performance in Major League Baseball are continually revised based on the latest player stats, thereby taking into account changes in performance.

Driven by data

In recent years, many workplaces have seen the gradual introduction of data-driven efficiency initiatives. Automated rostering, based on scheduling algorithms, is an example. These algorithms are based on operations research techniques that were developed for scheduling complex manufacturing processes. Although appropriate for driving efficiency in manufacturing, these techniques are inappropriate for optimising shift work because of the effect they have on people. As O’Neil states:

Scheduling software can be seen as an extension of just-in-time economy. But instead of lawn mower blades or cell phone screens showing up right on cue, it’s people, usually people who badly need money. And because they need money so desperately, the companies can bend their lives to the dictates of a mathematical model.

She correctly observes that an “oversupply of low wage labour is the problem.” Employers know they can get away with treating people like machine parts because they have a large captive workforce.  What makes this seriously scary is that vested interests can make it difficult to outlaw such exploitative practices. As O’Neil mentions:

Following [a] New York Times report on Starbucks’ scheduling practices, Democrats in Congress promptly drew up bills to rein in scheduling software. But facing a Republican majority fiercely opposed to government regulations, the chances that their bill would become law were nil. The legislation died.

Commercial interests invariably trump social and ethical issues, so it is highly unlikely that industry or government will take steps to curb the worst excesses of such algorithms without significant pressure from the general public. A first step towards this is to educate ourselves on how these algorithms work and the downstream social effects of their predictions.

Messing with your mind

There is an even more insidious way that algorithms mess with us. Hot on the heels of the recent US presidential election, there were suggestions that fake news items on Facebook may have influenced the results.  Mark Zuckerberg denied this, but as Casey Newton noted in a trenchant tweet, the denial leaves Facebook in “the awkward position of having to explain why they think they drive purchase decisions but not voting decisions.”

Be that as it may, the fact is that Facebook’s own researchers have been conducting experiments to fine-tune a tool they call the “voter megaphone”. Here’s what O’Neil says about it:

The idea was to encourage people to spread the word that they had voted. This seemed reasonable enough. By sprinkling people’s news feeds with “I voted” updates, Facebook was encouraging Americans – more that sixty-one million of them – to carry out their civic duty….by posting about people’s voting behaviour, the site was stoking peer pressure to vote. Studies have shown that the quiet satisfaction of carrying out a civic duty is less likely to move people than the possible judgement of friends and neighbours…The Facebook started out with a constructive and seemingly innocent goal to encourage people to vote. And it succeeded…researchers estimated that their campaign had increased turnout by 340,000 people. That’s a big enough crowd to swing entire states, and even national elections.

And if that’s not scary enough, try this:

For three months leading up to the election between President Obama and Mitt Romney, a researcher at the company….altered the news feed algorithm for about two million people, all of them politically engaged. The people got a higher proportion of hard news, as opposed to the usual cat videos, graduation announcements, or photos from Disney world….[the researcher] wanted to see  if getting more [political] news from friends changed people’s political behaviour. Following the election [he] sent out surveys. The self-reported results that voter participation in this group inched up from 64 to 67 percent.

This might not sound like much, but considering the thin margins of recent presidential elections, it could be enough to change a result.

But it’s even more insidious.  In a paper published in 2014, Facebook researchers showed that users’ moods can be influenced by the emotional content of their newsfeeds. Here’s a snippet from the abstract of the paper:

In an experiment with people who use Facebook, we test whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed. When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks.

As you might imagine, there was a media uproar following which  the lead researcher issued a clarification and  Facebook officials duly expressed regret (but, as far as I know, not an apology).  To be sure, advertisers have been exploiting this kind of “mind control” for years, but a public social media platform should (expect to) be held to a higher standard of ethics. Facebook has since reviewed its internal research practices, but the recent fake news affair shows that the story is to be continued.

Disarming weapons of math destruction

The Centrelink debt debacle, Facebook mood contagion experiments and the other case studies mentioned in the book illustrate the myriad ways in which Big Data algorithms have a pernicious effect on our day-to-day lives. Quite often people remain unaware of their influence, wondering why a loan was denied or a job application didn’t go their way. Just as often, they are aware of what is happening, but are powerless to change it – shift scheduling algorithms being a case in point.

This is not how it was meant to be. Technology was supposed to make life better for all, not just the few who wield it.

So what can be done? Here are some suggestions:

  • To begin with, education is the key. We must work to demystify data science and create a general awareness of data science algorithms and how they work. O’Neil’s book is an excellent first step in this direction (although it is very thin on details of how the algorithms work).
  • Develop a code of ethics for data science practitioners. It is heartening to see that the IEEE has recently come up with a discussion paper on ethical considerations for artificial intelligence and autonomous systems and the ACM has proposed a set of principles for algorithmic transparency and accountability.  However, I should also tag this suggestion with the warning that codes of ethics are not very effective because they can be easily violated. One has to – somehow – embed ethics in the DNA of data scientists. I believe one way to do this is through practice-oriented education in which data scientists-in-training grapple with ethical issues through data challenges and hackathons. As Wittgenstein famously said, “it is clear that ethics cannot be articulated.” Ethics must be practiced.
  • Put in place a system of reliable algorithmic audits within data science departments, particularly those that do work with significant social impact.
  • Increase transparency a) by publishing information on how algorithms predict what they predict and b) by making it possible for those affected by the algorithm to access the data used to classify them as well as their classification, how it will be used and by whom.
  • Encourage the development of algorithms that detect bias in other algorithms and correct it.
  • Inspire aspiring data scientists to build models for the good.

It is only right that the last word in this long riff should go to O’Neil, whose work inspired it. Towards the end of her book she writes:

Big Data processes codify the past. They do not invent the future. Doing that requires moral imagination, and that’s something that only humans can provide. We have to explicitly embed better values into our algorithms, creating Big Data models that follow our ethical lead. Sometimes that will mean putting fairness ahead of profit.

Excellent words for data scientists to live by.

Written by K

January 17, 2017 at 8:38 pm

Improving decision-making in projects


An irony of organisational life is that the most important decisions on projects (or any other initiatives) have to be made at the start, when ambiguity is at its highest and information availability lowest. I recently gave a talk at the Pune office of BMC Software on improving decision-making in such situations.

The talk was recorded and simulcast to a couple of locations in India. The folks at BMC very kindly sent me a copy of the recording with permission to publish it on Eight to Late. Here it is:


Based on the questions asked and the feedback received, I reckon that a number of people found the talk  useful. I’d welcome your comments/feedback.

Acknowledgements: My thanks go out to Gaurav Pal, Manish Gadgil and Mrinalini Wankhede for giving me the opportunity to speak at BMC, and to Shubhangi Apte for putting me in touch with them. Finally, I’d like to thank the wonderful audience at BMC for their insightful questions and comments.

The Risk – a dialogue mapping vignette


Foreword

Last week, my friend Paul Culmsee conducted an internal workshop in my organisation on the theme of collaborative problem solving. Dialogue mapping is one of the tools he introduced during the workshop.  This piece, primarily intended as a follow-up for attendees, is an introduction to dialogue mapping via a vignette that illustrates its practice (see this post for another one). I’m publishing it here as I thought it might be useful for those who wish to understand what the technique is about.

Dialogue mapping uses a notation called Issue Based Information System (IBIS), which I have discussed at length in this post. For completeness, I’ll begin with a short introduction to the notation and then move on to the vignette.

A crash course in IBIS

The IBIS notation consists of the following three elements:

  1. Issues (or questions): these are the matters being debated. Typically, issues are framed as questions along the lines of “What should we do about X?” where X is the issue that is of interest to a group. For example, in the case of a group of executives, X might be rapidly changing market conditions, whereas in the case of a group of IT people, X could be an ageing system that is hard to replace.
  2. Ideas (or positions): these are responses to questions. For example, one of the ideas offered by the IT group above might be to replace the said system with a newer one. Typically, the whole set of ideas that respond to an issue in a discussion represents the spectrum of participant perspectives on the issue.
  3. Arguments: these can be Pros (arguments for) or Cons (arguments against) an idea. The complete set of arguments that respond to an idea represents the multiplicity of viewpoints on it.

Compendium is a freeware tool that can be used to create IBIS maps– it can be downloaded here.

In Compendium, IBIS elements are represented as nodes as shown in Figure 1: issues are represented by blue-green question marks, positions by yellow light bulbs, pros by green + signs and cons by red – signs.  Compendium supports a few other node types, but these are not part of the core IBIS notation. Nodes can be linked only in ways specified by the IBIS grammar, as I discuss next.


Figure 1: IBIS node types

The IBIS grammar can be summarized in three simple rules:

  1. Issues can be raised anew or can arise from other issues, positions or arguments. In other words, any IBIS element can be questioned.  In Compendium notation:  a question node can connect to any other IBIS node.
  2. Ideas can only respond to questions– i.e. in Compendium “light bulb” nodes can only link to question nodes. The arrow pointing from the idea to the question depicts the “responds to” relationship.
  3. Arguments  can only be associated with ideas– i.e. in Compendium “+” and “–“  nodes can only link to “light bulb” nodes (with arrows pointing to the latter)

The legal links are summarized in Figure 2 below.

Figure 2: Legal links in IBIS


 

…and that’s pretty much all there is to it.
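For the programmatically inclined, the grammar really is that small – small enough to encode in a few lines. The sketch below (Python, my own illustration rather than anything from Compendium) represents IBIS nodes and checks whether a proposed link is legal under the three rules:

```python
from dataclasses import dataclass

# Rule 1: questions can connect to anything; Rule 2: ideas respond only to questions;
# Rule 3: pros and cons attach only to ideas.
LEGAL_LINKS = {
    "question": {"question", "idea", "pro", "con"},
    "idea": {"question"},
    "pro": {"idea"},
    "con": {"idea"},
}

@dataclass
class Node:
    node_type: str  # "question", "idea", "pro" or "con"
    text: str

def link_is_legal(source: Node, target: Node) -> bool:
    """True if the IBIS grammar allows an arrow from source to target."""
    return target.node_type in LEGAL_LINKS[source.node_type]

root = Node("question", "What are the most significant risks on the project?")
risk = Node("idea", "The schedule is too tight")
support = Node("pro", "There is absolutely no buffer")

print(link_is_legal(risk, root))     # True: an idea responds to a question
print(link_is_legal(support, risk))  # True: a pro supports an idea
print(link_is_legal(support, root))  # False: arguments cannot attach to questions
```

The whole notation fits in a dictionary of four entries, which is a good indication of how easy it is to learn.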

The interesting (and powerful) aspect of IBIS is that the essence of any debate or discussion can be captured using these three elements. Let me try to convince you of this claim via a vignette from a discussion on risk.

 The Risk – a Dialogue Mapping vignette

“Morning all,” said Rick, “I know you’re all busy people so I’d like to thank you for taking the time to attend this risk identification session for Project X.  The objective is to list the risks that we might encounter on the project and see if we can identify possible mitigation strategies.”

He then asked if there were any questions. The head waggles around the room indicated there were none.

“Good. So here’s what we’ll do,”  he continued. “I’d like you all to work in pairs and spend 10 minutes thinking of all possible risks and then another 5 minutes prioritising.  Work with the person on your left. You can use the flipcharts in the breakout area at the back if you wish to.”

Twenty minutes later, most people were done and back in their seats.

“OK, it looks as though most people are done…Ah, Joe, Mike have you guys finished?” The two were still working on their flip-chart at the back.

“Yeah, be there in a sec,” replied Mike, as he tore off the flip-chart page.

“Alright,” continued Rick, after everyone had settled in. “What I’m going to do now is ask you all to list your top three risks. I’d also like you tell me why they are significant and your mitigation strategies for them.” He paused for a second and asked, “Everyone OK with that?”

Everyone nodded, except Helen who asked, “isn’t it important that we document the discussion?”

“I’m glad you brought that up. I’ll make notes as we go along, and I’ll do it in a way that everyone can see what I’m writing. I’d like you all to correct me if you feel I haven’t understood what you’re saying. It is important that  my notes capture your issues, ideas and arguments accurately.”

Rick turned on the data projector, fired up Compendium and started a new map.  “Our aim today is to identify the most significant risks on the project – this is our root question”  he said, as he created a question node. “OK, so who would like to start?”

 

 


Figure 3: The root question

 

“Sure, we’ll start,” said Joe easily. “Our top risk is that the schedule is too tight. We’ll hit the deadline only if everything goes well, and everyone knows that they never do.”

“OK,” said Rick, as he entered Joe and Mike’s risk as an idea connecting to the root question. “You’ve also mentioned a point that supports your contention that this is a significant risk – there is absolutely no buffer.” Rick typed this in as a pro connecting to the risk. He then looked up at Joe and asked, “Have I understood you correctly?”

“Yes,” confirmed Joe.

 


Figure 4: Map in progress

 

“That’s pretty cool,” said Helen from the other end of the table, “I like the notation, it makes reasoning explicit. Oh, and I have another point in support of Joe and Mike’s risk – the deadline was imposed by management before the project was planned.”

Rick began to enter the point…

“Oooh, I’m not sure we should put that down,” interjected Rob from compliance. “I mean, there’s not much we can do about that, can we?”

…Rick finished the point as Rob was speaking.

 


Figure 5: Two pros for the idea

 

“I hear you Rob, but I think  it is important we capture everything that is said,” said Helen.

“I disagree,” said Rob. “It will only annoy management.”

“Slow down guys,” said Rick. “I’m going to capture Rob’s objection as ‘this is a management-imposed constraint rather than a risk’. Are you OK with that, Rob?”

Rob nodded his assent.

 

Figure 6: A con enters the picture

“I think it is important we articulate what we really think, even if we can’t do anything about it,” continued Rick. “There’s no point going through this exercise if we don’t say what we really think. I want to stress this point, so I’m going to add honesty and openness as ground rules for the discussion. Since ground rules apply to the entire discussion, they connect directly to the primary issue being discussed.”


Figure 7: A “criterion” that applies to the analysis of all risks

 

“OK, so any other points that anyone would like to add to the ones made so far?” queried Rick as he finished typing.

He looked up. Most of the people seated round the table shook their heads indicating that there weren’t.

“We haven’t spoken about mitigation strategies. Any ideas?” asked Rick, as he created a question node marked “Mitigation?” connecting to the risk.

 


Figure 8: Mitigating the risk

“Yeah well, we came up with one,” said Mike. “We think the only way to reduce the time pressure is to cut scope.”

“OK,” said Rick, entering the point as an idea connecting to the “Mitigation?” question. “Did you think about how you are going to do this?” He entered the question “How?” connecting to Mike’s point.


Figure 9: Mitigating the risk

 

“That’s the problem,” said Joe, “I don’t know how we can convince management to cut scope.”

“Hmmm…I have an idea,” said Helen slowly…

“We’re all ears,” said Rick.

“…Well…you see a large chunk of time has been allocated for building real-time interfaces to assorted systems – HR, ERP etc. I don’t think these need to be real-time – they could be done monthly…and if that’s the case, we could schedule a simple job or even do them manually for the first few months. We can push those interfaces to phase 2 of the project, well into next year.”

There was a silence in the room as everyone pondered this point.

“You know, I think that might actually work, and would give us an extra month…maybe even six weeks for the more important upstream stuff,” said Mike. “Great idea, Helen!”

“Can I summarise this point as – identify interfaces that can be delayed to phase 2?” asked Rick, as he began to type it in as a mitigation strategy. “…and if you and Mike are OK with it, I’m going to combine it with the ‘Cut Scope’ idea to save space.”

“Yep, that’s fine,” said Helen. Mike nodded OK.

Rick deleted the “How?” node connecting to the “Cut scope” idea, and edited the latter to capture Helen’s point.


Figure 10: Mitigating the risk

“That’s great in theory, but who is going to talk to the affected departments? They will not be happy,” asserted Rob.  One could always count on compliance to throw in a reality check.

“Good point,”  said Rick as he typed that in as a con, “and I’ll take the responsibility of speaking to the department heads about this,” he continued entering the idea into the map and marking it as an action point for himself. “Is there anything else that Joe, Mike…or anyone else would like to add here,” he added, as he finished.


Figure 11: Completed discussion of first risk (click to view larger image)

“Nope,” said Mike, “I’m good with that.”

“Yeah me too,” said Helen.

“I don’t have anything else to say about this point,” said Rob, “ but it would be great if you could give us a tutorial on this technique. I think it could be useful to summarise the rationale behind our compliance regulations. Folks have been complaining that they don’t understand the reasoning behind some of our rules and regulations. ”

“I’d be interested in that too,” said Helen, “I could use it to clarify user requirements.”

“I’d be happy to do a session on the IBIS notation and dialogue mapping next week. I’ll check your availability and send an invite out… but for now, let’s focus on the task at hand.”

The discussion continued…but the fly on the wall was no longer there to record it.

Afterword

I hope this little vignette illustrates how IBIS and dialogue mapping can aid collaborative decision-making / problem solving by making diverse viewpoints explicit. That said, this is a story, and the problem with stories is that things  go the way the author wants them to.  In real life, conversations can go off on unexpected tangents, making them really hard to map. So, although it is important to gain expertise in using the software, it is far more important to practice mapping live conversations. The latter is an art that requires considerable practice. I recommend reading Paul Culmsee’s series on the practice of dialogue mapping or <advertisement> Chapter 14 of The Heretic’s Guide to Best Practices</advertisement> for more on this point.

That said, there are many other ways in which IBIS can be used, that do not require as much skill. Some of these include: mapping the central points in written arguments (what’s sometimes called issue mapping) and even decisions on personal matters.

To sum up: IBIS is a powerful means to clarify options and lay them out in an easy-to-follow visual format. Often this is all that is required to catalyse a group decision.

Three types of uncertainty you (probably) overlook

with 5 comments

Introduction – uncertainty and decision-making

Managing uncertainty – deciding what to do in the absence of reliable information – is a significant part of project management and many other managerial roles. When put this way, it is clear that managing uncertainty is primarily a decision-making problem. Indeed, as I will discuss shortly, the main difficulties associated with decision-making are related to specific types of uncertainties that we tend to overlook.

Let’s begin by looking at the standard approach to decision-making, which goes as follows:

  1. Define the decision problem.
  2. Identify options.
  3. Develop criteria for rating options.
  4. Evaluate options against criteria.
  5. Select the top rated option.

As I have pointed out in this post, the above process is too simplistic for some of the complex, multifaceted decisions that we face in life and at work (switching jobs, buying a house or starting a business venture, for example). In such cases:

  1. It may be difficult to identify all options.
  2. It is often impossible to rate options meaningfully because of information asymmetry – we know more about some options than others. For example, when choosing whether or not to switch jobs, we know more about our current situation than the new one.
  3. Even when ratings are possible, different people will rate options differently – i.e. different people invariably have different preferences for a given outcome. This makes it difficult to reach a consensus.

Regular readers of this blog will know that the points listed above are characteristics of wicked problems.  It is fair to say that in recent years, a general awareness of the ubiquity of wicked problems has led to an appreciation of the limits of classical decision theory. (That said,  it should be noted that academics have been aware of this for a long time: Horst Rittel’s classic paper on the dilemmas of planning, written in 1973, is a good example. And there are many others that predate it.)

In this post  I look into some hard-to-tackle aspects of uncertainty by focusing on the aforementioned shortcomings of classical decision theory. My discussion draws on a paper by Richard Bradley and Mareile Drechsler.

This article is organised as follows: I first present an overview of the standard approach to dealing with uncertainty and discuss its limitations. Following this, I elaborate on three types of uncertainty that are discussed in the paper.

Background – the standard view of uncertainty

The standard approach to tackling uncertainty was articulated by Leonard Savage in his classic text, The Foundations of Statistics. Savage’s approach can be summarized as follows:

  1. Figure out all possible states (outcomes)
  2. Enumerate actions that are possible
  3. Figure out the consequences of actions for all possible states.
  4. Attach a value (aka preference) to each consequence
  5. Select the course of action that maximizes value (based on an appropriately defined measure, making sure to factor in the likelihood of achieving the desired consequence)

(Note the close parallels between this process and the standard approach to decision-making outlined earlier.)

To keep things concrete it is useful to see how this process would work in a simple real-life example. Bradley and Drechsler quote the following example from Savage’s book that does just that:

…[consider] someone who is cooking an omelet and has already broken five good eggs into a bowl, but is uncertain whether the sixth egg is good or rotten. In deciding whether to break the sixth egg into the bowl containing the first five eggs, to break it into a separate saucer, or to throw it away, the only question this agent has to grapple with is whether the last egg is good or rotten, for she knows both what the consequence of breaking the egg is in each eventuality and how desirable each consequence is. And in general it would seem that for Savage once the agent has settled the question of how probable each state of the world is, she can determine what to do simply by averaging the utilities (Note: utility is basically a mathematical expression of preference or value) of each action’s consequences by the probabilities of the states of the world in which they are realised…

In this example there are two states (egg is good, egg is rotten), three actions (break egg into bowl, break egg into separate saucer to check if it rotten, throw egg away without checking) and three consequences (spoil all eggs, save eggs in bowl and save all eggs if last egg is not rotten, save eggs in bowl and potentially waste last egg). The problem then boils down to figuring out our preferences for the options (in some quantitative way) and the probability of the two states.  At first sight, Savage’s approach seems like a reasonable way to deal with uncertainty.  However, a closer look reveals major problems.
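Before looking at those problems, it may help to see Savage’s prescription spelled out numerically for the omelet example. Here’s a minimal sketch (the utilities and the probability of a rotten egg are my own, chosen purely for illustration):

```python
P_ROTTEN = 0.1  # illustrative probability that the sixth egg is rotten

# Utility of each (action, state) pair - arbitrary numbers chosen for illustration
utilities = {
    "break into bowl":   {"good": 10, "rotten": -20},  # best omelet, but a rotten egg spoils all six
    "break into saucer": {"good": 8,  "rotten": 5},    # extra washing up, but the five eggs are safe
    "throw away":        {"good": 4,  "rotten": 6},    # wastes a good egg, avoids all risk
}

def expected_utility(action):
    u = utilities[action]
    return (1 - P_ROTTEN) * u["good"] + P_ROTTEN * u["rotten"]

for action in utilities:
    print(f"{action:18s}: expected utility = {expected_utility(action):.2f}")

print("Savage's prescription:", max(utilities, key=expected_utility))
```

Note that the calculation presupposes exactly what the rest of this post questions: that all the states, actions, consequences and preferences are known up front and can be quantified.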

Problems with the standard approach

Unlike the omelet example, in real life situations it is often difficult to enumerate all possible states or foresee all consequences of an action. Further, even if states and consequences are known, we may not know what value to attach to them – that is, we may not be able to determine our preferences for those consequences unambiguously. Even in situations where we can, our preferences may be subject to change – witness the not uncommon situation where lottery winners end up wishing they’d never won. The standard prescription therefore works only in situations where all states, actions and consequences are known – i.e. tame situations, as opposed to wicked ones.

Before going any further, I should mention that Savage was cognisant of the limitations of his approach. He pointed out that it works only in what he called small world situations – i.e. situations in which it is possible to enumerate and evaluate all options.  As Bradley and Drechsler put it,

Savage was well aware that not all decision problems could be represented in a small world decision matrix. In Savage’s words, you are in a small world if you can “look before you leap”; that is, it is feasible to enumerate all contingencies and you know what the consequences of actions are. You are in a grand world when you must “cross the bridge when you come to it”, either because you are not sure what the possible states of the world, actions and/or consequences are…

In the following three sections I elaborate on these complications, emphasizing, once again, that many real life situations are prone to them.

State space uncertainty

The standard view of uncertainty assumes that all possible states are given as a part of the problem definition – as in the omelet example discussed earlier.  In real life, however, this is often not the case.

Bradley and Drechsler identify two distinct cases of state space uncertainty. The first one is when we are unaware that we’re missing states and/or consequences. For example, organisations that embark on a restructuring program are so focused on the cost-related consequences that they may overlook factors such as loss of morale and/or loss of talent (and the consequent loss of productivity). The second, somewhat rarer, case is when we are aware that we might be missing something but we don’t quite know what it is. All one can do here is make appropriate contingency plans based on guesses regarding possible consequences.

Figuring out possible states and consequences is largely a matter of scenario envisioning based on knowledge and practical experience. It stands to reason that this is best done by leveraging the collective experience and wisdom of people from diverse backgrounds. This is pretty much the rationale behind collective decision-making techniques such as Dialogue Mapping.

Option uncertainty

The standard approach to tackling uncertainty assumes that the connection between actions and consequences is well defined. This is often not the case, particularly for wicked problems.  For example, as I have discussed in this post, enterprise transformation programs with well-defined and articulated objectives often end up having a host of unintended consequences. At an even more basic level, in some situations it can be difficult to identify sensible options.

Option uncertainty is a fairly common feature in real-life decisions. As Bradley and Drechsler put it:

Option uncertainty is an endemic feature of decision making, for it is rarely the case that we can predict consequences of our actions in every detail (alternatively, be sure what our options are). And although in many decision situations, it won’t matter too much what the precise consequence of each action is, in some the details will matter very much.

…and unfortunately, the cases in which the details matter are precisely those in which they are hardest to figure out – i.e. wicked problems.

Preference uncertainty

An implicit assumption in the standard approach is that once states and consequences are known, people will be able to figure out their relative preferences for them unambiguously. This assumption is incorrect, as there are at least two situations in which people will not be able to determine their preferences. Firstly, there may be a lack of factual information about one or more of the states. Secondly, even when one is able to get the required facts, it may be hard to figure out how one would value the consequences.

A common example of the aforementioned situation is the job switch dilemma. In many (most?) cases in which one is debating whether or not to switch jobs, one lacks enough factual information about the new job – for example, the new boss’ temperament, the work environment etc. Further, even if one is able to get the required information, it is impossible to know what it would actually be like to work there.  Most people would have struggled with this kind of uncertainty at some point in their lives. Bradley and Drechsler term this ethical uncertainty. I prefer the term preference uncertainty, as it has more to do with preferences than ethics.

Some general remarks

The first point to note is that the three types of uncertainty noted above map exactly on to the three shortcomings of classical decision theory discussed in the introduction.  This suggests a connection between the types of uncertainty and wicked problems. Indeed, most wicked problems are exemplars of one or more of the above uncertainty types.  For example, the paradigm-defining super-wicked problem of climate change displays all three types of uncertainty.

The three types of uncertainty discussed above are overlooked by the standard approach to managing uncertainty.  This happens in a number of ways. Here are two common ones:

  1. The standard approach assumes that all uncertainties can somehow be incorporated into a single probability function describing all possible states and/or consequences. This is clearly false for state space and option uncertainty: it is impossible to define a sensible probability function when one is uncertain about the possible states and/or outcomes.
  2. The standard approach assumes that preferences for different consequences are known. This is clearly not true in the case of preference uncertainty…and even for state space and option uncertainty for that matter.

In their paper, Bradley and Drechsler arrive at these three types of uncertainty from considerations different from the ones I have used above. Their approach, while more general, is considerably more involved. Nevertheless, I would recommend that interested readers take a look at it because they cover a lot of things that I have glossed over or ignored altogether.

Just as an example, they show how the aforementioned uncertainties can be reduced. There is a price to be paid, however: any reduction in uncertainty results in an increase in its severity. An example might help illustrate how this comes about. Consider a situation of state space uncertainty. One can reduce – or even remove – this by defining a catch-all state (labelled, say, “all other outcomes”). It is easy to see that although one has formally reduced state space uncertainty to zero, one has increased the severity of the uncertainty because the catch-all state is but a reflection of our ignorance and our refusal to do anything about it!
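By way of illustration (this is my own toy example, not from the paper), here is a small Python sketch of the catch-all move: the state space becomes formally complete and the probabilities sum to one, but the residual probability assigned to “all other outcomes” is a pure guess that tells us nothing about what those outcomes are or how we would value them.

```python
# Illustrative sketch (not from Bradley and Drechsler): "eliminating" state
# space uncertainty by appending a catch-all state. The residual probability
# is a guess; the uncertainty has not gone away, it has merely been swept
# into a single, uninformative bucket.

known_states = {"egg is good": 0.85, "egg is rotten": 0.10}
catch_all = "all other outcomes"  # cracked shell, double yolk, who knows?

states = dict(known_states)
states[catch_all] = round(1.0 - sum(known_states.values()), 2)  # leftover mass

print(states)
# {'egg is good': 0.85, 'egg is rotten': 0.1, 'all other outcomes': 0.05}
# The distribution now sums to 1, but the last entry says nothing about what
# the other outcomes are, how likely each really is, or what it is worth to us.
```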

There are many more implications of the above. However, I’ll point out just one more that serves to illustrate the very practical implications of these uncertainties. In a post on the shortcomings of enterprise risk management, I pointed out that the notion of an organisation-wide risk appetite is problematic because it is impossible to capture the diversity of viewpoints through such a construct. Moreover, rule- or process-based approaches to risk management tend to focus only on those uncertainties that can be quantified, or, conversely, they assume that all uncertainties can somehow be clumped into a single probability distribution as prescribed by the standard approach to managing uncertainty. The three types of uncertainty discussed above highlight the limitations of such an approach to enterprise risk.

Conclusion

The standard approach to managing uncertainty assumes that all possible states, actions and consequences are known or can be determined. In this post I have discussed why this is not always so.  In particular, it often happens that we do not know all possible outcomes (state space uncertainty), consequences (option uncertainty) and/or our preferences for consequences (preference or ethical uncertainty).

As I was reading the paper, I felt the authors were articulating issues that I had often felt uneasy about but chose to overlook (suppress?).  Generalising from one’s own experience is always a fraught affair, but I reckon we tend to deny these uncertainties because they are inconvenient – that is, they are difficult if not impossible to deal with within the procrustean framework of the standard approach.  What is needed as a corrective is a recognition that the pseudo-quantitative approach that is commonly used to manage uncertainty may not be the panacea it is claimed to be. The first step towards doing this is to acknowledge the existence of the uncertainties that we (probably) overlook.

Written by K

February 25, 2015 at 9:08 pm

The dilemmas of enterprise IT

with 4 comments

Information technology (IT) is an integral part of any modern day business. Indeed, as Bill Gates once put it, “Information technology and business are becoming inextricably interwoven. I don’t think anybody can talk meaningfully about one without the talking about the other.” Although this is true, decision makers often display ambivalent, even contradictory attitudes towards enterprise IT.  For example, depending on the context, an executive might view IT as a cost of doing business or as a strategic advantage: the former view is common when budgets are being drawn up whereas the latter may come to the fore when a bold new e-marketing initiative is being discussed.

In this post I discuss some of these dilemmas of IT and show how the opposing viewpoints embodied in them need to be managed rather than resolved.  I illustrate my point by describing one way in which this can be done.

The dilemmas in brief

Many of the dilemmas of IT are consequences of conflicting views of what IT is and/or how it should be managed. I’ll describe some of these in brief below, leaving a discussion of their implications to the next section:

  1. IT as a cost of doing business versus IT as strategic asset: This distinction highlights the ambivalent attitudes that senior executives have towards IT. On the one hand, IT is seen as offering strategic advantages to the organization (for example a custom built application for customer segmentation). On the other, it is seen as an operational necessity (for example, core banking systems in the financial industry).
  2. Centralised IT versus Autonomous IT:  This refers to the debate about whether an organisation’s IT environment should be tightly controlled from head office or whether subsidiaries should be given a degree of autonomy.  This is essentially a debate between top-down versus bottom-up approaches to IT planning.
  3. Planning versus Improvisation: This refers to the tension between the structure offered by a plan and process-driven approach to IT and the necessity to step outside of plans and processes in order to come up with improvised solutions suited to the situation at hand. I have written about this paradox in a post on planning and improvisation.

There are other dilemmas – for example, technology driven IT versus business driven IT. However, for the purpose of this discussion the three listed above will suffice.

The poles of a dilemma

In his book entitled Polarity Management, Barry Johnson described how complex organizational issues can often be analysed in terms of their mutually contradictory facets. He termed these facets poles or polarities.  In this and the next section, I elaborate on Johnson’s notion of polarity and show how it offers a means to understand and manage the dilemmas of enterprise IT.

The key features of poles are as follows:

Each pole has associated positives and negatives. For example, the upside of viewing IT as a cost is that the organisation focuses on IT efficiency and value for money; the downside is that the exploration and experimentation necessary for IT innovation would likely be seen as risky. On the other hand, the positive side of viewing IT as a strategic asset is that it is seen as a means to enable an organisation’s growth and development; the negative is that it can encourage the adoption of unproven technologies (since new technologies are more likely to offer competitive advantages) and uncontrolled experimentation, along with their attendant costs.

Most organisations oscillate between poles.  At any given time the organisation will be “living” in one pole. In such situations, some stakeholders will perceive the negatives of that pole strongly and will thus see the other pole as being more desirable (the “grass is greener on the other side” syndrome).  Johnson labels such stakeholders “crusaders” – those who want to rush off into the new world. On the other hand, there are “tradition bearers” – those who want to stay put.  When an organisation has spent a fair bit of time in one pole, the influence of crusaders tends to wax while that of the tradition bearers wanes because the negatives become apparent to more and more people.

A concrete example may help clarify this point:

Consider a situation where all subsidiaries of a multinational have autonomous IT units (and have had these for a while).  The main benefits of such a model are responsiveness and relevance: local IT units will be able to respond quickly to local needs and will also be able to deliver solutions that are tailored to the specific needs of the local business. However, this model has many negative aspects: for example, high costs, duplication of effort, a massive software portfolio and its attendant costs, the high cost of interfacing between subsidiaries etc.

When the model has been in operation for a while, it is quite likely that IT decision makers will perceive the negatives of this pole more clearly than they see the positives. They will then initiate a reform to centralise IT because they perceive the positives of that pole – i.e. low costs, centralisation of services etc. – as being worth striving for.  However, when the new world is in place and has been operating for a while, the organisation will begin to see its downside: bureaucracy, lack of flexibility, applications that don’t meet specific local business needs etc. They will then start to delegate responsibility back to the subsidiaries…and thus goes the polarity merry-go-round.

Managing enterprise IT dilemmas

As discussed above, any option will have its supporters and detractors. For example, finance folks may see IT as a cost of doing business whereas those in IT will consider it to be a strategic asset.  The trouble, however, is that most organisations “resolve” such contradictions by taking sides. That is, one side “wins” and their point of view gets implemented as a “solution.”  The concerns of the “losing” side are overlooked entirely.

Although such a “solution” appears to solve the problem, it does not take long for the negative aspects of the other pole to manifest themselves; the rumbles of discontent from those whose concerns have been ignored grow louder with time.  In this sense, issues that can be defined in terms of polarities are wicked problems – they are perceived in different ways by different stakeholders and so are difficult to define, let alone solve.

As we have seen above, however, the poles of a dilemma are but different facets of a single reality.  Hence, the first step towards managing a dilemma lies in realizing that it cannot be resolved definitively; regardless of the path chosen, there will always be a group whose concerns remain unaddressed. The best one can do is to be aware of the positives and negatives of each pole and ensure that the entire spectrum of stakeholders is aware of these. A shared awareness can help the group in figuring out ways to mitigate the worst effects of the negatives.

One way in which this can be done is via a facilitated session involving people who represent the two sides of the issue.  To begin with, the facilitator helps the group identify the poles. She then helps the group create a polarity map which shows the contradictory aspects of the issue along with their positives and negatives. A rudimentary polarity map for the autonomous/centralised IT dilemma is shown in Figure 1 below.

Figure 1: Polarity map for centralised / autonomous IT dilemma

To ensure completeness of the map, the group must include stakeholders who represent both sides of the dilemma (and also those who hold views that lie between).
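As a rough illustration of what such a map captures, here is a minimal Python sketch of a polarity map represented as a simple data structure. The pole names and the positives and negatives are paraphrased from the discussion above; in practice the entries would be supplied by the stakeholders themselves during the facilitated session.

```python
# A minimal sketch of a polarity map as a data structure. The entries below
# are paraphrased from the discussion in this post; a real map would be
# filled in by stakeholders from both sides of the dilemma.

polarity_map = {
    "Centralised IT": {
        "positives": ["lower costs", "centralised services", "less duplication"],
        "negatives": ["bureaucracy", "lack of flexibility",
                      "applications that miss local business needs"],
    },
    "Autonomous IT": {
        "positives": ["responsiveness to local needs",
                      "solutions tailored to the local business"],
        "negatives": ["high costs", "duplication of effort",
                      "large software portfolio", "costly interfacing"],
    },
}

def print_map(pmap):
    """Print the map so all stakeholders can see each pole's upsides and downsides."""
    for pole, sides in pmap.items():
        print(pole)
        for sign in ("positives", "negatives"):
            print(f"  {sign}: " + "; ".join(sides[sign]))

print_map(polarity_map)
```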

As mentioned in the previous section, organisations are not static; they oscillate between poles. Moreover, Johnson claimed that they follow a specific path in the map.  Quoting from the book I wrote with Paul Culmsee:

According to Johnson, organisations tended to oscillate between poles. If you accept the notion of a wicked problem as a polarity, the overall pattern traced as one moves between these poles resembles an infinity symbol. The typical path is L- to R+, to R-, across to L+ and Johnson argued that the trajectory could not be avoided. All we can do is focus on minimizing our time spent in the lower quadrants.

Again, it is worth emphasizing that the conflict between the two groups of stakeholders cannot be resolved definitively. The best one can do is to get the two sides to understand each other’s point of view and hence attempt to minimize the downsides of each option.

Finally, polarity management is but one way to manage the dilemmas associated with enterprise IT or any other organizational decision. There are many others – and <advertisement> I highly recommend my book if you’re interested in finding out more about these </advertisement>.  In the end, though, the point I wished to make in this post is less about any particular technique and more about the need to air and acknowledge differing perspectives on issues pertaining to enterprise IT or any other decision with organization-wide implications.

Wrapping up

The dilemmas of enterprise IT are essentially consequences of mutually contradictory, yet equally valid, perspectives. Is IT a cost of doing business or is it a strategic asset? The answer depends on the perspective one takes…and there is no objectively right or wrong answer.  Given this, it is important to be aware of both the upside and downside of each perspective (or pole) before one makes a decision.  Unfortunately, decisions are most often made on the basis of the upside of one option and the downside of the other.  As should be evident by now, a decision based on such a selective consideration of viewpoints invariably invites conflict and leads to undesirable outcomes.

Written by K

July 2, 2014 at 9:52 pm
