Eight to Late

Sensemaking and Analytics for Organizations

Archive for the ‘Decision Making’ Category

The dark side of data science

with 5 comments

Data scientists are sometimes blind to the possibility that the predictions of their algorithms can have unforeseen negative effects on people. Ethical or social implications are easy to overlook when one finds interesting new patterns in data, especially if they promise significant financial gains. The Centrelink debt recovery debacle, recently reported in the Australian media, is a case in point.

Here is the story in brief:

Centrelink is an Australian Government organisation responsible for administering welfare services and payments to those in need. A major challenge such organisations face is ensuring that their clients are paid no less and no more than what is due to them. This is difficult because it involves crosschecking client income details across multiple systems owned by different government departments, a process that necessarily involves many assumptions. In July 2016, Centrelink unveiled an automated compliance system that compares income self-reported by clients to information held by the taxation office.

The problem is that the algorithm is flawed: it makes strong (and incorrect!) assumptions regarding the distribution of income across a financial year and, as a consequence, unfairly penalizes a number of legitimate benefit recipients.  It is very likely that the designers and implementers of the algorithm did not fully understand the implications of their assumptions. Worse, from the errors made by the system, it appears they may not have adequately tested it either.  But this did not stop them (or, quite possibly, their managers) from unleashing their algorithm on an unsuspecting public, causing widespread stress and distress.  More on this a bit later.

Algorithms like the one described above are the subject of Cathy O’Neil’s aptly titled book, Weapons of Math Destruction.  In the remainder of this article I discuss the main themes of the book.  Just to be clear, this post is more riff than review. However, for those seeking an opinion, here’s my one-line version: I think the book should be read not only by data science practitioners, but also by those who use or are affected by their algorithms (which means pretty much everyone!).

Abstractions and assumptions

O’Neil begins with the observation that data algorithms are mathematical models of reality, and are necessarily incomplete because several simplifying assumptions are invariably baked into them. This point is important and often overlooked, so it is worth illustrating via an example.

When assessing a person’s suitability for a loan, a bank will want to know whether the person is a good risk. It is impossible to model creditworthiness completely because we do not know all the relevant variables and those that are known may be hard to measure. To make up for their ignorance, data scientists typically use proxy variables, i.e. variables that are believed to be correlated with the variable of interest and are also easily measurable. In the case of creditworthiness, proxy variables might be things like gender, age, employment status, residential postcode etc. Unfortunately, many of these can be misleading, discriminatory or, worse, both.

The Centrelink algorithm provides a good example of such a “double-whammy” proxy. The key variable it uses is the difference between the client’s annual income reported by the taxation office and the self-reported annual income stated by the client. A large difference is taken to be indicative of an incorrect payment and hence an outstanding debt. This simplistic assumption overlooks the fact that most affected people are not in steady jobs and therefore do not earn regular incomes over the course of a financial year (see this article by Michael Griffin for a detailed example). Worse, this crude proxy places an unfair burden on vulnerable individuals for whom casual and part time work is a fact of life.
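To make the problem concrete, here is a toy sketch of how an income-averaging assumption can manufacture a debt out of thin air. To be clear, this is not the actual Centrelink system: the income-free area, taper rate, income figures and function names below are invented purely for illustration.

    # Toy illustration only - NOT the actual Centrelink algorithm.
    # It compares the benefit reduction implied by an annual income figure
    # averaged evenly across the year with the reduction implied by what the
    # person actually earned in the fortnights they claimed benefits.

    FORTNIGHTS = 26

    def spurious_debt(annual_income, fortnightly_income, on_benefits,
                      income_free_area=300.0, taper=0.5):
        """Extra reduction implied by averaged income, over and above the
        reduction implied by actual fortnightly income (hypothetical means test)."""
        averaged = annual_income / FORTNIGHTS

        def reduction(income):
            # hypothetical rule: benefit cut by 50c per dollar above the free area
            return max(0.0, income - income_free_area) * taper

        debt = 0.0
        for income, claiming in zip(fortnightly_income, on_benefits):
            if claiming:
                debt += reduction(averaged) - reduction(income)
        return max(0.0, debt)

    # A seasonal worker: 10 fortnights of work at $1,300 (not on benefits), then
    # 16 fortnights unemployed, correctly reporting zero income while on benefits.
    income  = [1300.0] * 10 + [0.0] * 16
    claimed = [False] * 10 + [True] * 16
    print(spurious_debt(sum(income), income, claimed))  # 1600.0 - a "debt" created by averaging alone

Nothing about the worker’s reporting is wrong; the entire “debt” is an artefact of assuming that income is spread evenly across the financial year.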

Worse still, for those wrongly targeted with a recovery notice, getting the errors sorted out is not a straightforward process. This is typical of a WMD. As O’Neil states in her book, “The human victims of WMDs…are held to a far higher standard of evidence than the algorithms themselves.” Perhaps this is because the algorithms are often opaque. But that’s a poor excuse. This is the only technical field where practitioners are held to a lower standard of accountability than those affected by their products.

O’Neil sums it up rather nicely when she calls algorithms like the Centrelink one weapons of math destruction (WMDs).

Self-fulfilling prophecies and feedback loops

A characteristic of WMDs is that their predictions often become self-fulfilling prophecies. For example, a person denied a loan by a faulty risk model is more likely to be denied again when he or she applies elsewhere, simply because it is on their record that they have been refused credit before. This kind of destructive feedback loop is typical of a WMD.
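A toy simulation makes the mechanism clear. The scoring rule and numbers below are made up; the point is simply that once a refusal becomes an input to the next decision, a single faulty denial perpetuates itself.

    # Illustrative only: a score that penalises prior refusals turns one
    # erroneous denial into a permanent one.

    def approved(true_quality, prior_refusals, penalty=0.3, threshold=0.5):
        # score = applicant's true creditworthiness minus a penalty per refusal on record
        return (true_quality - penalty * prior_refusals) >= threshold

    quality = 0.6      # a genuinely creditworthy applicant
    refusals = 1       # one earlier denial by a faulty model
    for attempt in range(1, 6):
        if approved(quality, refusals):
            print(f"Attempt {attempt}: approved")
            break
        refusals += 1
        print(f"Attempt {attempt}: denied ({refusals} refusals now on record)")

Each denial worsens the very record that drives the next decision, which is exactly the destructive loop described above.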

An example that O’Neil dwells on at length is a popular predictive policing program. Designed for efficiency rather than nuanced judgment, such algorithms measure what can easily be measured and act on it, ignoring the subtle contextual factors that inform the actions of experienced officers on the beat. Worse, they can lead to actions that exacerbate the problem. For example, targeting young people of a certain demographic for stop and frisk actions can alienate them to a point where they might well turn to crime out of anger and exasperation.

As Goldratt famously said, “Tell me how you measure me and I’ll tell you how I’ll behave.”

This is not news: savvy managers have known about the dangers of managing by metrics for years. The problem is now exacerbated manyfold by our ability to implement and act on such metrics on an industrial scale, a trend that leads to a dangerous devaluation of human judgement in areas where it is most needed.

A related problem – briefly mentioned earlier – is that some of the important variables are known but hard to quantify in algorithmic terms. For example, it is known that community-oriented policing, where officers on the beat develop relationships with people in the community, leads to greater trust. The degree of trust is hard to quantify, but it is known that communities that have strong relationships with their police departments tend to have lower crime rates than similar communities that do not.  Such important but hard-to-quantify factors are typically missed by predictive policing programs.

Blackballed!

Ironically, although WMDs can cause destructive feedback loops, they are often not subjected to feedback themselves. O’Neil gives the example of algorithms that gauge the suitability of potential hires.  These programs often use proxy variables such as IQ test results, personality tests etc. to predict employability.  Candidates who are rejected often do not realise that they have been screened out by an algorithm. Further, it often happens that candidates who are thus rejected go on to successful careers elsewhere. However, this post-rejection information is never fed back to the algorithm because it is impossible to do so.

In such cases, the only way to avoid being blackballed is to understand the rules set by the algorithm and play according to them. As O’Neil so poignantly puts it, “our lives increasingly depend on our ability to make our case to machines.” However, this can be difficult because it assumes that a) people know they are being assessed by an algorithm and b) they have knowledge of how the algorithm works. In most hiring scenarios neither of these holds.

Just to be clear, not all data science models ignore feedback. For example, sabermetric algorithms used to assess player performance in Major League Baseball are continually revised based on the latest player stats, thereby taking into account changes in performance.

Driven by data

In recent years, many workplaces have gradually seen the introduction of data-driven efficiency initiatives. Automated rostering, based on scheduling algorithms, is an example. These algorithms are based on operations research techniques that were developed for scheduling complex manufacturing processes. Although appropriate for driving efficiency in manufacturing, these techniques are inappropriate for optimising shift work because of the effect they have on people. As O’Neil states:

Scheduling software can be seen as an extension of just-in-time economy. But instead of lawn mower blades or cell phone screens showing up right on cue, it’s people, usually people who badly need money. And because they need money so desperately, the companies can bend their lives to the dictates of a mathematical model.

She correctly observes that an “oversupply of low wage labour” is the problem. Employers know they can get away with treating people like machine parts because they have a large captive workforce.  What makes this seriously scary is that vested interests can make it difficult to outlaw such exploitative practices. As O’Neil mentions:

Following [a] New York Times report on Starbucks’ scheduling practices, Democrats in Congress promptly drew up bills to rein in scheduling software. But facing a Republican majority fiercely opposed to government regulations, the chances that their bill would become law were nil. The legislation died.

Commercial interests invariably trump social and ethical issues, so it is highly unlikely that industry or government will take steps to curb the worst excesses of such algorithms without significant pressure from the general public. A first step towards this is to educate ourselves on how these algorithms work and the downstream social effects of their predictions.

Messing with your mind

There is an even more insidious way that algorithms mess with us. Hot on the heels of the recent US presidential election, there were suggestions that fake news items on Facebook may have influenced the results.  Mark Zuckerberg denied this, but as Casey Newton noted in a trenchant tweet, the denial leaves Facebook in “the awkward position of having to explain why they think they drive purchase decisions but not voting decisions.”

Be that as it may, the fact is that Facebook’s own researchers have been conducting experiments to fine-tune a tool they call the “voter megaphone”. Here’s what O’Neil says about it:

The idea was to encourage people to spread the word that they had voted. This seemed reasonable enough. By sprinkling people’s news feeds with “I voted” updates, Facebook was encouraging Americans – more than sixty-one million of them – to carry out their civic duty….by posting about people’s voting behaviour, the site was stoking peer pressure to vote. Studies have shown that the quiet satisfaction of carrying out a civic duty is less likely to move people than the possible judgement of friends and neighbours…The Facebook started out with a constructive and seemingly innocent goal to encourage people to vote. And it succeeded…researchers estimated that their campaign had increased turnout by 340,000 people. That’s a big enough crowd to swing entire states, and even national elections.

And if that’s not scary enough, try this:

For three months leading up to the election between President Obama and Mitt Romney, a researcher at the company….altered the news feed algorithm for about two million people, all of them politically engaged. The people got a higher proportion of hard news, as opposed to the usual cat videos, graduation announcements, or photos from Disney World….[the researcher] wanted to see if getting more [political] news from friends changed people’s political behaviour. Following the election [he] sent out surveys. The self-reported results [indicated] that voter participation in this group inched up from 64 to 67 percent.

This might not sound like much, but considering the thin margins of recent presidential elections, it could be enough to change a result.

But it’s even more insidious.  In a paper published in 2014, Facebook researchers showed that users’ moods can be influenced by the emotional content of their newsfeeds. Here’s a snippet from the abstract of the paper:

In an experiment with people who use Facebook, we test whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed. When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks.

As you might imagine, there was a media uproar, following which the lead researcher issued a clarification and Facebook officials duly expressed regret (but, as far as I know, not an apology).  To be sure, advertisers have been exploiting this kind of “mind control” for years, but a public social media platform should (expect to) be held to a higher standard of ethics. Facebook has since reviewed its internal research practices, but the recent fake news affair shows that the story is to be continued.

Disarming weapons of math destruction

The Centrelink debt debacle, the Facebook mood contagion experiments and the other case studies mentioned in the book illustrate the myriad ways in which Big Data algorithms have a pernicious effect on our day-to-day lives. Quite often people remain unaware of their influence, wondering why a loan was denied or a job application didn’t go their way. Just as often, they are aware of what is happening, but are powerless to change it – shift scheduling algorithms being a case in point.

This is not how it was meant to be. Technology was supposed to make life better for all, not just the few who wield it.

So what can be done? Here are some suggestions:

  • To begin with, education is the key. We must work to demystify data science and create a general awareness of data science algorithms and how they work. O’Neil’s book is an excellent first step in this direction (although it is very thin on details of how the algorithms work).
  • Develop a code of ethics for data science practitioners. It is heartening to see that the IEEE has recently come up with a discussion paper on ethical considerations for artificial intelligence and autonomous systems and the ACM has proposed a set of principles for algorithmic transparency and accountability.  However, I should also tag this suggestion with the warning that codes of ethics are not very effective because they can be easily violated. One has to – somehow – embed ethics in the DNA of data scientists. I believe one way to do this is through practice-oriented education in which data scientists-in-training grapple with ethical issues through data challenges and hackathons. As Wittgenstein famously said, “it is clear that ethics cannot be articulated.” Ethics must be practiced.
  • Put in place a system of reliable algorithmic audits within data science departments, particularly those that do work with significant social impact.
  • Increase transparency a) by publishing information on how algorithms predict what they predict and b) by making it possible for those affected by the algorithm to access the data used to classify them as well as their classification, how it will be used and by whom.
  • Encourage the development of algorithms that detect bias in other algorithms and correct it.
  • Inspire aspiring data scientists to build models for the good.

It is only right that the last word in this long riff should go to O’Neil, whose work inspired it. Towards the end of her book she writes:

Big Data processes codify the past. They do not invent the future. Doing that requires moral imagination, and that’s something that only humans can provide. We have to explicitly embed better values into our algorithms, creating Big Data models that follow our ethical lead. Sometimes that will mean putting fairness ahead of profit.

Excellent words for data scientists to live by.

Written by K

January 17, 2017 at 8:38 pm

Improving decision-making in projects

with 5 comments

An irony of organisational life is that the most important decisions on projects (or any other initiatives) have to be made at the start, when ambiguity is at its highest and information availability lowest. I recently gave a talk at the Pune office of BMC Software on improving decision-making in such situations.

The talk was recorded and simulcast to a couple of locations in India. The folks at BMC very kindly sent me a copy of the recording with permission to publish it on Eight to Late. Here it is:


Based on the questions asked and the feedback received, I reckon that a number of people found the talk useful. I’d welcome your comments/feedback.

Acknowledgements: My thanks go out to Gaurav Pal, Manish Gadgil and Mrinalini Wankhede for giving me the opportunity to speak at BMC, and to Shubhangi Apte for putting me in touch with them. Finally, I’d like to thank the wonderful audience at BMC for their insightful questions and comments.

The Risk – a dialogue mapping vignette

with one comment

Foreword

Last week, my friend Paul Culmsee conducted an internal workshop in my organisation on the theme of collaborative problem solving. Dialogue mapping is one of the tools he introduced during the workshop.  This piece, primarily intended as a follow-up for attendees, is an introduction to dialogue mapping via a vignette that illustrates its practice (see this post for another one). I’m publishing it here as I thought it might be useful for those who wish to understand what the technique is about.

Dialogue mapping uses a notation called Issue Based Information System (IBIS), which I have discussed at length in this post. For completeness, I’ll begin with a short introduction to the notation and then move on to the vignette.

A crash course in IBIS

The IBIS notation consists of the following three elements:

  1. Issues (or questions): these are the issues being debated. Typically, issues are framed as questions along the lines of “What should we do about X?” where X is the issue that is of interest to a group. For example, in the case of a group of executives, X might be a rapidly changing market condition whereas in the case of a group of IT people, X could be an ageing system that is hard to replace.
  2. Ideas (or positions): these are responses to questions. For example, one of the ideas offered by the IT group above might be to replace the said system with a newer one. Typically, the whole set of ideas that respond to an issue in a discussion represents the spectrum of participant perspectives on the issue.
  3. Arguments: these can be Pros (arguments for) or Cons (arguments against) an idea. The complete set of arguments that respond to an idea represents the multiplicity of viewpoints on it.

Compendium is a freeware tool that can be used to create IBIS maps – it can be downloaded here.

In Compendium, IBIS elements are represented as nodes as shown in Figure 1: issues are represented by blue-green question marks, positions by yellow light bulbs, pros by green + signs and cons by red – signs.  Compendium supports a few other node types, but these are not part of the core IBIS notation. Nodes can be linked only in ways specified by the IBIS grammar, as I discuss next.

Figure 1: IBIS node types

The IBIS grammar can be summarized in three simple rules:

  1. Issues can be raised anew or can arise from other issues, positions or arguments. In other words, any IBIS element can be questioned.  In Compendium notation:  a question node can connect to any other IBIS node.
  2. Ideas can only respond to questions – i.e. in Compendium “light bulb” nodes can only link to question nodes. The arrow pointing from the idea to the question depicts the “responds to” relationship.
  3. Arguments can only be associated with ideas – i.e. in Compendium “+” and “–” nodes can only link to “light bulb” nodes (with arrows pointing to the latter).

The legal links are summarized in Figure 2 below.

Figure 2: Legal links in IBIS

…and that’s pretty much all there is to it.
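For the programmatically inclined, the grammar is compact enough to express as a lookup table. The sketch below is my own paraphrase of the three rules, not Compendium’s implementation:

    # The IBIS grammar as a link-validity check (a paraphrase of the three rules above).
    IBIS_GRAMMAR = {
        "issue": {"issue", "idea", "pro", "con"},  # rule 1: a question can attach to any node
        "idea":  {"issue"},                        # rule 2: ideas respond only to questions
        "pro":   {"idea"},                         # rule 3: arguments attach only to ideas
        "con":   {"idea"},
    }

    def link_is_legal(source_type, target_type):
        """True if a node of source_type may point at a node of target_type."""
        return target_type in IBIS_GRAMMAR.get(source_type, set())

    assert link_is_legal("idea", "issue")      # an idea responding to a question
    assert link_is_legal("con", "idea")        # an objection to an idea
    assert link_is_legal("issue", "con")       # any element can itself be questioned
    assert not link_is_legal("pro", "issue")   # arguments cannot attach to questions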

The interesting (and powerful) aspect of IBIS is that the essence of any debate or discussion can be captured using these three elements. Let me try to convince you of this claim via a vignette from a discussion on risk.

 The Risk – a Dialogue Mapping vignette

“Morning all,” said Rick, “I know you’re all busy people so I’d like to thank you for taking the time to attend this risk identification session for Project X.  The objective is to list the risks that we might encounter on the project and see if we can identify possible mitigation strategies.”

He then asked if there were any questions. The head waggles around the room indicated there were none.

“Good. So here’s what we’ll do,” he continued. “I’d like you all to work in pairs and spend 10 minutes thinking of all possible risks and then another 5 minutes prioritising.  Work with the person on your left. You can use the flipcharts in the breakout area at the back if you wish to.”

Twenty minutes later, most people were done and back in their seats.

“OK, it looks as though most people are done…Ah, Joe, Mike have you guys finished?” The two were still working on their flip-chart at the back.

“Yeah, be there in a sec,” replied Mike, as he tore off the flip-chart page.

“Alright,” continued Rick, after everyone had settled in. “What I’m going to do now is ask you all to list your top three risks. I’d also like you tell me why they are significant and your mitigation strategies for them.” He paused for a second and asked, “Everyone OK with that?”

Everyone nodded, except Helen who asked, “Isn’t it important that we document the discussion?”

“I’m glad you brought that up. I’ll make notes as we go along, and I’ll do it in a way that everyone can see what I’m writing. I’d like you all to correct me if you feel I haven’t understood what you’re saying. It is important that  my notes capture your issues, ideas and arguments accurately.”

Rick turned on the data projector, fired up Compendium and started a new map.  “Our aim today is to identify the most significant risks on the project – this is our root question,” he said, as he created a question node. “OK, so who would like to start?”

 

 

Figure 3: The root question

 

“Sure, we’ll start,” said Joe easily. “Our top risk is that the schedule is too tight. We’ll hit the deadline only if everything goes well, and everyone knows that they never do.”

“OK,” said Rick, as he entered Joe and Mike’s risk as an idea connecting to the root question. “You’ve also mentioned a point that supports your contention that this is a significant risk – there is absolutely no buffer.” Rick typed this in as a pro connecting to the risk. He then looked up at Joe and asked, “Have I understood you correctly?”

“Yes,” confirmed Joe.

 

Figure 4: Map in progress

 

“That’s pretty cool,” said Helen from the other end of the table, “I like the notation, it makes reasoning explicit. Oh, and I have another point in support of Joe and Mike’s risk – the deadline was imposed by management before the project was planned.”

Rick began to enter the point…

“Oooh, I’m not sure we should put that down,” interjected Rob from compliance. “I mean, there’s not much we can do about that can we?”

…Rick finished the point as Rob was speaking.

 

Figure 5: Two pros for the idea

 

“I hear you, Rob, but I think it is important we capture everything that is said,” said Helen.

“I disagree,” said Rob. “It will only annoy management.”

“Slow down guys,” said Rick. “I’m going to capture Rob’s objection as ‘this is a management-imposed constraint rather than a risk.’ Are you OK with that, Rob?”

Rob nodded his assent.

 

Figure 6: A con enters the picture

“I think it is important we articulate what we really think, even if we can’t do anything about it,” continued Rick. “There’s no point going through this exercise if we don’t say what we really think. I want to stress this point, so I’m going to add honesty and openness as ground rules for the discussion. Since ground rules apply to the entire discussion, they connect directly to the primary issue being discussed.”

Figure 7: A “criterion” that applies to the analysis of all risks

 

“OK, so any other points that anyone would like to add to the ones made so far?” queried Rick as he finished typing.

He looked up. Most of the people seated round the table shook their heads, indicating that there weren’t any.

“We haven’t spoken about mitigation strategies. Any ideas?” asked Rick, as he created a question node marked “Mitigation?” connecting to the risk.

 

Figure 8: Mitigating the risk

“Yeah well, we came up with one,” said Mike. “We think the only way to reduce the time pressure is to cut scope.”

“OK,” said Rick, entering the point as an idea connecting to the “Mitigation?” question. “Did you think about how you are going to do this?” He entered the question “How?” connecting to Mike’s point.

Figure 9: Mitigating the risk

 

“That’s the problem,” said Joe, “I don’t know how we can convince management to cut scope.”

“Hmmm…I have an idea,” said Helen slowly…

“We’re all ears,” said Rick.

“…Well…you see a large chunk of time has been allocated for building real-time interfaces to assorted systems – HR, ERP etc. I don’t think these need to be real-time – they could be done monthly…and if that’s the case, we could schedule a simple job or even do them manually for the first few months. We can push those interfaces to phase 2 of the project, well into next year.”

There was a silence in the room as everyone pondered this point.

“You know, I think that might actually work, and would give us an extra month…maybe even six weeks for the more important upstream stuff,” said Mike. “Great idea, Helen!”

“Can I summarise this point as – identify interfaces that can be delayed to phase 2?” asked Rick, as he began to type it in as a mitigation strategy. “…and if you and Mike are OK with it, I’m going to combine it with the ‘Cut Scope’ idea to save space.”

“Yep, that’s fine,” said Helen. Mike nodded OK.

Rick deleted the “How?” node connecting to the “Cut scope” idea, and edited the latter to capture Helen’s point.

Figure 10: Mitigating the risk

“That’s great in theory, but who is going to talk to the affected departments? They will not be happy,” asserted Rob.  One could always count on compliance to throw in a reality check.

“Good point,” said Rick as he typed that in as a con, “and I’ll take the responsibility of speaking to the department heads about this,” he continued, entering the idea into the map and marking it as an action point for himself. “Is there anything else that Joe, Mike…or anyone else would like to add here?” he asked, as he finished.

Figure 11: Completed discussion of first risk (click to view larger image)

“Nope,” said Mike, “I’m good with that.”

“Yeah me too,” said Helen.

“I don’t have anything else to say about this point,” said Rob, “ but it would be great if you could give us a tutorial on this technique. I think it could be useful to summarise the rationale behind our compliance regulations. Folks have been complaining that they don’t understand the reasoning behind some of our rules and regulations. ”

“I’d be interested in that too,” said Helen, “I could use it to clarify user requirements.”

“I’d be happy to do a session on the IBIS notation and dialogue mapping next week. I’ll check your availability and send an invite out… but for now, let’s focus on the task at hand.”

The discussion continued…but the fly on the wall was no longer there to record it.

Afterword

I hope this little vignette illustrates how IBIS and dialogue mapping can aid collaborative decision-making / problem solving by making diverse viewpoints explicit. That said, this is a story, and the problem with stories is that things  go the way the author wants them to.  In real life, conversations can go off on unexpected tangents, making them really hard to map. So, although it is important to gain expertise in using the software, it is far more important to practice mapping live conversations. The latter is an art that requires considerable practice. I recommend reading Paul Culmsee’s series on the practice of dialogue mapping or <advertisement> Chapter 14 of The Heretic’s Guide to Best Practices</advertisement> for more on this point.

That said, there are many other ways in which IBIS can be used that do not require as much skill. Some of these include mapping the central points in written arguments (what’s sometimes called issue mapping) and even decisions on personal matters.

To sum up: IBIS is a powerful means to clarify options and lay them out in an easy-to-follow visual format. Often this is all that is required to catalyse a group decision.

Three types of uncertainty you (probably) overlook

with 5 comments

Introduction – uncertainty and decision-making

Managing uncertainty – deciding what to do in the absence of reliable information – is a significant part of project management and many other managerial roles. When put this way, it is clear that managing uncertainty is primarily a decision-making problem. Indeed, as I will discuss shortly, the main difficulties associated with decision-making are related to specific types of uncertainties that we tend to overlook.

Let’s begin by looking at the standard approach to decision-making, which goes as follows:

  1. Define the decision problem.
  2. Identify options.
  3. Develop criteria for rating options.
  4. Evaluate options against criteria.
  5. Select the top rated option.
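In its simplest form, steps 3 to 5 amount to a weighted scoring exercise. Here is a minimal sketch of the kind of calculation the process prescribes; the criteria, weights and scores are entirely made up for illustration.

    # A minimal weighted-scoring sketch of steps 3-5 (all numbers invented).
    criteria_weights = {"cost": 0.4, "risk": 0.3, "strategic fit": 0.3}

    option_scores = {   # each option rated out of 10 against each criterion
        "build in-house":    {"cost": 4,  "risk": 5, "strategic fit": 9},
        "buy off-the-shelf": {"cost": 7,  "risk": 7, "strategic fit": 6},
        "do nothing":        {"cost": 10, "risk": 3, "strategic fit": 2},
    }

    def weighted_score(scores):
        return sum(criteria_weights[c] * s for c, s in scores.items())

    for option, scores in option_scores.items():
        print(f"{option}: {weighted_score(scores):.1f}")
    print("Top-rated option:", max(option_scores, key=lambda o: weighted_score(option_scores[o])))

The tidiness of this arithmetic is precisely what makes the approach seductive – and, as discussed below, what makes it inadequate for messier decisions.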

As I have pointed out in this post, the above process is too simplistic for some of the complex, multifaceted decisions that we face in life and at work (switching jobs, buying a house or starting a business venture, for example). In such cases:

  1. It may be difficult to identify all options.
  2. It is often impossible to rate options meaningfully because of information asymmetry – we know more about some options than others. For example, when choosing whether or not to switch jobs, we know more about our current situation than the new one.
  3. Even when ratings are possible, different people will rate options differently – i.e. different people invariably have different preferences for a given outcome. This makes it difficult to reach a consensus.

Regular readers of this blog will know that the points listed above are characteristics of wicked problems.  It is fair to say that in recent years, a general awareness of the ubiquity of wicked problems has led to an appreciation of the limits of classical decision theory. (That said,  it should be noted that academics have been aware of this for a long time: Horst Rittel’s classic paper on the dilemmas of planning, written in 1973, is a good example. And there are many others that predate it.)

In this post  I look into some hard-to-tackle aspects of uncertainty by focusing on the aforementioned shortcomings of classical decision theory. My discussion draws on a paper by Richard Bradley and Mareile Drechsler.

This article is organised as follows: I first present an overview of the standard approach to dealing with uncertainty and discuss its limitations. Following this, I elaborate on three types of uncertainty that are discussed in the paper.

Background – the standard view of uncertainty

The standard approach to tackling uncertainty was  articulated by Leonard Savage in his classic text, Foundations of Statistics. Savage’s approach can be summarized as follows:

  1. Figure out all possible states (outcomes).
  2. Enumerate actions that are possible.
  3. Figure out the consequences of actions for all possible states.
  4. Attach a value (aka preference) to each consequence.
  5. Select the course of action that maximizes value (based on an appropriately defined measure, making sure to factor in the likelihood of achieving the desired consequence).

(Note the close parallels between this process and the standard approach to decision-making outlined earlier.)

To keep things concrete it is useful to see how this process would work in a simple real-life example. Bradley and Drechsler quote the following example from Savage’s book that does just that:

…[consider] someone who is cooking an omelet and has already broken five good eggs into a bowl, but is uncertain whether the sixth egg is good or rotten. In deciding whether to break the sixth egg into the bowl containing the first five eggs, to break it into a separate saucer, or to throw it away, the only question this agent has to grapple with is whether the last egg is good or rotten, for she knows both what the consequence of breaking the egg is in each eventuality and how desirable each consequence is. And in general it would seem that for Savage once the agent has settled the question of how probable each state of the world is, she can determine what to do simply by averaging the utilities (Note: utility is basically a mathematical expression of preference or value) of each action’s consequences by the probabilities of the states of the world in which they are realised…

In this example there are two states (egg is good, egg is rotten), three actions (break egg into bowl, break egg into separate saucer to check if it rotten, throw egg away without checking) and three consequences (spoil all eggs, save eggs in bowl and save all eggs if last egg is not rotten, save eggs in bowl and potentially waste last egg). The problem then boils down to figuring out our preferences for the options (in some quantitative way) and the probability of the two states.  At first sight, Savage’s approach seems like a reasonable way to deal with uncertainty.  However, a closer look reveals major problems.
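The “averaging of utilities by probabilities” that Savage prescribes is easy to show in miniature. The utilities and the probability of a rotten egg below are invented for illustration:

    # Savage's recipe applied to the omelet example (all numbers invented).
    p_rotten = 0.1
    states = {"good": 1 - p_rotten, "rotten": p_rotten}

    # utility of each (action, state) consequence, on an arbitrary scale
    utility = {
        "break into bowl":   {"good": 10, "rotten": -50},  # six-egg omelet vs five eggs ruined
        "break into saucer": {"good": 9,  "rotten": 5},    # a little extra washing-up either way
        "throw away":        {"good": 4,  "rotten": 6},    # five-egg omelet; may waste a good egg
    }

    def expected_utility(action):
        return sum(states[s] * utility[action][s] for s in states)

    for action in utility:
        print(f"{action}: {expected_utility(action):.2f}")
    print("Recommended:", max(utility, key=expected_utility))  # the saucer wins with these numbers

Note that the calculation presupposes exactly the things questioned in the next section: that all states, actions, consequences and our preferences for them are known up front.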

Problems with the standard approach

Unlike the omelet example, in real life situations it is often difficult to enumerate all possible states or foresee all consequences of an action. Further, even if states and consequences are known, we may not know what value to attach to them – that is, we may not be able to determine our preferences for those consequences unambiguously. Even in those situations where we can, our preferences may be subject to change – witness the not uncommon situation where lottery winners end up wishing they’d never won. The standard prescription therefore works only in situations where all states, actions and consequences are known – i.e. tame situations, as opposed to wicked ones.

Before going any further, I should mention that Savage was cognisant of the limitations of his approach. He pointed out that it works only in what he called small world situations – i.e. situations in which it is possible to enumerate and evaluate all options.  As Bradley and Drechsler put it,

Savage was well aware that not all decision problems could be represented in a small world decision matrix. In Savage’s words, you are in a small world if you can “look before you leap”; that is, it is feasible to enumerate all contingencies and you know what the consequences of actions are. You are in a grand world when you must “cross the bridge when you come to it”, either because you are not sure what the possible states of the world, actions and/or consequences are…

In the following three sections I elaborate on the complications mentioned above, emphasizing, once again, that many real life situations are prone to such complications.

State space uncertainty

The standard view of uncertainty assumes that all possible states are given as a part of the problem definition – as in the omelet example discussed earlier.  In real life, however, this is often not the case.

Bradley and Drechsler identify two distinct cases of state space uncertainty. The first one is when we are unaware that we’re missing states and/or consequences. For example, organisations that embark on a restructuring program are so focused on the cost-related consequences that they may overlook factors such as loss of morale and/or loss of talent (and the consequent loss of productivity). The second, somewhat rarer, case is when we are aware that we might be missing something but we don’t quite know what it is. All one can do here is make appropriate contingency plans based on guesses regarding possible consequences.

Figuring out possible states and consequences is largely a matter of scenario envisioning based on knowledge and practical experience. It stands to reason that this is best done by leveraging the collective experience and wisdom of people from diverse backgrounds. This is pretty much the rationale behind collective decision-making techniques such as Dialogue Mapping.

Option uncertainty

The standard approach to tackling uncertainty assumes that the connection between actions and consequences is well defined. This is often not the case, particularly for wicked problems.  For example, as I have discussed in this post, enterprise transformation programs with well-defined and articulated objectives often end up having a host of unintended consequences. At an even more basic level, in some situations it can be difficult to identify sensible options.

Option uncertainty is a fairly common feature in real-life decisions. As Bradley and Drechsler put it:

Option uncertainty is an endemic feature of decision making, for it is rarely the case that we can predict consequences of our actions in every detail (alternatively, be sure what our options are). And although in many decision situations, it won’t matter too much what the precise consequence of each action is, in some the details will matter very much.

…and unfortunately, the cases in which the details matter are precisely those problems in which they are the hardest to figure out – i.e. in wicked problems.

Preference uncertainty

An implicit assumption in the standard approach is that once states and consequences are known, people will be able to figure out their relative preferences for them unambiguously. This assumption is incorrect, as there are at least two situations in which people will not be able to determine their preferences. Firstly, there may be a lack of factual information about one or more of the states. Secondly, even when one is able to get the required facts, it can be hard to figure out how one would value the consequences.

A common example of the aforementioned situation is the job switch dilemma. In many (most?) cases in which one is debating whether or not to switch jobs, one lacks enough factual information about the new job – for example, the new boss’ temperament, the work environment etc. Further, even if one is able to get the required information, it is impossible to know how it would be to actually work there.  Most people would have struggled with this kind of uncertainty at some point in their lives. Bradley and Drechsler term this ethical uncertainty. I prefer the term preference uncertainty, as it has more to do with preferences than ethics.

Some general remarks

The first point to note is that the three types of uncertainty noted above map exactly on to the three shortcomings of classical decision theory discussed in the introduction.  This suggests a connection between the types of uncertainty and wicked problems. Indeed, most wicked problems are exemplars of one or more of the above uncertainty types.  For example, the paradigm-defining super-wicked problem of climate change displays all three types of uncertainty.

The three types of uncertainty discussed above are overlooked by the standard approach to managing uncertainty.  This happens in a number of ways. Here are two common ones:

  1. The standard approach assumes that all uncertainties can somehow be incorporated into a single probability function describing all possible states and/or consequences. This is clearly false for state space and option uncertainty: it is impossible to define a sensible probability function when one is uncertain about the possible states and/or outcomes.
  2. The standard approach assumes that preferences for different consequences are known. This is clearly not true in the case of preference uncertainty…and even for state space and option uncertainty for that matter.

In their paper, Bradley and Dreschsler arrive at these three types of uncertainty from considerations different from the ones I have used above. Their approach, while more general, is considerably more involved. Nevertheless, I would recommend that readers who are interested should take a look at it because they cover a lot of things that I have glossed over or ignored altogether.

Just as an example, they show how the aforementioned uncertainties can be reduced. There is a price to be paid, however: any reduction in uncertainty results in an increase in its severity. An example might help illustrate how this comes about. Consider a situation of state space uncertainty. One can reduce – or even remove – this by defining a catch-all state (labelled, say, “all other outcomes”). It is easy to see that although one has formally reduced state space uncertainty to zero, one has increased the severity of the uncertainty because the catch-all state is but a reflection of our ignorance and our refusal to do anything about it!

There are many more implications of the above. However, I’ll point out just one more that serves to illustrate the very practical implications of these uncertainties. In a post on the shortcomings of enterprise risk management, I pointed out that the notion of an organisation-wide risk appetite is problematic because it is impossible to capture the diversity of viewpoints through such a construct. Moreover, rule- or process-based approaches to risk management tend to focus only on those uncertainties that can be quantified, or conversely they assume that all uncertainties can somehow be clumped into a single probability distribution as prescribed by the standard approach to managing uncertainty. The three types of uncertainty discussed above highlight the limitations of such an approach to enterprise risk.

Conclusion

The standard approach to managing uncertainty assumes that all possible states, actions and consequences are known or can be determined. In this post I have discussed why this is not always so.  In particular, it often happens that we do not know all possible outcomes (state space uncertainty), consequences (option uncertainty) and/or our preferences for consequences (preference or ethical uncertainty).

As I was reading the paper, I felt the authors were articulating issues that I had often felt uneasy about but chose to overlook (suppress?).  Generalising from one’s own experience is always a fraught affair, but I reckon we tend to deny these uncertainties because they are inconvenient – that is, they are difficult if not impossible to deal with within the procrustean framework of the standard approach.  What is needed as a corrective is a recognition that the pseudo-quantitative approach that is commonly used to manage uncertainty may not be the panacea it is claimed to be. The first step towards doing this is to acknowledge the existence of the uncertainties that we (probably) overlook.

Written by K

February 25, 2015 at 9:08 pm

The dilemmas of enterprise IT

with 4 comments

Information technology (IT) is an integral part of any modern day business. Indeed, as Bill Gates once put it, “Information technology and business are becoming inextricably interwoven. I don’t think anybody can talk meaningfully about one without talking about the other.” Although this is true, decision makers often display ambivalent, even contradictory attitudes towards enterprise IT.  For example, depending on the context, an executive might view IT as a cost of doing business or as a strategic advantage: the former view is common when budgets are being drawn up whereas the latter may come to the fore when a bold new e-marketing initiative is being discussed.

In this post I discuss some of these dilemmas of IT and show how the opposing viewpoints embodied in them need to be managed rather than resolved.  I illustrate my point by describing one way in which this can be done.

The dilemmas in brief

Many of the dilemmas of IT are consequences of conflicting views of what IT is and/or how it should be managed. I’ll describe some of these in brief below, leaving a discussion of their implications to the next section:

  1. IT as a cost of doing business versus IT as strategic asset: This distinction highlights the ambivalent attitudes that senior executives have towards IT. On the one hand, IT is seen as offering strategic advantages to the organization (for example a custom built application for customer segmentation). On the other, it is seen as an operational necessity (for example, core banking systems in the financial industry).
  2. Centralised IT versus Autonomous IT:  This refers to the debate about whether an organisation’s IT environment should be tightly controlled from head office or whether subsidiaries should be given a degree of autonomy.  This is essentially a debate between top-down versus bottom-up approaches to IT planning.
  3. Planning versus Improvisation: This refers to the tension between the structure offered by a plan and process-driven approach to IT and the necessity to step outside of plans and processes in order to come up with improvised solutions suited to the situation at hand. I have written about this paradox in a post on planning and improvisation.

There are other dilemmas – for example, technology driven IT versus business driven IT. However, for the purpose of this discussion the three listed above will suffice.

The poles of a dilemma

In his book entitled Polarity Management, Barry Johnson described how complex organizational issues can often be analysed in terms of their mutually contradictory facets. He termed these facets poles or polarities.  In this and the next section, I elaborate on Johnson’s notion of polarity and show how it offers a means to understand and manage the dilemmas of enterprise IT.

The key features of poles are as follows:

Each pole has associated positives and negatives. For example, the up side of viewing IT as a cost is that the organisation focuses on IT efficiency and value for money; the downside is that exploration and experimentation that is necessary for IT innovation would likely be seen as risky. On the other hand, the positive side of IT as a strategic asset is that it is seen as a means to enable an organisation’s growth and development; the negative is that it can encourage unproven technologies (since new technologies are more likely to offer competitive advantages) and uncontrolled experimentation along with their attendant costs.

Most organisations oscillate between poles.  At any given time the organisation will be “living” in one pole. In such situations, some stakeholders will perceive the negatives of that pole strongly and will thus see the other pole as being more desirable (the “grass is greener on the other side” syndrome).  Johnson labels such stakeholders “crusaders” – those who want to rush off into the new world. On the other hand, there are “tradition bearers” – those who want to stay put.  When an organisation has spent a fair bit of time in one pole, the influence of crusaders tends to wax while that of the tradition bearers wanes, because the negatives become apparent to more and more people.

A concrete example may help clarify this point:

Consider a situation where all subsidiaries of a multinational have autonomous IT units (and have had these for a while).  The main benefits of such a model are responsiveness and relevance: local IT units will be able to respond quickly to local needs and will also be able to deliver solutions that are tailored to the specific needs of the local business. However, this model has many negative aspects: for example, high costs, duplication of effort, a massive software portfolio and its attendant costs, the high cost of interfacing between subsidiaries etc.

When the model has been in operation for a while, it is quite likely that IT decision makers will perceive the negatives of this pole more clearly than they see the positives. They will then initiate a reform to centralize IT because they perceive the positives of that pole –i.e. low costs, centralization of services etc. – as being worth striving for.  However, when the new world is in place and has been operating for a while, the organisation will begin to see its downside: bureaucracy, lack of flexibility, applications that don’t meet specific local business needs etc. They will then start to delegate responsibility back to the subsidiaries…and thus goes the polarity merry-go-round.

Managing enterprise IT dilemmas

As discussed above, any option will have its supporters and detractors. For example, finance folks may see IT as a cost of doing business whereas those in IT will consider it to be a strategic asset.   What’s important, however, is that most organisations “resolve” such contradictions by taking sides. That is, one side “wins” and their point of view gets implemented as a “solution.”  The concerns of the “losing” side are overlooked entirely.

Although such a “solution” appears to solve the problem, it does not take long for the negative aspects of the other pole to manifest themselves; the rumbles of discontent from those whose concerns have been ignored grow louder with time.  In this sense, issues that can be defined in terms of polarities are wicked problems – they are perceived in different ways by different stakeholders and so are difficult to define, let alone solve.

As we have seen above, however, the poles of a dilemma are but different facets of a single reality.  Hence, the first step towards managing a dilemma lies in realizing that it cannot be resolved definitively; regardless of the path chosen, there will always be a group whose concerns remain unaddressed. The best one can do is to be aware of the positives and negatives of each pole and ensure that the entire spectrum of stakeholders is aware of these. A shared awareness can help the group in figuring out ways to mitigate the worst effects of the negatives.

One way in which this can be done is via a facilitated session involving people who represent the two sides of the issue.  To begin with, the facilitator helps the group identify the poles. She then helps the group create a polarity map which shows the contradictory aspects of the issue along with their positives and negatives. A rudimentary polarity map for the centralised/autonomous IT dilemma is shown in Figure 1 below.

Figure 1: Polarity map for centralised / autonomous IT dilemma

To ensure completeness of the map, the group must include stakeholders who represent both sides of the dilemma (and also those who hold views that lie between).
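For those who like to see things written down explicitly, the map in Figure 1 can be condensed into a simple structure like the one below. The entries are an abbreviated, illustrative subset of the points discussed above, not an exhaustive map.

    # An abbreviated, illustrative rendering of the polarity map in Figure 1.
    polarity_map = {
        "centralised IT": {
            "positives": ["lower cost", "consistent standards", "shared services"],
            "negatives": ["bureaucracy", "slow response to local needs",
                          "solutions that miss local specifics"],
        },
        "autonomous IT": {
            "positives": ["responsiveness", "solutions tailored to the local business"],
            "negatives": ["duplication of effort", "higher overall cost",
                          "sprawling application portfolio", "costly interfacing"],
        },
    }

    def downsides_to_watch(current_pole):
        """Whichever pole the organisation currently 'lives' in, these are the
        negatives the group needs to keep in view and actively mitigate."""
        return polarity_map[current_pole]["negatives"]

    print(downsides_to_watch("centralised IT"))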

As mentioned in the previous section, organisations are not static, they oscillate between poles. Moreover, Johnson claimed that they follow a specific path in the map.  Quoting from the book I wrote with Paul Culmsee:

According to Johnson, organisations tended to oscillate between poles. If you accept the notion of a wicked problem as a polarity, the overall pattern traced as one moves between these poles resembles an infinity symbol. The typical path is L- to R+, to R-, across to L+ and Johnson argued that the trajectory could not be avoided. All we can do is focus on minimizing our time spent in the lower quadrants.

Again, it is worth emphasizing that the conflict between the two groups of stakeholders cannot be resolved definitively. The best one can do is to get the two sides to understand each other’s point of view and hence attempt to minimize the downsides of each option.

Finally, polarity management is but one way to manage the dilemmas associated with enterprise IT or any other organizational decision. There are many others – and <advertisement> I highly recommend my book if you’re interested in finding out more about these </advertisement>.  In the end, though, the point I wished to make in this post is less about any particular technique and more about the need to air and acknowledge differing perspectives on issues pertaining to enterprise IT or any other decision with organization-wide implications.

Wrapping up

The dilemmas of enterprise IT are essentially consequences of mutually contradictory, yet equally valid perspectives. Is IT a cost of doing business or is it a strategic asset? The answer depends on the perspective one takes…and there is no objectively right or wrong answer.  Given this, it is important to be aware of both the up and down side of each perspective (or pole) before one makes a decision.  Unfortunately, most often decisions are made on the basis of the up side of one option and the down side of the other.  As should be evident now, a decision that is based on such a selective consideration of viewpoints invariably invites conflict and leads to undesirable outcomes.

Written by K

July 2, 2014 at 9:52 pm

Making sense of sensemaking – a conversation with Paul Culmsee

with 5 comments

Introduction

Welcome to the second post in my conversations series. This time around I chat with my friend and long-time collaborator Paul Culmsee who, among many other things, is a skilled facilitator and a master of the craft of dialogue mapping (more on that below).

In an hour-long conversation recorded a couple of weeks ago, Paul and I talked about the art of sensemaking.  (Editor’s note: the conversation has been lightly edited for clarity)

What is sensemaking?

KA: Hi Paul, in this conversation I wanted to focus on sensemaking. From our association over the years, I know that’s a specialty of yours.  Incidentally, I checked out your LinkedIn profile and saw that you announce yourself as an IT veteran of many years and a sensemaker. So, to begin with, could you tell us what sensemaking is?

PC: [Laughs] First up, thanks for stalking me on LinkedIn.  Well, the “IT veteran” part is easy – it’s what I’ve been doing ever since I left university in 1989. Sensemaking came a while later.

In a nutshell, sensemaking is about helping groups make sense of complex situations that might otherwise lead them into tense or adversarial conditions. A lot of projects exhibit such situations from time to time. Sensemaking seeks to help groups develop a shared understanding of these sorts of situations.

KA:  OK, so what I’m hearing is that it is about helping people get clarity on an ambiguous situation or may be, even define what the problem is.

PC: Yeah absolutely…and we alluded to this in our Heretic’s book.  It is staggering when one realizes how many teams and individuals (in teams and in organisations) spend a stack of money and time without being aligned on the problem that they’re solving. Often this lack of alignment becomes evident only in the wash, long after anything can be done. In some ways it beggars belief that that could happen; that a project or initiative could go on for long without alignment, but it does happen quite often. Sensemaking seeks to eliminate that up front.

There are various tools, techniques and collaborative approaches that one can use to bring people together to air and reconcile different viewpoints.  Of course, this assumes that people genuinely want to see things improve, and in my experience this is often the case. A lot of the time, therefore, sensemaking is simply about helping groups reach a shared understanding so that subsequent actions can be taken with full commitment from everybody concerned.

KA: All that sounds very reasonable, even obvious. Why do you think this has been neglected for so long? Why  have people overlooked this?

PC: Mate, I’m glad we’re having beers as we talk about this [takes a swig].

Look I think it is often seen as an excuse to have a talk-fest, and I think that criticism is actually quite fair.  I think organisations…or, rather, the people within them…tend to have a very strong drive to move to action. The idea of stopping and thinking is seen as not being a particularly productive use of time.

Actually, if you delve into it, sensemaking has been around for years and years. In fact, pretty much anyone who is a facilitator is a sensemaker as well in that he or she seeks to help people [overcome an issue that they’re facing as a group].   The problem is that a lot of the techniques used in sensemaking are rooted in theories or philosophies that aren’t seen as being particularly practical. To a certain extent, the theories themselves are to blame. For example, the first time I heard about soft systems theory, I had no idea what the person was talking about. (Editor’s note: a theory that underlies many facilitation techniques)

KA: Yes, that’s absolutely right. Systems theory  itself has been around for a long time….since the 1950s I think.  It’s also been resurrected in various guises ever since, but has always had this reputation (perhaps unfair) of being somewhat impractical. So, I’m curious as to how you actually get around that. How do you sell what you do? (Editor’s note: Systems theory is the precursor of soft systems theory)

PC: By example. It’s really as simple as that. Take the example of dialogue mapping, a practical facilitation approach that involves capturing the rationale behind a conversation visually, using a graphical notation. Even that…which is a practical tool…is much easier to show by example than to explain conceptually. If I were to try to explain what it is in words, I’d have to say something like “I sit in a room and get paid to draw maps. I map the conversations and facilitate at the same time.”  People might then say, “What’s that? Is it like mind mapping?” Then I have no choice but to say, “Well, yeah…but there’s a lot more to it than that.”

So I’ve long since given up on explaining it to people conceptually; it’s much easier to just show them. Moreover, in a lot of the situations where I do engagements for clients, I discourage the sponsor from making a big deal about the technique. I’d rather just let the technique “sell itself”.

KA: Yeah that rings true. You know, I was trying to write a blog post once on dialogue mapping, and realized it would be much better to tell it through a story (Editor’s note: …and the result was this post).

OK, so you’ve told us a bit about dialogue mapping, and I know it is a mainstay of your practice. Could you tell us a bit about how you came to it and what it has done for you?

PC: Oh, it’s changed my career.  In terms of what it’s done for me – well, where I am now is a direct result of my taking up that craft. And I call it a craft because it took a damn lot of practice. It is not something where you can read a book and go “Oh that makes sense,” and then expect to facilitate a group of twenty people or anything like that.

How I came to it was as follows: I had come off a large failed project and was asking myself what I could have done differently.  In hindsight, the problem was pretty obvious:  there were times when things were said by certain stakeholders and I should have gone, “Right, stop!” But I didn’t.  Of course, such mistakes are part of a learning journey.  I subsequently did some research on techniques that might have helped resolve such issues and came to dialogue mapping directly as a result of that research.

Then, through sheer luck I got to apply the technique in areas other than my discipline.  As I said earlier, I’m an IT guy and have been in IT for a long time, but I was lucky to get an early opportunity to use dialogue mapping in an area that was very different from IT (Urban planning to be precise).  I sucked at it completely in that first engagement, but did enough that the client got value out of it and asked me back.

That engagement was a sink or swim situation, and I managed to do just enough to stay afloat.  I should also say that the group I worked with really wanted to succeed: even though they were deadlocked, the group as a whole had a genuine intent to address the problem. Fortunately we were able to make a small breakthrough in the first session. We ended up doing six more sessions and had a really good outcome for the organisation.

Framing questions

KA: That’s interesting…but also a little bit scary.  A lot of people would find a situation like that quite daunting to facilitate.  In particular, when you walk into a situation where you know a group has been grappling with a problem for a long time, you first need to make sense of it yourself. How do you do that?

PC: Yeah, well, as you do more of it, you gain experience of different situations and domains – for example, not-for-profit organizations, executives of a business, the public sector or what have you. You then begin to notice that the patterns behind complex problems are actually quite similar across different areas. Although I can’t quite put my finger on what exactly I do, I would say that it is largely about “listening to the situation” and “asking the right questions”.

When Jeff Conklin teaches dialogue mapping, he talks about the seven different question types (Editor’s note: Jeff Conklin is the inventor of dialogue mapping. See this post for more on his question types). He really gets you to think about the questions you’re going to ask. It took me a while to realize just how important that is: the power of asking questions in the right way or framing them appropriately. Indeed, the real learning for me began when I realized this, and it happened long after I had mastered the notation.

The fact is: each situation is unique. I approach strategic planning, team development or business analysis in completely different ways. I can’t give you a generic answer on the approach, but certainly nowadays when I’m presented with a scenario, I find that there is not much that is unfamiliar. I’ve seen most of the territory now, perhaps.

KA: So it’s almost like you’ve got a “library” of patterns from which you can find a match to the situation you’re in.

PC: Yes that’s right…and I should also mention that the guys I worked with in my early days of using the craft were also sensitive to this, even though they did not practice dialogue mapping.  One of my earliest gigs was to develop a procurement strategy for a major infrastructure project. We spent half a day – from 8:30 in the morning to 1:00 in the afternoon – just trying to figure out what the first question should be.  It’s conversations like that in the early days, followed by trial and error in actual facilitation scenarios that aided my learning.

KA: That’s interesting, and I’d like to pick up on what you said earlier about the power of asking the right questions. Jeff Conklin has his seven question types which he elaborates at length in his book (and we also talk about them in the Heretic’s Guide).  However, since then, I know that your thinking on this has advanced considerably. Could you tell us a bit more about this?

PC: Yeah, if we ever do a second edition of the Heretic’s Guide, I’ll definitely be covering this kind of stuff.   But, let me try to explain some of the ideas here in brief.

To set the context, I’ll start with one of Jeff’s question types. An important question type is the deontic question, which is a question that a lot of maps start with. A deontic question asks “What ought to be done?” – for example, “What should we do about X?” The aim of such a question is to open up a conversation.

However, deontic questions can be poorly framed. To take a concrete example, suppose one were to ask, “What should we do about increasing X?” – such a question implicitly suggests a course of action, i.e. one that increases X. A well-framed deontic question doesn’t do that; it solicits information in a neutral or open way. (Editor’s Note: For example, a well-framed alternative to the foregoing question would be, “What should we do about X?”)

All that is well and good, and is something I teach in my dialogue mapping courses, as does Jeff in his. However, I once taught dialogue mapping to a bunch of business analysts, and of course told them about the importance of asking deontic questions. Some of the guys told me that they intended to use it at work the following week. Well, I saw them again a few weeks later and naturally asked, “So how did it go?” They said, “Hey, that deontic question just didn’t work!”

I kind of realized then – and in fact I had mentioned it to them, but maybe not stressed it enough – that questions need to be framed to suit the situation. You can ask an open deontic question in a really bad way… or even lead at the wrong time with the wrong question.

The more I thought about it, the more I realized that there are patterns to [framing] questions. In other words, there are ways of asking questions that will lead to better outcomes. As an example, a deontic question might be, “What should our success indicators be on this project?” This is a perfectly valid open question (as per Conklin’s question types). However, in a workshop setting this is probably not a good question to ask because the conversation will go all over the place without reaching any consensus.  Moreover, people who don’t have tolerance for ambiguity would be uncomfortable with a question like that.

A better option would be to reframe the question and ask something like, “If this initiative were highly successful and we looked back on it after, say, 2 years, how would things be different to now?” With that question, what you’re saying to people is: let’s not even worry about problem definition, context, criteria and all that stuff that comes with a deontic question; instead, you’re asking them to tell you about the difference between now and an aspirational future. This is easier to answer. A lot of people will say things like – we’ll have more of X and less of Y, and so on.

On a related note, if you want to understand the long-term implications of “more of X” or “less of Y”, it is not helpful to ask a question like, “What do you think will happen in the long term?” People won’t intuitively understand that, so you won’t get a useful answer. Instead it is better to ask a question like, “What behaviours do you think will change if we do all this sort of stuff?” Now, if you think about it, the immediate outcomes of projects are things like “increased awareness of something” or “better access to something”, but over time you’re looking for changes in behaviours, because that’s when you know that the changes wrought by an initiative have really taken root.

Subtle reframings of this kind yield richer answers that are more meaningful to people. Moreover, when you solicit responses from a group in this manner, you’ll start to see common themes emerge. These are the sorts of subtleties I have come to understand and appreciate through my practice of sensemaking.

Obliquity

KA: That’s fascinating. So what you’re saying is that rather than asking a direct question it is better to ask an oblique one. Is that right?

PC: Yes, and I think that point is worth elaborating. You used the word “oblique” and I know you’re using it deliberately because we’ve talked about this in the past. Essentially, I think the “law of asking questions” is that the more direct the question, the less likely it is that you will get a useful answer.

Seriously, asking a question like “What should our vision be?” is a completely brainless way of getting to a vision. You’re more likely to get a useful answer from a question like “What would our organization look like three years from now, if we achieved all we are setting out to do?” The themes that come out of the answers to these kinds of questions help in answering the direct question.

I’ve learnt that the question that everyone wants answered is never the one you start with. If you start with the direct question, the conversation will meander into all kinds of weird places.

I came across the notion of obliquity in an article (…and I think it was one of the rare times I sent you something instead of the other way around). In the article, the author (John Kay) made the observation that organisations that chased KPIs like earnings per share (for example) generally did not do as well as organisations that had a more holistic vision (Editor’s note: I also recommend Kay’s book on obliquity). One example Kay gives is that of Microsoft, whose objective in the 90s was “a PC on every desk in the world.” Microsoft achieved the earnings per share alright, but it got there by pursuing an oblique goal.

Organisations that chase earnings per share or other financial metrics tend to be like the folks who “seek happiness” directly instead of trying to find it by, say, immersing themselves in activities that they enjoy. I guess I observed that the principle of obliquity – that things are best achieved indirectly – also applies to the art of asking questions.

KA: That makes a lot of sense. Indeed, after we exchanged notes on the article and Kay’s book, I’ve noticed this idea of obliquity popping up in all different kinds of contexts. I’m not sure why this is, but I think it has something to do with the fact that we don’t really know how the future is going to unfold, and obliquity helps open our minds up to possibilities that we would overlook if we took a “straight line from A to B” kind of approach.

PC: Yep, and that brings up an interesting aspect to oblique questions as well. You know, some people – especially those trained in a standard business school curriculum – will be surprised if you ask them an oblique question because it seemingly makes no sense. They might say, “Well, why would you ask that? What we really want to answer is this…”

Well, I’ve found a way to deal with this, and I learnt this from working with a facilitator who is a professor at one of the business schools here (in Perth).  This was in a strategic planning workshop that we co-facilitated.  Before starting, she walked up to the whiteboard and sketched out a very simple strategic planning model – literally a diagram that said here’s our vision, and the vision leads to a mission, which leads to areas of focus which, in turn, lead to processes…a simple causal diagram with a few boxes connected by arrows.

She spent only a minute or so explaining this model; she didn’t do it in any detail. Then she pointed to a particular box and said, “We’re going to talk about this particular one now.” And I don’t know why this is, but when you present a little model like that (which is familiar to the audience) and say that you’re going to focus on a particular aspect of it, people seem to become more receptive to ambiguity, and you can then get as oblique as you like. Perhaps this is because the narrative is then aligned with a mental model that is familiar to them.

What I’ve learnt, in effect, is that you can’t talk about the wonders of complex systems theory to a bunch of rational project managers. Conversely, when I’m dealing with a group of facilitators (who love all that systems theory stuff) I would never draw a management model going from vision to mission to execution. But when dealing with the corporate world, I will often use a model like that. Not to educate them – they already know the model – but purely to reduce their anxiety. After that I can ask them the questions I really want to ask. It’s a subtle trick: you put things in a familiar frame and then, once you have done that, you can get as oblique as you like.

This ties back to a question you asked earlier about how I prepare for a facilitation session: I usually try to figure out the audience first. If I’m dealing with the public sector I might set the stage by talking about wicked problems, whereas with corporate clients I might start with a Strategic Planning 101 sort of presentation. Either way, I find a frame that is familiar to them and then – almost like a sleight of hand – I switch to the questions I really want to ask. Does that make sense?

KA: Yeah, so to summarise: you give them a security blanket and then scare the hell out of them [laughs]

PC: That’s actually pretty well summarized [laughs]; I like where you’re going with that…but I’d put it slightly differently. It’s actually a bit like when you’re trying to get little kids to eat something they don’t want to eat – you go, “see, here’s the choo-choo train” or something like that, and then get them to have a spoonful while they’re focusing on that. In a way it’s like creating a distraction. But the aim is really to couch things in such a way as to get to a point where you can start to have a productive dialogue. And the dialogue itself is driven by powerful questions.

From obliquity to directness

KA: By powerful, I guess you mean oblique?

PC: Yeah, the oblique aspect of questions is a common thread that runs through much of what I do now. Mind you, I don’t stay oblique all the way through a session. I start obliquely because I want to unpack a problem. Eventually, though, as people start to get insights and themes begin to emerge, I become more direct; I start to ask things such as who, when etc…putting names and dates down on the map.

KA: So you get more direct in your questioning as the group starts to reach a common understanding of a problem.

PC: That’s exactly right. But there’s the other side to it (and by the way, you should have a conversation with my colleague Neil on this kind of stuff):  you typically have a mixed audience, the “left-brainers” who are rational engineer types as well as the “hippies” (the so-called “right-brainers”) who want to stay out in conceptual-land. Both groups like to stay in their comfort zone: the engineers don’t like moving to conceptual-land because they see it as a waste of time; on the other hand conceptual people don’t like moving to action because the conceptual world feels safer to them. So I sort of trick the engineers into doing conceptual stuff while also pushing conceptual guys into answering more direct questions.

Other techniques and skills

KA: Interesting. Let’s talk a bit about techniques – I know dialogue mapping is a mainstay of much of what you do. What are some of the other techniques you use [to draw people out of their comfort zones]? You mentioned soft systems theory and a few others; you seem to have quite a tool-chest of techniques to draw upon.

PC: Yeah well, when I got into mapping, I also looked at other techniques. I was interested in what else you could do, so I looked at various gamestorming techniques, graphic facilitation and, of course, many methods based on the principles of soft systems and related theories.

I use techniques from both the right-brained and left-brained ends of the spectrum…and by the way, I apologize to any neuroscientists who might be reading/listening to this because I know they hate the term left/right brained. However, I do find it useful sometimes.

Anyway, a popular technique on the right-brained end of the spectrum is Open Space, which operates almost entirely in conceptual-land. It relies on the wisdom of the crowd; there is no preset agenda, just a theme. People sit in a circle, there’s a Tibetan bell…and on the surface it all seems quite hippy. However, I’ve actually run it on construction projects where folks have come off a building site, dressed in their safety gear – hard hats and all – and participated. And it does work, despite its touchy-feely, hippy image.

On the other hand, once you’ve conceptualized a project you need to get down to the hard work of getting stuff done. One of the first questions that comes up is, “How do we measure success?” This usually boils down to defining KPIs. Now, I would never dialogue map or open space a conversation on KPIs. You might get a few themes from dialogue mapping, but definitely not enough detail. Instead, one of the things I often do is go to an online KPI library (like http://kpilibrary.com, which has over 7000 KPIs) where you can find KPIs relating to any area you can think of, ranging from project management to customer service to quality or sustainability. I’ll then print the relevant ones on cards and use a card-sorting technique: I put people in different groups and ask them to look at specific focus areas [that emerged from the conceptualization phase] and figure out which KPIs are relevant to each.

Why do I do that? Not because I think they will find the KPI. They probably won’t. It’s more because such a process avoids those inevitable epic arguments on what a KPI actually is.

A very effective technique is to spend half a day unpacking a problem via dialogue mapping and getting key themes to emerge. This “conceptualization phase” is done with the whole group. Then, when you want to drill down into detailed actions, it helps to use a divide and conquer approach. This is why I split people up into smaller groups and get them to go off and work on themes that emerged from the conceptualization phase. The aim is to get them to come up with concrete KPIs or even actions. If I’m feeling really evil, when there’s only 10 minutes left, I’ll tell them that they can present only their top four actions or KPIs. This forces them to prioritise things according to value. It’s a bit like a Delphi technique really. Finally, the groups come back together and present all their findings, which I then dialogue map. Once that is done, the larger group (together) will turn to the map and synthesise the outputs of what the smaller groups have done. This example is quite typical of the kind of stuff that I do.

Another example: I did some strategic planning work for local government – this was in the area of urban planning. Now, we did not want them to just copy someone else’s community development plan and “cookie cutter” it. So the very first thing we did was a dialogue mapping session geared towards answering a couple of questions: 1) if the community development plan for this organization was highly successful, how would things be different from how they are now? And, 2) what is unique about this particular area (shire)?

Then, in the second workshop we got some of the best community development plans from around Australia and put each one at a different table. We split people up into groups and got them to spend time at each table. Their job was to note down, on flipcharts, the pros and cons of each plan. The first iteration took about half an hour – presumably because this was the first time many of them were reading a development plan. Once the first iteration was done, people moved to the next table and so on, in round robin fashion.

By the end of that exercise, everyone was a world expert in reading community development plans. By the time people got to their third plan, they were flicking through the pages pretty fast, noting down the things they agreed or disagreed with. Then they came back together and synthesised what the common themes were – what was good, what was bad and so on.

Finally, we dialogue mapped again, and this time the question was, “Given what we have seen in all of the other plans, what are we going to do differently to mitigate the issues we have seen with some of them?”  That pretty much nailed what they were going to do with their plan.

The need to improvise

KA: From what you’re saying it seems that almost every situation you walk into is different, and you almost have to design your approach as you go along.   I suppose you would make a guess or some tentative plans based on your knowledge of the make-up of the audience, but wouldn’t you also have to adjust a lot on-the-fly?

PC: Oh yeah, all the time. And in fact, that is more a help than a hindrance. I’ll tell you why…by example again.

Often groups will tell you what they aspire to do. They might say, “as a general principle, we will do this” or something along those lines.  For me that’s gold because I can use it on them [laughs].

In fact, I did this a couple of days ago in a workshop. Earlier in the workshop they had said, “It’s OK to make mistakes as long as we are honest with each other and upfront about it.” I totally used that on them towards the end of the workshop when I said, “Given that you guys are honest with each other, the question I’d like you to answer is – what keeps you up at night with this project?”

My colleague uses the phrase, “hang them by their own petard” when we do this kind of stuff [laughter]. I guess what we’re doing, though, is calling them out on what they espouse, and getting them to live it. If you can do that in a workshop, it is brilliant. So I’m always on the lookout for opportunities to improvise like that, particularly when it is a matter of (espoused) principle.

There are many times when I’ve been in workshops where the corporate values are hanging on a wall – in a boardroom, right – and I’m witnessing them get completely trashed in the conversation that’s happening. So I like to hold people to account for what’s stated…and these are sneaky, subtle little ways in which one can do that.

KA: I’m sure you come across situations where a certain approach doesn’t go down very well – maybe people start to get defensive or even question the approach. Does that happen, and how do you deal with that?

PC: I’ve never had a situation where people question the approach I use. I guess that’s because we’re able to deal with that as it happens. For example, if I’m going a bit too “hippie” on a group and I see that they need more structure then I’ll change my approach to suit the group and then gently nudge them back to where I want them to be.

I also co-facilitate with other people…and sometimes they’re the ones who design the workshops, or I co-design them with them. Often it is their tolerance for ambiguity that can be a roadblock. One facilitator I work with loves emergence. This is my crass generalization, but anyone who thinks complex systems theory is the be-all and end-all will be happy to let a group get mired in ambiguity. The group might be struggling, but as far as the facilitator is concerned that’s OK, because he or she believes that ambiguity is necessary for an emergent outcome. What they forget is that not everyone has the same level of tolerance for ambiguity.

On the other hand, I also work with highly structured facilitators who follow a set path – “we’ll do this, then we’ll do that and so on”. This approach might not go so well with people who prefer more open-ended approaches.

These sorts of experiences have been handy. When designing my own workshops, I’m the ultimate bower bird: I cherry pick whatever I need and improvise on the fly. So I tend not to worry about the risks of people not finding the workshop of value. That probably comes from a level of confidence too: we’re reasonably confident that we know our craft and have enough experience to deal with unexpected situations.

Coda – capturing organisational knowledge

KA:  Thanks for the insights into sensemaking.  Now, if you don’t mind, I’d like to switch tack and talk about something that your organization – Seven Sigma – is currently involved in. I know you guys started out as a SharePoint outfit, and you’ve been doing some interesting things in SharePoint relating to knowledge management. Could you tell us a bit about that before we wrap up?

PC: Sure. To begin with, dialogue maps are a pretty good knowledge artifact. Anyone who has used the Compendium software will know this (Editor’s note: Compendium is a free software tool that can be used for dialogue mapping). I’ve used it extensively for the last five years and have an “encyclopedia of conversations” that I have mapped. When I go and look at them, they’re as vivid to me as on the day I mapped them. So I’ve always been fascinated by the power of dialogue maps as a visceral way to capture the wisdom of a group at a point in time.

Now SharePoint is a collaborative platform that’s often used for intranets, project portals, knowledge management portals and so forth. It’s a fairly versatile platform. The Compendium software, on the other hand, is not a multi-user, collaborative platform. It’s more like Photoshop or Word in that you use it to create an artefact – a map – but if someone else wants to see the map then he or she has to install Compendium. And it can be a bit of a pain in the butt to install Compendium, as it is a freebie, open source product that doesn’t really fit in an enterprise environment. We’ve therefore always wanted the ability to import maps into SharePoint, and my colleague Chris [Tomich] had already started writing some code to do that around the time we first got into dialogue mapping.

However, my own Aha moment came later; and come to think of it, the fact that we’re having beers in this conversation is relevant to this story…

I was dialogue mapping a group of executives about 4-5 years ago; it was a team-building exercise built around a lessons learned workshop. The purpose of the workshop was to improve the collaborative and team maturity of this group. [As a part of this exercise] the group was reviewing some old projects, doing a sort of retrospective lessons learned. We got to this one project, and someone complained about an organisational policy that had caused an issue on the project.  Now as it happened, there was this guy in the group who knew how this policy had come about (I knew this guy, by the way, and I also knew that he was about to retire). He said, “Oh yeah, well that happened about 30 years ago, and it was on so-and-so project.” He then proceeded to elaborate on it.

Well, I knew this guy was about to retire. I also knew that the organization does this “phased retirement” thing, where people who are about to retire write documentation about what they do, mentor their successors etc. before they leave. I remember thinking to myself, “there’s no way in hell that he would have written that down in his documentation.”  I just knew it: he had to be in that particular conversation for him to have remembered this.

Then my next thought…and as I’m mapping, I’m having this thought … “man, someone just ought to give him a beer, sit him down, and ask him about these kinds of old projects. I could map that video…how hard could that be? If one can map live conversations then surely one can map videos.”  In fact, videos should be simpler because you can pause them, which is something you can’t do with live conversations.

So that was literally the little spark. It started with me thinking about how great it would be if this guy could spend even half his time recording his reflections…and this could take different forms: maybe a grad asking him questions in a mentoring scenario, or another person he has worked with for years, and they could reminisce over various projects. The possibilities were endless. But the basic idea was simple: to try to capture those sorts of water cooler or pub conversations, or the ones you have at conferences. That’s where we get many of our insights – it’s the stories, the war stories, through which we learn. That kind of stuff never gets into the manuals or knowledge-base articles.

Indeed, stories are the key to those unwritten insights about, say, when it is OK to break the rules. That kind of stuff can never be captured in the processes, manuals or procedures. One of your pieces highlights this beautifully – it’s one of the parables you’ve written I think, where an experienced project manager suggested to the novice that he should be listening to the stories rather than focusing on the body of knowledge he was studying.  And that is completely true.

So that was, to coin a pun, the “glimmer of an idea” – because the product is called Glyma. The idea was to capture expert knowledge by mapping it and storing the map in SharePoint. SharePoint has a great search engine and we already knew that dialogue maps are a great way to capture conversations in a way that makes it easy to understand and navigate rationale (or the logic of a conversation).  If we could do dialogue maps live, then we sure as hell could map videos. Moreover SharePoint also offers the possibility of tagging, adding feeds etc. – the kinds of things that portals these days are good at. It occurred to us that no one had really done that before.

Sure, there are plenty of story capture efforts, say where people record reflections on video. But because the resulting videos tend to be quite big, they are usually edited down to 15-minute “elevator pitch” type presentations. But then all the good stuff is gone; indeed, you and I have had many of these brief conversations where you’re summarising something terrific you’ve read and I’ll go, “Yeah well, that doesn’t sound so interesting to me.” The point is: you can’t compress insight into a convenient 10-minute video with nice music. So our idea was – well, don’t do that; take the video as it is and map it. Then, if you click on a node – say an idea node or a question node – Glyma will play the video from the point in time where the idea or question came up. You don’t have to sit through the entire thing. Moreover, when you do a search and get a series of results, you can click on a result and it plays that bit straight away.
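(Editor’s note: to make the mechanism concrete, here is a minimal, purely illustrative Python sketch of the kind of node-to-timestamp mapping Paul describes – nodes in a dialogue map pointing into a recorded video, so that a click or a search hit starts playback at the relevant moment. All names and structures below are my own assumptions for illustration; they do not reflect Glyma’s actual implementation.)

from dataclasses import dataclass

@dataclass
class MapNode:
    node_type: str       # e.g. "question", "idea", "pro", "con" (hypothetical labels)
    text: str            # the point captured on the map
    video_offset: float  # seconds into the source video where the point came up

def search(nodes, term):
    # Return the nodes whose captured text mentions the search term.
    return [n for n in nodes if term.lower() in n.text.lower()]

def playback_position(node):
    # Clicking a node would start playback here rather than at the beginning.
    return node.video_offset

# Example: find where a policy was discussed and jump straight to that point.
nodes = [
    MapNode("question", "Why does this procurement policy exist?", 1240.0),
    MapNode("idea", "It dates back to a project about 30 years ago", 1293.5),
]
for hit in search(nodes, "policy"):
    print(f"Play from {playback_position(hit):.0f}s: {hit.text}")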

So that was really the inspiration for Glyma…and it will see the light of day very soon.

Actually, we’ll be putting a beta site out early next week (Editor’s note: the site has since gone live; I urge you to check it out). By the way, Glyma has been four years in the making. One of the last things on our bucket list of things to do while running a consultancy was to put an innovative new product out and see if people like it. So that’s where we are going with that.

KA: That sounds very interesting. The timing should work out well because this conversation will be posted in a week or two as well. I wish you luck with Glyma; I’ve seen some early versions of it and it looks really good. I look forward to seeing how it does in the marketplace and what sort of reception it receives. I certainly hope it gets the reception it deserves, because it is a tremendous idea.

PC: Well thank you; I appreciate your saying that…and we’ll see if you still say that once you’ve mapped this video because that might be your homework. [laughs]

KA: [laughs] Alright, great mate.  Well, thanks for your time. I think that’s been a really interesting conversation. We’ll chat about Glyma further after it’s been out for a while.

PC: Yeah absolutely.

KA: Cheers mate.

PC: See ya.

Written by K

June 18, 2014 at 7:31 am

Six heresies for business intelligence

with 10 comments

What is business intelligence?

I recently asked a few acquaintances to answer this question without referring to that great single point of truth in the cloud.  They duly came up with a variety of  responses ranging from data warehousing and the names of specific business intelligence tools to particular functions such as reporting or decision support.

After receiving their responses, I did what I asked my respondents not to: I googled the term.  Here are a few samples of what I found:

According to CIO magazine, Business intelligence is an umbrella term that refers to a variety of software applications used to analyze an organization’s raw data.

Wikipedia, on the other hand, tells us that BI is a set of theories, methodologies, architectures, and technologies that transform raw data into meaningful and useful information for business purposes.

Finally, Webopedia tells us that BI [refers to] the tools and systems that play a key role in the strategic planning process of the corporation.

What’s interesting about the above responses and definitions is that they focus largely on processes and methodologies or tools and techniques. Now, without downplaying the importance of either, I think that many of the problems of business intelligence practice come from taking a perspective that is overly focused on methodology and technique. In this post, I attempt to broaden this perspective by making some potentially controversial statements – or heresies – that challenge this view. My aim is not so much to criticize current practice as to encourage – or provoke – business intelligence professionals to take a closer look at some of the assumptions that underlie their practices.

The heresies

Without further ado, here are my six heresies for business intelligence practice (in no particular order).

A single point of truth is a mirage

Many organisations embark on ambitious programs to build enterprise data warehouses – unified data repositories that serve as a single source of truth for all business-relevant data.  Leaving aside the technical  and business issues associated with establishing definitive data sources and harmonizing data, there is the more fundamental question of what is meant by truth.

The most commonly accepted notion of truth is that information (or data in a particular context) is true if it describes something as it actually is. A major issue with this viewpoint is that data (or information) can never fully describe a real-world object or event. For example, when a sales rep records a customer call, he or she notes down only what is required by the customer management system. Other data that may well be more important is not captured or is relegated to a “Notes” or “Comments” field that is rarely if ever searched or accessed. Indeed, data represents only a fraction of the truth, however one chooses to define it – more on this below.

Some might say that it is naïve to expect our databases to capture all aspects of reality, and that what is needed is a broad consensus between all relevant stakeholders as to what constitutes the truth. The problem with this is that such a consensus is often achieved by means that are not democratic. For example, a KPI definition chosen by a manager may be hotly contested by an employee. Nevertheless, the employee has to accept it because that is the way (many) organisations work. Another significant issue is that the notion of relevant stakeholders is itself problematic, because it is often difficult to come up with a clear criterion by which to define relevance.

There are other ways to approach the notion of truth: for example, one might say that a piece of data is true as long as it is practically useful to deem it so. Such a viewpoint, though common, is flawed because utility is in the eye of the beholder: a sales manager may think it useful to believe a particular KPI whereas a sales rep might disagree (particularly if the KPI portrays the rep in a bad light!).

These varied interpretations of what constitutes truth have implications for the notion of a single point of truth. For one, the various interpretations are incommensurate – they cannot be judged by the same standard. Further, different people may interpret the same piece of data differently. This is something that BI professionals have likely come across – say, when attempting to come up with a harmonized definition of a customer record.

In short: the notion of a single point of truth is problematic because there is a great deal of ambiguity about what constitutes a truth.

There is no such thing as raw data

In his book, Memory Practices in the Sciences, Geoffrey Bowker wrote, “Raw data is both an oxymoron and a bad idea; to the contrary, data should be cooked with care.”  I love this quote because it tells a great truth (!) about so-called “raw” data.

To elaborate: raw data is never unprocessed. Firstly, the data collector always makes a choice as to what data will be collected and what will not, so in this sense data already has meaning imposed on it. Secondly, and perhaps more importantly, the method of collection affects the data. For example, responses to a survey depend on how the questions are framed and how the survey itself is carried out (anonymous, face-to-face etc.). This is also true for more “objective” data such as costs and expenses, where the actual numbers depend on the specific accounting practices used in the organization. So, raw data is an oxymoron because data is never raw, and as Bowker tells us, we need to ensure that the filters we apply and the methods of collection we use are such that the resulting data is “cooked with care.”

In short: data is never raw, it is always “cooked.”

There are no best practices for business intelligence, only appropriate ones

Many software shops and consultancies devise frameworks and methodologies for business intelligence which they claim are based on best or proven practices. However, those who swallow that line and attempt to implement the practices often find that the results obtained are far from best.

I have discussed the shortcomings of best practices in a general context in an earlier article, and (at greater length) in my book. A problem with best practice approaches is that they assume a universal yardstick of what is best. As a corollary, this also suggests that practices can be transplanted from one organization to another wholesale, without extensive customisation. This overlooks the fact that organisations are unique, and what works in one may not work in another.

A deeper issue is that much of the knowledge pertaining to best practices is tacit – that is, it cannot be codified in written form. Indeed, what differentiates good business intelligence developers or architects from great ones is not what they learnt from a textbook (or in a training course), but how they actually practice their craft – the things they do instinctively and would find hard to put into words.

So, instead of looking to import best practices from your favourite vendor, it is better to focus on understanding what goes on in your environment. A critical examination of your environment and processes will reveal opportunities for improvement. These incremental improvements will cumulatively add up to your very own, customized “best practices.”

In short: develop your own business intelligence best practices rather than copying those peddled by “experts.”

Business intelligence does not support strategic decision-making

One of the stated aims of business intelligence systems is to support better business decision-making in organisations (see the Wikipedia article, for example). It is true that business intelligence systems are perfectly adequate – even indispensable – for certain decision-making situations. Examples include financial reporting (when done right!) and other operational reporting (inventory, logistics etc.). These generally tend to be routine situations with clear-cut decision criteria and well-defined processes – i.e. decisions that can be programmed.

In contrast, decisions pertaining to strategic matters cannot be programmed. Examples of such decisions include dealing with an uncertain business environment, responding to a new competitor etc. The reason such decisions cannot be programmed is that they depend on a host of factors other than data and are generally made in situations that are ambiguous. Typically, people use deliberative methods – i.e. methods based on argumentation – to arrive at decisions on such matters. The sad fact is that the major business intelligence tools on the market lack support for deliberative decision-making. Check out this post for more on what can be done about this.

In short: business intelligence does not support strategic decision-making.

Big data is not the panacea it is trumpeted to be

One of the more recent trends in business intelligence is the move towards analyzing increasingly large, diverse, rapidly changing datasets – what goes under the umbrella term big data.  Analysing these datasets entails the use of new technologies (e.g. Hadoop and NoSQL)  as well as statistical techniques that are not familiar to many mainstream business intelligence professionals.

Much has been claimed for big data; in fact, one might say too much.  In this article Tim Harford (aka the Undercover Economist) summarises the four main claims of “big data cheerleaders” as follows (the four phrases below are quoted directly from the article):

  1. Data analysis produces uncannily accurate results.
  2. Every single data point can be captured, making old statistical sampling techniques obsolete.
  3. It is passé to fret about what causes what, because statistical correlation tells us what we need to know.
  4. Scientific or statistical models aren’t needed.

The problem, as Harford points out, is that all of these claims are incorrect.

Firstly, the accuracy of the results that come out of a big data analysis depends critically on how the analysis is formulated. However, even analyses based on well-founded assumptions can get it wrong, as is illustrated in this article about Google Flu Trends.

Secondly, it is pretty obvious that it is impossible to capture every single data point (also relevant here is the discussion on raw data above – i.e. how data is selected for inclusion).

The third claim is simply absurd. The fact is that detecting a correlation is not the same as understanding what is going on – a point made rather nicely by Dilbert. Enough said, I think.

Fourthly, the claim that scientific or statistical models aren’t needed is simply ill-informed. As any big data practitioner will tell you, big data analysis relies on statistics. Moreover, as mentioned earlier, a correlation-based understanding is no understanding at all –  it cannot be reliably extrapolated to related situations without the help of hypotheses and (possibly tentative)  models of how the phenomenon under study works.

Finally, as Danah Boyd and Kate Crawford point out in this paper, big data changes the meaning of what it means to know something…and it is highly debatable as to whether these changes are for the better. See the paper for more on this point. (Acknowledgement: the title of this post is inspired by the title of the Boyd-Crawford paper.)

In short:  business intelligence practitioners should not uncritically accept the pronouncements of big data evangelists and vendors.

Business intelligence has ethical implications

This heresy applies to much more than business intelligence: any human activity that affects other people has an ethical dimension. Many IT professionals tend to overlook this facet of their work because they are unaware of it – and sometimes prefer to remain so. The fact is, the decisions business intelligence professionals make with respect to usability, display, testing etc. have a potential impact on the people who use their applications. The impact may range from the trivial – having to click one button or filter too many before getting a report – to the significant, such as a data error that leads to a poor business decision.

In short: business intelligence professionals ought to consider how their artefacts and applications affect their users.

In closing

This brings me to the end of my heresies for business intelligence. I suspect there will be a few practitioners who agree with me and (possibly many) others who don’t…and some of the latter may even find specific statements provocative. If so, I consider my job done, for my intent was to get business intelligence practitioners to question a few unquestioned tenets of their profession.

Written by K

April 3, 2014 at 9:29 pm
