Eight to Late

Sensemaking and Analytics for Organizations

Archive for the ‘Wicked Problems’ Category

Learning, evolution and the future of work


The Janus-headed rise of AI has prompted many discussions about the future of work.  Most, if not all, are about AI-driven automation and its consequences for various professions. We are warned to prepare for this change by developing skills that cannot be easily “learnt” by machines.  This sounds reasonable at first, but less so on reflection: if skills that were thought to be uniquely human less than a decade ago can now be performed, at least partially, by machines, there is no guarantee that any specific skill one chooses to develop will remain automation-proof in the medium-term future.

This raises the question of what we can do, as individuals, to prepare for a machine-centric workplace. In this post I offer a perspective on this question based on Gregory Bateson’s writings as well as my consulting and teaching experiences.

Levels of learning

Given that humans are notoriously poor at predicting the future, it should be clear that hitching one’s professional wagon to a specific set of skills is not a good strategy. Learning a set of skills may pay off in the short term, but it is unlikely to work in the long run.

So what can one do to prepare for an ambiguous and essentially unpredictable future?

To answer this question, we need to delve into an important, yet oft-overlooked aspect of learning.

A key characteristic of learning is that it is driven by trial and error.  To be sure, intelligence may help winnow out poor choices at some stages of the process, but one cannot eliminate error entirely. Indeed, it is not desirable to do so because error is essential for that “aha” instant that precedes insight.  Learning therefore has a stochastic element: the specific sequence of trial and error followed by an individual is unpredictable and likely to be unique. This is why everyone learns differently: the mental model I build of a concept is likely to be different from yours.

In a paper entitled The Logical Categories of Learning and Communication, Bateson observed that the stochastic nature of learning has an interesting consequence. As he notes:

If we accept the overall notion that all learning is in some degree stochastic (i.e., contains components of “trial and error”), it follows that an ordering of the processes of learning can be built upon a hierarchic classification of the types of error which are to be corrected in the various learning processes.

Let’s unpack this claim by looking at his proposed classification:

Zero order learning –    Zero order learning refers to situations in which a given stimulus (or question) results in the same response (or answer) every time. Any instinctive behaviour – such as a reflex response on touching a hot kettle – is an example of zero order learning.  Such learning is hard-wired in the learner, who responds with the “correct” option to a fixed stimulus every single time. Since the response does not change with time, the process is not subject to trial and error.

First order learning (Learning I) –  Learning I is where an individual learns to select a correct option from a set of similar elements. It involves a specific kind of trial and error that is best explained through a couple of examples. The  canonical example of Learning I is memorization: Johnny recognises the letter “A” because he has learnt to distinguish it from the 25 other similar possibilities. Another example is Pavlovian conditioning wherein the subject’s response is altered by training: a dog that initially salivates only when it smells food is trained, by repetition, to salivate when it hears the bell.

A key characteristic of Learning I is that the individual learns to select the correct response from a set of comparable possibilities – comparable because the possibilities are of the same type (e.g. pick a letter from the alphabet). Consequently, first order learning cannot lead to a qualitative change in the learner’s response. Much of traditional school and university teaching is geared toward first order learning: students are taught to develop the “correct” understanding of concepts and techniques via a repetition-based process of trial and error.

As an aside, note that much of what goes under the banner of machine learning and AI can also be classed as first order learning.
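
To make the analogy concrete, here is a minimal sketch of supervised classification doing exactly what Learning I describes: by repeated exposure to labelled examples, the model learns to pick the “correct” response from a fixed set of possibilities. The dataset and setup are my own illustrative choices, not anything from Bateson or from the argument above.

```python
# A minimal sketch: supervised classification as first-order learning.
# By repeated exposure to labelled examples, the model learns to pick the
# "correct" response from a fixed set of possibilities (here, digits 0-9).
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.3, random_state=42)

# Training is a guided process of trial and error over items of the same type.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# The learned behaviour never changes in kind: the model always selects one
# label from the same fixed set, which is why this counts as Learning I.
print("accuracy:", model.score(X_test, y_test))
```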

Second order learning (Learning II) –  Second order learning involves a qualitative change in the learner’s response to a given situation. Typically, this occurs when a learner sees a familiar problem situation in a completely new light, thus opening up new possibilities for solutions.  Learning II therefore necessitates a higher order of trial and error, one that is beyond the ken of machines, at least at this point in time.

Complex organisational problems, such as determining a business strategy, require a second order approach because they cannot be precisely defined and therefore lack an objectively correct solution. To echo Horst Rittel, solutions to such problems are not true or false, but better or worse.

Much of the teaching that goes on in schools and universities hinders second order learning because it implicitly conditions learners to frame problems in ways that make them amenable to familiar techniques. However, as Russell Ackoff noted, “outside of school, problems are seldom given; they have to be taken, extracted from complex situations…”   Two  aspects of this perceptive statement bear further consideration. Firstly, to extract a problem from a situation one has to appreciate or make sense of  the situation.  Secondly,  once the problem is framed, one may find that solving it requires skills that one does not possess. I expand on the implications of these points in the following two sections.

Sensemaking and second order learning

In an earlier piece, I described sensemaking as the art of collaborative problem formulation. There is a huge variety of sensemaking approaches; the Gamestorming site describes many of them in detail.   Most of these are aimed at exploring a problem space by harnessing the collective knowledge of a group of people who have diverse, even conflicting, perspectives on the issue at hand.  The greater the diversity, the more complete the exploration of the problem space.

Sensemaking techniques help in elucidating the context in which a problem lives. This refers to the problem’s environment, and in particular the constraints that the environment imposes on potential solutions to the problem.  As Bateson puts it, context is “a collective term for all those events which tell an organism among what set of alternatives [it] must make [its] next choice.”  But this raises the question of how these alternatives are to be determined.  The question cannot be answered directly because it depends on the specifics of the environment in which the problem lives.  Surfacing these specifics by asking the right questions is the task of sensemaking.

As a simple example, if I request you to help me formulate a business strategy, you are likely to begin by asking me a number of questions such as:

  • What kind of business are you in?
  • Who are your customers?
  • What’s the competitive landscape?
  • …and so on

Answers to these questions fill out the context in which the business operates, thus making it possible to formulate a meaningful strategy.

It is important to note that context rarely remains static; it evolves over time. Indeed, many companies have faded away because they failed to appreciate changes in their business context: Kodak is a well-known example, and there are many more.  So organisations must evolve too. However, it is a mistake to think of an organisation and its environment as evolving independently; the two always evolve together.  Such co-evolution is as true of natural systems as it is of social ones. As Bateson tells us:

…the evolution of the horse from Eohippus was not a one-sided adjustment to life on grassy plains. Surely the grassy plains themselves evolved [on the same footing] with the evolution of the teeth and hooves of the horses and other ungulates. Turf was the evolving response of the vegetation to the evolution of the horse. It is the context which evolves.

Indeed, one can think of evolution by natural selection as a process by which nature learns (in a second-order sense).

The foregoing discussion points to another problem with traditional approaches to education: we are implicitly taught that problems, once solved, stay solved. It is seldom so in real life because, as we have noted, the environment evolves even if the organisation remains static. In the worst case scenario (which happens often enough) the organisation will die if it does not adapt appropriately to changes in its environment. If this is true, then it seems that second-order learning is important not just for individuals but for organisations as a whole. This harks back to the notion of the learning organisation, developed and evangelized by Peter Senge in the early 90s. A learning organisation is one that continually adapts itself to a changing environment. As one might imagine, it is an ideal that is difficult to achieve in practice. Indeed, attempts to create learning organisations have often ended up with paradoxical outcomes.  In view of this, it seems more practical for organisations to focus on developing what one might call learning individuals – people who are capable of adapting to changes in their environment by continual learning.

Learning to learn

Cliches aside, the modern workplace is characterised by rapid, technology-driven change. It is difficult for an  individual to keep up because one has to:

    • Figure out which changes are significant and therefore worth responding to.
    • Be capable of responding to them meaningfully.

The media hype about the sexiest job of the 21st century and the like further fuels the fear of obsolescence.  One feels an overwhelming pressure to do something. The old adage about combating fear with action holds true: one has to do something, but the question then is, what meaningful action can one take?

The fact that this question arises points to the failure of traditional university education. Because of the undue focus on teaching specific techniques, the more important second-order skill of learning to learn has fallen by the wayside.  In reality, though, it is now easier than ever to learn new skills on one’s own. When I was hired as a database architect in 2004, there were few quality resources available for free. Ten years later, I was able to start teaching myself machine learning using top-notch software, backed by countless quality tutorials in blog and video formats. However, I wasted a lot of time in getting started because it took me a while to get over my reluctance to explore without a guide. Cultivating the habit of learning on my own earlier would have made it a lot easier.

Back to the future of work

When industry complains about new graduates being ill-prepared for the workplace, educational institutions respond by updating curricula with more (New!! Advanced!!!) techniques. However, the complaints continue and  Bateson’s notion of second order learning tells us why:

  • Firstly, problem formulation is distinct from problem solving; the distinction is akin to that between human and machine intelligence.
  • Secondly, one does not know what skills one may need in the future, so instead of learning specific skills one has to learn how to learn.

In my experience,  it is possible to teach these higher order skills to students in a classroom environment. However, it has to be done in a way that starts from where students are in terms of skills and dispositions and moves them gradually to less familiar situations. The approach is based on David Cavallo’s work on emergent design which I have often used in my  consulting work.  Two examples may help illustrate how this works in  the classroom:

  • Many analytically-inclined people think sensemaking is a waste of time because they see it as “just talk”. So, when teaching sensemaking, I begin with quantitative techniques to deal with uncertainty, such as Monte Carlo simulation (a minimal sketch along these lines appears after this list), and then gradually introduce examples of uncertainties that are hard if not impossible to quantify. This progression naturally leads on to problem situations in which they see the value of sensemaking.
  • When teaching data science, it is difficult to comprehensively cover basic machine learning algorithms in a single semester. However, students are often reluctant to explore on their own because they tend to be daunted by the mathematical terminology and notation. To encourage exploration (i.e. learning to learn) we use a two-step approach: a) classes focus on intuitive explanations of algorithms and the commonalities between concepts used in different algorithms – the classes are not lectures but interactive sessions involving lots of exercises and Q&A; b) the assignments go beyond what is covered in the classroom (but still well within reach of most students), which forces them to learn on their own. The approach works: just the other day, my wonderful co-teacher, Alex, commented on the amazing learning journey of some of the students – so tentative and hesitant at first, but well on their way to becoming confident data professionals.
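
For readers curious about what that first quantitative step looks like, here is a minimal Monte Carlo sketch. The tasks, durations and triangular distributions are invented for illustration; they are not material from my course. It estimates the completion time of two sequential tasks whose durations are uncertain.

```python
# A minimal Monte Carlo sketch: estimate the completion time of two
# sequential tasks whose durations are uncertain (triangular distributions).
import numpy as np

rng = np.random.default_rng(42)
n_trials = 100_000

# Optimistic, most likely and pessimistic durations (in days) for each task.
task_a = rng.triangular(left=2, mode=4, right=9, size=n_trials)
task_b = rng.triangular(left=3, mode=5, right=12, size=n_trials)
total = task_a + task_b

print("mean completion time:", round(total.mean(), 1), "days")
print("90th percentile:     ", round(np.percentile(total, 90), 1), "days")
# The point made in class: simulation handles *quantifiable* uncertainty well,
# but says nothing about uncertainties we cannot even enumerate or measure.
```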

In the end, though, whether or not an individual learner learns depends on the individual. As Bateson once noted:

Perhaps the best documented generalization in the field of psychology is that, at any given moment, the behavioral characteristics of a mammal, and especially of [a human], depend upon the previous experience and behavior of that individual.

The choices we make when faced with change depend on our individual natures and experiences. Educators can’t do much about the former but they can facilitate more meaningful instances of the latter, even within the confines of the classroom.

Written by K

July 5, 2018 at 6:05 pm

Risk management and organizational anxiety


In practice risk management is a rational, means-end based process: risks are identified, analysed and then “solved” (or mitigated).  Although these steps seem to be objective, each of them involves human perceptions, biases and interests. Where Jill sees an opportunity, Jack may see only risks.

Indeed, the problem of differences in stakeholder perceptions is broader than risk analysis. The recognition that such differences in world-views may be irreconcilable is what led Horst Rittel to coin the now well-known term, wicked problem.   These problems tend to be made up of complex, interconnected and interdependent issues, which makes them difficult to tackle using standard rational-analytical methods of problem solving.

Most high-stakes risks that organisations face have elements of wickedness – indeed any significant organisational change is fraught with risk. Murphy rules; things can go wrong, and they often do. The current paradigm of risk management, which focuses on analyzing and quantifying risks using rational methods, is not broad enough to account for the wicked aspects of risk.

I had been thinking about this for a while when I stumbled on a fascinating paper by Robin Holt entitled, Risk Management: The Talking Cure, which outlines a possible approach to analysing interconnected risks. In brief, Holt draws a parallel between psychoanalysis (as a means to tackle individual anxiety) and risk management (as a means to tackle organizational anxiety).  In this post, I present an extensive discussion and interpretation of Holt’s paper. Although more about the philosophy of risk management than its practice, I found the paper interesting, relevant and thought provoking. My hope is that some readers might find it so too.

Background

Holt begins by noting that modern life is characterized by uncertainty. Paradoxically, technological progress, which should have increased our sense of control over our surroundings and lives, has actually heightened our personal feelings of uncertainty. Moreover, this sense of uncertainty is not allayed by rational analysis. On the contrary, such analysis may even heighten it by, for example, drawing our attention to risks that we may otherwise have remained unaware of. Risk thus becomes a lens through which we perceive the world. The danger is that this can paralyze.  As Holt puts it,

…risk becomes the only backdrop to perceiving the world and perception collapses into self-inhibition, thereby compounding uncertainty through inertia.

Most individuals know this through experience: most of us have at one time or another been frozen into inaction because of perceived risks.  We also “know” at a deep personal level that the standard responses to risk are inadequate because many of our worries tend to be inchoate and therefore can neither be coherently articulated nor analysed. In Holt’s words:

..People do not recognize [risk] from the perspective of a breakdown in their rational calculations alone, but because of threats to their forms of life – to the non-calculative way they see themselves and the world. [Mainstream risk analysis] remains caught in the thrall of its own ‘expert’ presumptions, denigrating the very lay knowledge and perceptions on the grounds that they cannot be codified and institutionally expressed.

Holt suggests that risk management should account for the “codified, uncodified and uncodifiable aspects of uncertainty from an organizational perspective.” This entails a mode of analysis that takes into account different, even conflicting, perspectives in a non-judgemental way. In essence, he suggests “talking it over” as a means to increase awareness of the contingent nature of risks rather than a means of definitively resolving them.

Shortcomings of risk analysis

The basic aim of risk analysis (as it is practiced) is to contain uncertainty within set bounds that are determined by an organisation’s risk appetite.  As mentioned earlier, this process begins by identifying and classifying risks. Once this is done, one determines the probability and impact of each risk. Then, based on priorities and resources available (again determined by the organisation’s risk appetite) one develops strategies to mitigate the risks that are significant from the organisation’s perspective.
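
As a concrete, deliberately simplified illustration of this textbook process, the sketch below scores a handful of risks by probability and impact and flags those that exceed a risk-appetite threshold. The risks, figures and threshold are all invented for illustration; they are not drawn from Holt's paper.

```python
# A deliberately simplified sketch of textbook risk analysis:
# score risks by probability x impact and compare against a risk-appetite threshold.
risks = [
    {"name": "Key supplier fails",       "probability": 0.10, "impact": 500_000},
    {"name": "Scope creep on project X", "probability": 0.60, "impact": 120_000},
    {"name": "Data centre outage",       "probability": 0.05, "impact": 900_000},
]

RISK_APPETITE = 50_000  # expected-loss threshold (an assumed figure)

for risk in sorted(risks, key=lambda r: r["probability"] * r["impact"], reverse=True):
    expected_loss = risk["probability"] * risk["impact"]
    action = "mitigate" if expected_loss > RISK_APPETITE else "accept"
    print(f'{risk["name"]:<28} expected loss = {expected_loss:>9,.0f}  -> {action}')
```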

However, the messiness of organizational life makes it difficult to see risk in such a clear-cut way. We may pretend to be rational about it, but in reality we perceive it through the lens of our backgrounds, interests and experiences.  Based on these perceptions we rationalize our action (or inaction!) and simply get on with life. As Holt writes:

The concept [of risk] refers to…the mélange of experience, where managers accept contingencies without being overwhelmed to a point of complete passivity or confusion, Managers learn to recognize the differences between things, to acknowledge their and our limits. Only in this way can managers be said to make judgements, to be seen as being involved in something called the future.

Then, in a memorable line, he goes on to say:

The future, however, lasts a long time, so much so as to make its containment and prediction an often futile exercise.

Although one may well argue that this is not the case for many organizational risks, it is undeniable that certain mitigation strategies (for example, accepting risks that turn out to be significant later) may have significant consequences in the not-so-near future.

Advice from a politician-scholar

So how can one address the slippery aspects of risk – the things people sense intuitively, but find difficult to articulate?

Taking inspiration from Machiavelli, Holt suggests reframing risk management as a means to determine wise actions in the face of the contradictory forces of fortune and necessity.  As Holt puts it:

Necessity describes forces that are unbreachable but manageable by acceptance and containment—acts of God, tendencies of the species, and so on. In recognizing inevitability, [one can retain one’s] position, enhancing it only to the extent that others fail to recognize necessity. Far more influential, and often confused with necessity, is fortune. Fortune is elusive but approachable. Fortune is never to be relied upon: ‘The greatest good fortune is always least to be trusted’; the good is often kept underfoot and the ridiculous elevated, but it provides [one] with opportunity.

Wise actions involve resolve and cunning (which I interpret as political nous). This entails understanding that we do not have complete (or even partial) control over events that may occur in the future. The future is largely unknowable as are people’s true drives and motivations. Yet, despite this, managers must act.  This requires personal determination together with a deep understanding of the social and political aspects of one’s environment.

And a little later,

…risk management is not the clear conception of a problem coupled to modes of rankable resolutions, or a limited process, but a judgemental  analysis limited by the vicissitudes of budgets, programmes, personalities and contested priorities.

In short: risk management in practice tends to be far removed from how it is portrayed in textbooks and the professional literature.

The wickedness of risk management

Most managers and those who work under their supervision have been schooled in the rational-scientific approach to problem solving. It is no surprise, therefore, that they use it to manage risks: they gather and analyse information about potential risks, formulate potential solutions (or mitigation strategies) and then implement the best one (according to predetermined criteria). However, this method works only for problems that are straightforward or tame, rather than wicked.

Many of the issues that risk managers are confronted with are wicked, messy or both.  Often though, such problems are treated as being tame.   Reducing a wicked or messy problem to one amenable to rational analysis invariably entails overlooking  the views of certain stakeholder groups or, worse, ignoring key  aspects of the problem.  This may work in the short term, but will only exacerbate the problem in the longer run. Holt illustrates this point as follows:

A primary danger in mistaking a mess for a tame problem is that it becomes even more difficult to deal with the mess. Blaming ‘operator error’ for a mishap on the production line and introducing added surveillance is an illustration of a mess being mistaken for a tame problem. An operator is easily isolated and identifiable, whereas a technological system or process is embedded, unwieldy and, initially, far more costly to alter. Blaming operators is politically expedient. It might also be because managers and administrators do not know how to think in terms of messes; they have not learned how to sort through complex socio-technical systems.

It is important to note that although many risk management practitioners recognize the essential wickedness of the issues they deal with, the practice of risk management is not quite up to the task of dealing with such matters.  One step towards closing this gap is to develop a shared (enterprise-wide) understanding of risks by soliciting input from diverse stakeholder groups, some of whom may hold opposing views.

The skills required to do this are very different from the analytical techniques that are the focus of problem solving and decision making techniques that are taught in colleges and business schools.  Analysis is replaced by sensemaking – a collaborative process that harnesses the wisdom of a group to arrive at a collective understanding of a problem and thence a common  commitment to a course of action. This necessarily involves skills that do not appear in the lexicon of rational problem solving: negotiation, facilitation, rhetoric and those of the same ilk that are dismissed as being of no relevance by the scientifically oriented analyst.

In the end though, even this may not be enough: different stakeholders may perceive a given “risk” in wildly different ways, so much so that no consensus can be reached.  The problem is that the current framework of risk management requires the analyst to perform an objective analysis of the situation or problem, even in situations where this is not possible.

To get around this Holt suggests that it may be more useful to see risk management as a way to encounter problems rather than analyse or solve them.

What does this mean?

Holt sees risk management as providing a forum in which people can talk about risks openly:

To enable organizational members to encounter problems, risk management’s repertoire of activity needs to engage their all too human components: belief, perception, enthusiasm and fear.

This gets to the root of the problem: risk matters because it increases anxiety and generally affects people’s sense of wellbeing. Given this, it is no surprise that Holt’s proposed solution draws on psychoanalysis.

The analogy between psychoanalysis and risk management

Any discussion of psychoanalysis –especially one that is intended for an audience that is largely schooled in rational/scientific methods of analysis – must begin with the acknowledgement that the claims of psychoanalysis cannot be tested. That is, since psychoanalysis speaks of unobservable “objects” such as the ego and the unconscious, any claims it makes about these concepts cannot be proven or falsified.

However, as Holt suggests, this is exactly what makes it a good fit for encountering (as opposed to analyzing) risks. In his words:

It is precisely because psychoanalysis avoids an overarching claim to produce testable, watertight, universal theories that it is of relevance for risk management. By so avoiding universal theories and formulas, risk management can afford to deviate from pronouncements using mathematical formulas to cover the ‘immanent indeterminables’ manifest in human perception and awareness and systems integration.

His point is that there is a clear parallel between psychoanalysis and the individual, and risk management and the organisation:

We understand ourselves not according to a template but according to our own peculiar, beguiling histories. Metaphorically, risk management can make explicit a similar realization within and between organizations. The revealing of an unconscious world and its being in a constant state of tension between excess and stricture, between knowledge and ignorance, is emblematic of how organizational members encountering messes, wicked problems and wicked messes can be forced to think.

In brief, Holt suggests that what psychoanalysis does for the individual, risk management ought to do for the organisation.

Talking it over – the importance of conversations

A key element of psychoanalysis is the conversation between the analyst and patient. Through this process, the analyst attempts to get the patient to become aware of hidden fears and motivations. As Holt puts it,

Psychoanalysis occupies the point of rupture between conscious intention and unconscious desire — revealing repressed or overdetermined aspects of self-organization manifest in various expressions of anxiety, humour, and so on

And then, a little later,   he makes the connection to organisations:

The fact that organizations emerge from contingent, complex interdependencies between specific narrative histories suggests that risk management would be able to use similar conversations to psychoanalysis to investigate hidden motives, to examine…the possible reception of initiatives or strategies from the perspective of inherently divergent stakeholders, or to analyse the motives for and expectations of risk management itself. This fundamentally reorients the perspective of risk management from facing apparent uncertainties using technical assessment tools, to using conversations devoid of fixed formulas to encounter questioned identities, indeterminate destinies, multiple and conflicting aims and myriad anxieties.

Through conversations involving groups of stakeholders who have different risk perceptions, one might be able to get a better understanding of a particular risk and hence, maybe, design a more effective mitigation strategy.   More importantly, one may even realise that certain risks are not risks at all, or that others which seem straightforward have implications that would have remained hidden were it not for the conversation.

These collective conversations would take place in workshops…

…that tackle problems as wicked messes, avoid lowest-denominator consensus in favour of continued discovery of alternatives through conversation, and are instructed by metaphor rather than technical taxonomy, risk management is better able to appreciate the everyday ambivalence that fundamentally influences late-modern organizational activity. As such, risk management would be not merely a rationalization of uncertain experience but a structured and contested activity involving multiple stakeholders engaged in perpetual translation from within environments of operation and complexes of aims.

As a facilitator of such workshops, the risk analyst provokes stakeholders to think about their feelings and motivations that may be “out of bounds” in a standard risk analysis workshop.  Such a paradigm goes well beyond mainstream risk management because it addresses the risk-related anxieties and fears of individuals who are affected by it.

Conclusion

This brings me to the end of my not-so-short summary of Holt’s paper. Given the length of this post, I reckon I should keep my closing remarks short. So I’ll leave it here paraphrasing the last line of the paper, which summarises its main message:  risk management ought to be about developing an organizational capacity for overcoming risks, freed from the presumption of absolute control.

Written by K

February 5, 2018 at 11:21 pm

The dark side of data science


Data scientists are sometimes blind to the possibility that the predictions of their algorithms can have unforeseen negative effects on people. Ethical or social implications are easy to overlook when one finds interesting new patterns in data, especially if they promise significant financial gains. The Centrelink debt recovery debacle, recently reported in the Australian media, is a case in point.

Here is the story in brief:

Centrelink is an Australian Government organisation responsible for administering welfare services and payments to those in need. A major challenge such organisations face is ensuring that their clients are paid no less and no more than what is due to them. This is difficult because it involves crosschecking client income details across multiple systems owned by different government departments, a process that necessarily involves many assumptions. In July 2016, Centrelink unveiled an automated compliance system that compares income self-reported by clients to information held by the taxation office.

The problem is that the algorithm is flawed: it makes strong (and incorrect!) assumptions regarding the distribution of income across a financial year and, as a consequence, unfairly penalizes a number of legitimate benefit recipients.  It is very likely that the designers and implementers of the algorithm did not fully understand the implications of their assumptions. Worse, from the errors made by the system, it appears they may not have adequately tested it either.  But this did not stop them (or, quite possibly, their managers) from unleashing their algorithm on an unsuspecting public, causing widespread stress and distress.  More on this a bit later.

Algorithms like the one described above are the subject of Cathy O’Neil’s aptly titled book, Weapons of Math Destruction.  In the remainder of this article I discuss the main themes of the book.  Just to be clear, this post is more riff than review. However, for those seeking an opinion, here’s my one-line version: I think the book should be read not only by data science practitioners, but also by those who use or are affected by their algorithms (which means pretty much everyone!).

Abstractions and assumptions

O’Neil begins with the observation that data algorithms are mathematical models of reality, and are necessarily incomplete because several simplifying assumptions are invariably baked into them. This point is important and often overlooked so it is worth illustrating via an example.

When assessing a person’s suitability for a loan, a bank will want to know whether the person is a good risk. It is impossible to model creditworthiness completely because we do not know all the relevant variables and those that are known may be hard to measure. To make up for their ignorance, data scientists typically use proxy variables, i.e. variables that are believed to be correlated with the variable of interest and are also easily measurable. In the case of creditworthiness, proxy variables might be things like gender, age, employment status, residential postcode etc.  Unfortunately many of these can be misleading, discriminatory or worse, both.

The Centrelink algorithm provides a good example of such a “double-whammy” proxy. The key variable it uses is the difference between the client’s annual income reported by the taxation office and the self-reported annual income stated by the client. A large difference is taken to be indicative of an incorrect payment and hence an outstanding debt. This simplistic assumption overlooks the fact that most affected people are not in steady jobs and therefore do not earn regular incomes over the course of a financial year (see this article by Michael Griffin for a detailed example).  Worse, this crude proxy places an unfair burden on vulnerable individuals for whom casual and part time work is a fact of life.
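
To see how the flawed assumption plays out, consider the following illustrative sketch. The figures and the fortnightly reporting details are invented for illustration and are not drawn from the actual Centrelink system: a casual worker earns all of their income in one half of the year and correctly reports zero income while on benefits in the other half, yet averaging the annual figure across the year makes it look as though they under-reported income in every benefit fortnight.

```python
# Illustrative sketch of the income-averaging flaw (all figures invented).
# A casual worker earns roughly $15,000, all of it in the first 13 fortnights
# of the year, and correctly reports zero income while on benefits thereafter.
actual_fortnightly_income = [1153.85] * 13 + [0.0] * 13   # what really happened
annual_income_from_tax_office = sum(actual_fortnightly_income)

# The flawed assumption: spread the annual figure evenly across all 26 fortnights.
averaged_fortnightly_income = annual_income_from_tax_office / 26

# Fortnights in which the averaged figure exceeds what the client reported
# get flagged as "undeclared income", even though every report was accurate.
flagged = [i for i, reported in enumerate(actual_fortnightly_income)
           if averaged_fortnightly_income > reported]
print(f"averaged fortnightly income: ${averaged_fortnightly_income:.2f}")
print(f"fortnights wrongly flagged:  {len(flagged)} of 26")
```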

Worse still, for those wrongly targeted with a recovery notice, getting the errors sorted out is not a straightforward process. This is typical of a WMD. As O’Neil states in her book, “The human victims of WMDs…are held to a far higher standard of evidence than the algorithms themselves.”  Perhaps this is because the algorithms are often opaque. But that’s a poor excuse.  This is the only technical field where practitioners are held to a lower standard of accountability than those affected by their products.

O’Neil sums it up rather nicely when she calls algorithms like the Centrelink one weapons of math destruction (WMDs).

Self-fulfilling prophecies and feedback loops

A characteristic of WMDs is that their predictions often become self-fulfilling prophecies. For example a person denied a loan by a faulty risk model is more likely to be denied again when he or she applies elsewhere, simply because it is on their record that they have been refused credit before. This kind of destructive feedback loop is typical of a WMD.
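
A toy simulation makes the dynamic concrete. In the sketch below, the model, probabilities and the “prior rejection” penalty are entirely invented for illustration: each lender penalises applicants who have been rejected before, so one unlucky rejection lowers the odds at every subsequent application.

```python
# Toy simulation of a self-fulfilling feedback loop (all numbers invented).
# Each lender penalises applicants who have been rejected before, so one
# unlucky rejection lowers the odds at every subsequent application.
import random

random.seed(1)

def apply_for_loan(prior_rejections, base_approval_prob=0.7, penalty=0.2):
    """Approve with a probability that shrinks as rejections accumulate."""
    prob = max(base_approval_prob - penalty * prior_rejections, 0.05)
    return random.random() < prob

rejections = 0
for application in range(1, 6):
    approved = apply_for_loan(rejections)
    print(f"application {application}: "
          f"{'approved' if approved else 'rejected'} "
          f"(prior rejections: {rejections})")
    if not approved:
        rejections += 1   # the rejection itself becomes data for the next model
```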

An example that O’Neil dwells on at length is a popular predictive policing program. Designed for efficiency rather than nuanced judgment, such algorithms measure what can easily be measured and act on it, ignoring the subtle contextual factors that inform the actions of experienced officers on the beat. Worse, they can lead to actions that exacerbate the problem. For example, targeting young people of a certain demographic for stop and frisk actions can alienate them to a point where they might well turn to crime out of anger and exasperation.

As Goldratt famously said, “Tell me how you measure me and I’ll tell you how I’ll behave.”

This is not news: savvy managers have known about the dangers of managing by metrics for years. The problem is now exacerbated manyfold by our ability to implement and act on such metrics on an industrial scale, a trend that leads to a dangerous devaluation of human judgement in areas where it is most needed.

A related problem – briefly mentioned earlier – is that some of the important variables are known but hard to quantify in algorithmic terms. For example, it is known that community-oriented policing, where officers on the beat develop relationships with people in the community, leads to greater trust. The degree of trust is hard to quantify, but it is known that communities that have strong relationships with their police departments tend to have lower crime rates than similar communities that do not.  Such important but hard-to-quantify factors are typically missed by predictive policing programs.

Blackballed!

Ironically, although WMDs can cause destructive feedback loops, they are often not subjected to feedback themselves. O’Neil gives the example of algorithms that gauge the suitability of potential hires.  These programs often use proxy variables such as IQ test results, personality tests etc. to predict employability.  Candidates who are rejected often do not realise that they have been screened out by an algorithm. Further, it often happens that candidates who are thus rejected go on to successful careers elsewhere. However, this post-rejection information is never fed back to the algorithm because it is impossible to do so.

In such cases, the only way to avoid being blackballed is to understand the rules set by the algorithm and play according to them. As O’Neil so poignantly puts it, “our lives increasingly depend on our ability to make our case to machines.” However, this can be difficult because it assumes that a) people know they are being assessed by an algorithm and b) they have knowledge of how the algorithm works. In most hiring scenarios neither of these holds.

Just to be clear, not all data science models ignore feedback. For example, sabermetric algorithms used to assess player performance in Major League Baseball are continually revised based on latest player stats, thereby taking into account changes in performance.

Driven by data

In recent years, many workplaces have gradually seen the introduction of data-driven efficiency initiatives. Automated rostering, based on scheduling algorithms, is an example. These algorithms are based on operations research techniques that were developed for scheduling complex manufacturing processes. Although appropriate for driving efficiency in manufacturing, these techniques are inappropriate for optimising shift work because of the effect they have on people. As O’Neil states:

Scheduling software can be seen as an extension of just-in-time economy. But instead of lawn mower blades or cell phone screens showing up right on cue, it’s people, usually people who badly need money. And because they need money so desperately, the companies can bend their lives to the dictates of a mathematical model.

She correctly observes that an “oversupply of low wage labour is the problem.” Employers know they can get away with treating people like machine parts because they have a large captive workforce.  What makes this seriously scary is that vested interests can make it difficult to outlaw such exploitative practices. As O’Neil mentions:

Following [a] New York Times report on Starbucks’ scheduling practices, Democrats in Congress promptly drew up bills to rein in scheduling software. But facing a Republican majority fiercely opposed to government regulations, the chances that their bill would become law were nil. The legislation died.

Commercial interests invariably trump social and ethical issues, so it is highly unlikely that industry or government will take steps to curb the worst excesses of such algorithms without significant pressure from the general public. A first step towards this is to educate ourselves on how these algorithms work and the downstream social effects of their predictions.

Messing with your mind

There is an even more insidious way that algorithms mess with us. Hot on the heels of the recent US presidential election, there were suggestions that fake news items on Facebook may have influenced the results.  Mark Zuckerberg denied this, but as Casey Newton noted in a trenchant tweet, the denial leaves Facebook in “the awkward position of having to explain why they think they drive purchase decisions but not voting decisions.”

Be that as it may, the fact is that Facebook’s own researchers have been conducting experiments to fine-tune a tool they call the “voter megaphone”. Here’s what O’Neil says about it:

The idea was to encourage people to spread the word that they had voted. This seemed reasonable enough. By sprinkling people’s news feeds with “I voted” updates, Facebook was encouraging Americans – more than sixty-one million of them – to carry out their civic duty….by posting about people’s voting behaviour, the site was stoking peer pressure to vote. Studies have shown that the quiet satisfaction of carrying out a civic duty is less likely to move people than the possible judgement of friends and neighbours…The Facebook started out with a constructive and seemingly innocent goal to encourage people to vote. And it succeeded…researchers estimated that their campaign had increased turnout by 340,000 people. That’s a big enough crowd to swing entire states, and even national elections.

And if that’s not scary enough, try this:

For three months leading up to the election between President Obama and Mitt Romney, a researcher at the company….altered the news feed algorithm for about two million people, all of them politically engaged. The people got a higher proportion of hard news, as opposed to the usual cat videos, graduation announcements, or photos from Disney world….[the researcher] wanted to see if getting more [political] news from friends changed people’s political behaviour. Following the election [he] sent out surveys. The self-reported results [showed] that voter participation in this group inched up from 64 to 67 percent.

This might not sound like much, but considering the thin margins of recent presidential elections, it could be enough to change a result.

But it’s even more insidious.  In a paper published in 2014, Facebook researchers showed that users’ moods can be influenced by the emotional content of their newsfeeds. Here’s a snippet from the abstract of the paper:

In an experiment with people who use Facebook, we test whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed. When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks.

As you might imagine, there was a media uproar following which  the lead researcher issued a clarification and  Facebook officials duly expressed regret (but, as far as I know, not an apology).  To be sure, advertisers have been exploiting this kind of “mind control” for years, but a public social media platform should (expect to) be held to a higher standard of ethics. Facebook has since reviewed its internal research practices, but the recent fake news affair shows that the story is to be continued.

Disarming weapons of math destruction

The Centrelink debt debacle, the Facebook mood contagion experiments and the case studies mentioned in the book illustrate the myriad ways in which Big Data algorithms have a pernicious effect on our day-to-day lives. Quite often people remain unaware of their influence, wondering why a loan was denied or a job application didn’t go their way. Just as often, they are aware of what is happening, but are powerless to change it – shift scheduling algorithms being a case in point.

This is not how it was meant to be. Technology was supposed to make life better for all, not just the few who wield it.

So what can be done? Here are some suggestions:

  • To begin with, education is the key. We must work to demystify data science and create a general awareness of data science algorithms and how they work. O’Neil’s book is an excellent first step in this direction (although it is very thin on details of how the algorithms work)
  • Develop a code of ethics for data science practitioners. It is heartening to see that IEEE has recently come up with a discussion paper on ethical considerations for artificial intelligence and autonomous systems and ACM has proposed a set of principles for algorithmic transparency and accountability.  However, I should also tag this suggestion with the warning that codes of ethics are not very effective as they can be easily violated. One has to – somehow – embed ethics in the DNA of data scientists. I believe one way to do this is through practice-oriented education in which data scientists-in-training grapple with ethical issues through data challenges and hackathons. As Wittgenstein famously said, “it is clear that ethics cannot be articulated.” Ethics must be practiced.
  • Put in place a system of reliable algorithmic audits within data science departments, particularly those that do work with significant social impact.
  • Increase transparency a) by publishing information on how algorithms predict what they predict and b) by making it possible for those affected by the algorithm to access the data used to classify them as well as their classification, how it will be used and by whom.
  • Encourage the development of algorithms that detect bias in other algorithms and correct it.
  • Inspire aspiring data scientists to build models for the good.

It is only right that the last word in this long riff should go to O’Neil, whose work inspired it. Towards the end of her book she writes:

Big Data processes codify the past. They do not invent the future. Doing that requires moral imagination, and that’s something that only humans can provide. We have to explicitly embed better values into our algorithms, creating Big Data models that follow our ethical lead. Sometimes that will mean putting fairness ahead of profit.

Excellent words for data scientists to live by.

Written by K

January 17, 2017 at 8:38 pm

The Heretic’s Guide to Management – understanding ambiguity in the corporate world


I am delighted to announce that my new business book, The Heretic’s Guide to Management: The Art of Harnessing Ambiguity, is now available in e-book and print formats. The book, co-written with Paul Culmsee, is a loose sequel to our previous tome, The Heretic’s Guide to Best Practices.

Many reviewers liked the writing style of our first book, which combined rigour with humour. This book continues in the same vein, so if you enjoyed the first one we hope you might like this one too. The new book is half the size of the first one and considerably less idealistic too. In terms of subject matter, I could say “Ambiguity, Teddy Bears and Fetishes” and leave it at that…but that might leave you thinking that it’s not the kind of book you would want anyone to see on your desk!

Rest assured, The Heretic’s Guide to Management is not a corporate version of Fifty Shades of Grey. Instead, it aims to delve into the complex but fascinating ways in which ambiguity affects human behaviour. More importantly, it discusses how ambiguity can be harnessed in ways that achieve positive outcomes.  Most management techniques (ranging from strategic planning to operational budgeting) attempt to reduce ambiguity and thereby provide clarity. It is a profound irony of modern corporate life that they often end up doing the opposite: increasing ambiguity rather than reducing it.

On the surface, it is easy enough to understand why: organizations are complex entities, so it is unreasonable to expect management models, such as those that fit neatly into a 2*2 matrix or a predetermined checklist, to work in the real world. In fact, expecting them to work as advertised is like colouring in a paint-by-numbers Mona Lisa and expecting to recreate Da Vinci’s masterpiece. Ambiguity therefore invariably remains untamed, and reality reimposes itself no matter how alluring the model is.

It turns out that most of us have a deep aversion to situations that involve even a hint of ambiguity. Recent research in neuroscience has revealed the reason for this: ambiguity is processed in the parts of the brain which regulate our emotional responses. As a result, many people associate it with feelings of anxiety. When kids feel anxious, they turn to transitional objects such as teddy bears or security blankets. These objects provide them with a sense of stability when situations or events seem overwhelming. In this book, we show that as grown-ups we don’t stop using teddy bears – it is just that the teddies we use take a different, more corporate, form. Drawing on research, we discuss how management models, fads and frameworks are actually akin to teddy bears. They provide the same sense of comfort and certainty to corporate managers and minions as real teddies do to distressed kids.

A Plain Teddy

Most children usually outgrow their need for teddies as they mature and learn to cope with their childhood fears. However, if development is disrupted or arrested in some way, the transitional object can become a fetish – an object that is held on to with a pathological intensity, simply for the comfort that it offers in the face of ambiguity. The corporate reliance on simplistic solutions for the complex challenges faced is akin to little Johnny believing that everything will be OK provided he clings on to Teddy.

When this happens, the trick is finding ways to help Johnny overcome his fear of ambiguity.

Ambiguity is a primal force that drives much of our behaviour. It is typically viewed negatively, something to be avoided or to be controlled.

A Sith Teddy

The truth, however, is that ambiguity is a force that can be used in positive ways too. The Force that gave the Dark Side its power in the Star Wars movies was harnessed by the Jedi in positive ways.

A Jedi Teddy

Our book shows you how ambiguity, so common in the corporate world, can be harnessed to achieve the results you want.

The e-book is available via popular online outlets. Here are links to some:

Amazon Kindle

Google Play

Kobo

For those who prefer paperbacks, the print version is available here.

Thanks for your support 🙂

Written by K

July 12, 2016 at 10:30 pm

Three types of uncertainty you (probably) overlook


Introduction – uncertainty and decision-making

Managing uncertainty – deciding what to do in the absence of reliable information – is a significant part of project management and many other managerial roles. When put this way, it is clear that managing uncertainty is primarily a decision-making problem. Indeed, as I will discuss shortly, the main difficulties associated with decision-making are related to specific types of uncertainties that we tend to overlook.

Let’s begin by looking at the standard approach to decision-making, which goes as follows:

  1. Define the decision problem.
  2. Identify options.
  3. Develop criteria for rating options.
  4. Evaluate options against criteria.
  5. Select the top rated option.
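
As a minimal illustration of what steps 2 to 5 amount to in practice, here is a weighted-scoring sketch. The options, criteria, weights and scores are all invented for the purpose; they are not part of any particular decision framework discussed here.

```python
# Minimal weighted-scoring sketch of the standard decision process
# (options, criteria, weights and scores are all invented).
criteria_weights = {"cost": 0.5, "convenience": 0.3, "risk": 0.2}

# Each option is scored 1-10 against each criterion.
options = {
    "stay in current job": {"cost": 8, "convenience": 9, "risk": 9},
    "switch to new job":   {"cost": 6, "convenience": 5, "risk": 4},
    "start a business":    {"cost": 3, "convenience": 2, "risk": 2},
}

def weighted_score(scores):
    return sum(criteria_weights[c] * s for c, s in scores.items())

ranked = sorted(options.items(), key=lambda kv: weighted_score(kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{name:<22} {weighted_score(scores):.1f}")
# Step 5: select the top-rated option -- which is exactly where the trouble
# begins for the kinds of decisions discussed below.
```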

As I have pointed out in this post, the above process is too simplistic for some of the complex, multifaceted decisions that we face in life and at work (switching jobs, buying a house or starting a business venture, for example). In such cases:

  1. It may be difficult to identify all options.
  2. It is often impossible to rate options meaningfully because of information asymmetry – we know more about some options than others. For example, when choosing whether or not to switch jobs, we know more about our current situation than the new one.
  3. Even when ratings are possible, different people will rate options differently – i.e. different people invariably have different preferences for a given outcome. This makes it difficult to reach a consensus.

Regular readers of this blog will know that the points listed above are characteristics of wicked problems.  It is fair to say that in recent years, a general awareness of the ubiquity of wicked problems has led to an appreciation of the limits of classical decision theory. (That said,  it should be noted that academics have been aware of this for a long time: Horst Rittel’s classic paper on the dilemmas of planning, written in 1973, is a good example. And there are many others that predate it.)

In this post  I look into some hard-to-tackle aspects of uncertainty by focusing on the aforementioned shortcomings of classical decision theory. My discussion draws on a paper by Richard Bradley and Mareile Drechsler.

This article is organised as follows: I first present an overview of the standard approach to dealing with uncertainty and discuss its limitations. Following this, I elaborate on three types of uncertainty that are discussed in the paper.

Background – the standard view of uncertainty

The standard approach to tackling uncertainty was articulated by Leonard Savage in his classic text, The Foundations of Statistics. Savage’s approach can be summarized as follows:

  1. Figure out all possible states (outcomes).
  2. Enumerate the actions that are possible.
  3. Figure out the consequences of actions for all possible states.
  4. Attach a value (aka preference) to each consequence.
  5. Select the course of action that maximizes value (based on an appropriately defined measure, making sure to factor in the likelihood of achieving the desired consequence).

(Note the close parallels between this process and the standard approach to decision-making outlined earlier.)

To keep things concrete it is useful to see how this process would work in a simple real-life example. Bradley and Drechsler quote the following example from Savage’s book that does just that:

…[consider] someone who is cooking an omelet and has already broken five good eggs into a bowl, but is uncertain whether the sixth egg is good or rotten. In deciding whether to break the sixth egg into the bowl containing the first five eggs, to break it into a separate saucer, or to throw it away, the only question this agent has to grapple with is whether the last egg is good or rotten, for she knows both what the consequence of breaking the egg is in each eventuality and how desirable each consequence is. And in general it would seem that for Savage once the agent has settled the question of how probable each state of the world is, she can determine what to do simply by averaging the utilities (Note: utility is basically a mathematical expression of preference or value) of each action’s consequences by the probabilities of the states of the world in which they are realised…

In this example there are two states (egg is good, egg is rotten), three actions (break egg into bowl, break egg into separate saucer to check if it is rotten, throw egg away without checking) and three consequences (spoil all eggs, save eggs in bowl and save all eggs if last egg is not rotten, save eggs in bowl and potentially waste last egg). The problem then boils down to figuring out our preferences for the options (in some quantitative way) and the probability of the two states.  At first sight, Savage’s approach seems like a reasonable way to deal with uncertainty.  However, a closer look reveals major problems.
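
Before turning to those problems, it may help to see the arithmetic of Savage’s prescription in a small worked sketch of the omelet decision. The utility numbers and the probability of a rotten egg are invented for illustration; Savage does not assign specific values. Each action’s expected utility is the probability-weighted average of its consequences, and the action with the highest value is chosen.

```python
# Worked sketch of Savage's omelet decision (utilities and probability invented).
p_rotten = 0.2   # assumed probability that the sixth egg is rotten

# Utility of each (action, state) consequence, on an arbitrary 0-10 scale.
utilities = {
    "break into bowl":   {"good": 10, "rotten": 0},   # six-egg omelet vs. five eggs ruined
    "break into saucer": {"good": 9,  "rotten": 7},   # small extra effort, nothing ruined
    "throw away":        {"good": 6,  "rotten": 7},   # five-egg omelet, maybe a wasted egg
}

def expected_utility(action):
    u = utilities[action]
    return (1 - p_rotten) * u["good"] + p_rotten * u["rotten"]

for action in utilities:
    print(f"{action:<18} expected utility = {expected_utility(action):.2f}")

best = max(utilities, key=expected_utility)
print("chosen action:", best)
```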

Problems with the standard approach

Unlike the omelet example, in real life situations it is often difficult to enumerate all possible states or foresee all consequences of an action. Further, even if states and consequences are known, we may not know what value to attach to them – that is, we may not be able to determine our preferences for those consequences unambiguously. Even in those situations where we can, our preferences may be subject to change – witness the not uncommon situation where lottery winners end up wishing they’d never won. The standard prescription therefore works only in situations where all states, actions and consequences are known – i.e. tame situations, as opposed to wicked ones.

Before going any further, I should mention that Savage was cognisant of the limitations of his approach. He pointed out that it works only in what he called small world situations – i.e. situations in which it is possible to enumerate and evaluate all options.  As Bradley and Drechsler put it,

Savage was well aware that not all decision problems could be represented in a small world decision matrix. In Savage’s words, you are in a small world if you can “look before you leap”; that is, it is feasible to enumerate all contingencies and you know what the consequences of actions are. You are in a grand world when you must “cross the bridge when you come to it”, either because you are not sure what the possible states of the world, actions and/or consequences are…

In the following three sections I elaborate on the complications mentioned above, emphasizing once again that many real-life situations are prone to them.

State space uncertainty

The standard view of uncertainty assumes that all possible states are given as a part of the problem definition – as in the omelet example discussed earlier.  In real life, however, this is often not the case.

Bradley and Drechsler identify two distinct cases of state space uncertainty. The first one is when we are unaware that we're missing states and/or consequences. For example, organisations that embark on a restructuring program are so focused on the cost-related consequences that they may overlook factors such as loss of morale and/or loss of talent (and the consequent loss of productivity). The second, somewhat rarer, case is when we are aware that we might be missing something but we don't quite know what it is. All one can do here is make appropriate contingency plans based on guesses regarding possible consequences.

Figuring out possible states and consequences is largely a matter of scenario envisioning based on knowledge and practical experience. It stands to reason that this is best done by leveraging the collective experience and wisdom of people from diverse backgrounds. This is pretty much the rationale behind collective decision-making techniques such as Dialogue Mapping.

Option uncertainty

The standard approach to tackling uncertainty assumes that the connection between actions and consequences is well defined. This is often not the case, particularly for wicked problems.  For example, as I have discussed in this post, enterprise transformation programs with well-defined and articulated objectives often end up having a host of unintended consequences. At an even more basic level, in some situations it can be difficult to identify sensible options.

Option uncertainty is a fairly common feature in real-life decisions. As Bradley and Drechsler put it:

Option uncertainty is an endemic feature of decision making, for it is rarely the case that we can predict consequences of our actions in every detail (alternatively, be sure what our options are). And although in many decision situations, it won’t matter too much what the precise consequence of each action is, in some the details will matter very much.

…and unfortunately, the cases in which the details matter are precisely those in which they are hardest to figure out – i.e. wicked problems.

Preference uncertainty

An implicit assumption in the standard approach is that once states and consequences are known, people will be able to figure out their relative preferences for these unambiguously. This assumption is incorrect, as there are at least two situations in which people will not be able to determine their preferences. Firstly, there may be a lack of factual information about one or more of the states. Secondly, even when one is able to get the required facts, it can be hard to figure out how one would value the consequences.

A common example of the aforementioned situation is the job switch dilemma. In many (most?) cases in which one is debating whether or not to switch jobs, one lacks enough factual information about the new job – for example, the new boss’ temperament, the work environment etc. Further, even if one is able to get the required information, it is impossible to know how it would be to actually work there.  Most people would have struggled with this kind of uncertainty at some point in their lives. Bradley and Drechsler term this ethical uncertainty. I prefer the term preference uncertainty, as it has more to do with preferences than ethics.

Some general remarks

The first point to note is that the three types of uncertainty noted above map exactly on to the three shortcomings of classical decision theory discussed in the introduction.  This suggests a connection between the types of uncertainty and wicked problems. Indeed, most wicked problems are exemplars of one or more of the above uncertainty types.  For example, the paradigm-defining super-wicked problem of climate change displays all three types of uncertainty.

The three types of uncertainty discussed above are overlooked by the standard approach to managing uncertainty.  This happens in a number of ways. Here are two common ones:

  1. The standard approach assumes that all uncertainties can somehow be incorporated into a single probability function describing all possible states and/or consequences. This is clearly false for state space and option uncertainty: it is impossible to define a sensible probability function when one is uncertain about the possible states and/or outcomes.
  2. The standard approach assumes that preferences for different consequences are known. This is clearly not true in the case of preference uncertainty…and even for state space and option uncertainty for that matter.

In their paper, Bradley and Drechsler arrive at these three types of uncertainty from considerations different from the ones I have used above. Their approach, while more general, is considerably more involved. Nevertheless, I would recommend that interested readers take a look at it because it covers a lot of things that I have glossed over or ignored altogether.

Just as an example, they show how the aforementioned uncertainties can be reduced. There is a price to be paid, however: any reduction in uncertainty results in an increase in its severity. An example might help illustrate how this comes about. Consider a situation of state space uncertainty. One can reduce – or even remove – this by defining a catch-all state (labelled, say, "all other outcomes"). It is easy to see that although one has formally reduced state space uncertainty to zero, one has increased the severity of the uncertainty because the catch-all state is but a reflection of our ignorance and our refusal to do anything about it!

There are many more implications of the above. However, I'll point out just one more that serves to illustrate the very practical implications of these uncertainties. In a post on the shortcomings of enterprise risk management, I pointed out that the notion of an organisation-wide risk appetite is problematic because it is impossible to capture the diversity of viewpoints through such a construct. Moreover, rule- or process-based approaches to risk management tend to focus only on those uncertainties that can be quantified, or conversely they assume that all uncertainties can somehow be clumped into a single probability distribution as prescribed by the standard approach to managing uncertainty. The three types of uncertainty discussed above highlight the limitations of such an approach to enterprise risk.

Conclusion

The standard approach to managing uncertainty assumes that all possible states, actions and consequences are known or can be determined. In this post I have discussed why this is not always so.  In particular, it often happens that we do not know all possible outcomes (state space uncertainty), consequences (option uncertainty) and/or our preferences for consequences (preference or ethical uncertainty).

As I was reading the paper, I felt the authors were articulating issues that I had often felt uneasy about but chose to overlook (suppress?). Generalising from one's own experience is always a fraught affair, but I reckon we tend to deny these uncertainties because they are inconvenient – that is, they are difficult if not impossible to deal with within the procrustean framework of the standard approach. What is needed as a corrective is a recognition that the pseudo-quantitative approach that is commonly used to manage uncertainty may not be the panacea it is claimed to be. The first step towards doing this is to acknowledge the existence of the uncertainties that we (probably) overlook.

Written by K

February 25, 2015 at 9:08 pm

From information to knowledge: the what and whence of issue based information systems

with 3 comments

Issue Based Information System (IBIS) is a notation invented by Horst Rittel and Werner Kunz in the early 1970s.  IBIS is best known for its use in dialogue mapping, a collaborative approach to tackling wicked problems (i.e. contentious issues) in organisations. It has a range of other applications as well – capturing knowledge is a good example, and I’ll have much more to say about that later in this post.

Over the last five years or so, I have written a fair bit on IBIS on this blog and in a book that I co-authored with the dialogue mapping expert, Paul Culmsee. The present post reprises an article I wrote five years ago on the "what" and "whence" of the notation: its practical aspects – notation, grammar etc. – as well as its origins, advantages and limitations. My motivations for revisiting the piece are to revise and update the original discussion and, more importantly, to cover some recent developments in IBIS technology that open up interesting possibilities in the area of knowledge management.

To appreciate the power of IBIS, it is best to begin by understanding the context in which the notation was invented. I'll therefore start with a discussion of the origins of the notation followed by an introduction to it. Finally, I'll cover its development over the last 40-odd years, focusing on the recent developments mentioned above.

Wicked origins

A good place to start is where it all started. IBIS was first described in a paper entitled Issues as Elements of Information Systems, written by Horst Rittel (the man who coined the term wicked problem) and Werner Kunz in July 1970. They state the intent behind IBIS in the very first line of the abstract of their paper:

 Issue-Based Information Systems (IBIS) are meant to support coordination and planning of political decision processes. IBIS guides the identification, structuring, and settling of issues raised by problem-solving groups, and provides information pertinent to the discourse.

Rittel's preoccupation was the area of public policy and planning – which is also the context in which he originally defined the term wicked problem. Given this background it is no surprise that Rittel and Kunz envisioned IBIS as the:

…type of information system meant to support the work of cooperatives like governmental or administrative agencies or committees, planning groups, etc., that are confronted with a problem complex in order to arrive at a plan for decision…

The problems tackled by such cooperatives are paradigm-defining examples of wicked problems. From the start, then, IBIS was intended as a tool to facilitate a collaborative approach to solving…or better, managing a wicked problem by helping develop a shared perspective on it.

A brief introduction to IBIS

The IBIS notation consists of the following three elements:

  1. Issues (or questions): these are the issues being debated. Typically, issues are framed as questions on the lines of "What should we do about X?" where X is the issue that is of interest to a group. For example, in the case of a group of executives, X might be a rapidly changing market condition whereas in the case of a group of IT people, X could be an ageing system that is hard to replace.
  2. Ideas (or positions): these are responses to questions. For example, one of the ideas offered by the IT group above might be to replace the said system with a newer one. Typically the whole set of ideas that respond to an issue in a discussion represents the spectrum of participant perspectives on the issue.
  3. Arguments: these can be Pros (arguments for) or Cons (arguments against) an idea. The complete set of arguments that respond to an idea represents the multiplicity of viewpoints on it.

Compendium is a freeware tool that can be used to create IBIS maps – it can be downloaded here.

In Compendium, the IBIS elements described above are represented as nodes as shown in Figure 1: issues are represented by blue-green question marks; positions by yellow light bulbs; pros by green + signs and cons by red – signs.  Compendium supports a few other node types, but these are not part of the core IBIS notation. Nodes can be linked only in ways specified by the IBIS grammar as I discuss next.

 

Figure 1: IBIS elements

The IBIS grammar can be summarized in three simple rules:

  1. Issues can be raised anew or can arise from other issues, positions or arguments. In other words, any IBIS element can be questioned. In Compendium notation: a question node can connect to any other IBIS node.
  2. Ideas can only respond to questions – i.e. in Compendium "light bulb" nodes can only link to question nodes. The arrow pointing from the idea to the question depicts the "responds to" relationship.
  3. Arguments can only be associated with ideas – i.e. in Compendium "+" and "–" nodes can only link to "light bulb" nodes (with arrows pointing to the latter).

The legal links are summarized in Figure 2 below.

Figure 2: Legal links in IBIS

Yes, it’s as simple as that.
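As an aside, the grammar is simple enough to express programmatically. The sketch below is a toy Python validator of my own devising – the node-type names are illustrative and do not correspond to Compendium's internals or any published API – but it shows how the three rules pin down exactly which links are legal.

```python
# A toy validator for the IBIS grammar described above.
# Node-type names are my own; they do not reflect any particular tool's data model.

NODE_TYPES = {"issue", "idea", "pro", "con"}

# Legal links, expressed as (from_type, to_type) pairs:
#   Rule 1: an issue (question) can connect to any other IBIS element, including another issue.
#   Rule 2: an idea can only respond to an issue.
#   Rule 3: arguments (pros and cons) can only be attached to ideas.
LEGAL_LINKS = (
    {("issue", target) for target in NODE_TYPES}
    | {("idea", "issue"), ("pro", "idea"), ("con", "idea")}
)

def is_legal_link(from_type: str, to_type: str) -> bool:
    """Return True if an arrow from `from_type` to `to_type` is allowed by the IBIS grammar."""
    return (from_type, to_type) in LEGAL_LINKS

# Examples
print(is_legal_link("idea", "issue"))  # True: an idea responds to an issue
print(is_legal_link("pro", "issue"))   # False: arguments may only attach to ideas
print(is_legal_link("issue", "pro"))   # True: any element can be questioned
```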

The rules are best illustrated by example – follow the links below to see some illustrations of IBIS in action:

  1. See this post for a simple example of dialogue mapping.
  2. See this post or this one for examples of argument visualisation. (Note: using IBIS to map out the structure of written arguments is called issue mapping.)
  3. See this one for an example Paul did with his children; it also features in our book.

Now that we know how IBIS works and have seen a few examples of it in action, it's time to trace the history of the notation from its early days to the present.

Operation of early systems

When Rittel and Kunz wrote their paper, there were three IBIS-type systems in operation: two in government agencies (in the US, one presumes) and one in a university environment (quite possibly Berkeley, where Rittel worked). Although it seems quaint and old-fashioned now, it is no surprise that these were manual, paper-based systems; the effort and expense involved in computerizing such systems in the early 70s would have been prohibitive and the pay-off questionable.

The Rittel-Kunz paper introduced earlier also offers a short description of how these early IBIS systems operated:

An initially unstructured problem area or topic denotes the task named by a “trigger phrase” (“Urban Renewal in Baltimore,” “The War,” “Tax Reform”). About this topic and its subtopics a discourse develops. Issues are brought up and disputed because different positions (Rittel’s word for ideas or responses) are assumed. Arguments are constructed in defense of or against the different positions until the issue is settled by convincing the opponents or decided by a formal decision procedure. Frequently questions of fact are directed to experts or fed into a documentation system. Answers obtained can be questioned and turned into issues. Through this counterplay of questioning and arguing, the participants form and exert their judgments incessantly, developing more structured pictures of the problem and its solutions. It is not possible to separate “understanding the problem” as a phase from “information” or “solution” since every formulation of the problem is also a statement about a potential solution.

 Even today, forty years later, this is an excellent description of how IBIS is used to facilitate a common understanding of complex problems.  Moreover, the process of reaching a shared understanding (whether using IBIS or not) is one of the key ways in which knowledge is created within organizations. To foreshadow a point I will elaborate on later, using IBIS to capture the key issues, ideas and arguments, and the connections between them, results in a navigable map of the knowledge that is generated in a discussion.

Fast forward a couple of decades (and more!)

In a paper published in 1988 entitled, gIBIS: A hypertext tool for exploratory policy discussion, Conklin and Begeman describe a prototype of a graphical, hypertext-based  IBIS-type system (called gIBIS) and its use in capturing design rationale (yes, despite the title of the paper, it is more about capturing design rationale than policy discussions). The development of gIBIS represents a key step between the original Rittel-Kunz version of IBIS and its more recent version as implemented in Compendium.  Amongst other things, IBIS was finally off paper and on to disk, opening up a world of new possibilities.

gIBIS aimed to offer users:

  1. The ability to capture design rationale – the options discussed (including the ones rejected) and the discussion around the pros and cons of each.
  2. A platform for promoting computer-mediated collaborative design work – ideally in situations where participants were located at sites remote from each other.
  3. The ability to store a large amount of information and to be able to navigate through it in an intuitive way.

The gIBIS prototype proved successful enough to catalyse the development of Questmap, a commercially available software tool that supported IBIS. In a recent conversation Jeff Conklin mentioned to me that Questmap was one of the earliest Windows-based groupware tools available on the market…and it won a best-of-show award in that category. It is interesting to note that in contrast to Questmap (which no longer exists), Compendium is a single-user desktop tool.

The primary application of Questmap was in the area of sensemaking which is all about helping groups reach a collective understanding of complex situations that might otherwise lead them into tense or adversarial conditions. Indeed, that is precisely how Rittel and Kunz intended IBIS to be used.  The key advantage offered by computerized IBIS systems was that one could map dialogues in real-time, with the map representing the points raised in the conversation along with their logical connections. This proved to be a big step forward in the use of IBIS to help groups achieve a shared understanding of complex issues.

That said, although there were some notable early successes in the real-time use of IBIS in industry environments (see this paper, for example), these were not accompanied by widespread adoption of the technique. It is worth exploring the reasons for this briefly.

 The tacitness of IBIS mastery

The reasons for the lack of traction of IBIS-type techniques for real-time knowledge capture are discussed in a paper by Shum et al. entitled Hypermedia Support for Argumentation-Based Rationale: 15 Years on from gIBIS and QOC. The reasons they give are:

  1. For acceptance, any system must offer immediate value to the person who is using it. Quoting from the paper, “No designer can be expected to altruistically enter quality design rationale solely for the possible benefit of a possibly unknown person at an unknown point in the future for an unknown task. There must be immediate value.” Such immediate value is not obvious to novice users of IBIS-type systems.
  2. There is some effort involved in gaining fluency in the use of IBIS-based software tools. It is only after this that users can gain an appreciation of the value of such tools in overcoming the limitations of mapping design arguments on paper, whiteboards etc.

While the rules of IBIS are simple to grasp, the intellectual effort – or cognitive overhead – involved in using IBIS in real time includes:

  1. Teasing out issues, ideas and arguments from the dialogue.
  2. Classifying points raised into issues, ideas and arguments.
  3. Naming (or describing) the point succinctly.
  4. Relating (or linking) the point to the existing map (or anticipating how it will fit in later).
  5. Developing a sense for conversational patterns.

Expertise in these skills can only be developed through sustained practice, so it is no surprise that beginners find it hard to use IBIS to map dialogues.   Indeed, the use of IBIS for real-time conversation mapping is a tacit skill, much like riding a bike or swimming – it can only be mastered by doing.

Making sense through IBIS 

Despite the difficulties of mastering IBIS, it is easy to see that it offers considerable advantages over conventional methods of documenting discussions. Indeed, Rittel and Kunz were well aware of this. Their paper contains a nice summary of the advantages, which I paraphrase below:

  1. IBIS can bridge the gap between discussions and records of discussions (minutes, audio/video transcriptions etc.). IBIS sits between the two, acting as a short-term memory. The paper thus foreshadows the use of issue-based systems as an aid to organizational or project memory.
  2. Many elements (issues, ideas or arguments) that come up in a discussion have contextual meanings that differ from any pre-existing definitions. That is, the interpretation of points made or questions raised depends on the circumstances surrounding the discussion. Crucially, this contextual meaning matters more than the formal meaning. IBIS captures the former in a very clear way – for example, a response to the question "What do we mean by X?" elicits the meaning of X in the context of the discussion, which is then captured as an idea (position). I'll have much more to say about this towards the end of this article.
  3. The reasoning used in discussions is made transparent, as is the supporting (or opposing) evidence.
  4. The state of the argument (discussion) at any time can be inferred at a glance (unlike the case in written records). See this post for more on the advantages of visual documentation over prose.
  5. It often happens that an issue's similarity to other issues dealt with previously is more important than its precise meaning. To quote from the paper, "…the description of the subject matter in terms of librarians or documentalists (sic) may be less significant than the similarity of an issue with issues dealt with previously and the information used in their treatment…"  This is less of a problem now because of search technologies. However, search technologies are still largely based on keywords rather than context. A properly structured, context-searchable IBIS-based archive would be more useful than a conventional document-based system. As I'll discuss in the next section, the technology for this is now available.

To sum up, then: although IBIS offers a means to map out arguments, what is lacking is the ability to make these maps available and searchable across an organization.

IBIS in the enterprise

It is interesting to note that Compendium, unlike its predecessor, Questmap, is a single-user, desktop tool – it does not, by itself, enable the sharing of maps across the enterprise. To be sure, it is possible to work around this limitation but the workarounds are somewhat clunky. A recent advance in IBIS technology addresses this issue rather elegantly: Seven Sigma, an Australian consultancy founded by Paul Culmsee, Chris Tomich and Peter Chow, has developed Glyma (pronounced "glimmer"): a product that makes IBIS available on collaboration platforms like Microsoft SharePoint. This is a game-changer because it enables sharing and searching of IBIS maps across the enterprise. Moreover, as we shall see below, the implications of this go beyond sharing and search.

Full Disclosure: As regular readers of this blog know, Paul is a good friend and we have jointly written a book and a few papers. However, at the time of writing, I have no commercial association with Seven Sigma. My comments below are based on playing with a beta version of the product that Paul was kind enough to give me access to, as well as some discussions that I have had with him.

Figure 3: IBIS in Glyma

The look and feel of Glyma is much the same as Compendium (see Figure 3 above) – and the keystrokes and shortcuts are quite similar. I have trialled Glyma for a few weeks and feel that the overall experience is actually a bit better than in Compendium. For example, one can navigate through a series of maps and sub-maps using a breadcrumb trail. Another example: documents and videos are embedded within the map – so one does not need to leave the map in order to view tagged media (unless of course one wants to see it at a higher resolution).

I won’t go into any detail about product features etc. since that kind of information is best accessed at source – i.e. the product website and documentation. Instead, I will now focus on how Glyma addresses a longstanding gap in knowledge management systems.

Revisiting the problem of context

In most organisations, knowledge artefacts (such as documents and audio-visual media) are stored in hierarchical or relational structures (for example, a folder within a directory structure or a relational database). To be sure, stored artefacts are often tagged with metadata indicating when, where and by whom they were created, but experience tells me that such metadata is not as useful as it should be.  The problem is that the context in which an artefact was created is typically not captured. Anyone who has read a document and wondered, “What on earth were the authors thinking when they wrote this?” has encountered the problem of missing context.

Context, though hard to capture, is critically important in understanding the content of a knowledge artefact. Any artefact, when accessed without an appreciation of the context in which it was created is liable to be misinterpreted or only partially understood.

 Capturing context in the enterprise

Glyma addresses the issue of context rather elegantly at the level of the enterprise. I'll illustrate this point via an inspiring case study on the innovative use of SharePoint in education that Paul wrote about some time ago.

The case study

Here is the backstory in Paul’s words:

Earlier this year, I met Louis Zulli Jnr – a teacher out of Florida who is part of a program called the Centre of Advanced Technologies. We were co-keynoting at a conference and he came on after I had droned on about common SharePoint governance mistakes…The majority of Lou’s presentation showcased a whole bunch of SharePoint powered solutions that his students had written. The solutions were very impressive…We were treated to examples like:

  • IOS, Android and Windows Phone apps that leveraged SharePoint to display teacher’s assignments, school events and class times;
  • Silverlight based application providing a virtual tour of the campus;
  • Integration of SharePoint with Moodle (an open source learning platform)
  • An Academic Planner web application allowing students to plan their classes, submit a schedule, have them reviewed, track of the credits of the classes selected and whether a student’s selections meet graduation requirements;

All of this and more was developed by 16 to 18 year olds and all at a level of quality that I know most SharePoint consultancies would be jealous of…

Although the examples highlighted by Louis were very impressive, what Paul found more interesting were the anecdotes that Lou related about the dedication and persistence that students displayed in their work. Quoting again from Paul,

So the demos themselves were impressive enough, but that is actually not what impressed me the most. In fact, what had me hooked was not on the slide deck. It was the anecdotes that Lou told about the dedication of his students to the task and how they went about getting things done. He spoke of students working during their various school breaks to get projects completed and how they leveraged each other’s various skills and other strengths. Lou’s final slide summed his talk up brilliantly:

  • Students want to make a difference! Give them the right project and they do incredible things.
  • Make the project meaningful. Let it serve a purpose for the campus community.
  • Learn to listen. If your students have a better way, do it. If they have an idea, let them explore it.
  • Invest in success early. Make sure you have the infrastructure to guarantee uptime and have a development farm.
  • Every situation is different but there is no harm in failure. “I have not failed. I’ve found 10,000 ways that won’t work” – Thomas A. Edison

In brief:  these points highlight the fact that Lou’s primary role as director of the center is to create the conditions that make it possible for students to do great work.  The commercial-level quality of work turned out by students suggests that Lou’s knowledge on how to build high-performing teams is definitely worth capturing.

The question is: what’s the best way to do this (short of getting him to visit you and talk about his experiences)?

Seeing the forest for the trees

Paul recently interviewed Lou with the intent of documenting Lou's experiences. The conversation was recorded on video and then "Glymafied" – i.e. the video was mapped using IBIS (see Figure 4 below).

Figure 4: Knowledge capture via Glyma

There are a few points worth noting here:

  1. The content of the entire conversation is mapped out so one can “see” the conversation at a glance.
  2. The context in which a particular point (i.e. the content of a node) is made is clarified by the connections between a node and its neighbours. Moving left from a node gives a higher level picture, moving right drills down into details.

Of course, the reader will have noted that these are core IBIS capabilities that are available in Compendium (or any other IBIS tool).  Glyma offers much more. Consider the following:

  1. Relevant documents or audio visual media can be tagged to specific nodes to provide supplementary material. In this case the video recording was tagged to specific nodes that related to points made in the video. Clicking on the play icon attached to such a node plays the segment in which the content of the node is being discussed. This is a really nice feature as it saves the user from having to watch the whole video (or play an extended game of ffwd-rew to get to the point of interest). Moreover, this provides additional context that cannot be (or is not) captured in the map. For example, one can attach papers, links to web pages, Slideshare presentations etc. to fill in background and context.
  2. Glyma is integrated with an enterprise content management system by design. One can therefore link map and video content to the powerful built-in search and content aggregation features of these systems. For example, users would be able to enter a search from their intranet home page and retrieve not only traditional content such as documents, but also stories, reflections and anecdotes from experts such as Lou.
  3. Another critical aspect to intranet integration is the ability to provide maps as contextual navigation. Amazon’s ability to sell books that people never intended to buy is an example of the power of such navigation. The ability to execute the kinds of queries outlined in the previous point, along with contextual information such as user profile details, previous activity on the intranet and the area of an intranet the user is browsing, makes it possible to present recommendations of nodes or maps that may be of potential interest to users. Such targeted recommendations might encourage users to explore video (and other rich media) content.

Technical Aside: An interesting under-the-hood feature of Glyma is that it uses an implementation of a hypergraph database to store maps. (Note: this is a database that can store representations of graphs (maps) in which an edge can connect more than two vertices.) These databases enable the storing of very general graph structures. What this means is that Glyma can be extended to store any kind of map (Mind Maps, Concept Maps, Argument Maps or whatever)…and nodes can be shared across maps. This feature has not been developed as yet, but I mention it because it offers some exciting possibilities in the future.
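For readers wondering what a hypergraph representation might look like, here is a bare-bones Python sketch of my own. It illustrates only the general idea – edges that can join any number of nodes, and nodes shared across maps – and is not a description of Glyma's actual data model.

```python
# A bare-bones hypergraph: each edge is a set of node ids, so an edge can join
# any number of nodes, and a node can appear in edges belonging to different maps.
# Purely illustrative -- not Glyma's data model.

from collections import defaultdict

class Hypergraph:
    def __init__(self):
        self.nodes = {}                 # node_id -> content
        self.edges = defaultdict(set)   # edge_id -> set of node_ids

    def add_node(self, node_id, content):
        self.nodes[node_id] = content

    def add_edge(self, edge_id, node_ids):
        self.edges[edge_id] |= set(node_ids)

    def maps_containing(self, node_id):
        """Return every edge (e.g. map or sub-map) in which a node appears."""
        return [edge for edge, members in self.edges.items() if node_id in members]

g = Hypergraph()
g.add_node("q1", "How do we build high-performing student teams?")
g.add_node("i1", "Give them meaningful projects")
g.add_node("i2", "Invest in infrastructure early")
g.add_edge("map:louis-interview", {"q1", "i1", "i2"})  # one edge connecting three nodes
g.add_edge("map:governance-talk", {"i2"})              # the same node shared by another map
print(g.maps_containing("i2"))  # ['map:louis-interview', 'map:governance-talk']
```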

To summarise: since Glyma stores all its data in an enterprise-class database, maps can be made available across an organization.  It offers the ability to tag nodes with pretty much any kind of media (documents, audio/video clips etc.), and one can tag specific parts of the media that are relevant to the content of the node (a snippet of a video, for example). Moreover, the sophisticated search capabilities of the platform enable context aware search.  Specifically, we can search for nodes by keywords or tags, and once a node of interest is located, we can also view the map(s) in which it appears. The map(s) inform us of the context(s) relating to the node. The ability to display the “contextual environment” of a piece of information is what makes Glyma really interesting.

In metaphorical terms, Glyma enables us to see the forest for the trees.

…and so, to conclude

My aim in this post has been to introduce readers to the IBIS notation and trace its history from its origins in issue mapping to recent developments in knowledge management.  The history of a technique is valuable because it gives insight into the rationale behind its creation, which leads to a better understanding of the different ways in which it can be used. Indeed, it would not be an exaggeration to say that the knowledge management applications discussed above are but an extension of Rittel’s original reasons for inventing IBIS.

I would like to close this piece with a couple of observations from my experience with IBIS:

Firstly, the real surprise for me has been that the technique can capture most written arguments and conversations, despite having only three distinct elements and a very simple grammar. Yes, it does require some thought to do this, particularly when mapping discussions in real time. However, this cognitive overhead is good because it forces the mapper to think about what’s being said instead of just writing it down blind. Thoughtful transcription is the aim of the game. When done right, this results in a map that truly reflects an understanding of a complex issue.

Secondly, although most current discussions of IBIS focus on its applications in dialogue mapping, it has a more important role to play in mapping organizational knowledge. Maps offer a powerful means to navigate the complex network of knowledge within an organisation. The (aspirational) end-goal of such an effort would be a “global” knowledge map that highlights interconnections between different kinds of knowledge that exists within an organization. To be sure, such a map will make little sense to the human eye, but powerful search capabilities could make it navigable. To the extent that this is a feasible direction, I foresee IBIS becoming an important skill in the repertoire of knowledge management professionals.

Emergent design in enterprise IT

with 7 comments

Introduction

Over the last few months, I’ve published a number of posts in which the term emergent design makes a cameo appearance (see this article or this interview for example). Some readers may have noticed that although the term is used in various contexts in the articles/interviews, it is not explicitly defined anywhere. This is deliberate. Emergent design is…well, emergent, so a detailed definition is neither necessary nor useful – providing one can describe a set of guidelines for its practice.  My main aim in this post is to do just that. To keep things concrete I will discuss the guidelines in the context of the often bizarre world of enterprise IT, a domain that epitomizes top-down, plan-based design.

(Note: Before going any further a couple of clarifications are in order. Firstly, the word emergent as used here has nothing to do with emergence in complex systems. Secondly, the guidelines provided here are a starting point, not a comprehensive list.)

The wickedness of enterprise IT

Most IT initiatives in large organisations are planned, designed and executed in a top-down manner, with little or no attempt to understand the existing culture and/or on-the-ground realities. This observation applies not only to enterprise software projects, such as those involving Collaboration or Customer Relationship Management platforms, but also to design and process-driven IT functions like architecture and service management.

Top down approaches are liable to fail because enterprise IT displays many of the characteristics of wicked problems.  In particular, organization-wide IT initiatives:

  1. Are one-shot operations – for example, an ERP system is simply too expensive to implement over and over again.
  2. Have no stopping rule – enterprise IT systems are never completely done; there are always things to be fixed and additional features to be implemented.
  3. Are highly contentious – whether or not an initiative is good, or even necessary, depends on who you ask.
  4. Could be done in other, possibly “better”, ways – and the problem is that one person’s “better” is another one’s “worse”!
  5. Are essentially unique – and don’t let vendors or Big $$$ consultants tell you otherwise!

These characteristics make enterprise IT a socially complex problem – that is, different stakeholder groups have different perceptions of the problem that the initiative is intended to address.   The most important implication of social complexity is that it cannot be tackled using rational methods of planning, design and implementation that are taught in schools, propagated in books, or evangelized by standards authorities and assorted snake oil salespersons.

Enter emergent design

The term emergent design was coined by David Cavallo in the context of technology-driven education reforms in indigenous cultures (the original paper by David Cavallo can be accessed here). Cavallo observed that traditional systems engineering approaches that attempt to change an educational system in a top-down manner fail primarily because they do not take into account the unique features of local cultures. Instead, he found that using the existing culture as a starting point from which to work towards systemic change offered a much better chance of the new ways taking root. In his words, “[the] adoption and implementation of new methodologies needs to be based in, and grow from, the existing culture.”

Cavallo's words hold the key to understanding emergent design in the context of enterprise IT: any enterprise IT initiative necessarily affects many stakeholders, and should therefore start by taking their concerns seriously.

“Ah, so it’s like Agile software development,” concludes the Agilista.

Not quite. As I will discuss in the remainder of this post, although emergent design shares a number of features with Agile methods, there's considerably more to it than that. That said, chances are that good Agile coaches are emergent design practitioners without knowing it. This is something that will become apparent as we go on.

Guidelines for emergent design

I have, for a while, been thinking about what emergent design means in the context of enterprise IT. Among other things, I have been looking at how it might be applied to a wide variety of initiatives that are traditionally planned upfront – things such as offshoring and enterprise-wide projects like data warehouse or enterprise resource planning initiatives.

In one of those serendipitous occurrences, last week I happened to re-read an old series of articles entitled Confessions of a post-SharePoint Architect written by my friend, the ace sensemaker and emergent entrepreneur, Paul Culmsee. Although the series focuses on emergent design principles in the context of the Microsoft SharePoint platform, many of the points that Paul makes apply to enterprise IT in general. In addition to material drawn from Paul's blog, I also borrow from a few posts on my own blog. In the interests of space I have provided only a brief overview of the points because they have been elaborated elsewhere. The original pieces fill in a lot of relevant detail, so please do follow the links if you have the inclination and the time.

With that said, let’s get to it.

Be a midwife rather than an expert

In a paper entitled, On the planning crisis, Horst Rittel  (the man who coined the term wicked problem) wrote:

You do not learn in school how to deal with wicked problems…expertise and ignorance is distributed over all participants in a wicked problem. There is a symmetry of ignorance among those who participate because nobody knows better by virtue of his degrees or his status. There are no experts (which is irritating for experts), and if experts there are, they are only experts in guiding the process of dealing with a wicked problem, but not for the subject matter of the problem.

The first guideline of emergent design is to realize that no one is an expert – not you nor your Big $$$ consultant. The only way to build a robust and lasting system or process is for everyone to put their heads together and work towards it in a collaborative fashion, dispensing with the pretense that one can outsource one’s thinking to the “expert”. Indeed, the role of the “expert” is to create and foster the conditions for such collaboration to occur. Paul and I elaborate on this point at length in our book and this paper (summarized in this post).

In brief, the knowledge required to successfully implement an enterprise system is distributed across all stakeholders (analysts, consultants, architects and, above all, users). Pulling all this together into a coherent whole has more to do with facilitation and people skills than technology.

Ensure that governance is about enablement rather than control

Most organisations have onerous procedures to ensure that people do the right thing – the poor system lead drowns in a ream of documentation that she is required to “read and understand”; things have to be documented according to certain standards etc. etc. All these procedures are aimed at keeping people on the straight and narrow path of IT righteousness.

I submit that most governance regimes within organisations encourage a checkbox-based compliance mentality aimed at ensuring that people comply in letter, but not necessarily in spirit (actually, never in spirit). As Paul mentions in this post, governance ought to be about enablement rather than compliance or control.

There’s a very simple test to tell one from the other: when you come across a procedure such as an SOP or a methodology that you are required to follow, ask yourself this question: does this help me do my job?

If the answer is positive, the procedure is an enabler; if not, it is likely a control that is primarily intended as a CYA mechanism.

Do not penalize people for learning

The main rationale behind iterative and incremental approaches to software development is that they encourage (and take advantage of) continuous learning. Incremental increases in functionality are easier to test exhaustively and errors are also easier to correct. Reviews and retrospectives also tend to be more focused, leading to a better chance of lessons actually being learnt. Thanks to the Agile movement, this is now well known and understood in mainstream IT.

However, learning is not just a matter of using iterative/incremental methodologies; one also needs to build an environment that encourages it. This is a trickier matter because it depends on things that are outside an individual manager's control; indeed it has more to do with the entire IT function or even the organization. In an organisation with a strong blame culture, the culture tends to win against pretty much any methodology, agile or otherwise. Blame cultures preclude learning because mistakes are punished and people are scapegoated as a result. Check out this article on learning organizations for more on this topic, and this post for a more nuanced (realistic?) view.

Having said that about the importance of learning, it is also important to note that there are some situations where learning is less important. This is the case for work that can be planned and scripted in detail up front. It is important to be able to distinguish between the two types of situations…which brings us to the next point.

Understand the difference between complicated and complex initiatives

Requirements analysis is one of the first activities in traditional system development. Most enterprise IT initiatives that are driven by a vendor or consultant will have many sessions for this at the front-end of an engagement. Enterprise wisdom tells us that things need to be specified in detail at the start. The rationale behind this is to set requirements in stone so that the entire project can be planned in detail before actual implementation begins.   Such an approach is fine if one knows for sure:

  1. How the future is going to unfold and has appropriate contingencies in place for adverse events;
  2. That users have a clear idea of what they want, and
  3. That requirements will not change (or will change minimally).

It is obvious that this approach will be disastrous if any of the above assumptions are incorrect. Unfortunately it is more often the case that the assumptions do not hold, as evidenced by innumerable IT projects that have failed owing to inadequate risk management, a lack of scope clarity and/or uncontrolled change.

So how does one distinguish between initiatives that can be planned in detail upfront and those that can’t?

The distinction is best illustrated via an example: consider a project to replace a fixed line phone system by VoIP versus an ERP project. The first project has a fixed set of requirements across different groups. The second one, in contrast, involves diverse stakeholder groups, each with their own unique expectations of the system. Both projects are complicated from the technology point of view, but the second one has elements of wickedness arising from social complexity.  Consequently, the two projects cannot be run in the same way.  In particular, the first one can be planned in detail upfront while the second one cannot.  Borrowing from David Snowden’s Cynefin framework, we call the first type of project complicated and the second one complex. You need to understand which kind of initiative you are dealing with before deciding which project management approach would be appropriate.

Beware of platitudinous goals

The enterprise IT marketplace is one that is largely buzzword driven. The in-vogue buzzwords at the time this piece was written are the cloud and big data. Buzzwords, while sounding "right", are actually platitudes – words that are devoid of meaning because different people interpret and use them differently. The use of platitudes, therefore, results in confusion rather than clarity. For example, your information security guy may be wary of the cloud because he sees it as a potential security risk whereas a business user may view it positively because it liberates her from the clutches of a ponderous IT department. (Check out this video for a cautionary fable regarding a poorly thought out cloud strategy.)

People tend to use platitudes as mental shortcuts to avoid thinking things through and coming up with their own opinions. It is therefore pointless to ask a person who uses a platitude to clarify what he or she means: they have not thought it through and will therefore be unable to give you a good answer.

The best way to deconstruct a platitude is via an oblique approach that is best illustrated through an example.

Say someone tells you that they want to improve efficiency (a rather common platitude). Asking them to define efficiency is not a good way to go because the answer you get is likely to be couched in generalities such as higher productivity and performance, words that are world class platitudes in their own right!  Instead, it is better to ask them what difference would be apparent if efficiency were to improve. To answer this question, they would have to come down from platitude-land and start thinking about concrete, measurable outcomes.

Use open questions to broaden perspectives

A more general point to note from the foregoing is that the framing of questions matters, particularly when one wants people to come up with ideas. For example, instead of asking people what they like (or dislike) about a particular approach, it is generally better to ask them what aspects of the approach are important to them. The latter question is neutrally framed, so it does not impose any constraints on their thinking.

Another good way to get people thinking about possibilities is to ask them what they would like to do if there were no constraints (such as budget or time, say).  Conversely, if you encounter a constraining factor (like a company policy), it is sometimes helpful to ask what the intent behind the policy is.

If posed in the right way and in the right environment, answers to such questions get people to think beyond their immediate concerns and focus on purposes and outcomes instead.

Check out Paul’s posts on powerful questions to find out more about these perspective-expanding questions.

Understand the need for different types of thinking

One of the ironies of enterprise IT initiatives is that the most important decisions on projects have to be made when the information available is the least. As I wrote in the introduction to this paper,

The early stages of projects are fraught with ambiguity. Yet, it is at this “front end” of projects that the most important decisions have to be made. Front-end decisions are hard because there is:

  • uncertainty about scope, i.e. about what needs to be done;
  • uncertainty about rationale, i.e. why it needs to be done; and
  • uncertainty about approach, i.e. how it should be done.

Arguably, the lack of clarity regarding any of these can sow the seeds of failure in the early stages of a project.

The standard approach is to treat uncertainty as a problem that can be solved through convergent thinking – i.e. the kind of thinking that assumes a problem has a single “correct answer.” However, project uncertainty has a social dimension; different people have different perceptions of things like scope and rationale, as well as different coping mechanisms for ambiguity. So, project uncertainty is a wicked problem that has no single “right answer.” This can cause anxiety for some. One therefore needs to begin with divergent thinking, which is largely about generating ideas and options and move to convergent thinking only when:

  1. The group has a shared understanding of the main issues.
  2. An adequate set of options has been generated.

As I alluded to above, people tend to show a preference for one type of thinking over the other. The strength of collaborative problem solving lies precisely in the fact that a group is likely to have a mix of the two types of thinkers.

It is perhaps obvious, but still worth mentioning that the other standard way to deal with uncertainty is to impose a solution by diktat or governance. Clearly such approaches are doomed to fail because they suppress the problem instead of solving it.

Consider long term consequences

It is an unfortunate fact of life that cost tends to be the ultimate arbiter on IT decisions. Vendors know this, and take advantage of it when crafting their proposals. The contract goes to the lowest bidder and the rest, as they say, is tragedy. Although cost is an important criterion in IT decisions, making it the overriding concern is a recipe for future disappointment.

The general lesson to draw from this is that one must consider the longer-term consequences of one’s choices. This can be hard to do because the distant future is less salient than the present or the immediate future. A good way to look beyond immediate concerns (such as cost) is to use the solution after next principle proposed by Gerald Nadler and Shozo Hibino in their book, Breakthrough Thinking. The basic idea behind the principle is to get people to focus on the goals that lie beyond the immediate goal. The process of thinking about and articulating longer-term goals can often provide insights into potential problems with the current goals and/or how they are being achieved.

Build in spare capacity

In his book on antifragility, Nassim Nicholas Taleb points out that the opposite of fragility is not robustness or resilience, rather it is the ability to thrive on or benefit from uncertainty. There is no word in the English language to describe such behavior, and that is what led him to coin the term antifragile.

In a post inspired by the book, I outlined the elements of an antifragile IT strategy. One of the key points of such a strategy is the assumption that, despite our best laid plans, it is inevitable that something important will have been overlooked. It is therefore important to build in some spare capacity to deal with unexpected events and demands. Unfortunately, experience tells me that many enterprise IT systems operate at the limits of their capacity, with little or nothing in reserve. This is a disaster waiting to happen.

Design so as to increase your future choices

This is perhaps the most important point in my list because it encapsulates all the other points. I have adapted it from Heinz von Foerster’s ethical imperative which states that one should always act so as to increase the number of choices in the future.  This principle is useful as a tiebreaker between two designs that are close in all other respects. However, there is more to it than just that.  I have found it particularly useful in making decisions regarding IT outsourcing and software as a service. There is very little critical scrutiny of the benefits of these as claimed by vendors and advisories. This principle can help you see through the fog of marketing rhetoric and advisory hype.

Parting thoughts

One of the paradoxes of life is that the harder we strive for something – money, happiness or whatever – the more unattainable it seems to become. Indeed, some of the most financially successful people (Bill Gates and Warren Buffett, for example) became rich by doing what they loved. Their financial success was a happy byproduct of their engagement in their work. The economist John Kay formalized this paradoxical notion in his concept of obliquity – that certain goals are best attained via indirect means.

If you have been patient enough to read through this piece, you will have noted that some of the guidelines listed above have a hint of obliquity about them. This is no surprise; indeed it is inevitable in a design approach that values people over processes and improvisation (or even serendipity) over planning.

I usually conclude my posts with a summary of the main message. For reasons that should be obvious I will not do that here. Instead, I will end by pointing out yet another feature of emergent design that you have likely figured out already: the guidelines listed above are actually domain neutral; they can be applied to pretty much any area of endeavour. This is no surprise: wicked problems aren’t domain specific and therefore neither are the techniques to deal with them.  For example, see my interview with Neil Preston for a perspective on emergent design in organizational change and <advertisement> my book co-authored with Paul </advertisement> for ways in which it can be applied to domains as diverse as town planning and knowledge management.

…and now I really must stop for I have gone on too long.  Many thanks for your patience!

Acknowledgement

My thanks go out to Paul Culmsee for his feedback on a draft version of this post.

Written by K

October 28, 2014 at 9:04 pm
