Eight to Late

Sensemaking and Analytics for Organizations

Archive for the ‘sensemaking’ Category

Learning, evolution and the future of work


The Janus-headed rise of AI has prompted many discussions about the future of work.  Most, if not all, are about AI-driven automation and its consequences for various professions. We are warned to prepare for this change by developing skills that cannot be easily “learnt” by machines.  This sounds reasonable at first, but less so on reflection: if skills that were thought to be uniquely human less than a decade ago can now be performed, at least partially, by machines, there is no guarantee that any specific skill one chooses to develop will remain automation-proof in the medium term.

This raises the question of what we can do, as individuals, to prepare for a machine-centric workplace. In this post I offer a perspective on this question based on Gregory Bateson’s writings as well as my consulting and teaching experiences.

Levels of learning

Given that humans are notoriously poor at predicting the future, it should be clear that hitching one’s professional wagon to a specific set of skills is not a good strategy. Learning a set of skills may pay off in the short term, but it is unlikely to work in the long run.

So what can one do to prepare for an ambiguous and essentially unpredictable future?

To answer this question, we need to delve into an important, yet oft-overlooked aspect of learning.

A key characteristic of learning is that it is driven by trial and error.  To be sure, intelligence may help winnow out poor choices at some stages of the process, but one cannot eliminate error entirely. Indeed, it is not desirable to do so because error is essential for that “aha” instant that precedes insight.  Learning therefore has a stochastic element: the specific sequence of trial and error followed by an individual is unpredictable and likely to be unique. This is why everyone learns differently: the mental model I build of a concept is likely to be different from yours.

In a paper entitled The Logical Categories of Learning and Communication, Bateson pointed out that the stochastic nature of learning has an interesting consequence. As he puts it:

If we accept the overall notion that all learning is in some degree stochastic (i.e., contains components of “trial and error”), it follows that an ordering of the processes of learning can be built upon a hierarchic classification of the types of error which are to be corrected in the various learning processes.

Let’s unpack this claim by looking at his proposed classification:

Zero order learning –    Zero order learning refers to situations in which a given stimulus (or question) results in the same response (or answer) every time. Any instinctive behaviour – such as a reflex response on touching a hot kettle – is an example of zero order learning.  Such learning is hard-wired in the learner, who responds with the “correct” option to a fixed stimulus every single time. Since the response does not change with time, the process is not subject to trial and error.

First order learning (Learning I) –  Learning I is where an individual learns to select the correct option from a set of similar elements. It involves a specific kind of trial and error that is best explained through a couple of examples. The canonical example of Learning I is memorization: Johnny recognises the letter “A” because he has learnt to distinguish it from the 25 other possibilities. Another example is Pavlovian conditioning, wherein the subject’s response is altered by training: a dog that initially salivates only when it smells food is trained, by repetition, to salivate when it hears a bell.

A key characteristic of Learning I is that the individual learns to select the correct response from a set of comparable possibilities – comparable because the possibilities are of the same type (e.g. pick a letter from the alphabet). Consequently, first order learning cannot lead to a qualitative change in the learner’s response. Much of traditional school and university teaching is geared toward first order learning: students are taught to develop the “correct” understanding of concepts and techniques via a repetition-based process of trial and error.

As an aside, note that much of what goes under the banner of machine learning and AI can also be classed as first order learning.
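
To make the point concrete, here is a minimal sketch (my illustration, not Bateson’s) of a supervised classifier as a first order learner: through repeated error correction on labelled examples, it learns to pick the “correct” response from a fixed set of possibilities, but it can never respond with anything outside that set.

```python
# A minimal sketch of machine learning as first order learning.
# The model learns, by iterative error correction on labelled examples,
# to select the correct label from a fixed set of ten possibilities.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # images of handwritten digits 0-9
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

clf = LogisticRegression(max_iter=5000)
clf.fit(X_train, y_train)  # "training by repetition"

# Like Johnny picking "A" out of 26 letters, the model can only ever
# answer with one of the ten labels it was trained on.
print(f"accuracy: {clf.score(X_test, y_test):.2f}")
```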

Second order learning (Learning II) –  Second order learning involves a qualitative change in the learner’s response to a given situation. Typically, this occurs when a learner sees a familiar problem situation in a completely new light, thus opening up new possibilities for solutions.  Learning II therefore necessitates a higher order of trial and error, one that is beyond the ken of machines, at least at this point in time.

Complex organisational problems, such as determining a business strategy, require a second order approach because they cannot be precisely defined and therefore lack an objectively correct solution. As Horst Rittel observed, solutions to such problems are not true or false, but better or worse.

Much of the teaching that goes on in schools and universities hinders second order learning because it implicitly conditions learners to frame problems in ways that make them amenable to familiar techniques. However, as Russell Ackoff noted, “outside of school, problems are seldom given; they have to be taken, extracted from complex situations…”  Two aspects of this perceptive statement bear further consideration. Firstly, to extract a problem from a situation one has to appreciate or make sense of the situation.  Secondly, once the problem is framed, one may find that solving it requires skills that one does not possess. I expand on the implications of these points in the following two sections.

Sensemaking and second order learning

In an earlier piece, I described sensemaking as the art of collaborative problem formulation. There is a huge variety of sensemaking approaches; the gamestorming site describes many of them in detail. Most of these are aimed at exploring a problem space by harnessing the collective knowledge of a group of people who have diverse, even conflicting, perspectives on the issue at hand. The greater the diversity, the more complete the exploration of the problem space.

Sensemaking techniques help in elucidating the context in which a problem lives: the problem’s environment and, in particular, the constraints that the environment imposes on potential solutions. As Bateson puts it, context is “a collective term for all those events which tell an organism among what set of alternatives [it] must make [its] next choice.” This raises the question of how these alternatives are to be determined. The question cannot be answered directly because the answer depends on the specifics of the environment in which the problem lives. Surfacing these specifics by asking the right questions is the task of sensemaking.

As a simple example, if I ask you to help me formulate a business strategy, you are likely to begin by asking me a number of questions such as:

  • What kind of business are you in?
  • Who are your customers?
  • What’s the competitive landscape?
  • …and so on

Answers to these questions fill out the context in which the business operates, thus making it possible to formulate a meaningful strategy.

It is important to note that context rarely remains static; it evolves in time. Indeed, many companies have faded away because they failed to appreciate changes in their business context: Kodak is a well-known example, and there are many more. So organisations must evolve too. However, it is a mistake to think of an organisation and its environment as evolving independently; the two always evolve together. Such co-evolution is as true of natural systems as it is of social ones. As Bateson tells us:

…the evolution of the horse from Eohippus was not a one-sided adjustment to life on grassy plains. Surely the grassy plains themselves evolved [on the same footing] with the evolution of the teeth and hooves of the horses and other ungulates. Turf was the evolving response of the vegetation to the evolution of the horse. It is the context which evolves.

Indeed, one can think of evolution by natural selection as a process by which nature learns (in a second-order sense).

The foregoing discussion points to another problem with traditional approaches to education: we are implicitly taught that problems, once solved, stay solved. It is seldom so in real life because, as we have noted, the environment evolves even if the organisation remains static. In the worst case scenario (which happens often enough) the organisation will die if it does not adapt appropriately to changes in its environment. If this is true, then second-order learning is important not just for individuals but for organisations as a whole. This harks back to the notion of the learning organisation, developed and evangelized by Peter Senge in the early 90s. A learning organisation is one that continually adapts itself to a changing environment. As one might imagine, it is an ideal that is difficult to achieve in practice. Indeed, attempts to create learning organisations have often ended up with paradoxical outcomes. In view of this, it seems more practical for organisations to focus on developing what one might call learning individuals – people who are capable of adapting to changes in their environment by continual learning.

Learning to learn

Cliches aside, the modern workplace is characterised by rapid, technology-driven change. It is difficult for an individual to keep up because one has to:

    • Figure out which changes are significant and therefore worth responding to.
    • Be capable of responding to them meaningfully.

The media hype about the sexiest job of the 21st century and the like further fuels the fear of obsolescence. One feels an overwhelming pressure to do something. The old adage about combating fear with action holds true here, but the question is: what meaningful action can one take?

The fact that this question arises points to the failure of traditional university education. With its undue focus on teaching specific techniques, the more important second-order skill of learning to learn has fallen by the wayside. In reality, though, it is now easier than ever to learn new skills on one’s own. When I was hired as a database architect in 2004, there were few quality resources available for free. Ten years later, I was able to start teaching myself machine learning using top-notch software, backed by countless quality tutorials in blog and video formats. However, I wasted a lot of time in getting started because it took me a while to get over my reluctance to explore without a guide. Cultivating the habit of learning on my own earlier would have made it a lot easier.

Back to the future of work

When industry complains about new graduates being ill-prepared for the workplace, educational institutions respond by updating curricula with more (New!! Advanced!!!) techniques. However, the complaints continue, and Bateson’s notion of second order learning tells us why:

  • Firstly, problem formulation is distinct from problem solving; the difference between the two is akin to that between human and machine intelligence.
  • Secondly, one does not know what skills one may need in the future, so instead of learning specific skills, one has to learn how to learn.

In my experience, it is possible to teach these higher order skills to students in a classroom environment. However, it has to be done in a way that starts from where students are in terms of skills and dispositions and moves them gradually to less familiar situations. The approach is based on David Cavallo’s work on emergent design, which I have often used in my consulting work. Two examples may help illustrate how this works in the classroom:

  • Many analytically-inclined people think sensemaking is a waste of time because they see it as “just talk”. So, when teaching sensemaking, I begin with quantitative techniques to deal with uncertainty, such as Monte Carlo simulation, and then gradually introduce examples of uncertainties that are hard, if not impossible, to quantify. This progression leads naturally to problem situations in which they see the value of sensemaking.
  • When teaching data science, it is difficult to comprehensively cover basic machine learning algorithms in a single semester. However, students are often reluctant to explore on their own because they tend to be daunted by the mathematical terminology and notation. To encourage exploration (i.e. learning to learn) we use a two-step approach: a) classes focus on intuitive explanations of algorithms and the commonalities between concepts used in different algorithms. The classes are not lectures but interactive sessions involving lots of exercises and Q&A; b) assignments go beyond what is covered in the classroom (but remain well within reach of most students); this forces students to learn on their own. The approach works: just the other day, my wonderful co-teacher, Alex, commented on the amazing learning journey of some of the students – so tentative and hesitant at first, but well on their way to becoming confident data professionals.

In the end, though, whether or not a learner learns depends on the individual. As Bateson once noted:

Perhaps the best documented generalization in the field of psychology is that, at any given moment, the behavioral characteristics of a mammal, and especially of [a human], depend upon the previous experience and behavior of that individual.

The choices we make when faced with change depend on our individual natures and experiences. Educators can’t do much about the former but they can facilitate more meaningful instances of the latter, even within the confines of the classroom.

Written by K

July 5, 2018 at 6:05 pm

Risk management and organizational anxiety


As it is practised, risk management is a rational, means-end based process: risks are identified, analysed and then “solved” (or mitigated). Although these steps appear objective, each of them involves human perceptions, biases and interests. Where Jill sees an opportunity, Jack may see only risks.

Indeed, the problem of differences in stakeholder perceptions is broader than risk analysis. The recognition that such differences in world-views may be irreconcilable is what led Horst Rittel to coin the now well-known term, wicked problem. These problems tend to be made up of complex, interconnected and interdependent issues, which makes them difficult to tackle using standard rational-analytical methods of problem solving.

Most high-stakes risks that organisations face have elements of wickedness – indeed any significant organisational change is fraught with risk. Murphy rules; things can go wrong, and they often do. The current paradigm of risk management, which focuses on analyzing and quantifying risks using rational methods, is not broad enough to account for the wicked aspects of risk.

I had been thinking about this for a while when I stumbled on a fascinating paper by Robin Holt entitled Risk Management: The Talking Cure, which outlines a possible approach to analysing interconnected risks. In brief, Holt draws a parallel between psychoanalysis (as a means to tackle individual anxiety) and risk management (as a means to tackle organizational anxiety). In this post, I present an extensive discussion and interpretation of Holt’s paper. Although it is more about the philosophy of risk management than its practice, I found the paper interesting, relevant and thought provoking. My hope is that some readers might find it so too.

Background

Holt begins by noting that modern life is characterized by uncertainty. Paradoxically, technological progress, which should have increased our sense of control over our surroundings and lives, has actually heightened our personal feelings of uncertainty. Nor is this sense of uncertainty allayed by rational analysis; on the contrary, analysis may even increase it by, for example, drawing our attention to risks that we would otherwise have remained unaware of. Risk thus becomes a lens through which we perceive the world. The danger is that this can paralyze. As Holt puts it,

…risk becomes the only backdrop to perceiving the world and perception collapses into self-inhibition, thereby compounding uncertainty through inertia.

Most of us know this through experience: we have all, at one time or another, been frozen into inaction by perceived risks. We also “know” at a deep personal level that the standard responses to risk are inadequate, because many of our worries tend to be inchoate and therefore can neither be coherently articulated nor analysed. In Holt’s words:

…People do not recognize [risk] from the perspective of a breakdown in their rational calculations alone, but because of threats to their forms of life – to the non-calculative way they see themselves and the world. [Mainstream risk analysis] remains caught in the thrall of its own ‘expert’ presumptions, denigrating the very lay knowledge and perceptions on the grounds that they cannot be codified and institutionally expressed.

Holt suggests that risk management should account for the “codified, uncodified and uncodifiable aspects of uncertainty from an organizational perspective.” This entails a mode of analysis that takes into account different, even conflicting, perspectives in a non-judgemental way. In essence, he suggests “talking it over” as a means to increase awareness of the contingent nature of risks rather than a means of definitively resolving them.

Shortcomings of risk analysis

The basic aim of risk analysis (as it is practiced) is to contain uncertainty within set bounds that are determined by an organisation’s risk appetite.  As mentioned earlier, this process begins by identifying and classifying risks. Once this is done, one determines the probability and impact of each risk. Then, based on priorities and resources available (again determined by the organisation’s risk appetite) one develops strategies to mitigate the risks that are significant from the organisation’s perspective.
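
As a concrete illustration of the scoring step in this standard process, here is a minimal sketch with made-up risks and figures. It shows exactly the kind of calculation that, as argued below, is too narrow on its own:

```python
# A minimal sketch of conventional risk analysis: score each identified
# risk by probability x impact and flag those whose expected loss
# exceeds the organisation's risk appetite. All figures are hypothetical.
risks = [
    # (description, probability, impact in $k)
    ("key vendor fails to deliver", 0.10, 500),
    ("scope creep in phase 2",      0.40, 120),
    ("data migration errors",       0.25, 200),
]

risk_appetite = 45  # expected loss (in $k) the organisation will tolerate

for desc, prob, impact in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
    expected_loss = prob * impact
    action = "mitigate" if expected_loss > risk_appetite else "accept"
    print(f"{desc:30s} expected loss ${expected_loss:5.1f}k -> {action}")
```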

However, the messiness of organizational life makes it difficult to see risk in such a clear-cut way. We may pretend to be rational about it, but in reality we perceive it through the lens of our backgrounds, interests and experiences. Based on these perceptions we rationalize our action (or inaction!) and simply get on with life. As Holt writes:

The concept [of risk] refers to…the mélange of experience, where managers accept contingencies without being overwhelmed to a point of complete passivity or confusion. Managers learn to recognize the differences between things, to acknowledge their and our limits. Only in this way can managers be said to make judgements, to be seen as being involved in something called the future.

Then, in a memorable line, he goes on to say:

The future, however, lasts a long time, so much so as to make its containment and prediction an often futile exercise.

Although one may well argue that this is not the case for many organizational risks, it is undeniable that certain mitigation strategies (for example, accepting risks that turn out to be significant later) may have serious consequences in the not-so-near future.

Advice from a politician-scholar

So how can one address the slippery aspects of risk – the things people sense intuitively, but find difficult to articulate?

Taking inspiration from Machiavelli, Holt suggests reframing risk management as a means to determine wise actions in the face of the contradictory forces of fortune and necessity.  As Holt puts it:

Necessity describes forces that are unbreachable but manageable by acceptance and containment—acts of God, tendencies of the species, and so on. In recognizing inevitability, [one can retain one’s] position, enhancing it only to the extent that others fail to recognize necessity. Far more influential, and often confused with necessity, is fortune. Fortune is elusive but approachable. Fortune is never to be relied upon: ‘The greatest good fortune is always least to be trusted’; the good is often kept underfoot and the ridiculous elevated, but it provides [one] with opportunity.

Wise actions involve resolve and cunning (which I interpret as political nous). This entails understanding that we do not have complete (or even partial) control over events that may occur in the future. The future is largely unknowable as are people’s true drives and motivations. Yet, despite this, managers must act.  This requires personal determination together with a deep understanding of the social and political aspects of one’s environment.

And a little later,

…risk management is not the clear conception of a problem coupled to modes of rankable resolutions, or a limited process, but a judgemental  analysis limited by the vicissitudes of budgets, programmes, personalities and contested priorities.

In short: risk management in practice tends to be a long way from how it is portrayed in textbooks and the professional literature.

The wickedness of risk management

Most managers and those who work under their supervision have been schooled in the rational-scientific approach to problem solving. It is no surprise, therefore, that they use it to manage risks: they gather and analyse information about potential risks, formulate potential solutions (or mitigation strategies) and then implement the best one (according to predetermined criteria). However, this method works only for problems that are straightforward or tame, rather than wicked.

Many of the issues that risk managers are confronted with are wicked, messy or both.  Often, though, such problems are treated as being tame.  Reducing a wicked or messy problem to one amenable to rational analysis invariably entails overlooking the views of certain stakeholder groups or, worse, ignoring key aspects of the problem.  This may work in the short term, but will only exacerbate the problem in the longer run. Holt illustrates this point as follows:

A primary danger in mistaking a mess for a tame problem is that it becomes even more difficult to deal with the mess. Blaming ‘operator error’ for a mishap on the production line and introducing added surveillance is an illustration of a mess being mistaken for a tame problem. An operator is easily isolated and identifiable, whereas a technological system or process is embedded, unwieldy and, initially, far more costly to alter. Blaming operators is politically expedient. It might also be because managers and administrators do not know how to think in terms of messes; they have not learned how to sort through complex socio-technical systems.

It is important to note that although many risk management practitioners recognize the essential wickedness of the issues they deal with, the practice of risk management is not quite up to the task of dealing with such matters. One step towards remedying this is to develop a shared (enterprise-wide) understanding of risks by soliciting input from diverse stakeholder groups, some of whom may hold opposing views.

The skills required to do this are very different from the analytical techniques that are the focus of the problem solving and decision making methods taught in colleges and business schools. Analysis is replaced by sensemaking – a collaborative process that harnesses the wisdom of a group to arrive at a collective understanding of a problem and thence a common commitment to a course of action. This necessarily involves skills that do not appear in the lexicon of rational problem solving: negotiation, facilitation, rhetoric and others of the same ilk that are dismissed as irrelevant by the scientifically oriented analyst.

In the end though, even this may not be enough: different stakeholders may perceive a given “risk” in wildly different ways, so much so that no consensus can be reached. The problem is that the current framework of risk management requires the analyst to perform an objective analysis of the situation or problem, even in situations where this is not possible.

To get around this Holt suggests that it may be more useful to see risk management as a way to encounter problems rather than analyse or solve them.

What does this mean?

He sees risk management as a forum in which people can talk about risks openly:

To enable organizational members to encounter problems, risk management’s repertoire of activity needs to engage their all too human components: belief, perception, enthusiasm and fear.

This gets to the root of the problem: risk matters because it increases anxiety and generally affects people’s sense of wellbeing. Given this, it is no surprise that Holt’s proposed solution draws on psychoanalysis.

The analogy between psychoanalysis and risk management

Any discussion of psychoanalysis – especially one that is intended for an audience largely schooled in rational/scientific methods of analysis – must begin with the acknowledgement that the claims of psychoanalysis cannot be tested. That is, since psychoanalysis speaks of unobservable “objects” such as the ego and the unconscious, any claims it makes about these concepts cannot be proven or falsified.

However, as Holt suggests, this is exactly what makes it a good fit for encountering (as opposed to analyzing) risks. In his words:

It is precisely because psychoanalysis avoids an overarching claim to produce testable, watertight, universal theories that it is of relevance for risk management. By so avoiding universal theories and formulas, risk management can afford to deviate from pronouncements using mathematical formulas to cover the ‘immanent indeterminables’ manifest in human perception and awareness and systems integration.

His point is that the relationship between psychoanalysis and the individual parallels that between risk management and the organisation:

We understand ourselves not according to a template but according to our own peculiar, beguiling histories. Metaphorically, risk management can make explicit a similar realization within and between organizations. The revealing of an unconscious world and its being in a constant state of tension between excess and stricture, between knowledge and ignorance, is emblematic of how organizational members encountering messes, wicked problems and wicked messes can be forced to think.

In brief, Holt suggests that what psychoanalysis does for the individual, risk management ought to do for the organisation.

Talking it over – the importance of conversations

A key element of psychoanalysis is the conversation between the analyst and patient. Through this process, the analyst attempts to get the patient to become aware of hidden fears and motivations. As Holt puts it,

Psychoanalysis occupies the point of rupture between conscious intention and unconscious desire — revealing repressed or overdetermined aspects of self-organization manifest in various expressions of anxiety, humour, and so on.

And then, a little later, he makes the connection to organisations:

The fact that organizations emerge from contingent, complex interdependencies between specific narrative histories suggests that risk management would be able to use similar conversations to psychoanalysis to investigate hidden motives, to examine…the possible reception of initiatives or strategies from the perspective of inherently divergent stakeholders, or to analyse the motives for and expectations of risk management itself. This fundamentally reorients the perspective of risk management from facing apparent uncertainties using technical assessment tools, to using conversations devoid of fixed formulas to encounter questioned identities, indeterminate destinies, multiple and conflicting aims and myriad anxieties.

Through conversations involving groups of stakeholders who have different risk perceptions, one might get a better understanding of a particular risk and hence, perhaps, design a more effective mitigation strategy. More importantly, one may even realise that certain risks are not risks at all, or that others which seem straightforward have implications that would have remained hidden were it not for the conversation.

These collective conversations would take place in workshops…

…that tackle problems as wicked messes, avoid lowest-denominator consensus in favour of continued discovery of alternatives through conversation, and are instructed by metaphor rather than technical taxonomy, risk management is better able to appreciate the everyday ambivalence that fundamentally influences late-modern organizational activity. As such, risk management would be not merely a rationalization of uncertain experience but a structured and contested activity involving multiple stakeholders engaged in perpetual translation from within environments of operation and complexes of aims.

As a facilitator of such workshops, the risk analyst provokes stakeholders to think about feelings and motivations that may be “out of bounds” in a standard risk analysis workshop. Such a paradigm goes well beyond mainstream risk management because it addresses the risk-related anxieties and fears of the individuals affected.

Conclusion

This brings me to the end of my not-so-short summary of Holt’s paper. Given the length of this post, I reckon I should keep my closing remarks short. So I’ll leave you with a paraphrase of the last line of the paper, which summarises its main message: risk management ought to be about developing an organizational capacity for overcoming risks, freed from the presumption of absolute control.

Written by K

February 5, 2018 at 11:21 pm

Uncertainty, ambiguity and the art of decision making


A common myth about decision making in organisations is that it is, by and large, a rational process.   The term rational refers to decision-making methods that are based on the following broad steps:

  1. Identify available options.
  2. Develop criteria for rating options.
  3. Rate options according to criteria developed.
  4. Select the top-ranked option.
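
To make this concrete, here is a minimal sketch of steps 2 to 4 as a weighted-scoring calculation; the options, criteria and weights are all hypothetical:

```python
# A minimal sketch of rational decision making: rate options against
# weighted criteria and select the top-ranked one. All names, weights
# and scores are hypothetical.
criteria_weights = {"cost": 0.5, "speed": 0.3, "quality": 0.2}

options = {  # scores out of 10 against each criterion
    "vendor A": {"cost": 7, "speed": 5, "quality": 8},
    "vendor B": {"cost": 4, "speed": 9, "quality": 6},
    "vendor C": {"cost": 8, "speed": 6, "quality": 5},
}

def weighted_score(scores):
    return sum(criteria_weights[c] * s for c, s in scores.items())

ranked = sorted(options, key=lambda o: weighted_score(options[o]), reverse=True)
for name in ranked:
    print(f"{name}: {weighted_score(options[name]):.1f}")
print("selected:", ranked[0])  # step 4: pick the top-ranked option
```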

Although this appears to be a logical way to proceed, it is often difficult to put into practice, primarily because of uncertainty about matters relating to the decision.

Uncertainty can manifest itself in a variety of ways: one could be uncertain about facts, the available options, decision criteria or even one’s own preferences for options.

In this post, I discuss the role of uncertainty in decision making and, more importantly, how one can make well-informed decisions in such situations.

A bit about uncertainty

It is ironic that the term uncertainty is itself vague when used in the context of decision making. There are at least five distinct senses in which it is used:

  1. Uncertainty about decision options.
  2. Uncertainty about one’s preferences for options.
  3. Uncertainty about what criteria are relevant to evaluating the options.
  4. Uncertainty about what data is needed (data relevance).
  5. Uncertainty about the data itself (data accuracy).

Each of these is qualitatively different: uncertainty about data accuracy (item 5 above) is very different from uncertainty regarding decision options (item 1). The former can potentially be dealt with using statistics whereas the latter entails learning more about the decision problem and its context, ideally from different perspectives. Put another way, item 5 is essentially a technical matter whereas item 1 is a deeper issue that may have social, political and – as we shall see – even behavioural dimensions. It is therefore reasonable to expect that the two situations call for vastly different approaches.

Quantifiable uncertainty

A common problem in project management is the estimation of task durations. Typically, what’s requested is a “best guess” of the time (in hours or days) it will take to complete a task. Many project schedules represent task durations by point estimates, i.e. by single numbers. The Gantt Chart shown in Figure 1 is a common example: in it, each task’s duration is represented by a single number, its expected duration. This is misleading because a single number conveys a sense of certainty that is unwarranted. It is far more accurate, not to mention safer, to quote a range of possible durations.

Figure 1: Gantt Chart (courtesy Wikimedia)

In general, quantifiable uncertainties, such as those conveyed in estimates, should always be quoted as ranges – something along the following lines: task A may take anywhere between 2 and 8 days, with a most likely completion time of 4 days (Figure 2).

Figure 2: Task completion likelihood (3 point estimates)

In this example, aside from stating that the task will finish sometime between 2 and 8 days, the estimator implicitly asserts that the likelihood of finishing before 2 days or after 8 days is zero.  Moreover, she also implies that some completion times are more likely than others. Although it may be difficult to quantify the likelihood exactly, one can begin by making simple (linear!) approximations as shown in Figure 3.

Figure 3: Simple probability distribution based on the estimates in Figure 2

The key takeaway from the above is that quantifiable uncertainties are shapes rather than single numbers.  See this post and this one for details on how far this kind of reasoning can take you. That said, one should always be aware of the assumptions underlying the approximations. Failure to do so can be hazardous to the credibility of estimators!
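
To illustrate, here is a minimal sketch that treats the 2/4/8-day estimate above as a triangular distribution and reads off completion likelihoods by Monte Carlo sampling (my illustration of the idea, not code from the linked posts):

```python
# A minimal sketch: treat the 2 / 4 / 8 day three-point estimate as a
# triangular distribution and estimate completion likelihoods by
# Monte Carlo sampling.
import numpy as np

rng = np.random.default_rng(42)
optimistic, most_likely, pessimistic = 2, 4, 8  # days

samples = rng.triangular(optimistic, most_likely, pessimistic, size=100_000)

print(f"mean duration:          {samples.mean():.2f} days")
print(f"P(done within 4 days):  {(samples <= 4).mean():.2f}")
print(f"85th percentile:        {np.percentile(samples, 85):.2f} days")
```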

Although I haven’t explicitly said so, estimation as described above has a subjective element. Among other things, the quality of an estimate depends on the judgement and experience of the estimator. As such, it is prone to being affected by errors of judgement and cognitive biases.  However, provided one keeps those caveats in mind, the probability-based approach described above is suited to situations in which uncertainties are quantifiable, at least in principle. That said, let’s move on to more complex situations in which uncertainties defy quantification.

Introducing ambiguity

The economist Frank Knight was possibly the first person to draw the distinction between quantifiable and unquantifiable uncertainties.  To make things really confusing, he called the former risk and the latter uncertainty. In his doctoral thesis, published in 1921, he wrote:

…it will appear that a measurable uncertainty, or “risk” proper, as we shall use the term, is so far different from an unmeasurable one that it is not in effect an uncertainty at all. We shall accordingly restrict the term “uncertainty” to cases of the non-quantitative type (p.20)

Terminology has moved on since Knight’s time; the term uncertainty now means many different things, depending on context. In this piece, we’ll use the term uncertainty to refer to quantifiable uncertainty (as in the task estimate of the previous section) and use ambiguity to refer to unquantifiable uncertainty. In essence, then, we’ll use the term uncertainty for situations in which we know what we’re measuring (i.e. the facts) but are uncertain about their numerical or categorical values, whereas we’ll use the word ambiguity for situations in which we are uncertain about what the facts are or which facts are relevant.

As a test of understanding, you may want to classify each of the five points made in the second section of this post as either uncertainty or ambiguity (answer below).

Answer: 1 through 4 are ambiguous and 5 is uncertain.

How ambiguity manifests itself in decision problems

The distinction between uncertainty and ambiguity points to a problem with quantitative decision-making techniques such as cost-benefit analysis, multicriteria decision making methods or the analytic hierarchy process. All these methods assume that decision makers are aware of all the available options, their preferences for them, the relevant evaluation criteria and the data needed. This is almost never the case for consequential decisions. To see why, let’s take a closer look at the different ways in which ambiguity can play out in the rational decision making process mentioned at the start of this article.

  1. The first step in the process is to identify available options. In the real world, however, options often cannot be enumerated or articulated fully. Furthermore, as options are articulated and explored, new options and sub-options tend to emerge. This is particularly true if the options depend on how future events unfold.
  2. The second step is to develop criteria for rating options. As anyone who has been involved in deciding on a contentious issue will confirm, it is extremely difficult to agree on a set of decision criteria for issues that affect different stakeholders in different ways.  Building a new road might improve commute times for one set of stakeholders but result in increased traffic in a residential area for others. The two criteria will be seen very differently by the two groups. In this case, it is very difficult for the two groups to agree on the relative importance of the criteria or even their legitimacy. Indeed, what constitutes a legitimate criterion is a matter of opinion.
  3. The third step is to rate options. The problem here is that real-world options often cannot be quantified or rated in a meaningful way. Many of life’s dilemmas fall into this category. For example, a decision to accept or decline a job offer is rarely made on the basis of material gain alone. Moreover, even where ratings are possible, they can be highly subjective. For example, when considering a job offer, one candidate may give more importance to financial matters whereas another might consider lifestyle-related matters (flexi-hours, commuting distance etc.) to be paramount. Another complication here is that there may not be enough information to settle the matter conclusively. As an example, investment decisions are often made on the basis of quantitative information that is based on questionable assumptions.

A key consequence of the above is that such ambiguous decision problems are socially complex – i.e. different stakeholders could have wildly different perspectives on the problem itself.   One could say the ambiguity experienced by an individual is compounded by the group.

Before going on, I should point out that acute versions of such ambiguous decision problems go by many different names in the management literature. For example:

  • Wicked problems
  • Messes
  • Wicked messes

All these terms are more or less synonymous: the root cause of the difficulty in every case is ambiguity (or unquantifiable uncertainty), which prevents a clear formulation of the problem.

Social complexity is hard enough to tackle as it is, but there’s another issue that makes things even harder: ambiguity invariably triggers negative emotions such as fear and anxiety in individuals who make up the group.  Studies in neuroscience have shown that in contrast to uncertainty, which evokes logical responses in people, ambiguity tends to stir up negative emotions while simultaneously suppressing the ability to think logically.  One can see this playing out in a group that is debating a contentious decision: stakeholders tend to get worked up over issues that touch on their values and identities, and this seems to limit their ability to look at the situation objectively.

Tackling ambiguity

Summarising the discussion thus far: rational decision making approaches are based on the assumption that stakeholders have a shared understanding of the decision problem as well as the facts and assumptions around it. These conditions are clearly violated in the case of ambiguous decision problems. Therefore, when confronted with a decision problem that has even a hint of ambiguity, the first order of the day is to help the group reach a shared understanding of the problem.  This is essentially an exercise in sensemaking, the art of collaborative problem formulation. However, this is far from straightforward because ambiguity tends to evoke negative emotions and attendant defensive behaviours.

The upshot of all this is that any approach to tackling ambiguity must begin by taking the concerns of individual stakeholders seriously. Unless this is done, it will be impossible for the group to coalesce around a consensus decision. Indeed, ambiguity-laden decisions in organisations invariably fail when they overlook the concerns of specific stakeholder groups. The high failure rate of organisational change initiatives (60-70% according to this Deloitte report) is largely attributable to this point.

There are a number of techniques that one can use to gather and synthesise diverse stakeholder viewpoints and thus reach a shared understanding of a complex or ambiguous problem. These techniques are often referred to as problem structuring methods (PSMs). I won’t go into these in detail here; for examples, check out Paul Culmsee’s articles on dialogue mapping and Barry Johnson’s introduction to polarity management. There are many more techniques in the PSM stable. All of them are intended to help a group reconcile different viewpoints and thus reach a common basis from which one can proceed to the next step (i.e., make a decision on what should be done). In other words, these techniques help reduce ambiguity.

But there’s more to it than a bunch of techniques.  The main challenge is to create a holding environment that enables such techniques to work. I am sure readers have been involved in a meeting or situation where the outcome seemed predetermined by management or was undermined by self-interest. When stakeholders sense this, no amount of problem structuring is going to help.  In such situations one needs to first create the conditions for open dialogue to occur. This is precisely what a holding environment provides.

Creating such a holding environment is difficult in today’s corporate world, but not impossible. Note that this is not an idealist’s call for an organisational utopia. Rather, it involves the application of a practical set of tools that address the diverse, emotion-laden reactions that people often have when confronted with ambiguity.   It would take me too far afield to discuss PSMs and holding environments any further here. To find out more, check out my papers on holding environments and dialogue mapping in enterprise IT projects, and (for a lot more) the Heretic’s Guide series of books that I co-wrote with Paul Culmsee.

The point is simply this: in an ambiguous situation, a good decision – whatever it might be – is most likely to be reached by a consultative process that synthesises diverse viewpoints rather than by an individual or a clique.  However, genuine participation (the hallmark of a holding environment) in such a process will occur only after participants’ fears have been addressed.

Wrapping up

Standard approaches to decision making exhort managers and executives to begin with facts, and if none are available, to gather them diligently prior to making a decision. However, most real-life decisions are fraught with uncertainty, so it may be best to begin with what one doesn’t know and figure out how to make the best possible decision under those “constraints of ignorance.” In this post I’ve attempted to outline what such an approach would entail. The key point is to figure out the kind of uncertainty one is dealing with and to choose an approach that works for it. I’d argue that most decision-making debacles stem from a failure to appreciate this point.

Of course, there’s a lot more to this approach than I can cover in the span of a post, but that’s a story for another time.

Note: This post is written as an introduction to the Data and Decision Making subject that is part of the core curriculum of the Master of Data Science and Innovation program at UTS. I’m co-teaching the subject in Autumn 2018 with Rory Angus and Alex Scriven.

Written by K

March 9, 2017 at 10:04 am

Data science and sensemaking – tales from two hackathons


“It isn’t that they can’t see the solution. It is that they can’t see the problem.” – GK Chesterton

Introduction

Examples of vendor-generated hype about data science are not hard to find; I found one on the very first site I visited: a large technology and services vendor which claims, in its own words, that its analytics solutions help organisations “engage with data to answer the toughest business questions, uncover patterns and pursue breakthrough ideas.”  I’ve deliberately avoided linking to the guilty party because there are many others that spout similar rhetoric.

Unfortunately, the hype seems to work: according to Gartner, “by 2020, predictive and prescriptive analytics will attract 40% of enterprises’ net new investment in business intelligence and analytics.” This trend is accompanied by a concomitant increase in demand for data science education, fuelled by remarks to the effect that data science is “The Sexiest Job of the 21st Century.”

By and large, data science education tends to focus on algorithms and technology, but the practice of data science involves much more. The vendor who claims that technology can help organisations grapple with the “toughest business questions” and “pursue breakthrough ideas” is singularly silent about where these questions or ideas come from. Data is meaningless without a meaningful hypothesis.  Problem is, in the real world questions and hypotheses aren’t obvious; one has to work to formulate them. As the management icon Russell Ackoff once said, “Outside of school, problems are seldom given; they have to be taken, extracted from complex situations…”

The art of taking problems is what sensemaking is all about.

Unfortunately, it is a skill that is typically ignored by data science educators.

Why?

Probably because it is hard to teach…but the good news is that it can be learnt. Like most tacit skills, sensemaking is best learnt by doing, that is, by formulating problems in real-world situations.  Before I get to that, however, let’s take a brief detour.

Real world problems are characterised by ambiguity

An important aspect of real-world problems – as opposed to classroom ones – is that they are invariably fraught with ambiguity. For example, a customer’s requirements may be vague or the available data incomplete and messy. What this means is that there is no guarantee one will be able to formulate a well-posed problem, let alone get a useful answer.   Worse, unlike a risk-based situation in which uncertainty can be quantified, one cannot even figure out the odds of success.

The human brain processes quantifiable uncertainty (aka risk) and ambiguity very differently. The former, which can be calculated, is dealt with by the prefrontal cortex, which is responsible for decision making and goal-oriented thinking. Ambiguity, on the other hand, is processed by the amygdala, which deals with emotions.  The upshot of this is that ambiguity evokes an emotional response, the most common one being anxiety.

Although some people are innately better at coping with anxiety than others, it is possible to get better at it by repeatedly putting oneself in high-pressure (yet safe) situations that are ambiguous.  For data science students, hackathons provide a perfect opportunity to do this.

Ambiguity in data science – tales from two hackathons

Over the last two months, I’ve had the privilege of being a part of the Master of Data Science and Innovation (MDSI) program run by the Connected Intelligence Centre at UTS.  The course director, Theresa Anderson, sees hackathons as a great way for students to learn how to handle ambiguity.  So, apart from regular coursework assignments, students are encouraged to participate in external hackathons sponsored by industry and government organisations.  This gives them opportunities to gain practical experience in formulating problems in ambiguous, high-pressure environments.

Datacake at GovHack

A few MDSI student teams participated in a GovHack event earlier this year. Here’s what William Azevedo, a member of a team that called itself Datacake, wrote about the team’s problem formulation journey at the event:

The challenge is simple: the competitors should form teams, identify a problem and use data from government agencies from Australia and New Zealand to present a solution to the problem. Naturally, this solution should bring some benefit to the society.

I’m not sure I’d use the word simple…but the importance of problem formulation comes through quite clearly.  Here’s how he and his team went about it:

 As a starting point, our team published an online survey to understand how safe people feel when walking on the streets, especially at night. As we didn’t have much time, we spread the message via social networks. In a couple of hours, we received 44 answers. It gave us enough information to back our idea.

Notice the process used in defining the problem – the team realised they did not know enough to define a meaningful problem, so they went and got relevant data. Following this:

Our team analysed the answers of the survey, engaged in passionate discussions, took tips from the mentors, had lots of coffee and designed some cool diagrams on the blackboard.

…and then his description of the Aha moment when a good idea emerged:

Then the magic happened. We had this idea of merging information about crime, demographics, weather, land zoning and street illumination to provide a map of the safe and unsafe areas within a suburb.

An important point is that sensemaking is best done collaboratively. Since the problem is ambiguous or even undefined (as in this case), no individual has privileged access to the “truth.” It is therefore important to bring diverse perspectives to bear on the problem. Indeed, sensemaking may be thought of as collaborative problem formulation and solving. In view of this, it is interesting to hear what other members of Team Datacake had to say about their problem formulation process.  Here’s a comment from Anthony So:

During the whole weekend we really forced ourselves to go deep and asked “Why is it happening? Why is it happening? Why is it happening?” every time we found an interesting pattern. We really wanted to understand the true root causes of those accidents. We didn’t want to stay at a descriptive level. We knew the answers were behavioural. We knew there were multiple problems that therefore required different answers and solutions. We used different techniques to do so: machine learning, stats, data visualisation. It didn’t matter which we used; the only important point was how we could get to the answers to those questions.

The specific area they looked at was pedestrian safety. They found that obvious variables, such as driver fatigue and hazards, were not significant, so they started looking for other potential factors. Here’s how Anthony put it:

For instance we built a classification model on the severity of the accidents involving children but we didn’t use it to make predictions. We used it to identify the important features (and unimportant) for those cases. We found out that some of the variables related to the environment (Primary_hazardous_feature, Surface_condition, Weather…) and to the drivers (Fatigue_involved_in_crash…) were not important. This gave us a good indication that those accidents are mostly related directly to the behaviour of the children. So we kept diving further and further and found 3 postcodes with higher numbers of accidents than others. We focused on those 3 areas and we kept going deeper and deeper…
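
The technique Anthony describes, fitting a classifier in order to read off feature importances rather than to make predictions, looks something like the sketch below; the file and column names are hypothetical stand-ins for the crash data:

```python
# A rough sketch of using a classification model for explanation rather
# than prediction: fit it, then inspect feature importances. The file
# and column names below are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv("crashes_involving_children.csv")
features = ["primary_hazardous_feature", "surface_condition", "weather",
            "fatigue_involved_in_crash", "postcode", "time_of_day"]

X = pd.get_dummies(df[features].astype(str))  # one-hot encode categoricals
y = df["degree_of_severity"]

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Low importances for environment and driver variables would support the
# team's conclusion that the accidents were mostly behavioural.
importances = sorted(zip(X.columns, model.feature_importances_),
                     key=lambda t: -t[1])
for name, score in importances[:10]:
    print(f"{name:35s} {score:.3f}")
```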

In the end Datacake came up with a few suggestions for improving pedestrian safety. They were awarded a prize for their efforts, so the problem they formulated and solved was clearly valuable to the sponsors.

Peppermoney Hackathon

A couple of weekends ago, Pepper Money, Australia’s largest non-bank lender, sponsored a day-long internal hackathon for MDSI students, with a hefty winner-take-all prize as an incentive. The challenge was quite open-ended and had to do with helping the organisation develop a consistent brand voice. Participants were given a small corpus of text files from the organisation’s public and social media sites along with very general guidelines on how to proceed. Details were left entirely to the teams.

As one might expect, most teams spent the first few hours struggling to define a relevant and tractable problem – relevance being paramount for the client and tractability for the teams.  Being a mentor at the event, I was able to observe how different teams handled this. Among other things, I was particularly impressed by how some teams with very little text mining experience were able to – in a few hours – come up with a good problem, an approach to solve it…and, most importantly, make decent progress by day’s end.

I won’t go into details except to say that the approaches were diverse, ranging from the somewhat philosophical to the very technical.

I was amazed at the diversity of solutions the groups came up with, and so were the other mentors and the sponsor. Blair Hudson, Innovation Portfolio Manager at Pepper Money, summed the day up very well when he said:

#PepxUTS was our first hackathon event, challenging students to build data science solutions in a day to allow everyone at Pepper to communicate using a consistent brand voice. Our Co-Group CEOs both joined in for judging and awarded the winners. It was a rewarding day for all involved.

(For some vignettes from the day, check out the #PepxUTS hashtag on Twitter.)

The day’s experiences left me ever more convinced that hackathons are an excellent vehicle for learning and demonstrating the practical utility of sensemaking skills.

Wrapping up

The two case studies highlight the benefits of sensemaking skills, both for students and organisations.  On the one hand, students who participated got valuable experience in formulating problems collaboratively in high-pressure, high-ambiguity situations. This is a skill that cannot be learnt in classrooms, MOOCs or even in online data challenges (like Kaggle) where problems tend to be clearly defined. On the other hand, sponsoring organisations have benefited from new insights into longstanding problems.

Finally, it should be clear that although I’ve focused on educational settings,  what I’ve said for students applies to organisational settings too: there’s nothing to stop organisations from using hackathons as a means to help their employees learn sensemaking skills.

To conclude, the main point I want to make is that the most important situations we encounter at work (and even in our personal lives) are usually fraught with ambiguity. Our first reaction is to jump into problem solving mode because it feels like the right thing to do. In reality, one is generally better off stepping back and taking the time to think the situation through, preferably with a group of diversely skilled individuals. All too often this sensemaking step is neglected, and teams end up solving an irrelevant problem.

To paraphrase Chesterton, in order to see the right solution, one must first see the right problem.

Acknowledgements

Many thanks to Blair Hudson, William Azevedo and Anthony So for their contributions to this piece.

Written by K

October 18, 2016 at 6:23 pm

What is sensemaking?


I’ve recently set up a consulting practice specializing in sensemaking and analytics. Most people understand the analytics bit, but many have questions about sensemaking. I got that question so many times that I decided to do a short (2.5 minute) whiteboard video explaining what the term means to me (and my definition is not the same as Wikipedia’s).

Here it is:

[Video: a 2.5 minute whiteboard explanation of sensemaking]

For those who prefer the written word, here’s the script (minus the advertising):

“Most organizations are very good at solving problems. This is no surprise: much of our training, from school through university, focuses on teaching us the skills required to solve problems. Now, regardless of the specific technique used, the problem-solving process is essentially a logical or analytical one. It goes something like this:

  • Gather information about the problem.
  • Analyse the information.
  • Formulate candidate solutions.
  • Implement the solution of choice.

This so-called GAFI method works by breaking problems down into manageable parts, solving each of the parts separately and then assembling these into a solution. The method works very well for most scientific and engineering problems – even one as complicated as sending a spacecraft to Saturn. Indeed, it is so successful that it underpins all of science and modern technology.

However, there is a serious gap in the GAFI method: it assumes that problems are given; it does not tell us how to formulate them. And as the management luminary Russell Ackoff once said:

“Outside of school, problems are seldom given; they have to be taken, extracted from complex situations…”

The art of taking problems is what sensemaking is all about.

Unlike analytical thinking, which is purely logical, sensemaking involves skills such as collaboration, imagination and a healthy tolerance for ambiguity. It is an art that is absolutely essential for surviving…no, thriving, in the increasingly complex world of the 21st century.

The two modes of thinking – sensemaking and analytical – are as different as chalk and cheese, but both are necessary for a successful outcome. We like to think of them as lying at opposite ends of a spectrum of thinking styles. When approaching a new situation or problem, one should always begin at the sensemaking end and move towards the analytical end as one understands the problem better. Unfortunately, time pressures in corporate environments often force managers and employees into analytical mode without a full appreciation of the problem they are attempting to solve. As a result, the solutions are often less than optimal. Sensemaking techniques equip organisations with tools that cover the entire problem lifecycle, from definition to solution.”

As a closing remark (that might be construed as advertising…) I’ll mention that I’ve discussed a number of these techniques on Eight to Late. Here are a couple of examples:

The Approach: a dialogue mapping story

The dilemmas of enterprise IT

…and, of course, you can always have a look at my book or ping me for a no-obligation chat to find out more 🙂

Written by K

March 15, 2016 at 6:02 pm
