Eight to Late

Sensemaking and Analytics for Organizations


Uncertainty, ambiguity and the art of decision making


A common myth about decision making in organisations is that it is, by and large, a rational process.   The term rational refers to decision-making methods that are based on the following broad steps:

  1. Identify available options.
  2. Develop criteria for rating options.
  3. Rate options according to criteria developed.
  4. Select the top-ranked option.

Although this appears to be a logical way to proceed it is often difficult to put into practice, primarily because of uncertainty about matters relating to the decision.

Uncertainty can manifest itself in a variety of ways: one could be uncertain about facts, the available options, decision criteria or even one’s own preferences for options.

In this post, I discuss the role of uncertainty in decision making and, more importantly, how one can make well-informed decisions in such situations.

A bit about uncertainty

It is ironic that the term uncertainty is itself vague when used in the context of decision making. There are at least five distinct senses in which it is used:

  1. Uncertainty about decision options.
  2. Uncertainty about one’s preferences for options.
  3. Uncertainty about what criteria are relevant to evaluating the options.
  4. Uncertainty about what data is needed (data relevance).
  5. Uncertainty about the data itself (data accuracy).

Each of these is qualitatively different: uncertainty about data accuracy (item 5 above) is very different from uncertainty regarding decision options (item 1). The former can potentially be dealt with using statistics whereas the latter entails learning more about the decision problem and its context, ideally from different perspectives. Put another way, item 5 is essentially a technical matter whereas item 1 is a deeper issue that may have social, political and – as we shall see – even behavioural dimensions. It is therefore reasonable to expect that the two situations call for vastly different approaches.

Quantifiable uncertainty

A common problem in project management is the estimation of task durations. Here, what’s requested is a “best guess” of the time (in hours or days) it will take to complete a task. Many project schedules represent task durations by point estimates, i.e. by single numbers. The Gantt Chart shown in Figure 1 is a common example: in it, each task is represented by its expected duration. This is misleading because a single number conveys a sense of certainty that is unwarranted. It is far more accurate, not to mention safer, to quote a range of possible durations.

Figure 1: Gantt Chart (courtesy Wikimedia)

In general, quantifiable uncertainties, such as those conveyed in estimates, should always be quoted as ranges – something along the following lines: task A may take anywhere between 2 and 8 days, with a most likely completion time of 4 days (Figure 2).

Figure 2: Task completion likelihood (3 point estimates)

In this example, aside from stating that the task will finish sometime between 2 and 8 days, the estimator implicitly asserts that the likelihood of finishing before 2 days or after 8 days is zero. Moreover, she implies that some completion times are more likely than others. Although it may be difficult to quantify the likelihood exactly, one can begin by making simple (linear!) approximations as shown in Figure 3.

Figure 3: Simple probability distribution based on the estimates in Figure 2

The key takeaway from the above is that quantifiable uncertainties are shapes rather than single numbers. See this post and this one for details on how far this kind of reasoning can take you. That said, one should always be aware of the assumptions underlying the approximations. Failure to do so can be hazardous to the credibility of estimators!
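To make the “shapes rather than numbers” idea concrete, here’s a minimal sketch in R of the linear approximation shown in Figure 3 – a triangular distribution built from the three-point estimate above. The numbers (2, 4 and 8 days) are the illustrative estimates from Figure 2, not real project data.

```r
# Triangular distribution from the three-point estimate in Figure 2:
# optimistic = 2 days, most likely = 4 days, pessimistic = 8 days.
# Illustrative numbers only.
min_d  <- 2
mode_d <- 4
max_d  <- 8

# Draw samples via inverse-transform sampling from the piecewise-linear
# "shape" sketched in Figure 3
rtri <- function(n) {
  u <- runif(n)
  f <- (mode_d - min_d) / (max_d - min_d)  # value of the CDF at the mode
  ifelse(u < f,
         min_d + sqrt(u * (max_d - min_d) * (mode_d - min_d)),
         max_d - sqrt((1 - u) * (max_d - min_d) * (max_d - mode_d)))
}

set.seed(42)
durations <- rtri(100000)

# The "shape" summarised as quantiles: the median and the 85th percentile
# tell a much richer story than the single most likely value of 4 days
quantile(durations, c(0.5, 0.85, 0.95))
hist(durations, breaks = 60, main = "Simulated task durations", xlab = "Days")
```

Quoting, say, the 85th percentile alongside the most likely value communicates the uncertainty honestly instead of burying it in a single number.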

Although I haven’t explicitly said so, estimation as described above has a subjective element. Among other things, the quality of an estimate depends on the judgement and experience of the estimator. As such, it is prone to being affected by errors of judgement and cognitive biases. However, provided one keeps those caveats in mind, the probability-based approach described above is well suited to situations in which uncertainties are quantifiable, at least in principle. With that, let’s move on to more complex situations in which uncertainties defy quantification.

Introducing ambiguity

The economist Frank Knight was possibly the first person to draw the distinction between quantifiable and unquantifiable uncertainties.  To make things really confusing, he called the former risk and the latter uncertainty. In his doctoral thesis, published in 1921, he wrote:

…it will appear that a measurable uncertainty, or “risk” proper, as we shall use the term, is so far different from an unmeasurable one that it is not in effect an uncertainty at all. We shall accordingly restrict the term “uncertainty” to cases of the non-quantitative type (p.20)

Terminology has moved on since Knight’s time: the term uncertainty now means many different things, depending on context. In this piece, we’ll use the term uncertainty to refer to quantifiable uncertainty (as in the task estimate of the previous section) and ambiguity to refer to nonquantifiable uncertainty. In essence, then, we’ll use the term uncertainty for situations where we know what we’re measuring (i.e. the facts) but are uncertain about their numerical or categorical values, whereas we’ll use the word ambiguity for situations in which we are uncertain about what the facts are or which facts are relevant.

As a test of understanding, you may want to classify each of the five points made in the second section of this post as either uncertain or ambiguous (answer below).

Answer: 1 through 4 are ambiguous and 5 is uncertain.

How ambiguity manifests itself in decision problems

The distinction between uncertainty and ambiguity points to a problem with quantitative decision-making techniques such as cost-benefit analysis, multicriteria decision making methods or the analytic hierarchy process. All these methods assume that decision makers are aware of all the available options, their preferences for them, the relevant evaluation criteria and the data needed. This is almost never the case for consequential decisions. To see why, let’s take a closer look at the different ways in which ambiguity can play out in the rational decision making process mentioned at the start of this article.

  1. The first step in the process is to identify available options. In the real world, however, options often cannot be enumerated or articulated fully. Furthermore, as options are articulated and explored, new options and sub-options tend to emerge. This is particularly true if the options depend on how future events unfold.
  2. The second step is to develop criteria for rating options. As anyone who has been involved in deciding on a contentious issue will confirm, it is extremely difficult to agree on a set of decision criteria for issues that affect different stakeholders in different ways.  Building a new road might improve commute times for one set of stakeholders but result in increased traffic in a residential area for others. The two criteria will be seen very differently by the two groups. In this case, it is very difficult for the two groups to agree on the relative importance of the criteria or even their legitimacy. Indeed, what constitutes a legitimate criterion is a matter of opinion.
  3. The third step is to rate options. The problem here is that real-world options often cannot be quantified or rated in a meaningful way. Many of life’s dilemmas fall into this category. For example, a decision to accept or decline a job offer is rarely made on the basis of material gain alone. Moreover, even where ratings are possible, they can be highly subjective. For example, when considering a job offer, one candidate may give more importance to financial matters whereas another might consider lifestyle-related matters (flexi-hours, commuting distance etc.) to be paramount. Another complication here is that there may not be enough information to settle the matter conclusively. As an example, investment decisions are often made on the basis of quantitative information that is based on questionable assumptions.

A key consequence of the above is that such ambiguous decision problems are socially complex – i.e. different stakeholders could have wildly different perspectives on the problem itself.   One could say the ambiguity experienced by an individual is compounded by the group.

Before going on I should point out that acute versions of such ambiguous decision problems go by many different names in the management literature – wicked problems and messes, for example.

All these terms are more or less synonymous: the root cause of the difficulty in every case is ambiguity (or unquantifiable uncertainty), which prevents a clear formulation of the problem.

Social complexity is hard enough to tackle as it is, but there’s another issue that makes things even harder: ambiguity invariably triggers negative emotions such as fear and anxiety in individuals who make up the group.  Studies in neuroscience have shown that in contrast to uncertainty, which evokes logical responses in people, ambiguity tends to stir up negative emotions while simultaneously suppressing the ability to think logically.  One can see this playing out in a group that is debating a contentious decision: stakeholders tend to get worked up over issues that touch on their values and identities, and this seems to limit their ability to look at the situation objectively.

Tackling ambiguity

Summarising the discussion thus far: rational decision making approaches are based on the assumption that stakeholders have a shared understanding of the decision problem as well as the facts and assumptions around it. These conditions are clearly violated in the case of ambiguous decision problems. Therefore, when confronted with a decision problem that has even a hint of ambiguity, the first order of the day is to help the group reach a shared understanding of the problem.  This is essentially an exercise in sensemaking, the art of collaborative problem formulation. However, this is far from straightforward because ambiguity tends to evoke negative emotions and attendant defensive behaviours.

The upshot of all this is that any approach to tackling ambiguity must begin by taking the concerns of individual stakeholders seriously. Unless this is done, it will be impossible for the group to coalesce around a consensus decision. Indeed, ambiguity-laden decisions in organisations invariably fail when they overlook the concerns of specific stakeholder groups. The high failure rate of organisational change initiatives (60-70% according to this Deloitte report) is largely attributable to this point.

There are a number of techniques that one can use to gather and synthesise diverse stakeholder viewpoints and thus reach a shared understanding of a complex or ambiguous problem. These techniques are often referred to as problem structuring methods (PSMs). I won’t go into these in detail here; for an example check out Paul Culmsee’s articles on dialogue mapping and Barry Johnson’s introduction to polarity management. There are many more techniques in the PSM stable. All of them are intended to help a group reconcile different viewpoints and thus reach a common basis from which one can proceed to the next step (i.e., make a decision on what should be done). In other words, these techniques help reduce ambiguity.

But there’s more to it than a bunch of techniques.  The main challenge is to create a holding environment that enables such techniques to work. I am sure readers have been involved in a meeting or situation where the outcome seems predetermined by management or has been undermined by self-interest. When stakeholders sense this, no amount of problem structuring is going to help.  In such situations one needs to first create the conditions for open dialogue to occur. This is precisely what a holding environment provides.

Creating such a holding environment is difficult in today’s corporate world, but not impossible. Note that this is not an idealist’s call for an organisational utopia. Rather, it involves the application of a practical set of tools that address the diverse, emotion-laden reactions that people often have when confronted with ambiguity.   It would take me too far afield to discuss PSMs and holding environments any further here. To find out more, check out my papers on holding environments and dialogue mapping in enterprise IT projects, and (for a lot more) the Heretic’s Guide series of books that I co-wrote with Paul Culmsee.

The point is simply this: in an ambiguous situation, a good decision – whatever it might be – is most likely to be reached by a consultative process that synthesises diverse viewpoints rather than by an individual or a clique.  However, genuine participation (the hallmark of a holding environment) in such a process will occur only after participants’ fears have been addressed.

Wrapping up

Standard approaches to decision making exhort managers and executives to begin with facts, and if none are available, to gather them diligently prior to making a decision. However, most real-life decisions are fraught with uncertainty, so it may be best to begin with what one doesn’t know and figure out how to make the best possible decision under those “constraints of ignorance.” In this post I’ve attempted to outline what such an approach would entail. The key is to figure out the kind of uncertainty one is dealing with and to choose an approach that works for it. I’d argue that most decision-making debacles stem from a failure to appreciate this point.

Of course, there’s a lot more to this approach than I can cover in the span of a post, but that’s a story for another time.

Note: This post is written as an introduction to the Data and Decision Making subject that is part of the core curriculum of the Master of Data Science and Innovation program, run by the Connected Intelligence Centre at UTS. I’m coordinating the subject this semester, and am honoured to be co-teaching it with my erstwhile colleague Sean Heffernan and my longtime collaborator Paul Culmsee.

Written by K

March 9, 2017 at 10:04 am

The law of requisite variety and its implications for enterprise IT


Introduction

There are two  facets to the operation of IT systems and processes in organisations:  governance, the standards and regulations associated with a system or process; and execution, which relates to steering the actual work of the system or process in specific situations.

An example might help clarify the difference:

The purpose of project management is to keep projects on track. There are two aspects to this: one pertaining to the project management office (PMO), which is responsible for standards and regulations associated with managing projects in general, and the other relating to the day-to-day work of steering a particular project.  The two sometimes work at cross-purposes. For example, successful project managers know that much of their work is about navigating their projects through the potentially treacherous terrain of their organisations, an activity that sometimes necessitates working around, or even breaking, rules set by the PMO.

Governance and steering share a common etymological root: the word kybernetes, which means steersman in Greek.  It also happens to be the root word of Cybernetics, the science of regulation or control. In this post, I apply a key principle of cybernetics to a couple of areas of enterprise IT.

Cybernetic systems

An oft-quoted example of a cybernetic system is a thermostat, a device that regulates temperature based on inputs from the environment.  Most cybernetic systems are way more complicated than a thermostat. Indeed, some argue that the Earth is a huge cybernetic system. A smaller-scale example is a system consisting of a car and its driver, wherein the driver responds to changes in the environment, thereby controlling the motion of the car.

Cybernetic systems vary widely not just in size, but also in complexity. A thermostat is concerned only with the ambient temperature whereas the driver of a car has to worry about a lot more (e.g. the weather, traffic, the condition of the road, kids squabbling in the back seat etc.). In general, the more complex the system and its processes, the larger the number of variables associated with it. Put another way, complex systems must be able to deal with a greater variety of disturbances than simple systems.

The law of requisite variety

It turns out there is a fundamental principle – the law of requisite variety – that governs the capacity of a system to respond to changes in its environment. The law is a quantitative statement about the different types of responses that a system needs to have in order to deal with the range of disturbances it might experience.

According to this paper, the law of requisite variety asserts that:

The larger the variety of actions available to a control system, the larger the variety of perturbations it is able to compensate.

Mathematically:

V(E) ≥ V(D) – V(R) – K

where V represents variety, E the essential variable(s) to be controlled, D the disturbance, R the regulation and K the passive capacity of the system to absorb shocks. The terms are explained briefly below:

V(E) represents the variety of desired outcomes for the controlled environmental variable: the desired temperature range in the case of the thermostat, successful outcomes (i.e. projects delivered on time and within budget) in the case of a project management office.

V(D) represents the variety of disturbances the system can be subjected to (the ways in which the temperature can change, the external and internal forces on a project)

V(R) represents the various ways in which a disturbance can be regulated (the regulator in a thermostat, the project tracking and corrective mechanisms prescribed by the PMO)

K represents the buffering capacity of the system – i.e. stored capacity to deal with unexpected disturbances.
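To get a feel for how the terms interact, here’s a back-of-the-envelope illustration in R. The numbers are invented, and variety is measured as the base-2 logarithm of the number of distinguishable states (one common convention):

```r
# A rough numerical illustration of V(E) >= V(D) - V(R) - K.
# All numbers are made up for the purpose of the example.
V <- function(n_states) log2(n_states)  # variety as log2 of state count

V_D <- V(64)  # the environment can produce 64 distinct disturbances
V_R <- V(8)   # the regulator has only 8 distinct responses
K   <- 1      # passive buffering absorbs one further "bit" of disturbance

# The smallest outcome variety the system can hope to achieve:
V_E_min <- V_D - V_R - K
V_E_min  # 6 - 3 - 1 = 2, i.e. at best 2^2 = 4 distinguishable outcomes

# To pin the essential variable to a single desired state (V(E) = 0),
# V(R) + K must be at least V(D): e.g. 64 matched responses, or 32
# responses plus buffering worth one more bit.
```

Read this way, the law says that a regulator with too few responses cannot hold outcomes within a narrow band, no matter how diligently it executes each response – which is the crux of what follows.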

I won’t say any more about the law of requisite variety as it would take me too far afield; the interested and technically minded reader is referred to the link above or this paper for more (full pdf here).

Implications for enterprise IT

In plain English, the law of requisite variety states that “only variety can absorb variety.” As stated by Anthony Hodgson in an essay in this book, the law of requisite variety:

…leads to the somewhat counterintuitive observation that the regulator must have a sufficiently large variety of actions in order to ensure a sufficiently small variety of outcomes in the essential variables E. This principle has important implications for practical situations: since the variety of perturbations a system can potentially be confronted with is unlimited, we should always try to maximize its internal variety (or diversity), so as to be optimally prepared for any foreseeable or unforeseeable contingency.

This is entirely consistent with our intuitive expectation that the best way to deal with the unexpected is to have a range of tools and approaches at one’s disposal.

In the remainder of this piece, I’ll focus on the implications of the law for an issue that is high on the list of many corporate IT departments: the standardization of  IT systems and/or processes.

The main rationale behind standardizing an IT  process is to handle all possible demands (or use cases) via a small number of predefined responses.   When put this way, the connection to the law of requisite variety is clear: a request made upon a function such as a service desk or project management office (PMO) is a disturbance and the way they regulate or respond to it determines the outcome.

Requisite variety and the service desk

A service desk is a good example of a system that can be standardized. Although users may initially complain about having to log a ticket instead of calling Nathan directly, in time they get used to it, and may even start to see the benefits…particularly when Nathan goes on vacation.

The law of requisite variety tells us that successful standardization requires all possible demands made on the system to be known and regulated by the V(R) term in the equation above. In the case of a service desk this is dealt with by a hierarchy of support levels. 1st level support deals with routine calls (incidents and service requests in ITIL terminology) such as system access and simple troubleshooting. Calls that cannot be handled by this tier are escalated to the 2nd and 3rd levels as needed.  The assumption here is that, between them, the three support tiers should be able to handle the majority of calls.

Slack (the K term) relates to unexploited capacity.  Although needed in order to deal with unexpected surges in demand, slack is expensive to carry when one doesn’t need it.  Given this, it makes sense to incorporate such scenarios into the repertoire of standard system responses (i.e. the V(R) term) whenever possible.  One way to do this is to anticipate surges in demand and hire temporary staff to handle them. Another way is to deal with infrequent scenarios outside the system – i.e. deem them out of scope for the service desk.

Service desk standardization is thus relatively straightforward to achieve provided:

  • The kinds of calls that come in are largely predictable.
  • The work can be routinized.
  • All non-routine work – such as an application enhancement request or a demand for a new system – is dealt with outside the system via (say) a change management process.

All this will be quite unsurprising and obvious to folks working in corporate IT. Now  let’s see what happens when we apply the law to a more complex system.

Requisite variety and the PMO

Many corporate IT leaders see the establishment of a PMO as a way to control costs and increase efficiency of project planning and execution.   PMOs attempt to do this by putting in place governance mechanisms. The underlying cause-effect assumption is that if appropriate rules and regulations are put in place, project execution will necessarily improve.  Although this sounds reasonable, it often does not work in practice: according to this article, a significant fraction of PMOs fail to deliver on the promise of improved project performance. Consider the following points quoted directly from the article:

  • “50% of project management offices close within 3 years (Association for Project Mgmt)”
  • “Since 2008, the correlated PMO implementation failure rate is over 50% (Gartner Project Manager 2014)”
  • “Only a third of all projects were successfully completed on time and on budget over the past year (Standish Group’s CHAOS report)”
  • “68% of stakeholders perceive their PMOs to be bureaucratic     (2013 Gartner PPM Summit)”
  • “Only 40% of projects met schedule, budget and quality goals (IBM Change Management Survey of 1500 execs)”

The article goes on to point out that the main reason for the statistics above is a gap between what a PMO does and what the business expects it to do. For example, according to the Gartner review quoted in the article, over 60% of the stakeholders surveyed believe their PMOs are overly bureaucratic.  I can’t vouch for the veracity of the numbers as I cannot find the original paper. Nevertheless, anecdotal evidence (via various articles and informal conversations) suggests that a significant number of PMOs fail.

There is a curious contradiction between the case of the service desk and that of the PMO. In the former, process and methodology seem to work whereas in the latter they don’t.

Why?

The answer, as you might suspect, has to do with variety.  Projects and service requests are very different beasts. Among other things, they differ in:

  • Duration: A project typically runs over many months whereas a service request has a lifetime of days.
  • Technical complexity: A project involves many (initially ill-defined) technical tasks that have to be coordinated and whose outputs have to be integrated.  A service request typically consists of one (or a small number of) well-defined tasks.
  • Social complexity: A project involves many stakeholder groups, with diverse interests and opinions. A service request typically involves considerably fewer stakeholders, with limited conflicts of opinions/interests.

It is not hard to see that these differences make for much greater variety in projects than in service requests. The reason that standardization (usually) works for service desks but (often) fails for PMOs is that PMOs are subjected to a greater variety of disturbances than service desks.

The key point is that the increased variety in the case of the PMO precludes standardization.  As the law of requisite variety tells us, there are two ways to deal with variety: regulate it or adapt to it. Most PMOs take the regulation route, leading to over-regulation and outcomes that are less than satisfactory. This is exactly what is reflected in the complaint about PMOs being overly bureaucratic. The simple and obvious solution is for PMOs to be more flexible – specifically, they must be able to adapt to the ever-changing demands made upon them by their organisations’ projects.  In terms of the law of requisite variety, PMOs need to have the capacity to change the system response, V(R), on the fly. In practice this means recognising the uniqueness of requests and avoiding the reflex, cookie-cutter responses that characterise bureaucratic PMOs.

Wrapping up

The law of requisite variety is a general principle that applies to any regulated system.  In this post I applied the law to two areas of enterprise IT – service management and project governance – and  discussed why standardization works well  for the former but less satisfactorily for the latter. Indeed, in view of the considerable differences in the duration and complexity of service requests and projects, it is unreasonable to expect that standardization will work well for both.  The key takeaway from this piece is therefore a simple one: those who design IT functions should pay attention to the variety that the functions will have to cope with, and bear in mind that standardization works well only if variety is known and limited.

Written by K

December 12, 2016 at 9:00 pm

The Heretic’s Guide to Management – understanding ambiguity in the corporate world


I am delighted to announce that my new business book, The Heretic’s Guide to Management: The Art of Harnessing Ambiguity, is now available in e-book and print formats. The book, co-written with Paul Culmsee, is a loose sequel to our previous tome, The Heretic’s Guide to Best Practices.

Many reviewers liked the writing style of our first book, which combined rigour with humour. This book continues in the same vein, so if you enjoyed the first one we hope you might like this one too. The new book is half the size of the first one, and considerably less idealistic too. In terms of subject matter, I could say “Ambiguity, Teddy Bears and Fetishes” and leave it at that…but that might leave you thinking that it’s not the kind of book you would want anyone to see on your desk!

Rest assured, The Heretic’s Guide to Management is not a corporate version of Fifty Shades of Grey. Instead, it aims to delve into the complex but fascinating ways in which ambiguity affects human behaviour. More importantly, it discusses how ambiguity can be harnessed in ways that achieve positive outcomes.  Most management techniques (ranging from strategic planning to operational budgeting) attempt to reduce ambiguity and thereby provide clarity. It is a profound irony of modern corporate life that they often end up doing the opposite: increasing ambiguity rather than reducing it.

On the surface, it is easy enough to understand why: organizations are complex entities, so it is unreasonable to expect management models, such as those that fit neatly into a 2*2 matrix or a predetermined checklist, to work in the real world. In fact, expecting them to work as advertised is like colouring in a paint-by-numbers Mona Lisa and expecting to recreate Da Vinci’s masterpiece. Ambiguity therefore invariably remains untamed, and reality reimposes itself no matter how alluring the model is.

It turns out that most of us have a deep aversion to situations that involve even a hint of ambiguity. Recent research in neuroscience has revealed the reason for this: ambiguity is processed in the parts of the brain which regulate our emotional responses. As a result, many people associate it with feelings of anxiety. When kids feel anxious, they turn to transitional objects such as teddy bears or security blankets. These objects provide them with a sense of stability when situations or events seem overwhelming. In this book, we show that as grown-ups we don’t stop using teddy bears – it is just that the teddies we use take a different, more corporate, form. Drawing on research, we discuss how management models, fads and frameworks are actually akin to teddy bears. They provide the same sense of comfort and certainty to corporate managers and minions as real teddies do to distressed kids.

A plain old Teddy

Most children outgrow their need for teddies as they mature and learn to cope with their childhood fears. However, if development is disrupted or arrested in some way, the transitional object can become a fetish – an object that is held on to with a pathological intensity, simply for the comfort it offers in the face of ambiguity. The corporate reliance on simplistic solutions for the complex challenges faced is akin to little Johnny believing that everything will be OK provided he clings on to Teddy.

When this happens, the trick is finding ways to help Johnny overcome his fear of ambiguity.

Ambiguity is a primal force that drives much of our behaviour. It is typically viewed negatively, as something to be avoided or controlled.

A Sith Teddy

The truth, however, is that ambiguity is a force that can be used in positive ways too. In the Star Wars movies, the same Force that gave the Dark Side its power was harnessed by the Jedi for good.

A Jedi Teddy

Our book shows you how ambiguity, so common in the corporate world, can be harnessed to achieve the results you want.

The e-book is available via popular online outlets. Here are links to some:

Amazon Kindle

Google Play

Kobo

For those who prefer paperbacks, the print version is available here.

Thanks for your support 🙂

Written by K

July 12, 2016 at 10:30 pm

The “value add” tax – a riff on corporate communication


A mainstay of team building workshops is the old “what can we do better” exercise.  Over the years I’ve noticed that “improving communication” is an item that comes up again and again in these events.  This is frustrating for managers. For example, during a team-building debrief some years ago, an exasperated executive remarked, “Oh don’t pay any attention to that [better communication], it keeps coming up no matter what we do.”

The executive had a point.  The organisation had invested much effort in establishing new channels of communication – social media, online, face-to-face forums etc.  The uptake, however, was disappointing:  turnout at the face-to-face meetings was consistently low as was use of other channels.

As far as management was concerned, they had done their job by establishing communication channels and making them available to all. What more could they  be expected to do? The matter was dismissed with a collective shrug of suit-clad shoulders…until the next team building event, when the issue was highlighted by employees yet again.

After much hand-wringing, the organisation embarked on another “better communication cycle.”  Much effort was expended…again, with the same disappointing results.

Anecdotal evidence via conversations with friends and collaborators suggests that variants of this story play out in many organisations. This makes the issue well worth exploring. I won’t be so presumptuous as to offer answers; I’m well aware that folks much better qualified than I have spent years attempting to do so. Instead I raise a point which, though often overlooked, might well have something to do with the lack of genuine communication in organisations.

Communication experts have long agreed that face-to-face dialogue is the most effective mode of communication. Backing for this comes from the interactional or pragmatic view, which is based on the premise that communication is more about building relationships than conveying information. Among other things, face-to-face communication enables the communicating parties to observe and interpret non-verbal signals such as facial expression and gestures and, as we all know, these often “say” much more than what’s being said.

A few months ago I started paying closer attention to non-verbal cues. This can be hard to do because people are good at disguising their feelings. Involuntary expressions indicative of people’s real thoughts can be fleeting. A flicker of worry, fear or anger is quickly covered by a mask of indifference.

In meetings, difficult topics tend to be couched in platitudinous language. Platitudes are empty words that sound great but can be interpreted in many different ways. Reconciling those differences often leads to pointless arguments that are emotionally draining. Perhaps this is why people prefer to take refuge in indifference.

A while ago I was sitting in a meeting where the phrase “value add activity” (sic) cropped up once, then again…and then many times over. Soon it was apparent that everyone in the room had a very different conception of what constituted a “value add activity.” Some argued that project management is a value add activity; others disagreed vehemently, arguing that project management is a bureaucratic exercise and that real value lies in creating something. Round and round the arguments went, but there was no agreement on what constituted a “value add activity.” The discussion generated a lot of heat but shed no light whatsoever on the term.

A problem with communication in the corporate world is that it is loaded with such platitudes. To make sense of these, people have to pay what I call a “value add” tax – the effort in reaching a consensus on what the platitudinous terms mean. This can be emotionally extortionate because platitudes often touch upon issues that affect people’s sense of well-being.

Indifference is easier because we can then pretend to understand and agree with each other when we would rather not understand, let alone agree, at all.

Written by K

November 19, 2015 at 8:02 am

Setting up an internal data analytics practice – some thoughts from a wayfarer


Introduction

This year has been hugely exciting so far: I’ve been exploring and playing with various techniques that fall under the general categories of data mining and text analytics. What’s been particularly satisfying is that I’ve been fortunate to find meaningful applications for these techniques within my organization.

Although I have a fair way to travel yet, I’ve learnt that common wisdom about data analytics – especially the stuff that comes from software vendors and high-end consultancies – can be misleading, even plain wrong. Hence this post, in which I dispatch some myths and share a few pointers on establishing data analytics capabilities within an organization.

Busting a few myths

Let’s get right to it by taking a critical look at a few myths about setting up an internal data analytics practice.

  1. Requires high-end technology and a big budget: this myth is easy to bust because I can speak from recent experience. No, you do not need cutting-edge technology or an oversized budget. You can get started with an outlay of $0 – yes, that’s right, for free!  All you need is the open-source statistical package R (check out this section of my article on text mining for more on installing and using R) and the willingness to roll up your sleeves and learn (more about this later).  No worries if you prefer to stick with familiar tools – you can even begin with Excel.
  2. Needs specialist skills: another myth floating around is that you need PhD-level knowledge of statistics or applied mathematics to do practical work in analytics. Sorry, but that’s plain wrong. You do need a PhD to do research in analytics and develop your own algorithms, but not if you want to apply algorithms written by others. Yes, you will need to develop an understanding of the algorithms you plan to use, a feel for how they work and the ability to tell whether the results make sense. There are many good resources that can help you develop these skills – see, for example, the outstanding books by James, Witten, Hastie and Tibshirani and by Kuhn and Johnson.
  3. Must have sponsorship from the top: this one is possibly a little more controversial than the previous two. It could be argued that it is impossible to gain buy-in for a new capability without sponsorship from top management. However, in my experience, it is OK to start small by finding potential internal “customers” for analytics services through informal conversations with folks in different functions. I started by having informal conversations with managers in two different areas: IT infrastructure and sales/marketing.  I picked these two areas because I knew that they had several gigabytes of under-exploited data – a good bit of it unstructured – and a lot of open questions that could potentially be answered (at least partially) via methods of data and text analytics.  It turned out I was right. I’m currently doing a number of proofs of concept and small projects in both these areas.  So you don’t need sponsorship from the top as long as you can get buy-in from people who have problems they believe you can solve. If you deliver, they may even advocate your cause to their managers.

A caveat is in order at this point:  my organization is not the same as yours, so you may well need to follow a different path from mine. Nevertheless, I do believe that it is always possible to find a way to start without needing permission or incurring official wrath.  In that spirit, I now offer some suggestions to help kick-start your efforts.

Getting started

As the truism goes, the hardest part of any new effort is getting started.  The first thing to keep in mind is to start small. This is true even if you have official sponsorship and a king-sized budget. It is very tempting to spend a lot of time garnering management support for investing in high-end technology.  Don’t do it!  Do the following instead:

  1. Develop an understanding of the problems faced by people you plan to approach: The best way to do this is to talk to analysts or frontline managers. In my case, I was fortunate to have access to some very savvy analysts in IT service management and marketing who gave me a slew of interesting ideas to pursue. A word of advice: it is best not to talk to senior managers until you have a few concrete results that you can quantify in terms of dollar values.
  2. Invest time and effort in understanding analytics algorithms and gaining practical experience with them: As mentioned earlier, I started with R – and I believe it is the best choice, not just because it is free but also because there is a host of packages available to tackle just about any analytics problem you might encounter.  There are some excellent free resources available to get you started with R (check out this listing on the r-statistics blog, for example). It is important that you start cutting code as you learn – a minimal example of the kind of first script you might write appears after this list. This will help you build a repertoire of techniques and approaches as you progress. If you get stuck when coding, chances are you will find a solution on the wonderful stackoverflow site.
  3. Evangelise, evangelise, evangelise: You are, in effect, trying to sell an idea to people within your organization. You therefore have to identify people who might be able to help you and then convince them that your idea has merit. The best way to do the latter is to have concrete examples of problems that you have tackled. This is a chicken-and-egg situation in that you can’t have any examples until you gain support.  I got support by approaching people I know well. I found that most – no, all – of them were happy to provide me with interesting ideas and access to their data.
  4. Begin with small (but real) problems: It is important to start with the “low-hanging fruit” – the problems that would take the least effort to solve. However, it is equally important to address real problems, i.e. those that matter to someone.
  5. Leverage your organisation’s existing data infrastructure: From what I’ve written thus far, I may have given you the impression that the tools of data analytics stand separate from your existing data infrastructure. Nothing could be further from the truth. In reality, I often do the initial work (basic preprocessing and exploratory analysis) using my organisation’s relational database infrastructure. Relational databases have sophisticated analytical extensions to SQL as well as efficient bulk data cleansing and transport facilities. Using these makes good sense, particularly if your R installation is on a desktop or laptop computer as it is in my case. Moreover, many enterprise database vendors now offer add-on options that integrate R with their products. This gives you the best of both worlds – relational and analytical capabilities on an enterprise-class platform.
  6. Build relationships with the data management team: Remember, the work you are doing falls under the ambit of the group that is officially responsible for managing data in your organization. It is therefore important that you keep them informed of what you’re doing.  Sooner or later your paths will cross, and you want to be sure that there are no nasty surprises (for either side!) at that point. Moreover, if you build connections with them early, you may even find that the data management team supports your efforts.
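As promised in point 2 above, here’s a sketch of the kind of first exploratory script one might write. It is purely illustrative: the file name (tickets.csv) and its columns (opened, closed, category) are hypothetical stand-ins for whatever your service desk or marketing systems actually export.

```r
# A hypothetical first look at service desk data, using base R only.
# tickets.csv and its column names are placeholders - substitute your own.
tickets <- read.csv("tickets.csv", stringsAsFactors = FALSE)

# Derive resolution time (in hours) from the open/close timestamps
tickets$opened <- as.POSIXct(tickets$opened)
tickets$closed <- as.POSIXct(tickets$closed)
tickets$resolution_hrs <-
  as.numeric(difftime(tickets$closed, tickets$opened, units = "hours"))

# Which categories of call take longest to resolve?
by_cat <- aggregate(resolution_hrs ~ category, data = tickets, FUN = median)
by_cat[order(-by_cat$resolution_hrs), ]

# The distribution of resolution times - the long tail is often where
# the interesting conversations with frontline managers begin
hist(tickets$resolution_hrs, breaks = 50,
     main = "Ticket resolution times", xlab = "Hours")
```

Nothing fancy, but scripts like this surface the questions (why do some categories take ten times longer?) that earn you the next conversation.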

Having waxed verbose, I should mention that my effort is work in progress and I do not know where it will lead. Nevertheless, I offer these suggestions as a wayfarer who is considerably further down the road from where he started.

Parting thoughts

You may have noticed that I’ve refrained from using the overused and over-hyped term “Big Data” in this piece. This is deliberate. Indeed, the techniques I have been using have nothing to do with the size of the datasets. To be honest, I’ve applied them to datasets ranging from a few thousand to a few hundred thousand records, both of which qualify as Very Small Data in today’s world.

Your vendor will be only too happy to sell you Big Data infrastructure that will set you back a good many dollars. However, the chances are good that you do not need it right now.  You’ll be much better off going back to them after you hit the limits of your current data processing infrastructure. Moreover, you’ll also be better informed about your needs then.

You may also be wondering why I haven’t said much about the composition of the analytics team (barring the point about not needing PhD statisticians) and how it should be organized.  The reason I haven’t done so is that I believe the right composition and organizational structure will emerge from the initial projects done and feedback received from internal customers. The resulting structure will be better suited to the organization than one that is imposed upfront.  Long time readers of this blog might recognize this as a tenet of emergent design.

Finally, I should reiterate that my efforts are still very much in progress and I know not where they will lead. However, even if they go nowhere, I would have learnt something about my organization and picked up a useful, practical skill. And that is good enough for me.

Written by K

September 3, 2015 at 8:28 pm

From inactivism to interactivism – managerial attitudes to planning


Introduction

Managers display a range of attitudes towards planning for the future.  In an essay entitled Systems, Messes and Interactive Planning, the management guru/philosopher Russell Ackoff classified attitudes to organizational planning into four distinct types which I describe in detail below. I suspect you may recognise examples of each of these in your organisation…indeed, you might even see shades of yourself 🙂

Inactivism

This attitude, as its name suggests, is characterized by a lack of meaningful action. Inactivism is often displayed by managers in organisations that favour the status quo.  These organisations are happy with the way things are, and therefore see no need to change. However, lack of meaningful action does not mean lack of action. On the contrary, it often takes a great deal of effort to fend off change and keep things the way they are. As Ackoff states:

Inactive organizations require a great deal of activity to keep changes from being made. They accomplish nothing in a variety of ways. First, they require that all important decisions be made “at the top.” The route to the top is deliberately designed like an obstacle course. This keeps most recommendations for change from ever getting there. Those that do are likely to have been delayed enough to make them irrelevant when they reach their destination. Those proposals that reach the top are likely to be farther delayed, often by being sent back down or out for modification or evaluation. The organization thus behaves like a sponge and is about as active…

The inactive manager spends a lot of time and effort in ensuring that things remain the way they are. Hence they act only when a situation forces them to. Ackoff puts it in his inimitable way by stating that, “Inactivist managers tend to want what they get rather than get what they want.”

Reactivism

Reactivist managers are a step worse than inactivists because they believe that disaster is already upon them. This is the type of manager who hankers after the “golden days of yore when things were much better than they are today.” As a result of their deep unease about where they are now, they may try to undo the status quo.  As Ackoff points out, unlike inactivists, reactivists do not ride the tide but try to swim against it.

Typically reactivist managers are wary of technology and new concepts. Moreover, they tend to give more importance to seniority and experience rather than proven competence. They also tend to be fans of simplistic solutions to complex problems…like “solving” the problem of a behind-schedule software project by throwing more people at it.

Preactivism

Preactivists are the opposite of reactivists in that they believe the future is going to be better than the past. Consequently, their efforts are geared towards understanding what the future will look like and how they can prepare for it.  Typically, preactive managers are concerned with facts, figures and forecasts; they are firm believers in the scientific planning methods they learnt in management schools. As such, one might say that this is the most common species of manager in present-day organisations. Those who are not natural preactivists will fly the preactivist flag when asked for their opinions by their managers, because it’s the expected answer.

A key characteristic of preactivist managers is that they tend to revel in creating plans rather than implementing them. As Ackoff puts it, “Preactivists see planning as a sequence of discrete steps which terminate with acceptance or rejection of their plans. What happens to their plans is the responsibility of others.”

Interactivism

Interactivist planners are not satisfied with the present, but unlike reactivists or preactivists, they do not hanker for the past, nor do they believe the future is automatically going to be better. They do want to make things better than they were or currently are, but they are continually adjusting their plans for the future by learning from and responding to events.  In short, they believe they can shape the future by their actions.

Experimentation is the hallmark of interactivists.  They are willing to try different approaches and learn from them. Although they believe in learning by experience, they do not want to wait for experiences to happen; they would rather induce them by (often small-scale) experimentation.

Ackoff labels interactivists as idealisers – people who pursue ideals they know cannot be achieved, but can be approximated or even reformulated in the light of new knowledge. As he puts it:

They treat ideals as relative absolutes: ultimate objectives whose formulation depends on our current knowledge and understanding of ourselves and our environment. Therefore, they require continuous reformulation in light of what we learn from approaching them.

To use a now fashionable term, interactivists are intrapreneurs.

Discussion

Although Ackoff shows a clear bias towards  interactivists in his article, he does mention that specific situations may call for other types of planners. As he puts it:

Despite my obvious bias in my characterization of these four postures, there are circumstances in which each is most appropriate. Put simply, if the internal and external dynamics of a system (the tide) are taking one where one wants to go and are doing so quickly enough, inactivism is appropriate. If the direction of change is right but the movement is too slow, preactivism is appropriate. If the change is taking one where one does not want to go and one prefers to stay where one is or was, reactivism is appropriate. However, if one is not willing to settle for the past, the present or the future that appears likely now, interactivism is appropriate.

The key point he makes is that inactivists and preactivists treat planning as a ritual because they see the future as something they cannot change. They can only plan for it (and hope for the best). Interactivists, on the other hand, look for opportunities to influence events and thus potentially change the future. Although both preactivists and interactivists are forward-looking, interactivists tend to be long-term thinkers compared to preactivists, who are more concerned with the short- to medium-term future.

Conclusion

Ackoff’s classification of planners in organisations is interesting because it highlights the kind of future-focused attitude that managers ought to take.  The sad fact, though, is that a significant number of managers are myopic preactivists, focused on this year’s performance targets rather than what their organisations might look like five or even ten years down the line. This is not the fault of individuals, though. The blame for the undue prevalence of myopic preactivism can be laid squarely on the deep-seated management dogma that rewards short-termism.

Written by K

August 20, 2015 at 9:30 pm

The façade of expertise


Introduction

Since the 1980s, intangible assets, such as knowledge, have come to represent an ever-increasing proportion of an organisation’s net worth.  One of the problems associated with treating knowledge as an asset is that it is difficult to codify in its entirety. This is largely because knowledge is context and skill dependent, and these are hard to convey by any means other than experience. This is the well-known tacit versus explicit knowledge problem that I have written about at length elsewhere (see this post and this one, for example).  Although a recent development in knowledge management technology goes some way towards addressing the problem of context, it still looms large and is likely to for a while.

Although the problem mentioned above is well known, it hasn’t stopped legions of consultants and professional organisations from attempting to codify and sell expertise: management consultancies and enterprise IT vendors being prime examples. This has given rise to the notion of a knowledge-intensive firm, an organization in which most work is said to be of an intellectual nature and where well-educated, qualified employees form the major part of the work force.   However, the slipperiness of knowledge mentioned in the previous paragraph suggests that the notion of a knowledge-intensive firm (and, by implication, expertise) is problematic. Basically, if it is true that knowledge itself is elusive and hard to codify, it raises the question of what exactly such firms (and their employees) sell.

In this post, I shed some light on this question by drawing on an interesting paper by Mats Alvesson entitled, Knowledge Work: Ambiguity, Image and Identity (abstract only), as well as my experiences in dealing with IT services and consulting firms.

Background: the notion of a knowledge-intensive firm

The first point to note is that the notion of a knowledge-intensive firm is not particularly precise. Based on the definition offered above, it is clear that a wide variety of organisations may be classified as knowledge intensive firms. For example, management consultancies and enterprise software companies would fall into this category, as would law, accounting and research & development firms.  The same is true of the term knowledge work(er).

One of the implications of the vagueness of the term is that any claim to being a knowledge-intensive firm or knowledge worker can be contested. As Alvesson states:

It is difficult to substantiate knowledge-intensive companies and knowledge workers as distinct, uniform categories. The distinction between these and non- (or less) knowledge-intensive organization/non-knowledge workers is not self-evident, as all organizations and work involve “knowledge” and any evaluation of “intensiveness” is likely to be contestable. Nevertheless, there are, in many crucial respects, differences between many professional service and high-tech companies on the one hand, and more routinized service and industry companies on the other, e.g. in terms of broadly socially shared ideas about the significance of a long theoretical education and intellectual capacities for the work. It makes sense to refer to knowledge-intensive companies as a vague but meaningful category, with sufficient heuristic value to be useful. The category does not lend itself to precise definition or delimitation and it includes organizations which are neither unitary nor unique. Perhaps the claim to knowledge-intensiveness is one of the most distinguishing features…

The last line in the excerpt is particularly interesting to me because it resonates with my experience: having been through countless IT vendor and management consulting briefings on assorted products and services, it is clear that a large part of their pitch is aimed at establishing their credibility as experts in the field, even though they may not actually be so.

The ambiguity of knowledge work

Expertise in skill-based professions is generally unambiguous – an incompetent pilot will be exposed soon enough. In knowledge work, however, genuine expertise is often not so easily discernible. Alvesson highlights a number of factors that make this so.

Firstly, much of the day-to-day work of knowledge workers such as management consultants and IT experts involves routine matters – meetings, documentation etc. – that do not make great demands on their skills. Moreover, even when involved in one-off tasks such as projects, these workers are generally assigned tasks that they are familiar with. In general, therefore, the nature of their work requires them to follow already instituted processes and procedures.  A somewhat unexpected consequence of this is that incompetence can remain hidden for a long time.

A second issue is that the quality of so-called knowledge work is often hard to evaluate – indeed evaluations may require the engagement of independent experts! This is true even of relatively mundane expertise-based work. As Alvesson states:

Comparisons of the decisions of expert and novice auditors indicate no relationship between the degree of expertise (as indicated by experience) and consensus; in high-risk and less standard situations, the experts’ consensus level was lower than that of novices. [An expert remarked that] “judging the quality of an audit is an extremely problematic exercise” and says that consumers of the audit service “have only a very limited insight into the quality of work undertaken by an audit firm”.

This is true of many different kinds of knowledge work. As Alvesson tells us:

How can anyone tell whether a headhunting firm has found and recruited the best possible candidates or not…or if an audit has been carried out in a high-quality way? Or if the proposal by strategic management consultants is optimal or even helpful, or not. Of course, sometimes one may observe whether something works or not (e.g. after the intervention of a plumber), but normally the issues concerned are not that simple in the context in which the concept of knowledge-intensiveness is frequently used. Here we are mainly dealing with complex and intangible phenomena. Even if something seems to work, it might have worked even better or the cost of the intervention been much lower if another professional or organization had carried out the task.

In view of the above, it is unlikely that market mechanisms would be effective in sorting the competent from the incompetent. Indeed, my experience of dealing with major consulting firms (in IT) leads me to believe that market mechanisms tend to make them clones of each other, at least in terms of their offerings and approach. This may be part of the reason why client firms tend to base their contracting decisions on cost or existing relationships – it makes sense to stick with the known, particularly when the alternatives offer choices akin to Pepsi vs Coke.

But that is not the whole story: experts are often hired for ulterior motives. On the one hand, they might be hired because they confer legitimacy – “no one ever got fired for hiring McKinsey” is a line I have heard in more than a few workplaces. On the other hand, they also make convenient scapegoats when the proverbial stuff hits the fan.

Image cultivation

One of the consequences of the ambiguity of knowledge-intensive work is that employees in such firms are forced to cultivate and maintain the image of being experts – hence the stereotype of the suited, impeccably groomed Big 4 consultant. As Alvesson points out, though, image cultivation goes beyond the individual employee:

This image must be managed on different levels: professional-industrial, corporate and individual. Image may be targeted in specific acts and arrangements, in visible symbols for public consumption but also in everyday behavior, within the organization and in interaction with others. Thus image is not just of importance in marketing and for attracting personnel but also in and after production. Size and a big name are therefore important for many knowledge-intensive companies – and here we perhaps have a major explanation for all the mergers and acquisitions in accounting, management consultancy and other professional service companies. A large size is reassuring. A well-known brand name substitutes for difficulties in establishing quality.

Another aspect of image cultivation is the use of rhetoric. Here are some examples taken from the websites of major consulting firms:

“No matter the challenge, we focus on delivering practical and enduring results, and equipping our clients to grow and lead.” – McKinsey

“We continue to redefine ourselves and set the bar higher to continually deliver quality for clients, our people, and the society in which we operate.” – Deloitte

“Cutting through complexity” – KPMG

“Creating value for our clients, people and communities in a changing world” – PwC

Some clients are savvy enough not to be taken in by the platitudinous statements listed above. However, the fact that knowledge-intensive firms continue to use such second-rate rhetoric to attract custom suggests that many customers are swayed by marketing slogans. These slogans are sometimes given an aura of plausibility via case studies intended to back the claims made. More often than not, however, the case studies are based on a selective presentation of facts that depicts the firm in the best possible light.

A related point is that such firms often flaunt their current client lists in order to attract new clientele. Lines like “our client list includes 8 of the top ten auto manufacturers in the world” are not uncommon, the unstated implication being that if you are an auto manufacturer, you cannot afford not to engage us. The image cultivation process continues well after the consulting engagement is underway. Indeed, much of a consultant’s effort is directed at ensuring that the engagement will be extended.

Finally, it is important to point out the need to maintain an aura of specialness. Consultants and knowledge workers are valued for what they know, so it is in their interest to maintain a certain degree of exclusivity over that knowledge. Guilds (such as the Project Management Institute) act as gatekeepers, endorsing the capabilities of knowledge workers through membership criteria based on experience and/or professional certification programs.

Maintaining the façade

Because knowledge workers deal with intangibles, they have to work harder to maintain their identities than those who have more practical skills. They are therefore more susceptible to the vagaries and arbitrariness of organisational life. As Alvesson notes,

Given the high level of ambiguity and the fluidity of organizational life and interactions with external actors, involving a strong dependence on somewhat arbitrary evaluations and opinions of others, many knowledge-intensive workers must struggle more for the accomplishment, maintenance and gradual change of self-identity, compared to workers whose competence and results are more materially grounded…Compared with people who invest less self-esteem in their work and who have lower expectations, people in knowledge-intensive companies are thus vulnerable to frustrations contingent upon ambiguity of performance and confirmation.

Knowledge workers are also more dependent on managerial confirmation of their competence and value. Indeed, unlike the case of the machinist or designer, a knowledge worker’s product rarely speaks for itself. It has to be “sold”, first to management and then (possibly) to the client and the wider world.

The previous paragraphs of this section dealt with individual identity. However, this is not the whole story because organisations also play a key role in regulating the identities of their employees. Indeed, this is how they develop their brand. Alvesson notes four ways in which organisations do this:

  1. Corporate identity – large consulting firms are good examples of this. They regulate the identities of their employees through comprehensive training and acculturation programs. As a board member remarked to me recently, “I like working with McKinsey people, because I was once one myself and I know their approach and thinking processes.”
  2. Cultural programs – these are the near-mandatory organisational culture initiatives in large organisations. Such programs are usually based on a set of “guiding principles” intended to tell employees how to conduct themselves as members and representatives of the organisation. As Alvesson notes, these are often more effective than formal structures.
  3. Normalisation – these are the disciplinary mechanisms that are triggered when an employee violates an organisational norm. Examples include formal performance management and official reprimands. Typically, though, such mechanisms treat symptoms rather than causes. A failed project might result in a reprimand or poor performance review for the project manager, but the underlying systemic causes of the failure are unlikely to be addressed…or even acknowledged.
  4. Subjectification – this is where employees mould themselves to fit their roles or job descriptions. A good example is when job applicants project themselves as having certain skills and qualities in their resumes and interviews. If selected, they may spend their first few months learning what is acceptable and what is not. In time, the new behaviours are internalised and become part of their personalities.

It is clear from the above that maintaining the façade of expertise in knowledge work involves considerable effort and manipulation, and has little to do with genuine knowledge. Indeed, it is perhaps because genuine expertise is so hard to identify that people and organisations strive to maintain appearances.

Conclusion

The ambiguous nature of knowledge requires (and enables!) consultants and technology vendors to maintain a façade of expertise. This is done through careful cultivation of image via the rhetoric of marketing, branding and impression management. The onus is therefore on buyers to figure out if there is anything of substance behind the words and appearances. The volume of business enjoyed by big consulting firms suggests that this does not happen as often as it should, leading us to the inescapable conclusion that decision-makers in organisations are all too easily deceived by the façade of expertise.

Written by K

July 8, 2015 at 8:47 pm
