Most machine learning algorithms involve minimising an error measure of some kind (this measure is often called an objective function or loss function). For example, the error measure in linear regression problems is the famous mean squared error – i.e. the average of the squared differences between the predicted and actual values. Like the mean squared error, most objective functions depend on all points in the training dataset. In this post, I describe the support vector machine (SVM) approach, which focuses instead on finding the optimal separation boundary between datapoints that have different classifications. I'll elaborate on what this means in the next section.
Here’s the plan in brief. I’ll begin with the rationale behind SVMs using a simple case of a binary (two class) dataset with a simple separation boundary (I’ll clarify what “simple” means in a minute). Following that, I’ll describe how this can be generalised to datasets with more complex boundaries. Finally, I’ll work through a couple of examples in R, illustrating the principles behind SVMs. In line with the general philosophy of my “Gentle Introduction to Data Science Using R” series, the focus is on developing an intuitive understanding of the algorithm along with a practical demonstration of its use through a toy example.
The basic idea behind SVMs is best illustrated by considering a simple case: a set of data points that belong to one of two classes, red and blue, as illustrated in figure 1 below. To make things simpler still, I have assumed that the boundary separating the two classes is a straight line, represented by the solid green line in the diagram. In the technical literature, such datasets are called linearly separable.
In the linearly separable case, there is usually a fair amount of freedom in the way a separating line can be drawn. Figure 2 illustrates this point: the two broken green lines are also valid separation boundaries. Indeed, because there is a non-zero distance between the closest points of the two categories, there are an infinite number of possible separation lines. This, quite naturally, raises the question as to whether it is possible to choose a separation boundary that is optimal.
The short answer is: yes, there is. One way to do this is to select a boundary line that maximises the margin, i.e. the distance between the separation boundary and the points that are closest to it. Such an optimal boundary is illustrated by the black brace in Figure 3. The really cool thing about this criterion is that the location of the separation boundary depends only on the points that are closest to it. This means that, unlike in other classification methods, the classifier does not depend on any other points in the dataset. The directed lines between the boundary and the closest points on either side are called support vectors (these are the solid black lines in Figure 3). A direct implication of this is that the fewer the support vectors, the better the generalizability of the boundary.
Although the above sounds great, it is of limited practical value because real data sets are seldom (if ever) linearly separable.
So, what can we do when dealing with real (i.e. non-linearly separable) datasets?
A simple approach to tackle small deviations from linear separability is to allow a small number of points (those that are close to the boundary) to be misclassified. The number of possible misclassifications is governed by a free parameter C, which is called the cost. The cost is essentially the penalty associated with making an error: the higher the value of C, the less likely it is that the algorithm will misclassify a point.
This approach – which is called soft margin classification – is illustrated in Figure 4. Note the points on the wrong side of the separation boundary. We will demonstrate soft margin SVMs in the next section. (Note: at the risk of belabouring the obvious, the purely linearly separable case discussed in the previous paragraph is simply a special case of the soft margin classifier.)
Real life situations are much more complex and cannot be dealt with using soft margin classifiers. For example, as shown in Figure 5, one could have widely separated clusters of points that belong to the same classes. Such situations, which require the use of multiple (and nonlinear) boundaries, can sometimes be dealt with using a clever approach called the kernel trick.
The kernel trick
Recall that in the linearly separable (or soft margin) case, the SVM algorithm works by finding a separation boundary that maximises the margin, which is the distance between the boundary and the points closest to it. The distance here is the usual straight line distance between the boundary and the closest point(s). This is called the Euclidean distance in honour of the great geometer of antiquity. The point to note is that this process results in a separation boundary that is a straight line, which, as Figure 5 illustrates, does not always work. In fact, in most cases it won't.
So what can we do? To answer this question, we have to take a bit of a detour…
What if we were able to generalize the notion of distance in a way that generates nonlinear separation boundaries? It turns out that this is possible. To see how, one has to first understand how the notion of distance can be generalized.
The key properties that any measure of distance must satisfy are:
- Non-negativity – a distance cannot be negative, a point that needs no further explanation I reckon 🙂
- Symmetry – that is, the distance between point A and point B is the same as the distance between point B and point A.
- Identity – the distance between a point and itself is zero.
- Triangle inequality – that is, the distance between point A and point C must be less than or equal to the sum of the distances between A and B and between B and C (equality holds only when B lies on the straight line segment joining A and C).
Any mathematical object that displays the above properties is akin to a distance. Such generalized distances are called metrics and the mathematical space in which they live is called a metric space. Metrics are defined using special mathematical functions designed to satisfy the above conditions. These functions are known as kernels.
The essence of the kernel trick lies in mapping the classification problem to a metric space in which the problem is rendered separable via a separation boundary that is simple in the new space, but complex – as it has to be – in the original one. Generally, the transformed space has a higher dimensionality, with each of the dimensions being (possibly complex) combinations of the original problem variables. However, this is not necessarily a problem because in practice one doesn’t actually mess around with transformations, one just tries different kernels (the transformation being implicit in the kernel) and sees which one does the job. The check is simple: we simply test the predictions resulting from using different kernels against a held out subset of the data (as one would for any machine learning algorithm).
It turns out that a particular function – called the radial basis function kernel (RBF kernel) – is very effective in many cases. The RBF kernel is essentially a Gaussian (or Normal) function with the Euclidean distance between pairs of points as the variable (see equation 1 below). The basic rationale behind the RBF kernel is that it tends to classify points that are close together (in the Euclidean sense) in the original space in the same way. This is reflected in the fact that the kernel decays (i.e. drops off to zero) as the Euclidean distance between points increases.
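For reference, here is the standard form of the RBF kernel (the equation referred to above, which is not reproduced in this version of the post):

k(x, x') = exp(-γ ||x - x'||^2)    (Equation 1)

where ||x - x'|| is the Euclidean distance between the points x and x', and γ is a free parameter that controls how quickly the kernel decays with that distance.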
The rate at which a kernel decays is governed by the parameter γ – the higher the value of γ, the more rapid the decay. This serves to illustrate that the RBF kernel is extremely flexible… but the flexibility comes at a price – the danger of overfitting for large values of γ. One should choose appropriate values of C and γ so as to ensure that the resulting kernel represents the best possible balance between flexibility and accuracy. We'll discuss how this is done in practice later in this article.
Finally, though it is probably obvious, it is worth mentioning that the separation boundaries for arbitrary kernels are also defined through support vectors as in Figure 3. To reiterate a point made earlier, this means that a solution that has fewer support vectors is likely to be more robust than one with many. Why? Because the data points defining support vectors are ones that are most sensitive to noise- therefore the fewer, the better.
There are many other types of kernels, each with their own pros and cons. However, I’ll leave these for adventurous readers to explore by themselves. Finally, for a much more detailed….and dare I say, better… explanation of the kernel trick, I highly recommend this article by Eric Kim.
Support vector machines in R
In this demo we’ll use the svm interface that is implemented in the e1071 R package. This interface provides R programmers access to the comprehensive libsvm library written by Chang and Lin. I’ll use two toy datasets: the famous iris dataset available with the base R package and the sonar dataset from the mlbench package. I won’t describe details of the datasets as they are discussed at length in the documentation that I have linked to. However, it is worth mentioning the reasons why I chose these datasets:
- As mentioned earlier, no real life dataset is linearly separable, but the iris dataset is almost so. Consequently, it is a good illustration of using linear SVMs. Although one almost never uses these in practice, I have illustrated their use primarily for pedagogical reasons.
- The sonar dataset is a good illustration of the benefits of using RBF kernels in cases where the dataset is hard to visualise (60 variables in this case!). In general, one would almost always use RBF (or other nonlinear) kernels in practice.
With that said, let's get right to it. I assume you have R and RStudio installed. For instructions on how to do this, have a look at the first article in this series. The processing preliminaries – loading libraries and data, and creating training and test datasets – are much the same as in my previous articles, so I won't dwell on these here. For completeness, however, I'll list all the code so you can run it directly in R or RStudio (a complete listing of the code can be found here):
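Since the original listing is not reproduced in this version of the post, the sketch below covers the steps just described; the 80/20 split and the seed are illustrative choices, so your numbers may differ slightly from those quoted below.

```r
# load the required library (install.packages("e1071") if needed)
library(e1071)

# load the iris dataset (ships with base R)
data(iris)

# set a seed for reproducibility (the value is arbitrary)
set.seed(42)

# create an illustrative 80/20 train/test split
train_index <- sample(seq_len(nrow(iris)), size = floor(0.8 * nrow(iris)))
trainset <- iris[train_index, ]
testset  <- iris[-train_index, ]

# build a linear, soft margin SVM with the default cost C = 1
svm_model <- svm(Species ~ ., data = trainset,
                 type = "C-classification", kernel = "linear", cost = 1)

# model summary, including the number of support vectors
summary(svm_model)
```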
The output from the SVM model shows that there are 24 support vectors. If desired, these can be examined using the SV variable in the model – i.e. via svm_model$SV.
The test prediction accuracy indicates that the linear kernel performs quite well on this dataset, confirming that it is indeed nearly linearly separable. To check performance by class, one can create a confusion matrix as described in my post on random forests. I'll leave this as an exercise for you. Another point is that we have used a soft margin classification scheme with a cost C = 1. You can experiment with this by explicitly changing the value of C. Again, I'll leave this as an exercise for you.
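As a starting point for those exercises, here is a minimal sketch (reusing the object names from the snippet above) that computes the test accuracy and a confusion matrix:

```r
# predict classes for the held out test set
pred_test <- predict(svm_model, testset)

# overall test accuracy
mean(pred_test == testset$Species)

# per-class performance via a confusion matrix
table(predicted = pred_test, actual = testset$Species)
```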
Before proceeding to the RBF kernel, I should mention a point that an alert reader may have noticed. The predicted variable, Species, can take on 3 values (setosa, versicolor and virginica). However, our discussion above dealt with a binary (2 valued) classification problem. This brings up the question as to how the algorithm deals with multiclass classification problems – i.e. those involving datasets with more than two classes. The libsvm algorithm (which svm uses) does this using a one-against-one classification strategy. Here's how it works:
- Divide the dataset (assumed to have N classes) into N(N-1)/2 datasets that have two classes each.
- Solve the binary classification problem for each of these subsets.
- Use a simple voting mechanism to assign a class to each data point.
Basically, each data point is assigned the most frequent classification it receives from all the binary classification problems it figures in.
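For example, the iris data has N = 3 classes, so libsvm solves 3(3-1)/2 = 3 binary problems – setosa vs. versicolor, setosa vs. virginica and versicolor vs. virginica – and each data point is assigned the class that wins the most of these pairwise votes.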
So much for the (admittedly unrealistic) linear classifier; let's move on to the real world. In the code below, I build SVM models for the sonar dataset using three different kernels:
- Linear kernel (this is for comparison with the following 2 kernels).
- RBF kernel with default values for the parameters C and γ.
- RBF kernel with optimal values for C and γ. The optimal values are obtained using the tune.svm function (also available in e1071), which essentially builds models for multiple combinations of parameter values and selects the best.
OK, let's go:
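The listing itself is not reproduced here; the sketch below shows the linear kernel step, with the Sonar data from mlbench and an illustrative 80/20 split and seed:

```r
# load libraries and the sonar dataset
library(e1071)
library(mlbench)
data(Sonar)

# illustrative 80/20 train/test split
set.seed(42)
train_index <- sample(seq_len(nrow(Sonar)), size = floor(0.8 * nrow(Sonar)))
trainsonar <- Sonar[train_index, ]
testsonar  <- Sonar[-train_index, ]

# linear kernel SVM, for comparison with the RBF models that follow
svm_linear <- svm(Class ~ ., data = trainsonar,
                  type = "C-classification", kernel = "linear")

# test set accuracy
pred_linear <- predict(svm_linear, testsonar)
mean(pred_linear == testsonar$Class)
```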
I’ll leave you to examine the contents of the model. The important point to note here is that the performance of the model with the test set is quite dismal compared to the previous case. This simply indicates that the linear kernel is not appropriate here. Let’s take a look at what happens if we use the RBF kernel with default values for the parameters:
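A sketch of this step, continuing with the objects defined in the previous snippet (in e1071, gamma defaults to 1/(number of predictors) and cost to 1):

```r
# RBF kernel SVM with default values for gamma and cost
svm_rbf <- svm(Class ~ ., data = trainsonar,
               type = "C-classification", kernel = "radial")

# test set accuracy with the default RBF kernel
pred_rbf <- predict(svm_rbf, testsonar)
mean(pred_rbf == testsonar$Class)
```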
That's a pretty decent improvement over the linear kernel. Let's see if we can do better with some parameter tuning. To do this, we first invoke tune.svm and use the parameters it gives us in the call to svm:
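A sketch of the tuning step; the parameter grid below is illustrative (tune.svm will search whatever grid it is given):

```r
# search a grid of gamma and cost values via cross-validation
tune_out <- tune.svm(Class ~ ., data = trainsonar, kernel = "radial",
                     gamma = 10^(-3:0), cost = 10^(-1:2))
summary(tune_out)

# refit the SVM using the best parameter combination found by tune.svm
svm_tuned <- svm(Class ~ ., data = trainsonar,
                 type = "C-classification", kernel = "radial",
                 gamma = tune_out$best.parameters$gamma,
                 cost  = tune_out$best.parameters$cost)

# test set accuracy with the tuned RBF kernel
pred_tuned <- predict(svm_tuned, testsonar)
mean(pred_tuned == testsonar$Class)
```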
That's a fairly decent improvement on the un-optimised case.
This brings us to the end of this introductory exploration of SVMs in R. To recap, the distinguishing feature of SVMs, in contrast to most other techniques, is that they attempt to construct optimal separation boundaries between different categories.
SVMs are quite versatile and have been applied to a wide variety of domains, ranging from chemistry to pattern recognition. They are best used in binary classification scenarios. This brings up the question of when SVMs should be preferred to other binary classification techniques such as logistic regression. The honest response is, "it depends" – but here are some points to keep in mind when choosing between the two. A general one is that SVM algorithms tend to be expensive both in terms of memory and computation, issues that can start to hurt as the size of the dataset increases.
Given all the above caveats and considerations, the best way to figure out whether an SVM approach will work for your problem may be to do what most machine learning practitioners do: try it out!
Data scientists are sometimes blind to the possibility that the predictions of their algorithms can have unforeseen negative effects on people. Ethical or social implications are easy to overlook when one finds interesting new patterns in data, especially if they promise significant financial gains. The Centrelink debt recovery debacle, recently reported in the Australian media, is a case in point.
Here is the story in brief:
Centrelink is an Australian Government organisation responsible for administering welfare services and payments to those in need. A major challenge such organisations face is ensuring that their clients are paid no less and no more than what is due to them. This is difficult because it involves crosschecking client income details across multiple systems owned by different government departments, a process that necessarily involves many assumptions. In July 2016, Centrelink unveiled an automated compliance system that compares income self-reported by clients to information held by the taxation office.
The problem is that the algorithm is flawed: it makes strong (and incorrect!) assumptions regarding the distribution of income across a financial year and, as a consequence, unfairly penalizes a number of legitimate benefit recipients. It is very likely that the designers and implementers of the algorithm did not fully understand the implications of their assumptions. Worse, from the errors made by the system, it appears they may not have adequately tested it either. But this did not stop them (or, quite possibly, their managers) from unleashing their algorithm on an unsuspecting public, causing widespread stress and distress. More on this a bit later.
Algorithms like the one described above are the subject of Cathy O’Neil’s aptly titled book, Weapons of Math Destruction. In the remainder of this article I discuss the main themes of the book. Just to be clear, this post is more riff than review. However, for those seeking an opinion, here’s my one-line version: I think the book should be read not only by data science practitioners, but also by those who use or are affected by their algorithms (which means pretty much everyone!).
Abstractions and assumptions
O'Neil begins with the observation that data algorithms are mathematical models of reality, and are necessarily incomplete because several simplifying assumptions are invariably baked into them. This point is important and often overlooked, so it is worth illustrating via an example.
When assessing a person's suitability for a loan, a bank will want to know whether the person is a good risk. It is impossible to model creditworthiness completely because we do not know all the relevant variables and those that are known may be hard to measure. To make up for their ignorance, data scientists typically use proxy variables, i.e. variables that are believed to be correlated with the variable of interest and are also easily measurable. In the case of creditworthiness, proxy variables might be things like gender, age, employment status, residential postcode etc. Unfortunately, many of these can be misleading, discriminatory or – worse – both.
The Centrelink algorithm provides a good example of such a "double-whammy" proxy. The key variable it uses is the difference between the client's annual income as reported by the taxation office and the annual income self-reported by the client. A large difference is taken to be indicative of an incorrect payment and hence an outstanding debt. This simplistic assumption overlooks the fact that most affected people are not in steady jobs and therefore do not earn regular incomes over the course of a financial year (see this article by Michael Griffin for a detailed example). Worse, this crude proxy places an unfair burden on vulnerable individuals for whom casual and part time work is a fact of life.
Worse still, for those wrongly targeted with a recovery notice, getting the errors sorted out is not a straightforward process. This is typical of a WMD. As O'Neil states in her book, "The human victims of WMDs…are held to a far higher standard of evidence than the algorithms themselves." Perhaps this is because the algorithms are often opaque. But that's a poor excuse. This is the only technical field where practitioners are held to a lower standard of accountability than those affected by their products.
O'Neil sums it up rather nicely when she calls algorithms like the Centrelink one weapons of math destruction (WMDs).
Self-fulfilling prophecies and feedback loops
A characteristic of WMDs is that their predictions often become self-fulfilling prophecies. For example, a person denied a loan by a faulty risk model is more likely to be denied again when he or she applies elsewhere, simply because it is on their record that they have been refused credit before. This kind of destructive feedback loop is typical of a WMD.
An example that O'Neil dwells on at length is a popular predictive policing program. Designed for efficiency rather than nuanced judgment, such algorithms measure what can easily be measured and act on it, ignoring the subtle contextual factors that inform the actions of experienced officers on the beat. Worse, they can lead to actions that exacerbate the problem. For example, targeting young people of a certain demographic for stop-and-frisk actions can alienate them to the point where they might well turn to crime out of anger and exasperation.
As Goldratt famously said, “Tell me how you measure me and I’ll tell you how I’ll behave.”
This is not news: savvy managers have known about the dangers of managing by metrics for years. The problem is now exacerbated manyfold by our ability to implement and act on such metrics on an industrial scale, a trend that leads to a dangerous devaluation of human judgement in areas where it is most needed.
A related problem – briefly mentioned earlier – is that some of the important variables are known but hard to quantify in algorithmic terms. For example, it is known that community-oriented policing, where officers on the beat develop relationships with people in the community, leads to greater trust. The degree of trust is hard to quantify, but it is known that communities that have strong relationships with their police departments tend to have lower crime rates than similar communities that do not. Such important but hard-to-quantify factors are typically missed by predictive policing programs.
Ironically, although WMDs can cause destructive feedback loops, they are often not subjected to feedback themselves. O'Neil gives the example of algorithms that gauge the suitability of potential hires. These programs often use proxy variables such as IQ test results, personality tests etc. to predict employability. Candidates who are rejected often do not realise that they have been screened out by an algorithm. Further, it often happens that candidates who are thus rejected go on to successful careers elsewhere. However, this post-rejection information is never fed back to the algorithm because it is impossible to do so.
In such cases, the only way to avoid being blackballed is to understand the rules set by the algorithm and play according to them. As O'Neil so poignantly puts it, "our lives increasingly depend on our ability to make our case to machines." However, this can be difficult because it assumes that a) people know they are being assessed by an algorithm and b) they have knowledge of how the algorithm works. In most hiring scenarios, neither of these holds.
Just to be clear, not all data science models ignore feedback. For example, sabermetric algorithms used to assess player performance in Major League Baseball are continually revised based on the latest player stats, thereby taking into account changes in performance.
Driven by data
In recent years, many workplaces have gradually seen the introduction of data-driven efficiency initiatives. Automated rostering, based on scheduling algorithms, is an example. These algorithms are based on operations research techniques that were developed for scheduling complex manufacturing processes. Although appropriate for driving efficiency in manufacturing, these techniques are inappropriate for optimising shift work because of the effect they have on people. As O'Neil states:
Scheduling software can be seen as an extension of just-in-time economy. But instead of lawn mower blades or cell phone screens showing up right on cue, it’s people, usually people who badly need money. And because they need money so desperately, the companies can bend their lives to the dictates of a mathematical model.
She correctly observes that an "oversupply of low wage labour is the problem." Employers know they can get away with treating people like machine parts because they have a large captive workforce. What makes this seriously scary is that vested interests can make it difficult to outlaw such exploitative practices. As O'Neil mentions:
Following [a] New York Times report on Starbucks’ scheduling practices, Democrats in Congress promptly drew up bills to rein in scheduling software. But facing a Republican majority fiercely opposed to government regulations, the chances that their bill would become law were nil. The legislation died.
Commercial interests invariably trump social and ethical issues, so it is highly unlikely that industry or government will take steps to curb the worst excesses of such algorithms without significant pressure from the general public. A first step towards this is to educate ourselves on how these algorithms work and the downstream social effects of their predictions.
Messing with your mind
There is an even more insidious way that algorithms mess with us. Hot on the heels of the recent US presidential election, there were suggestions that fake news items on Facebook may have influenced the results. Mark Zuckerberg denied this but, as Casey Newton noted in this trenchant tweet, the denial leaves Facebook in "the awkward position of having to explain why they think they drive purchase decisions but not voting decisions."
Be that as it may, the fact is that Facebook's own researchers have been conducting experiments to fine tune a tool they call the "voter megaphone". Here's what O'Neil says about it:
The idea was to encourage people to spread the word that they had voted. This seemed reasonable enough. By sprinkling people's news feeds with "I voted" updates, Facebook was encouraging Americans – more than sixty-one million of them – to carry out their civic duty….by posting about people's voting behaviour, the site was stoking peer pressure to vote. Studies have shown that the quiet satisfaction of carrying out a civic duty is less likely to move people than the possible judgement of friends and neighbours…Facebook started out with a constructive and seemingly innocent goal to encourage people to vote. And it succeeded…researchers estimated that their campaign had increased turnout by 340,000 people. That's a big enough crowd to swing entire states, and even national elections.
And if that’s not scary enough, try this:
For three months leading up to the election between President Obama and Mitt Romney, a researcher at the company….altered the news feed algorithm for about two million people, all of them politically engaged. The people got a higher proportion of hard news, as opposed to the usual cat videos, graduation announcements, or photos from Disney World….[the researcher] wanted to see if getting more [political] news from friends changed people's political behaviour. Following the election, [he] sent out surveys. The self-reported results showed that voter participation in this group inched up from 64 to 67 percent.
This might not sound like much, but considering the thin margins of recent presidential elections, it could be enough to change a result.
But it’s even more insidious. In a paper published in 2014, Facebook researchers showed that users’ moods can be influenced by the emotional content of their newsfeeds. Here’s a snippet from the abstract of the paper:
In an experiment with people who use Facebook, we test whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed. When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks.
As you might imagine, there was a media uproar following which the lead researcher issued a clarification and Facebook officials duly expressed regret (but, as far as I know, not an apology). To be sure, advertisers have been exploiting this kind of “mind control” for years, but a public social media platform should (expect to) be held to a higher standard of ethics. Facebook has since reviewed its internal research practices, but the recent fake news affair shows that the story is to be continued.
Disarming weapons of math destruction
The Centrelink debt debacle, the Facebook mood contagion experiments and the other case studies mentioned in the book illustrate the myriad ways in which Big Data algorithms have a pernicious effect on our day-to-day lives. Quite often people remain unaware of their influence, wondering why a loan was denied or a job application didn't go their way. Just as often, they are aware of what is happening, but are powerless to change it – shift scheduling algorithms being a case in point.
This is not how it was meant to be. Technology was supposed to make life better for all, not just the few who wield it.
So what can be done? Here are some suggestions:
- To begin with, education is key. We must work to demystify data science and create a general awareness of data science algorithms and how they work. O'Neil's book is an excellent first step in this direction (although it is very thin on details of how the algorithms work).
- Develop a code of ethics for data science practitioners. It is heartening to see that the IEEE has recently come up with a discussion paper on ethical considerations for artificial intelligence and autonomous systems and the ACM has proposed a set of principles for algorithmic transparency and accountability. However, I should also tag this suggestion with the warning that codes of ethics are not very effective, as they can be easily violated. One has to – somehow – embed ethics in the DNA of data scientists. I believe one way to do this is through practice-oriented education, in which data scientists-in-training grapple with ethical issues through data challenges and hackathons. As Wittgenstein famously said, "it is clear that ethics cannot be articulated." Ethics must be practiced.
- Put in place a system of reliable algorithmic audits within data science departments, particularly those that do work with significant social impact.
- Increase transparency a) by publishing information on how algorithms predict what they predict and b) by making it possible for those affected by the algorithm to access the data used to classify them as well as their classification, how it will be used and by whom.
- Encourage the development of algorithms that detect bias in other algorithms and correct it.
- Inspire aspiring data scientists to build models for the good.
It is only right that the last word in this long riff should go to O'Neil, whose work inspired it. Towards the end of her book she writes:
Big Data processes codify the past. They do not invent the future. Doing that requires moral imagination, and that’s something that only humans can provide. We have to explicitly embed better values into our algorithms, creating Big Data models that follow our ethical lead. Sometimes that will mean putting fairness ahead of profit.
Excellent words for data scientists to live by.
There are two facets to the operation of IT systems and processes in organisations: governance, the standards and regulations associated with a system or process; and execution, which relates to steering the actual work of the system or process in specific situations.
An example might help clarify the difference:
The purpose of project management is to keep projects on track. There are two aspects to this: one pertaining to the project management office (PMO), which is responsible for the standards and regulations associated with managing projects in general, and the other relating to the day-to-day work of steering a particular project. The two sometimes work at cross-purposes. For example, successful project managers know that much of their work is about navigating their projects through the potentially treacherous terrain of their organisations, an activity that sometimes necessitates working around, or even breaking, rules set by the PMO.
Governance and steering share a common etymological root: the word kybernetes, which means steersman in Greek. It also happens to be the root word of Cybernetics, which is the science of regulation or control. In this post, I apply a key principle of cybernetics to a couple of areas of enterprise IT.
An oft quoted example of a cybernetic system is a thermostat, a device that regulates temperature based on inputs from the environment. Most cybernetic systems are way more complicated than a thermostat. Indeed, some argue that the Earth is a huge cybernetic system. A smaller scale example is a system consisting of a car + driver wherein a driver responds to changes in the environment thereby controlling the motion of the car.
Cybernetic systems vary widely not just in size, but also in complexity. A thermostat is concerned only with the ambient temperature, whereas the driver of a car has to worry about a lot more (e.g. the weather, traffic, the condition of the road, kids squabbling in the back seat etc.). In general, the more complex the system and its processes, the larger the number of variables associated with it. Put another way, complex systems must be able to deal with a greater variety of disturbances than simple systems.
The law of requisite variety
It turns out there is a fundamental principle – the law of requisite variety – that governs the capacity of a system to respond to changes in its environment. The law is a quantitative statement about the different types of responses that a system needs to have in order to deal with the range of disturbances it might experience.
According to this paper, the law of requisite variety asserts that:
The larger the variety of actions available to a control system, the larger the variety of perturbations it is able to compensate.
V(E) ≥ V(D) – V(R) – K
where V represents variety, E represents the essential variable(s) to be controlled, D represents the disturbance, R the regulation and K the passive capacity of the system to absorb shocks. The terms are explained briefly below:
V(E) represents the set of desired outcomes for the controlled environmental variable: desired temperature range in the case of the thermostat, successful outcomes (i.e. projects delivered on time and within budget) in the case of a project management office.
V(D) represents the variety of disturbances the system can be subjected to (the ways in which the temperature can change, the external and internal forces on a project)
V(R) represents the various ways in which a disturbance can be regulated (the regulator in a thermostat, the project tracking and corrective mechanisms prescribed by the PMO)
K represents the buffering capacity of the system – i.e. stored capacity to deal with unexpected disturbances.
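To make the inequality concrete, here is a small illustration of my own (assuming variety is measured in bits, one common convention): if the disturbances carry V(D) = 6 bits of variety, the regulator can muster V(R) = 4 bits of distinct responses and the buffer absorbs K = 1 bit, then V(E) ≥ 6 – 4 – 1 = 1, i.e. at least one bit of variation in the essential variable remains uncontrolled. Driving V(E) down to zero requires V(R) + K ≥ V(D) – in other words, the regulator (together with its buffer) must have at least as much variety as the disturbances it faces.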
I won't say any more about the law of requisite variety as it would take me too far afield; the interested and technically minded reader is referred to the link above or this paper for more (full pdf here).
Implications for enterprise IT
In plain English, the law of requisite variety states that only “variety can absorb variety.” As stated by Anthony Hodgson in an essay in this book, the law of requisite variety:
…leads to the somewhat counterintuitive observation that the regulator must have a sufficiently large variety of actions in order to ensure a sufficiently small variety of outcomes in the essential variables E. This principle has important implications for practical situations: since the variety of perturbations a system can potentially be confronted with is unlimited, we should always try to maximize its internal variety (or diversity), so as to be optimally prepared for any foreseeable or unforeseeable contingency.
This is entirely consistent with our intuitive expectation that the best way to deal with the unexpected is to have a range of tools and approaches at one's disposal.
In the remainder of this piece, I’ll focus on the implications of the law for an issue that is high on the list of many corporate IT departments: the standardization of IT systems and/or processes.
The main rationale behind standardizing an IT process is to handle all possible demands (or use cases) via a small number of predefined responses. When put this way, the connection to the law of requisite variety is clear: a request made upon a function such as a service desk or project management office (PMO) is a disturbance and the way they regulate or respond to it determines the outcome.
Requisite variety and the service desk
A service desk is a good example of a system that can be standardized. Although users may initially complain about having to log a ticket instead of calling Nathan directly, in time they get used to it, and may even start to see the benefits…particularly when Nathan goes on vacation.
The law of requisite variety tells us that successful standardization requires all possible demands made on the system to be known and regulated by the V(R) term in the equation above. In the case of a service desk, this is dealt with by a hierarchy of support levels. 1st level support deals with routine calls (incidents and service requests in ITIL terminology) such as system access and simple troubleshooting. Calls that cannot be handled by this tier are escalated to the 2nd and 3rd levels as needed. The assumption here is that, between them, the three support tiers should be able to handle the majority of calls.
Slack (the K term) relates to unexploited capacity. Although needed in order to deal with unexpected surges in demand, slack is expensive to carry when one doesn't need it. Given this, it makes sense to incorporate such scenarios into the repertoire of standard system responses (i.e. the V(R) term) whenever possible. One way to do this is to anticipate surges in demand and hire temporary staff to handle them. Another way is to deal with infrequent scenarios outside the system – i.e. deem them out of scope for the service desk.
Service desk standardization is thus relatively straightforward to achieve provided:
- The kinds of calls that come in are largely predictable.
- The work can be routinized.
- All non-routine work – such as an application enhancement request or a demand for a new system – is dealt with outside the system via (say) a change management process.
All this will be quite unsurprising and obvious to folks working in corporate IT. Now let’s see what happens when we apply the law to a more complex system.
Requisite variety and the PMO
Many corporate IT leaders see the establishment of a PMO as a way to control costs and increase efficiency of project planning and execution. PMOs attempt to do this by putting in place governance mechanisms. The underlying cause-effect assumption is that if appropriate rules and regulations are put in place, project execution will necessarily improve. Although this sounds reasonable, it often does not work in practice: according to this article, a significant fraction of PMOs fail to deliver on the promise of improved project performance. Consider the following points quoted directly from the article:
- “50% of project management offices close within 3 years (Association for Project Mgmt)”
- “Since 2008, the correlated PMO implementation failure rate is over 50% (Gartner Project Manager 2014)”
- “Only a third of all projects were successfully completed on time and on budget over the past year (Standish Group’s CHAOS report)”
- “68% of stakeholders perceive their PMOs to be bureaucratic (2013 Gartner PPM Summit)”
- “Only 40% of projects met schedule, budget and quality goals (IBM Change Management Survey of 1500 execs)”
The article goes on to point out that the main reason for the statistics above is that there is a gap between what a PMO does and what the business expects it to do. For example, according to the Gartner review quoted in the article, over 60% of the stakeholders surveyed believe their PMOs are overly bureaucratic. I can't vouch for the veracity of the numbers here as I cannot find the original paper. Nevertheless, anecdotal evidence (via various articles and informal conversations) suggests that a significant number of PMOs fail.
There is a curious contradiction between the case of the service desk and that of the PMO. In the former, process and methodology seem to work whereas in the latter they don’t.
The answer, as you might suspect, has to do with variety. Projects and service requests are very different beasts. Among other things, they differ in:
- Duration: A project typically runs over many months whereas a service request has a lifetime of days.
- Technical complexity: A project involves many (initially ill-defined) technical tasks that have to be coordinated and whose outputs have to be integrated. A service request typically consists of one (or a small number of) well-defined tasks.
- Social complexity: A project involves many stakeholder groups, with diverse interests and opinions. A service request typically involves considerably fewer stakeholders, with limited conflicts of opinions/interests.
It is not hard to see that these differences make for greater variety in projects than in service requests. The reason that standardization (usually) works for service desks but (often) fails for PMOs is that PMOs are subjected to a greater variety of disturbances than service desks.
The key point is that the increased variety in the case of the PMO precludes standardisation. As the law of requisite variety tells us, there are two ways to deal with variety: regulate it or adapt to it. Most PMOs take the regulation route, leading to over-regulation and outcomes that are less than satisfactory. This is exactly what is reflected in the complaint about PMOs being overly bureaucratic. The simple and obvious solution is for PMOs to be more flexible – specifically, they must be able to adapt to the ever-changing demands made upon them by their organisations' projects. In terms of the law of requisite variety, PMOs need to have the capacity to change the system response, V(R), on the fly. In practice this means recognising the uniqueness of requests by avoiding the reflex, cookie-cutter responses that characterise bureaucratic PMOs.
The law of requisite variety is a general principle that applies to any regulated system. In this post I applied the law to two areas of enterprise IT – service management and project governance – and discussed why standardization works well for the former but less satisfactorily for the latter. Indeed, in view of the considerable differences in the duration and complexity of service requests and projects, it is unreasonable to expect that standardization will work well for both. The key takeaway from this piece is therefore a simple one: those who design IT functions should pay attention to the variety that the functions will have to cope with, and bear in mind that standardization works well only if variety is known and limited.
“It isn’t that they can’t see the solution. It is that they can’t see the problem” – GK Chesterton
Examples of vendor-generated hype about data science are not hard to find, I found one on the very first site I visited: a large technology and services vendor who, in their own words, claim their analytics solutions help organisations “engage with data to answer the toughest business questions, uncover patterns and pursue breakthrough ideas.” I’ve deliberately avoided linking to the guilty party because there are many others that spout similar rhetoric.
Unfortunately it seems to work: according to Gartner, “by 2020, predictive and prescriptive analytics will attract 40% of enterprises’ net new investment in business intelligence and analytics.” This trend is accompanied by a concomitant increase in demand for data science education, fuelled by remarks along the lines that data science is “The Sexiest Job of the 21st Century.”
By and large, data science education tends to focus on algorithms and technology, but its practice involves much more. The vendor who claims that technology can help organisations grapple with “toughest business questions” and “pursue breakthrough ideas” is singularly silent about where these questions or ideas come from. Data is meaningless without a meaningful hypothesis. Problem is, in the real world questions or hypotheses aren’t obvious; one has to work to formulate them. As the management icon Russell Ackoff once said, “Outside of school, problems are seldom given; they have to be taken, extracted from complex situations…”
The art of taking problems is what sensemaking is all about.
Unfortunately, it is a skill that is typically ignored by data science educators.
Probably because it is hard to teach…but the good news is that it can be learnt. Like most tacit skills, sensemaking is best learnt by doing, that is, by formulating problems in real-world situations. Before I get to that, however, let’s take a brief detour.
Real world problems are characterised by ambiguity
An important aspect of real-world problems – as opposed to classroom ones – is that they are invariably fraught with ambiguity. For example, a customer’s requirements may be vague or the available data incomplete and messy. What this means is that there is no guarantee one will be able to formulate a well-posed problem, let alone get a useful answer. Worse, unlike a risk-based situation in which uncertainty can be quantified, one cannot even figure out the odds of success.
The human brain processes quantifiable uncertainty (aka risk) and ambiguity very differently. The former, which can be calculated, is dealt with by the prefrontal cortex which is responsible for decision making and goal-oriented thinking. Ambiguity, on the other hand, is processed by the amygdala, which deals with emotions. The upshot of this is that ambiguity evokes an emotional response, the most common one being anxiety.
Although some people are innately better at coping with anxiety than others, it is possible to get better at it by repeatedly putting oneself in high-pressure (yet safe) situations that are ambiguous. For data science students, hackathons provide a perfect opportunity to do this.
Ambiguity in data science – tales from two hackathons
Over the last two months, I’ve had the privilege of being a part of the Master of Data Science Innovation (MDSI) program run by the Connected Intelligence Centre at UTS. The course director, Theresa Anderson, sees hackathons as a great way for students to learn how to handle ambiguity. So, apart from regular coursework assignments, students are encouraged to participate in external hackathons sponsored by industry and government organisations. This gives them opportunities to gain practical experience in formulating problems in ambiguous and high-pressure environments.
Datacake at GovHack
A few MDSI student teams participated in a GovHack event earlier this year. Here's what William Azevedo, a member of a team that called itself Datacake, wrote about the team's problem formulation journey at the event:
The challenge is simple: the competitors should form teams, identify a problem and use data from government agencies from Australia and New Zealand to present a solution to the problem. Naturally, this solution should bring some benefit to the society.
I'm not sure I'd use the word simple…but the importance of problem formulation comes through quite clearly. Here's how he and his team went about it:
As a starting point, our team published an online survey to understand how safe people feel when walking on the streets, especially at night. As we didn’t have much time, we spread the message via social networks. In a couple of hours, we received 44 answers. It gave us enough information to back our idea.
Notice the process used in defining the problem – the team realised they did not know enough to define a meaningful problem so they went and got relevant data. Following this:
Our team analysed the answers of the survey, engaged in passionate discussions, took tips from the mentors, had lots of coffee and designed some cool diagrams on the blackboard.
…and then his description of the Aha moment when a good idea emerged:
Then the magic happened. We had this idea of merging information about crime, demographics, weather, land zoning and street illumination to provide a map of the safe and unsafe areas within a suburb.
An important point is that sensemaking is best done collaboratively. Since the problem is ambiguous or even undefined (as in this case), no individual has privileged access to the "truth." It is therefore important to bring diverse perspectives to bear on the problem. Indeed, sensemaking may be thought of as collaborative problem formulation and solving. In view of this, it is interesting to hear what other members of Team Datacake had to say about their problem formulation process. Here's a comment from Anthony So:
During the whole weekend we really forced ourselves to go deep and asked “Why is it happening? Why is it happening? Why is it happening?” every time we found an interesting pattern. We really wanted to understand the true root causes of those accidents. We didn’t want to stay at a descriptive level. We knew the answers were behavioural. We knew there were multiple problems and therefore require different answers and solutions. We did different techniques to do so: machine learning, stats, data visualisation. It didn’t matter which we used the only important point was how can we get to answers of those questions.
The specific area they looked at was pedestrian safety. They found that obvious variables, such as driver fatigue and hazards were not significant, so they started looking for other potential factors. Here’s how Anthony put it:
For instance we built a classification model on the severity of the accidents involving children but we didn’t use it to make predictions. We used it to identify the important features (and unimportant) for those cases. We found out that some of the variables related to the environment (Primary_hazardous_feature, Surface_condition, Weather…) and to the drivers (Fatigue_involved_in_crash…) were not important. This gave us a good indication that those accidents are mostly related directly to the behaviour of the children. So we kept diving further and further and found 3 postcodes with higher numbers of accidents than others. We focused on those 3 areas and we kept going deeper and deeper…
In the end Datacake came up with a few suggestions for improving pedestrian safety. They were awarded a prize for their efforts, so the problem they formulated and solved was clearly valuable to the sponsors.
A couple of weekends ago, Pepper Money, Australia's largest non-bank lender, sponsored a day-long internal hackathon for MDSI students, with a hefty winner-take-all prize as an incentive. The challenge was quite open-ended, and had to do with helping the organisation develop a consistent brand voice. Participants were given a small corpus of text files from the organisation's public and social media sites, along with very general guidelines on how to proceed. Details were left entirely to the teams.
As one might expect, most teams spent the first few hours struggling to define a relevant and tractable problem – relevance being paramount for the client and tractability for the teams. Being a mentor at the event, I was able to observe how different teams handled this. Among other things, I was particularly impressed by how some teams with very little text mining experience were able to – in a few hours – come up with a good problem, an approach to solve it… and, most importantly, make decent progress by day's end.
I won’t go into details except to say that the approaches were diverse, ranging from the somewhat philosophical to the very technical. A couple of examples:
- Using Aristotle’s notion of modes of persuasion to analyse and evaluate marketing material.
- Drawing on deep learning technology (Recurrent Neural Networks via Theano) to build a brand voice generator.
I was amazed at the diversity of solutions the groups came up with, and so were the other mentors and the sponsor. Blair Hudson, Innovation Portfolio Manager at Pepper Money, summed the day up very well when he said:
#PepxUTS was our first hackathon event, challenging students to build data science solutions in a day to allow everyone at Pepper to communicate using a consistent brand voice. Our Co-Group CEOs both joined in for judging and awarded the winners. It was a rewarding day for all involved
(For some vignettes from the day, check out the #PepxUTS hashtag on Twitter.)
The day’s experiences left me ever more convinced that hackathons are an excellent vehicle for learning and demonstrating the practical utility of sensemaking skills.
The two case studies highlight the benefits of sensemaking skills, both for students and organisations. On the one hand, students who participated got valuable experience in formulating problems collaboratively in high-pressure, high-ambiguity situations. This is a skill that cannot be learnt in classrooms, MOOCs or even in online data challenges (like Kaggle) where problems tend to be clearly defined. On the other hand, sponsoring organisations have benefited from new insights into longstanding problems.
Finally, it should be clear that although I’ve focused on educational settings, what I’ve said for students applies to organisational settings too: there’s nothing to stop organisations from using hackathons as a means to help their employees learn sensemaking skills.
To conclude, the main point I want to make is that the most important situations we encounter at work (and even in our personal lives) are usually fraught with ambiguity. Our first reaction is to jump into problem solving mode because it feels like the right thing to do. In reality, one is generally better off stepping back and taking the time to think the situation through, preferably with a group of diversely skilled individuals. All too often this sensemaking step is neglected, and teams end up solving an irrelevant problem.
To paraphrase Chesterton, in order to see the right solution, one must first see the right problem.
Many thanks to Blair Hudson, William Azevedo and Anthony So for their contributions to this piece.
In a previous post, I described how decision tree algorithms work and demonstrated their use via the rpart library in R. Decision trees work by splitting a dataset recursively. That is, subsets arising from a split are further split until a predetermined termination criterion is reached. At each step, a split is made based on the independent variable that results in the largest possible reduction in heterogeneity of the dependent variable.
(Note: readers unfamiliar with decision trees may want to read that post before proceeding)
The main drawback of decision trees is that they are prone to overfitting. The reason for this is that trees, if grown deep, are able to fit all kinds of variations in the data, including noise. Although it is possible to address this partially by pruning, the result often remains less than satisfactory. This is because the algorithm makes a locally optimal choice at each split without any regard to whether the choice is the best one overall. A poor split made in the initial stages can thus doom the model, a problem that cannot be fixed by post-hoc pruning.
In this post I describe random forests, a tree-based algorithm that addresses the above shortcoming of decision trees. I’ll first describe the intuition behind the algorithm via an analogy and then do a demo using the R randomForest library.
Motivating random forests
One of the reasons for the popularity of decision trees is that they reflect the way humans make decisions: by weighing up options at each stage and choosing the best one available. The analogy is particularly useful because it also suggests how decision trees can be improved.
One of the lifelines in the game show, Who Wants to be A Millionaire, is “Ask The Audience” wherein a contestant can ask the audience to vote on the answer to a question. The rationale here is that the majority response from a large number of independent decision makers is more likely to yield a correct answer than one from a randomly chosen person. There are two factors at play here:
- People have different experiences and will therefore draw upon different “data” to answer the question.
- People have different knowledge bases and preferences and will therefore draw upon different “variables” to make their choices at each stage in their decision process.
Taking a cue from the above, it seems reasonable to build many decision trees using:
- Different sets of training data.
- Randomly selected subsets of variables at each split of every decision tree.
Predictions can then be made by taking the majority vote over all trees (for classification problems) or by averaging results over all trees (for regression problems). This is essentially how the random forest algorithm works.
The net effect of the two strategies is to reduce overfitting by a) averaging over trees created from different samples of the dataset and b) decreasing the likelihood of a small set of strong predictors dominating the splits. The price paid is reduced interpretability as well as increased computational complexity. But then, there is no such thing as a free lunch.
The mechanics of the algorithm
Although we will not delve into the mathematical details of the algorithm, it is important to understand how the two points made above are implemented in the algorithm.
Bootstrap aggregating… and a (rather cool) error estimate
A key feature of the algorithm is the use of multiple datasets for training individual decision trees. This is done via a neat statistical trick called bootstrap aggregating (also called bagging).
Here’s how bagging works:
Assume you have a dataset of size N. From this you create a sample (i.e. a subset) of size n (n less than or equal to N) by choosing n data points randomly with replacement. “Randomly” means every point in the dataset is equally likely to be chosen and “with replacement” means that a specific data point can appear more than once in the subset. Do this M times to create M equally-sized samples of size n each. It can be shown that this procedure, which statisticians call bootstrapping, is legit when samples are created from large datasets – that is, when N is large.
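Here's a minimal sketch of the sampling step in R, using the built-in iris dataset purely for illustration (the sample size and the number of samples are arbitrary):

# Create M bootstrap samples, each of size n, from a dataset of N rows
my_data <- iris
N <- nrow(my_data)
n <- N     # bagging typically uses samples of the same size as the dataset
M <- 500
bootstrap_samples <- lapply(1:M, function(i) {
  my_data[sample(1:N, size = n, replace = TRUE), ]   # sampling with replacement
})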
Because a bagged sample is created by selection with replacement, there will generally be some points that are not selected. In fact, it can be shown that, on the average, each sample will use about two-thirds of the available data points. This gives us a clever way to estimate the error as part of the process of model building.
For every data point, obtain predictions from the trees for which the point was out of bag. From the result mentioned above, this will yield approximately M/3 predictions per data point (because each point is out of bag for roughly a third of the trees). Take the majority vote of these predictions as the predicted value for the data point. One can do this for the entire dataset. From these out of bag predictions for the whole dataset, we can estimate the overall error by computing the classification error (the count of incorrect predictions divided by N) for classification problems or the root mean squared error for regression problems. This means there is no need to have a separate test data set, which is kind of cool. However, if you have enough data, it is worth holding out some data for use as an independent test set. This is what we’ll do in the demo later.
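A quick way to convince yourself of the two-thirds figure, and to see the out of bag error in action, is the following sketch (the iris example is purely illustrative):

# Fraction of distinct points in a bootstrap sample of size N: about 0.63
N <- 10000
length(unique(sample(1:N, size = N, replace = TRUE))) / N

# The randomForest library computes out of bag predictions automatically and
# stores them in the fitted object, so the out of bag error is easy to obtain
library(randomForest)
rf <- randomForest(Species ~ ., data = iris, ntree = 500)
mean(rf$predicted != iris$Species)   # out of bag classification error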
Using subsets of predictor variables
Although bagging reduces overfitting somewhat, it does not address the issue completely. The reason is that in most datasets a small number of predictors tend to dominate the others. These predictors tend to be selected in early splits and thus influence the shapes and sizes of a significant fraction of trees in the forest. That is, strong predictors increase the correlation between trees, which gets in the way of variance reduction.
A simple way to get around this problem is to use a random subset of variables at each split. This avoids over-representation of dominant variables and thus creates a more diverse forest. This is precisely what the random forest algorithm does.
Random forests in R
In what follows, I use the famous Glass dataset from the mlbench library. The dataset has 214 observations covering six types of glass with varying metal oxide content and refractive indexes. I’ll first build a decision tree model based on the data using the rpart library (recursive partitioning), which I covered in an earlier article, and then show how one can build a random forest model using the randomForest library. The rationale behind this is to compare the two models – single decision tree vs random forest. In the interests of space, I won’t explain the details of rpart here as I’ve covered it at length in the previous article. However, for completeness, I’ll list the demo code for it before getting into random forests.
Decision trees using rpart
Here’s the code listing for building a decision tree using rpart on the Glass dataset (please see my previous article for a full explanation of each step). Note that I have not used pruning as there is little benefit to be gained from it (Exercise for the reader: try this for yourself!).
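Since the full listing from the earlier article isn’t reproduced here, the sketch below captures the essential steps (the seed and the 80/20 split are assumptions; the object names trainGlass, testGlass and typeColNum match those used later in the post):

library(mlbench)
library(rpart)
data("Glass")
set.seed(42)
# 80/20 train/test split
trainIndex <- sample(1:nrow(Glass), size = floor(0.8 * nrow(Glass)))
trainGlass <- Glass[trainIndex, ]
testGlass  <- Glass[-trainIndex, ]
typeColNum <- grep("Type", names(Glass))   # column number of the class variable
# Build the tree and compute the hit rate on the test set
Glass.rpart <- rpart(Type ~ ., data = trainGlass, method = "class")
rpart.pred  <- predict(Glass.rpart, newdata = testGlass, type = "class")
mean(rpart.pred == testGlass$Type)   # fraction of test points classified correctly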
Now, we know that decision tree algorithms tend to display high variance so the hit rate from any one tree is likely to be misleading. To address this we’ll generate a bunch of trees using different training sets (via random sampling) and calculate an average hit rate and spread (or standard deviation).
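One way to do this is sketched below (the number of repetitions and the split fraction are arbitrary):

# Repeat the split-and-fit procedure many times to gauge variability of the hit rate
library(mlbench)
library(rpart)
data("Glass")
n_runs <- 50
hit_rates <- sapply(1:n_runs, function(i) {
  idx   <- sample(1:nrow(Glass), size = floor(0.8 * nrow(Glass)))
  train <- Glass[idx, ]
  test  <- Glass[-idx, ]
  fit   <- rpart(Type ~ ., data = train, method = "class")
  pred  <- predict(fit, newdata = test, type = "class")
  mean(pred == test$Type)
})
mean(hit_rates)   # average hit rate
sd(hit_rates)     # spread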
The decision tree algorithm gets it right about 69% of the time with a variation of about 5%. The variation isn’t too bad here, but the accuracy has hardly improved at all (Exercise for the reader: why?). Let’s see if we can do better using random forests.
As discussed earlier, a random forest algorithm works by averaging over multiple trees built from bootstrapped samples. It also reduces the correlation between trees by splitting on a random subset of predictors at each node during tree construction. The key parameters for the randomForest algorithm are the number of trees (ntree) and the number of variables to be considered at each split (mtry). The algorithm sets a default of 500 for ntree, and sets mtry to the square root of the number of predictors for classification problems and one-third of the number of predictors for regression problems. These defaults can be overridden by explicitly providing values for these parameters.
The preliminary stuff – the creation of training and test datasets etc. – is much the same as for decision trees but I’ll list the code for completeness.
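Here is a minimal sketch of that setup (the seed and the split fraction are assumptions; the object names match those used in the call below):

library(mlbench)
library(randomForest)
data("Glass")
set.seed(42)
# Train/test split, as before
trainIndex <- sample(1:nrow(Glass), size = floor(0.8 * nrow(Glass)))
trainGlass <- Glass[trainIndex, ]
testGlass  <- Glass[-trainIndex, ]
typeColNum <- grep("Type", names(Glass))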
Glass.rf <- randomForest(Type ~ ., data = trainGlass, importance = TRUE, xtest = testGlass[, -typeColNum], ntree = 1001)
print(Glass.rf)
The first thing to note is the out of bag error estimate is ~ 24%. Equivalently the hit rate is 76%, which is better than the 69% for decision trees. Secondly, you’ll note that the algorithm does a terrible job identifying type 3 and 6 glasses correctly. This could possibly be improved by a technique called boosting, which works by iteratively improving poor predictions made in earlier stages. I plan to look at boosting in a future post, but if you’re curious, check out the gbm package in R.
Finally, for completeness, let’s see how the test set does:
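Because the test set was passed in via the xtest argument, the test predictions are already stored in the fitted object; here is a sketch of how one might inspect them:

# Confusion matrix and hit rate for the held-out test set
test.pred <- Glass.rf$test$predicted
table(observed = testGlass$Type, predicted = test.pred)
mean(test.pred == testGlass$Type)   # test set hit rate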
The test accuracy is better than the out of bag accuracy and there are some differences in the class errors as well. However, overall the two compare quite well and are significantly better than the results of the decision tree algorithm.
Random forest algorithms also give measures of variable importance. Computation of these is enabled by setting importance, a boolean parameter, to TRUE. The algorithm computes two measures of variable importance: mean decrease in Gini and mean decrease in accuracy. Brief explanations of these follow.
Mean decrease in Gini
When determining splits in individual trees, the algorithm looks for the largest class (in terms of population) and attempts to isolate it first. If this is not possible, it tries to do the best it can, always focusing on isolating the largest remaining class in every split. This is called the Gini splitting rule (see this article for a good explanation of the rule).
The “goodness of split” is measured by the Gini impurity, $G$. For a set containing $K$ categories this is given by:

$G = \sum_{i=1}^{K} p_i (1 - p_i)$

where $p_i$ is the fraction of the set that belongs to the $i$th category. Clearly, $G$ is 0 when the set is homogeneous or pure (one class only) and is maximum when classes are equiprobable (for example, in a two class set the maximum occurs when $p_1$ and $p_2$ are both 0.5). At each stage the algorithm chooses to split on the predictor that leads to the largest decrease in $G$. The algorithm tracks this decrease for each predictor over all splits and all trees in the forest. The average is reported as the mean decrease in Gini.
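For concreteness, here’s a small function that computes the Gini impurity of a vector of class labels (purely illustrative):

# Gini impurity: sum over classes of p * (1 - p)
gini_impurity <- function(labels) {
  p <- table(labels) / length(labels)   # class proportions
  sum(p * (1 - p))
}
gini_impurity(c("red", "red", "red"))            # 0: a pure set
gini_impurity(c("red", "blue", "red", "blue"))   # 0.5: two equiprobable classes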
Mean decrease in accuracy
The mean decrease in accuracy is calculated using the out of bag data points for each tree. The procedure goes as follows: when a particular tree is grown, the out of bag points are passed down the tree and the prediction accuracy (based on all out of bag points) is recorded. The values of a given predictor are then randomly permuted and the out of bag prediction accuracy recalculated. The decrease in accuracy for that predictor is the difference between the accuracy on the original (unpermuted) out of bag data and the accuracy obtained with the permuted data. As in the previous case, this decrease is computed and tracked for each predictor as the algorithm progresses. The decreases are then averaged by predictor to yield the mean decrease in accuracy.
Variable importance plot
From the above, it would seem that the mean decrease in accuracy is a more global measure as it uses fully constructed trees, in contrast to the Gini measure, which is based on individual splits. In practice, however, there could be other reasons for choosing one over the other…but that is neither here nor there: if you set importance to TRUE, you’ll get both. The numerical measures of importance are returned in the randomForest object (Glass.rf in our case), but I won’t list them here. Instead, I’ll just print out the variable importance plots for the two measures as these give a good visual overview of the relative importance of variables. The code is a simple one-liner:
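The one-liner is presumably along these lines (varImpPlot and importance both ship with the randomForest package):

varImpPlot(Glass.rf)      # plots both importance measures
# importance(Glass.rf)    # returns the numerical values, if you want them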
The plot is shown in Figure 1 below.
In this case the two measures are pretty consistent so it doesn’t really matter which one you choose.
Random forests are an example of a general class of techniques called ensemble methods. These techniques are based on the principle that averaging over a large number of not-so-good models yields a more reliable prediction than a single model. This holds only if the models in the group are largely independent of each other, which is precisely what bootstrap aggregation and predictor subsetting are intended to achieve.
Although considerably more complex than decision trees, the logic behind random forests is not hard to understand. Indeed, the intuitiveness of the algorithm together with its ease of use and accuracy have made it very popular in the machine learning community.
I am delighted to announce that my new business book, The Heretic’s Guide to Management: The Art of Harnessing Ambiguity, is now available in e-book and print formats. The book, co-written with Paul Culmsee, is a loose sequel to our previous tome, The Heretic’s Guide to Best Practices.
Many reviewers liked the writing style of our first book, which combined rigour with humour. This book continues in the same vein, so if you enjoyed the first one we hope you might like this one too. The new book is half the size of the first one and is considerably less idealistic too. In terms of subject matter, I could say “Ambiguity, Teddy Bears and Fetishes” and leave it at that…but that might leave you thinking that it’s not the kind of book you would want anyone to see on your desk!
Rest assured, The Heretic’s Guide to Management is not a corporate version of Fifty Shades of Grey. Instead, it aims to delve into the complex but fascinating ways in which ambiguity affects human behaviour. More importantly, it discusses how ambiguity can be harnessed in ways that achieve positive outcomes. Most management techniques (ranging from strategic planning to operational budgeting) attempt to reduce ambiguity and thereby provide clarity. It is a profound irony of modern corporate life that they often end up doing the opposite: increasing ambiguity rather than reducing it.
On the surface, it is easy enough to understand why: organizations are complex entities, so it is unreasonable to expect management models, such as those that fit neatly into a 2*2 matrix or a predetermined checklist, to work in the real world. In fact, expecting them to work as advertised is like colouring in a paint-by-numbers Mona Lisa and expecting to recreate Da Vinci’s masterpiece. Ambiguity therefore invariably remains untamed, and reality reimposes itself no matter how alluring the model is.
It turns out that most of us have a deep aversion to situations that involve even a hint of ambiguity. Recent research in neuroscience has revealed the reason for this: ambiguity is processed in the parts of the brain which regulate our emotional responses. As a result, many people associate it with feelings of anxiety. When kids feel anxious, they turn to transitional objects such as teddy bears or security blankets. These objects provide them with a sense of stability when situations or events seem overwhelming. In this book, we show that as grown-ups we don’t stop using teddy bears – it is just that the teddies we use take a different, more corporate, form. Drawing on research, we discuss how management models, fads and frameworks are actually akin to teddy bears. They provide the same sense of comfort and certainty to corporate managers and minions as real teddies do to distressed kids.
Most children usually outgrow their need for teddies as they mature and learn to cope with their childhood fears. However, if development is disrupted or arrested in some way, the transitional object can become a fetish – an object that is held on to with a pathological intensity, simply for the comfort that it offers in the face of ambiguity. The corporate reliance on simplistic solutions for the complex challenges faced is akin to little Johnny believing that everything will be OK provided he clings on to Teddy.
When this happens, the trick is finding ways to help Johnny overcome his fear of ambiguity.
Ambiguity is a primal force that drives much of our behaviour. It is typically viewed negatively, something to be avoided or to be controlled.
The truth, however, is that ambiguity is a force that can be used in positive ways too. In the Star Wars movies, the same Force that gave the Dark Side its power was harnessed by the Jedi for good.
Our book shows you how ambiguity, so common in the corporate world, can be harnessed to achieve the results you want.
The e-book is available via popular online outlets. Here are links to some:
For those who prefer paperbacks, the print version is available here.
Thanks for your support 🙂
It is well-known that data-driven stories are a great way to convey results of data science initiatives. What is perhaps not as well-known is that data science projects often have to begin with stories too. Without this “story before the story” there will be no project, no results and no data-driven stories to tell….
For those who prefer to read, here’s a transcript of the video in full:
In the beginning there is no data, let alone results…but there are ideas. So, long before we tell stories about data or results, we have to tell stories about our ideas. The aim of these stories is to get people to care about our ideas as much as we do and, more important, invest in them. Without their interest or investment there will be no results and no further stories to tell.
So one of the first things one has to do is craft a story about the idea…or the story before the story.
Once upon a time there was a CRM system. The system captured every customer interaction that occurred, whether it was by phone, email or face to face conversation. Many quantitative details of interactions were recorded, time, duration, type. And if the interaction led to a sale, the details of the sale were recorded too.
Almost as an aside, the system also gave sales people the opportunity to record their qualitative impressions as free text notes. As you might imagine, this information, though potentially valuable, was never analysed. Sure, managers looked at individual notes from time to time when referring to specific customer interactions, but there was no systematic analysis of the corpus as a whole. Nobody had thought it worthwhile to do this, possibly because it is difficult, if not impossible, to analyse unstructured information in the world of relational databases and SQL.
One day, an analyst was browsing data randomly in the system, as good analysts sometimes do. He came across a note that to him seemed like the epitome of a good note…it described what the interaction was about, the customer’s reactions and potential next steps all in a logical fashion.
This gave him an idea. Wouldn’t it be cool, he thought, if we could measure the quality of notes? Not only would this tell us something about the customer and the interaction, it may tell us something about the sales person as well.
The analyst was mega excited…but he realised he’d need help. He was an IT guy and as we all know, business folks in big corporations stopped listening to their IT guys long ago. So our IT guy had his work cut out for him.
After much cogitation, he decided to enlist the help of a friend, a strategic business analyst in the marketing department who had the trust of the head of marketing. If she liked the idea, she might be able to help sell it to him.
As it turned out, the business analyst loved the idea. More important, since she knew what the sales people did on a day to day basis, she could give the IT guy more ideas on how he could build quantitative measures of the quality of notes. For example, she suggested looking for emotion-laden words or mentions of competitors’ products and so on. The IT guy now had some concrete things to work on. The initial results gave them even more ideas, and soon they had more than enough to make a convincing pitch to the head of marketing.
It would take us too far afield to discuss details of the pitch, but what we will say is this: they avoided technical details, instead focusing on the strategic and innovative aspects of the work.
The marketing head liked the idea…what was there not to like? He agreed to support the effort, and the idea became a project….
…and yes, within months the project resulted in new insights into customer behaviour. But that is another story.