Eight to Late

Sensemaking and Analytics for Organizations

A gentle introduction to logistic regression and lasso regularisation using R


In this day and age of artificial intelligence and deep learning, it is easy to forget that simple algorithms can work well for a surprisingly large range of practical business problems.  And the simplest place to start is with the granddaddy of data science algorithms: linear regression and its close cousin, logistic regression. Indeed, in his acclaimed MOOC and accompanying textbook, Yaser Abu-Mostafa spends a good portion of his time talking about linear methods, and with good reason too: linear methods are not only a good way to learn the key principles of machine learning, they can also be remarkably helpful in zeroing in on the most important predictors.

My main aim in this post is to provide a beginner level introduction to logistic regression using R and also introduce LASSO (Least Absolute Shrinkage and Selection Operator), a powerful feature selection technique that is very useful for regression problems. Lasso is essentially a regularisation method. If you’re unfamiliar with the term, think of it as a way to reduce overfitting by using less complicated functions (and if that means nothing to you, check out my prelude to machine learning).  One way to do this is to discard variables that add little predictive value, after checking that they are indeed unimportant.  As we’ll discuss later, this can be done manually by examining p-values of coefficients and discarding those variables whose coefficients are not significant. However, this can become tedious for classification problems with many independent variables.  In such situations, lasso offers a neat way to model the dependent variable while automagically selecting significant variables by shrinking the coefficients of unimportant predictors to zero.  All this without having to mess around with p-values or obscure information criteria. How good is that?

Why not linear regression?

In linear regression one attempts to model a dependent variable (i.e. the one being predicted) using the best straight line fit to a set of predictor variables.  The best fit is usually taken to be the one that minimises the sum of the squared differences between the actual and predicted values of the dependent variable (or, equivalently, the root mean square error). One can think of logistic regression as the equivalent of linear regression for a classification problem.  In what follows we’ll look at binary classification – i.e. a situation where the dependent variable takes on one of two possible values (Yes/No, True/False, 0/1 etc.).

First up, you might be wondering why one can’t use linear regression for such problems. The main reason is that classification problems are about determining class membership rather than predicting variable values, and linear regression is more naturally suited to the latter than the former. One could, in principle, use linear regression for situations where there is a natural ordering of categories like High, Medium and Low for example. However, one then has to map sub-ranges of the predicted values to categories. Moreover, since predicted values are potentially unbounded (in data as yet unseen) there remains a degree of arbitrariness associated with such a mapping.

Logistic regression sidesteps the aforementioned issues by modelling class probabilities instead.  Any input to the model yields a number lying between 0 and 1, representing the probability of class membership. One is still left with the problem of determining the threshold probability, i.e. the probability at which the category flips from one to the other.  By default this is set to p=0.5, but in reality it should be settled based on how the model will be used.  For example, for a marketing model that identifies potentially responsive customers, the threshold for a positive event might be set low (much less than 0.5) because the client does not really care about mailouts going to a non-responsive customer (the negative event). Indeed, they may be more than OK with it as there’s always a chance – however small – that a non-responsive customer will actually respond.  As an opposing example, the cost of a false positive would be high in a machine learning application that grants access to sensitive information. In this case, one might want to set the threshold probability to a value closer to 1, say 0.9 or even higher. The point is, setting an appropriate threshold probability is a business issue, not a technical one.

Logistic regression in brief

So how does logistic regression work?

For the discussion let’s assume that the outcome (predicted variable) and predictors are denoted by Y and X respectively and the two classes of interest are denoted by + and – respectively.  We wish to model the conditional probability that the outcome Y is +, given that the input variables (predictors) are X. The conditional probability is denoted by p(Y=+|X)   which we’ll abbreviate as p(X) since we know we are referring to the positive outcome Y=+.

As mentioned earlier, we are after the probability of class membership so we must ensure that the hypothesis function (a fancy word for the model) always lies between 0 and 1. The function assumed in logistic regression is:

p(X) = \dfrac{e^{\beta_0+\beta_1 X}}{1+e^{\beta_0 + \beta_1 X}} .....(1)

You can verify that p(X) does indeed lie between 0 and  1 as X varies from -\infty to \infty.  Typically, however, the values of X that make sense are bounded as shown in the example (stolen from Wikipedia) shown in Figure 1. The figure also illustrates the typical S-shaped  curve characteristic of logistic regression.

Figure 1: Logistic function

As an aside, you might be wondering where the name logistic comes from. An equivalent way of expressing the above equation is:

\log(\dfrac{p(X)}{1-p(X)}) = \beta_0+\beta_1 X .....(2)

The quantity on the left is the logarithm of the odds. So, the model assumes that the log-odds, sometimes called the logit, is a linear function of the predictor – hence the name logistic.
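
If you’d like to check this for yourself, here’s a minimal R sketch (with made-up values of \beta_0 and \beta_1) that evaluates equation (1) over a range of X values and confirms that the result always lies strictly between 0 and 1. As it happens, plogis is base R’s implementation of the logistic function.

#minimal sketch: the logistic function of equation (1) for made-up coefficients
beta_0 <- -3
beta_1 <- 0.5
x <- seq(-10,20,by=0.1)
p <- exp(beta_0+beta_1*x)/(1+exp(beta_0+beta_1*x))
#equivalently: p <- plogis(beta_0+beta_1*x)
range(p) #always strictly between 0 and 1
plot(x,p,type="l",ylab="p(X)",main="Logistic function")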

The problem is to find the values of \beta_0  and \beta_1 that result in a p(X) that most accurately classifies all the observed data points – that is, those that belong to the positive class have a probability as close as possible to 1 and those that belong to the negative class have a probability as close as possible to 0. One way to frame this problem is to say that we wish to maximise the product of these probabilities, often referred to as the likelihood:

\displaystyle\prod_{i:Y_i=+} p(X_{i}) \prod_{j:Y_j=-}(1-p(X_{j}))

Where \prod represents a product and the indices i and j run over the positively and negatively classed points respectively. This approach, called maximum likelihood estimation, is quite common in many machine learning settings, especially those involving probabilities.

It should be noted that in practice one works with the log likelihood because it is easier to work with mathematically. Moreover, one minimises the negative  log likelihood which, of course, is the same as maximising the log likelihood.  The quantity one minimises is thus:

L = - \displaystyle\log ( {\prod_{i:Y_i=+} p(X_{i}) \prod_{j:Y_j=-}(1-p(X_{j}))}).....(3)

However, these are technical details that I mention only for completeness. As you will see next, they have little bearing on the practical use of logistic regression.
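
That said, if you are curious about what the quantity in equation (3) looks like in practice, here’s a tiny illustrative sketch that computes it for a made-up dataset and an arbitrary pair of coefficients (in real work, glm does this minimisation for you):

#toy illustration of the negative log likelihood in equation (3)
#made-up data: x is the predictor, y the class (1 = positive, 0 = negative)
x <- c(1,2,3,4,5,6)
y <- c(0,0,0,1,1,1)
#arbitrary candidate coefficients
beta_0 <- -5
beta_1 <- 1.5
p <- plogis(beta_0+beta_1*x)
#negative log likelihood: -(sum of log(p) over positives + sum of log(1-p) over negatives)
neg_log_lik <- -(sum(log(p[y==1]))+sum(log(1-p[y==0])))
neg_log_lik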

Logistic regression in R – an example

In this example, we’ll use the logistic regression option implemented within the glm function that comes with the base R installation. This function fits a class of models collectively known as generalized linear models. We’ll apply the function to the Pima Indian Diabetes dataset that comes with the mlbench package. The code is quite straightforward – particularly if you’ve read earlier articles in my “gentle introduction” series – so I’ll just list the code below, noting that the logistic regression option is invoked by setting family="binomial" in the glm function call.

Here we go (a pdf listing of the code can be found here):

#set working directory if needed (modify path as needed)
#setwd("C:/Users/Kailash/Documents/logistic")
#load required library
library(mlbench)
#load Pima Indian Diabetes dataset
data("PimaIndiansDiabetes")
#set seed to ensure reproducible results
set.seed(42)
#split into training and test sets
PimaIndiansDiabetes[,”train”] <- ifelse(runif(nrow(PimaIndiansDiabetes))<0.8,1,0)
#separate training and test sets
trainset <- PimaIndiansDiabetes[PimaIndiansDiabetes$train==1,]
testset <- PimaIndiansDiabetes[PimaIndiansDiabetes$train==0,]
#get column index of train flag
trainColNum <- grep("train",names(trainset))
#remove train flag column from train and test sets
trainset <- trainset[,-trainColNum]
testset <- testset[,-trainColNum]
#get column index of predicted variable in dataset
typeColNum <- grep("diabetes",names(PimaIndiansDiabetes))
#build model
glm_model <- glm(diabetes~.,data = trainset, family = binomial)
summary(glm_model)
Call:
glm(formula = diabetes ~ ., family = binomial, data = trainset)
<<output edited>>
Coefficients:
            Estimate  Std. Error z value Pr(>|z|)
(Intercept) -8.1485021 0.7835869 -10.399  < 2e-16 ***
pregnant    0.1200493 0.0355617   3.376  0.000736 ***
glucose     0.0348440 0.0040744   8.552  < 2e-16 ***
pressure   -0.0118977 0.0057685  -2.063  0.039158 *
triceps     0.0053380 0.0076523   0.698  0.485449
insulin    -0.0010892 0.0009789  -1.113  0.265872
mass        0.0775352 0.0161255   4.808  1.52e-06 ***
pedigree    1.2143139 0.3368454   3.605  0.000312 ***
age         0.0117270 0.0103418   1.134  0.256816
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#predict probabilities on testset
#type="response" gives probabilities on the probability scale
glm_prob <- predict.glm(glm_model,testset[,-typeColNum],type="response")
#which classes do these probabilities refer to? What are 1 and 0?
contrasts(PimaIndiansDiabetes$diabetes)
    pos
neg 0
pos 1
#make predictions
##…first create vector to hold predictions (we know 0 refers to neg now)
glm_predict <- rep("neg",nrow(testset))
glm_predict[glm_prob>.5] <- "pos"
#confusion matrix
table(pred=glm_predict,true=testset$diabetes)
     true
pred  neg pos
  neg  90  22
  pos   8  33
#accuracy
mean(glm_predict==testset$diabetes)
[1] 0.8039216

 

Although this seems pretty good, we aren’t quite done because there is an issue that is lurking under the hood. To see this, let’s examine the information output from the model summary, in particular the coefficient estimates (i.e. estimates for \beta) and their significance. Here’s a summary of the information contained in the table:

  • Column 2 in the table lists the coefficient estimates.
  • Column 3 lists the standard error of each estimate (the larger the standard error, the less confident we are about the estimate).
  • Column 4 lists the z statistic, which is the coefficient estimate (column 2) divided by its standard error (column 3) – see the short check after this list – and
  • The last column (Pr(>|z|)) lists the p-value, which is the probability of obtaining an estimate at least as extreme as the one listed, assuming the predictor has no effect. In essence, the smaller the p-value, the more significant the estimate is likely to be.
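
As a quick sanity check, the relationship between columns 2, 3 and 4 can be verified directly from the fitted model object (this sketch assumes the glm_model object from the listing above is still in your workspace):

#sanity check: the z value is the estimate divided by its standard error
coefs <- coef(summary(glm_model)) #matrix with Estimate, Std. Error, z value and Pr(>|z|) columns
head(coefs)
#recompute the z statistic manually and compare with column 3
all.equal(coefs[,"Estimate"]/coefs[,"Std. Error"], coefs[,"z value"])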

From the table we can conclude that only 4 predictors are significant – pregnant, glucose, mass and pedigree (and possibly a fifth – pressure). The other variables have little predictive power and worse, may contribute to overfitting.  They should, therefore, be eliminated and we’ll do that in a minute. However, there’s an important point to note before we do so…

In this case we have only 8 predictors, so we are able to identify the significant ones by a manual inspection of p-values.  As you can well imagine, such a process will quickly become tedious as the number of predictors increases. Wouldn’t it be nice if there were an algorithm that could somehow automatically shrink the coefficients of unimportant variables or (better!) set them to zero altogether?  It turns out that this is precisely what lasso and its close cousin, ridge regression, do.

Ridge and Lasso

Recall that the values of the logistic regression coefficients \beta_0  and \beta_1 are found by minimising the negative log likelihood described in equation (3).  Ridge and lasso regularisation work by adding a penalty term to this quantity.  In the case of ridge regression, the penalty term is the sum of the squares of the coefficients, \sum \beta_1^2, and in the case of lasso it is the sum of their absolute values, \sum |\beta_1| (remember, \beta_1  is a vector, with as many components as there are predictors).  The quantity to be minimised in the two cases is thus:

L +\lambda \sum \beta_1^2.....(4) – for ridge regression,

and

L +\lambda \sum |\beta_1|.....(5) – for lasso regression.

Where \lambda is a free parameter which is usually selected in such a way that the resulting model minimises the out of sample error. Typically, the optimal value of \lambda is found using grid search with cross-validation, a process akin to the one described in my discussion on cost-complexity parameter  estimation in decision trees. Most canned algorithms provide methods to do this; the one we’ll use in the next section is no exception.

In the case of ridge regression, the effect of the penalty term is to shrink the magnitude of the coefficients, particularly those that contribute most to increasing L.  In contrast, in the case of lasso regression, the effect of the penalty term is to set some of these coefficients exactly to zero! This is cool because it means that lasso regression works like a feature selector that picks out the most important predictors, i.e. those that are most predictive (and have the lowest p-values).

Let’s illustrate this through an example. We’ll use the glmnet package which implements a combined version of ridge and lasso (called elastic net). Instead of minimising (4) or (5) above, glmnet minimises:

L +\lambda[(1-\alpha)\sum \beta_1^2 + \alpha\sum|\beta_1|].....(6)

where \alpha controls the “mix” of ridge and lasso regularisation, with \alpha=0 being “pure” ridge and  \alpha=1 being “pure” lasso.

Lasso regularisation using glmnet

Let’s reanalyse the Pima Indian Diabetes dataset using glmnet with \alpha=1 (pure lasso). Before diving into code, it is worth noting that glmnet:

  • does not have a formula interface, so one has to input the predictors as a matrix and the class labels as a vector.
  • does not accept categorical predictors, so one has to convert these to numeric values before passing them to glmnet.

The base R function model.matrix creates the required matrix and also converts categorical predictors to appropriate dummy variables.
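
If you haven’t come across model.matrix before, here’s a tiny standalone sketch with a made-up data frame (one numeric and one categorical predictor) that shows what it does:

#illustration: model.matrix converts categorical predictors to dummy variables
df <- data.frame(y=c(1,0,1,0),
                 age=c(25,40,33,51),
                 colour=factor(c("red","blue","green","red")))
model.matrix(y~.,df)
#the colour factor is expanded into colourgreen and colourred dummy columns
#(blue is the baseline level); an intercept column is also added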

Another important point to note is that we’ll use the function cv.glmnet, which automatically performs a grid search to find the optimal value of \lambda.

OK, enough said, here we go:

#load required library
library(glmnet)
#convert training data to matrix format
x <- model.matrix(diabetes~.,trainset)
#convert class to numerical variable
y <- ifelse(trainset$diabetes=="pos",1,0)
#perform grid search to find optimal value of lambda
#family= binomial => logistic regression, alpha=1 => lasso
# check docs to explore other type.measure options
cv.out <- cv.glmnet(x,y,alpha=1,family="binomial",type.measure = "mse" )
#plot result
plot(cv.out)

 

The plot is shown in Figure 2 below:

Figure 2: Error as a function of lambda (select lambda that minimises error)

The plot shows that the log of the optimal value of lambda (i.e. the one that minimises the cross-validation error) is approximately -5. The exact value can be viewed by examining the variable min_lambda in the code below.

 

#min value of lambda
min_lambda <- cv.out$lambda.min
#regression coefficients
#note: by default, coef() on a cv.glmnet object uses s = "lambda.1se" rather than lambda.min
coef(cv.out)
10 x 1 sparse Matrix of class “dgCMatrix”
                      1
(Intercept) -4.61706681
(Intercept)  .
pregnant     0.03077434
glucose      0.02314107
pressure     .
triceps      .
insulin      .
mass         0.02779252
pedigree     0.20999511
age          .

 

The output shows that only those variables that we had determined to be significant on the basis of p-values have non-zero coefficients. The coefficients of all other variables have been set to zero by the algorithm! Lasso has reduced the complexity of the fitting function massively…and you are no doubt wondering what effect this  has on accuracy. Let’s see by running the model against our test data:

 

#get test data
x_test <- model.matrix(diabetes~.,testset)
#predict probabilities on the test data; type="response" gives probabilities
lasso_prob <- predict(cv.out,newx = x_test,s = "lambda.1se",type="response")
#translate probabilities to predictions
lasso_predict <- rep("neg",nrow(testset))
lasso_predict[lasso_prob>.5] <- "pos"
#confusion matrix
table(pred=lasso_predict,true=testset$diabetes)
     true
pred  neg pos
  neg  92  22
  pos   6  33
#accuracy
mean(lasso_predict==testset$diabetes)
[1] 0.8169935

 

This is essentially the same out-of-sample accuracy as we got with the more complex model (in fact, marginally better), and we achieve it using a much simpler function (4 non-zero predictor coefficients) than the original one (8 non-zero coefficients). What this means is that the simpler function does at least as good a job of fitting the signal in the data as the more complicated one.  The bias-variance tradeoff tells us that the simpler function should be preferred because it is less likely to overfit the training data.

Paraphrasing William of Ockham: all other things being equal, a simple hypothesis should be preferred over a complex one.

Wrapping up

In this post I have tried to provide a detailed introduction to logistic regression, one of the simplest (and oldest) classification techniques in the machine learning practitioner’s arsenal. Despite its simplicity (or, I should say, because of it!) logistic regression works well for many business applications, which often have a simple decision boundary. Moreover, because of its simplicity it is less prone to overfitting than flexible methods such as decision trees. Further, as we have shown, variables that contribute to overfitting can be eliminated using lasso regularisation (or shrunk using ridge), without compromising out-of-sample accuracy. Given these advantages and its inherent simplicity, it isn’t surprising that logistic regression remains a workhorse for data scientists.

Written by K

July 11, 2017 at 10:00 pm

The improbability of success


Anyone who has tidied up after a toddler intuitively understands that making a mess is far easier than creating order. The fundamental reason for this is that the number of messy states in the universe (or a toddler’s room) far outnumbers the ordered ones.  As this point might not be obvious, I’ll demonstrate it via a simple thought experiment involving marbles:

Throw three marbles onto a flat surface.  When the marbles come to rest, you are most likely to end up with a random configuration  as in Figure 1.

Figure 1: A random configuration of 3 marbles

Indeed, you’d be extremely surprised if the three ended up being collinear as in Figure 2.   Note that Figure 2 is just one example of many collinear possibilities, but the point I’m making is that if the marbles are thrown randomly, they are more likely to end up in a random state than a lined-up one.

Figure 2: an unlikely (ordered) configuration

This raises a couple of questions:

Question: On what basis can one claim that the collinear configuration is tidier or more ordered than the non-collinear one?

Naive answer:  It looks more ordered. Yes, tidiness is in the eye of the beholder so it is necessarily subjective. However, I’ll wager that if one took a poll, an overwhelming number of people would say that the configuration in Figure 2 is more ordered than the one in Figure 1.

More sophisticated answer : The “state” of collinear marbles can be described using 2 parameters, the slope and intercept of the straight line that the three marbles lie on (in any coordinate system), whereas the description of the non-collinear state requires 3 parameters. The first state is tidier because it requires fewer parameters.  Another way to think about it is that the line can be described by two marbles; the third one is redundant as far as the description of the state is concerned.

Question: Why is a tidier configuration less likely than a messy one?

Answer:  Maybe you see this intuitively and need no proof, but here’s one just in case. Imagine rolling the three marbles one after the other. The first two, regardless of where they end up, will necessarily lie along a line (two points lie on the straight line joining them). Now, I think it is easy to see that if we throw the third marble randomly, it is highly unlikely to end up on that line. Indeed, for the third marble to end up exactly on the same straight line would require a coincidence of near cosmic proportions.

I know, I know, this is not a proof, but I trust it makes the point.

Now, although it is near impossible to get to a collinear end state via random throws, it is possible to approximate it by changing the way we throw the marbles. Here’s how:

  1. Throw the marbles consecutively rather than in one go.
  2. When throwing the third marble, adjust its initial speed and direction in a way that takes into account the positions of the two marbles that are already on the surface. Remember these two already define a straight line.

The third throw is no longer random because it is designed to maximise the chance that the last marble will get as close as possible to the straight line defined by the first two. Done right, you’ll end up with something closer to the configuration in Figure 3 rather than the one in Figure 2.

Figure 3: an “approximately ordered” state

Now you’re probably wondering what this has to do with success. I’ll make the connection via an example that will be familiar to many readers of this blog: an organisation’s strategy. However, as I will reiterate later, the arguments I present are very general and can be applied to just about any initiative or situation.

Typically, a strategy sets out goals for an organisation and a plan to achieve them in a specified timeframe. The goals define a number of desirable outcomes, or states, which, by design, are constrained to belong to a (very) small subset of all possible states the organisation can end up in.  In direct analogy with the simple model discussed above it is clear that, left to its own devices, the organisation is more likely to end up in one of the overwhelmingly larger number of “failed states” than in one of the successful ones.  Notwithstanding the popular quote about there being many roads to success, in reality there are a great many more roads to failure.

Of course, that’s precisely why organisations are never “left to their own devices.” Indeed, a strategic plan specifies actions that are intended to make a successful state more likely than an unsuccessful one. However, no plan can guarantee success; it can, at best, make it more likely. As in the marble game, success is ultimately a matter of chance, even when we take actions to make it more likely.

If we accept this, the key question becomes: how can one design a strategy that improves the odds of success?  The marble analogy suggests a way to do this is to:

  1. Define success in terms of an end state that is a natural extension of your current state.
  2. Devise a plan to (approximately) achieve that end state. Such a plan will necessarily build on the current state rather than change it wholesale. Successful change is an evolutionary process rather than a revolutionary one.

My contention is that these points are often ignored by management strategists. More often than not, they will define an end state based on a textbook idealisation, consulting model or (horror!) best practice. The marble analogy shows why copying others is unlikely to succeed.

Figure 4 shows a variant of the marble game in which we have two sets of marbles (or organisations!), one blue, as before, and the other red.


Figure 4: Two distinct configurations of marbles (or organisations)

Now, it is considerably harder to align an additional marble with both sets of marbles than with the blue set alone. Here’s why…

To align with both sets, the new marble has to end up close to the point that lies at the intersection of the blue and red lines in Figure 5. In contrast, to align with the blue set alone, all that’s needed is for it to get close to any point on the blue line.

QED!

Figure 5: Why copying others is not a good idea (see text for explanation)

Finally, on a broader note, it should be clear that the arguments made above go beyond organisational strategies. They apply to pretty much any planned action, whether at work or in one’s personal life.

So, to sum up: when developing an organisational (or personal) strategy, the first step is to understand where you are and then identify the minimal actions you need to take in order to get to an “improved” state that is consistent with  your current one. Yes, this is akin to the incremental and evolutionary approach that Agilistas and Leaners have been banging on about for years. However, their prescriptions focus on specific areas: software development and process improvement.  My point is that the basic principles are way broader because they are a direct consequence of a fundamental fact regarding the relative likelihood of order and disorder in a toddler’s room, an organisation, or even the universe at large.

Written by K

April 4, 2017 at 9:16 pm

Uncertainty, ambiguity and the art of decision making


A common myth about decision making in organisations is that it is, by and large, a rational process.   The term rational refers to decision-making methods that are based on the following broad steps:

  1. Identify available options.
  2. Develop criteria for rating options.
  3. Rate options according to criteria developed.
  4. Select the top-ranked option.

Although this appears to be a logical way to proceed it is often difficult to put into practice, primarily because of uncertainty about matters relating to the decision.

Uncertainty can manifest itself in a variety of ways: one could be uncertain about facts, the available options, decision criteria or even one’s own preferences for options.

In this post, I discuss the role of uncertainty in decision making and, more importantly, how one can make well-informed decisions in such situations.

A bit about uncertainty

It is ironic that the term uncertainty is itself vague when used in the context of decision making. There are at least five distinct senses in which it is used:

  1. Uncertainty about decision options.
  2. Uncertainty about one’s preferences for options.
  3. Uncertainty about what criteria are relevant to evaluating the options.
  4. Uncertainty about what data is needed (data relevance).
  5. Uncertainty about the data itself (data accuracy).

Each of these is qualitatively different: uncertainty about data accuracy (item 5 above) is very different from uncertainty regarding decision options (item 1). The former can potentially be dealt with using statistics whereas the latter entails learning more about the decision problem and its context, ideally from different perspectives. Put another way, item 5 is essentially a technical matter whereas item 1 is a deeper issue that may have social, political and – as we shall see – even behavioural dimensions. It is therefore reasonable to expect that the two situations call for vastly different approaches.

Quantifiable uncertainty

A common problem in project management is the estimation of task durations. In this case, what’s requested is a “best guess” of the time (in hours or days) it will take to complete a task. Many project schedules represent task durations by point estimates, i.e.  by single numbers. The Gantt Chart shown in Figure 1 is a common example: each task is represented by a single expected duration. This is misleading because a single number conveys a sense of certainty that is unwarranted.  It is far more accurate, not to mention safer, to quote a range of possible durations.

Figure 1: Gantt Chart (courtesy Wikimedia)


In general, quantifiable uncertainties, such as those conveyed in estimates, should always be quoted as ranges – something along the following lines: task A may take anywhere between 2 and 8 days, with a most likely completion time of 4 days (Figure 2).

Figure 2: Task completion likelihood (3 point estimates)


In this example, aside from stating that the task will finish sometime between 2 and 8 days, the estimator implicitly asserts that the likelihood of finishing before 2 days or after 8 days is zero.  Moreover, she also implies that some completion times are more likely than others. Although it may be difficult to quantify the likelihood exactly, one can begin by making simple (linear!) approximations as shown in Figure 3.

Figure 3: Simple probability distribution based on the estimates in Figure 2


The key takeaway from the above is that quantifiable uncertainties are shapes rather than single numbers.  See this post and this one for details for how far this kind of reasoning can take you. That said, one should always be aware of the assumptions underlying the approximations. Failure to do so can be hazardous to the credibility of estimators!
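
To make the point that estimates are shapes rather than single numbers a bit more concrete, here’s a small R sketch that simulates the task durations implied by the 2/4/8 day estimate above, using the simple triangular shape of Figure 3 (the sampling function is a hand-rolled illustration rather than a recommendation of any particular package):

#simulate task durations from a 3-point estimate (min = 2, most likely = 4, max = 8 days)
#using a triangular distribution via inverse transform sampling
rtri <- function(n,a,m,b) {
  u <- runif(n)
  f <- (m-a)/(b-a) #cumulative probability at the mode
  ifelse(u<f, a+sqrt(u*(b-a)*(m-a)), b-sqrt((1-u)*(b-a)*(b-m)))
}
set.seed(1)
durations <- rtri(10000,a=2,m=4,b=8)
hist(durations,breaks=40,main="Simulated task durations",xlab="days")
#estimated probability that the task takes longer than 6 days
mean(durations>6)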

Although I haven’t explicitly said so, estimation as described above has a subjective element. Among other things, the quality of an estimate depends on the judgement and experience of the estimator. As such, it is prone to being affected by errors of judgement and cognitive biases.  However, provided one keeps those caveats in mind, the probability-based approach described above is suited to situations in which uncertainties are quantifiable, at least in principle. That said, let’s move on to more complex situations in which uncertainties defy quantification.

Introducing ambiguity

The economist Frank Knight was possibly the first person to draw the distinction between quantifiable and unquantifiable uncertainties.  To make things really confusing, he called the former risk and the latter uncertainty. In his doctoral thesis, published in 1921, he wrote:

…it will appear that a measurable uncertainty, or “risk” proper, as we shall call the term, is so far different from an unmeasurable one that it is not in effect an uncertainty at all. We shall accordingly restrict the term “uncertainty” to cases of the non-quantitative type (p.20)

Terminology has moved on since Knight’s time: the term uncertainty now means different things depending on context. In this piece, we’ll use the term uncertainty to refer to quantifiable uncertainty (as in the task estimate of the previous section) and use ambiguity to refer to nonquantifiable uncertainty. In essence, then, we’ll use the term uncertainty for situations where we know what we’re measuring (i.e. the facts) but are uncertain about its numerical or categorical values, whereas we’ll use the word ambiguity to refer to situations in which we are uncertain about what the facts are or which facts are relevant.

As a test of understanding, you may want to classify each of the five points made in the second section of this post as either uncertain or ambiguous (Answers below)

Answer: 1 through 4 are ambiguous and 5 is uncertain.

How ambiguity manifests itself in decision problems

The distinction between uncertainty and ambiguity points to a problem with quantitative decision-making techniques such as cost-benefit analysis, multicriteria decision making methods or analytic hierarchy process. All these methods assume that decision makers are aware of all the available options, their preferences for them, the relevant evaluation criteria and the data needed. This is almost never the case for consequential decisions. To see why, let’s take a closer look at the different ways in which ambiguity can play out in the rational decision making process mentioned at the start of this article.

  1. The first step in the process is to identify available options. In the real world, however, options often cannot be enumerated or articulated fully. Furthermore, as options are articulated and explored, new options and sub-options tend to emerge. This is particularly true if the options depend on how future events unfold.
  2. The second step is to develop criteria for rating options. As anyone who has been involved in deciding on a contentious issue will confirm, it is extremely difficult to agree on a set of decision criteria for issues that affect different stakeholders in different ways.  Building a new road might improve commute times for one set of stakeholders but result in increased traffic in a residential area for others. The two criteria will be seen very differently by the two groups. In this case, it is very difficult for the two groups to agree on the relative importance of the criteria or even their legitimacy. Indeed, what constitutes a legitimate criterion is a matter of opinion.
  3. The third step is to rate options. The problem here is that real-world options often cannot be quantified or rated in a meaningful way. Many of life’s dilemmas fall into this category. For example, a decision to accept or decline a job offer is rarely made on the basis of material gain alone. Moreover, even where ratings are possible, they can be highly subjective. For example, when considering a job offer, one candidate may give more importance to financial matters whereas another might consider lifestyle-related matters (flexi-hours, commuting distance etc.) to be paramount. Another complication here is that there may not be enough information to settle the matter conclusively. As an example, investment decisions are often made on the basis of quantitative information that is based on questionable assumptions.

A key consequence of the above is that such ambiguous decision problems are socially complex – i.e. different stakeholders could have wildly different perspectives on the problem itself.   One could say the ambiguity experienced by an individual is compounded by the group.

Before going on I should point out that acute versions of such ambiguous decision problems go by many different names in the management literature. Whatever the label, the root cause of the difficulty is the same: ambiguity (or unquantifiable uncertainty), which prevents a clear formulation of the problem.

Social complexity is hard enough to tackle as it is, but there’s another issue that makes things even harder: ambiguity invariably triggers negative emotions such as fear and anxiety in individuals who make up the group.  Studies in neuroscience have shown that in contrast to uncertainty, which evokes logical responses in people, ambiguity tends to stir up negative emotions while simultaneously suppressing the ability to think logically.  One can see this playing out in a group that is debating a contentious decision: stakeholders tend to get worked up over issues that touch on their values and identities, and this seems to limit their ability to look at the situation objectively.

Tackling ambiguity

Summarising the discussion thus far: rational decision making approaches are based on the assumption that stakeholders have a shared understanding of the decision problem as well as the facts and assumptions around it. These conditions are clearly violated in the case of ambiguous decision problems. Therefore, when confronted with a decision problem that has even a hint of ambiguity, the first order of the day is to help the group reach a shared understanding of the problem.  This is essentially an exercise in sensemaking, the art of collaborative problem formulation. However, this is far from straightforward because ambiguity tends to evoke negative emotions and attendant defensive behaviours.

The upshot of all this is that any approach to tackle ambiguity must begin by taking the concerns of individual stakeholders seriously.  Unless this is done, it will be impossible for the group to coalesce around a consensus decision. Indeed, ambiguity-laden decisions in organisations invariably fail when they overlook the concerns of specific stakeholder groups.  The high failure rate of organisational change initiatives (60-70% according to this Deloitte report) is largely attributable to this point.

There are a number of techniques that one can use to gather and synthesise diverse stakeholder viewpoints and thus reach a shared understanding of a complex or ambiguous problem. These techniques are often referred to as problem structuring methods (PSMs). I won’t go into these in detail here; for an example check out Paul Culmsee’s articles on dialogue mapping and Barry Johnson’s introduction to polarity management. There are many more techniques in the PSM stable. All of them are intended to help a group reconcile different viewpoints and thus reach a common basis from which one can proceed to the next step (i.e., make a decision on what should be done). In other words, these techniques help reduce ambiguity.

But there’s more to it than a bunch of techniques.  The main challenge is to create a holding environment that enables such techniques to work. I am sure readers have been involved in a meeting or situation where the outcome seemed predetermined by management or had been undermined by self-interest. When stakeholders sense this, no amount of problem structuring is going to help.  In such situations one needs to first create the conditions for open dialogue to occur. This is precisely what a holding environment provides.

Creating such a holding environment is difficult in today’s corporate world, but not impossible. Note that this is not an idealist’s call for an organisational utopia. Rather, it involves the application of a practical set of tools that address the diverse, emotion-laden reactions that people often have when confronted with ambiguity.   It would take me too far afield to discuss PSMs and holding environments any further here. To find out more, check out my papers on holding environments and dialogue mapping in enterprise IT projects, and (for a lot more) the Heretic’s Guide series of books that I co-wrote with Paul Culmsee.

The point is simply this: in an ambiguous situation, a good decision – whatever it might be – is most likely to be reached by a consultative process that synthesises diverse viewpoints rather than by an individual or a clique.  However, genuine participation (the hallmark of a holding environment) in such a process will occur only after participants’ fears have been addressed.

Wrapping up

Standard approaches to decision making exhort managers and executives to begin with facts, and if none are available, to gather them diligently prior to making a decision. However, most real-life decisions are fraught with uncertainty, so it may be best to begin with what one doesn’t know, and figure out how to make the best possible decision under those “constraints of ignorance.” In this post I’ve attempted to outline what such an approach would entail. The key point is to figure out the kind of uncertainty one is dealing with and to choose an approach that works for it. I’d argue that most decision making debacles stem from a failure to appreciate this point.

Of course, there’s a lot more to this approach than I can cover in the span of a post, but that’s a story for another time.

Note: This post is written as an introduction to the Data and Decision Making subject that is part of the core curriculum of the Master of Data Science and Innovation program, run by the Connected Intelligence Centre at UTS. I’m coordinating the subject this semester, and am honoured to be co-teaching it with my erstwhile colleague Sean Heffernan and my longtime collaborator Paul Culmsee.

Written by K

March 9, 2017 at 10:04 am

A prelude to machine learning


What is machine learning?

The term machine learning gets a lot of airtime in the popular and trade press these days. As I started writing this article, I did a quick search for recent news headlines that contained this term. Here are the top three results with datelines within three days of the search:

http://venturebeat.com/2017/02/01/beyond-the-gimmick-implementing-effective-machine-learning-vb-live/

http://www.infoworld.com/article/3164249/artificial-intelligence/new-big-data-tools-for-machine-learning-spring-from-home-of-spark-and-mesos.html

http://www.infoworld.com/article/3163525/analytics/review-the-best-frameworks-for-machine-learning-and-deep-learning.html

The truth about hype usually tends to be quite prosaic and so it is in this case. Machine learning, as Professor Yaser Abu-Mostafa  puts it, is simply about “learning from data.”  And although the professor is referring to computers, this is so for humans too – we learn through patterns discerned from sensory data. As he states in the first few lines of his wonderful (but mathematically demanding!) book entitled, Learning From Data:

If you show a picture to a three-year-old and ask if there’s a tree in it, you will likely get a correct answer. If you ask a thirty year old what the definition of a tree is, you will likely get an inconclusive answer. We didn’t learn what a tree is by studying a [model] of what trees [are]. We learned by looking at trees. In other words, we learned from data.

In other words, the three-year-old forms a model of what constitutes a tree through a process of discerning a common pattern between all the objects that grown-ups around her label “trees” (the data). She can then “predict” that something is (or is not) a tree by applying this model to new instances presented to her.

This is exactly what happens in machine learning: the computer (or more correctly, the algorithm) builds a predictive model of a variable (like “treeness”) based on patterns it discerns in data.  The model can then be applied to predict the value of the variable (e.g. is it a tree  or not) in new instances.

With that as an introduction, it is worth contrasting this machine-driven process of model building with the traditional approach of building mathematical models to predict phenomena as in, say,  physics and engineering.

What are models good for?

Physicists and engineers model phenomena using physical laws and mathematics. The aim of such modelling is both to understand and predict natural phenomena.  For example, a physical law such as Newton’s Law of Gravitation is itself a model – it helps us understand how gravity works and make predictions about (say) where Mars is going to be six months from now.  Indeed, all theories and laws of physics are but models that have wide applicability.

(Aside: Models are typically expressed via differential equations.  Most differential equations are hard to solve analytically (or exactly), so scientists use computers to solve them numerically.  It is important to note that in this case computers are used as calculation tools; they play no role in model-building.)

As mentioned earlier, the role of models in the sciences is twofold – understanding and prediction. In contrast, in machine learning the focus is usually on prediction rather than understanding.  The predictive successes of machine learning have led certain commentators to claim that scientific theory building is obsolete and that science can advance by crunching data alone.  Such claims are overblown, not to mention hubristic, for although a data scientist may be able to predict with accuracy, he or she may not be able to tell you why a particular prediction is obtained. This lack of understanding can mislead and can even have harmful consequences, a point that’s worth unpacking in some detail…

Assumptions, assumptions

A model of a real world process or phenomenon is necessarily a simplification. This is essentially because it is impossible to isolate a process or phenomenon from the rest of the world. As a consequence it is impossible to know for certain that the model one has built has incorporated all the interactions that influence the process / phenomenon of interest. It is quite possible that potentially important variables have been overlooked.

The selection of variables that go into a model is based on assumptions. In the case of model building in physics, these assumptions are made upfront and are thus clear to anybody who takes the trouble to read the underlying theory. In machine learning, however, the assumptions are harder to see because they are implicit in the data and the algorithm. This can be a problem when data is biased or an algorithm opaque.

Problems of bias and opacity become more acute as datasets increase in size and algorithms become more complex, especially when they are applied to social issues that have serious human consequences. I won’t go into this here, but for examples the interested reader may want to have a look at Cathy O’Neil’s book, Weapons of Math Destruction, or my article on the dark side of data science.

As an aside, I should point out that although assumptions are usually obvious in traditional modelling, they are often overlooked out of sheer laziness or, more charitably, lack of awareness. This can have disastrous consequences. The global financial crisis of 2008 can – to some extent – be blamed on the failure of trading professionals to understand assumptions behind the model that was used to calculate the value of collateralised debt obligations.

It all starts with a straight line….

Now that we’ve taken a tour of some of the key differences between model building in the old and new worlds, we are all set to start talking about machine learning proper.

I should begin by admitting that I overstated the point about opacity: there are some machine learning algorithms that are as transparent as can possibly be. Indeed, chances are you know the algorithm I’m going to discuss next, either from an introductory statistics course in university or from plotting relationships between two variables in your favourite spreadsheet.  Yea, you may have guessed that I’m referring to linear regression.

In its simplest avatar, linear regression attempts to fit a straight line to a set of data points in two dimensions. The two dimensions correspond to a dependent variable (traditionally denoted by y) and an independent variable (traditionally denoted by x).    An example of such a fitted line is shown in Figure 1.  Once such a line is obtained, one can “predict” the value of the dependent variable for any value of the independent variable.  In terms of our earlier discussion, the line is the model.


Figure 1: Linear Regression
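
If you want to try this yourself, here’s a minimal sketch using simulated data (the numbers are arbitrary; in R the fitting is done by the lm function):

#minimal linear regression sketch on simulated data
set.seed(7)
x <- 1:50
y <- 3+0.8*x+rnorm(50,sd=5) #a linear signal plus noise
fit <- lm(y~x)
coef(fit) #estimated intercept and slope
plot(x,y)
abline(fit,col="red") #the fitted line is the model
#"predict" y for new, as yet unseen, values of x
predict(fit,newdata=data.frame(x=c(55,60)))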

Figure 1 also serves to illustrate that linear models are going to be inappropriate in most real world situations (the straight line does not fit the data well). But it is not so hard to devise methods to fit more complicated functions.

The important point here is that since machine learning is about finding functions that accurately predict dependent variables for as yet unknown values of the independent variables, most algorithms make explicit or implicit choices about the form of these functions.

Complexity versus simplicity

At first sight it seems a no-brainer that complicated functions will work better than simple ones. After all, if we choose a nonlinear function with lots of parameters, we should be able to fit a complex data set better than a linear function can (see Figure 2 – the complicated function fits the datapoints better than the straight line).   But there’s a catch: although the ability to fit a dataset increases with the flexibility of the fitting function,  increasing complexity beyond a point will invariably reduce predictive power.  Put another way, a complex enough function may fit the known data points perfectly but, as a consequence, will inevitably perform poorly on unknown data. This is an important point so let’s look at it in greater detail.


Figure 2: Simple and complex fitting function (courtesy: Wikimedia)

Recall that the aim of machine learning is to predict values of the dependent variable for as yet unknown values of the independent variable(s).  Given a finite (and usually, very limited) dataset, how do we build a model that we can have some confidence in? The usual strategy is to partition the dataset into two subsets, one containing 60 to 80% of the data (called the training set) and the other containing the remainder (called the test set). The model is then built – i.e. an appropriate function fitted – using the training data and verified against the test data. The verification process consists of comparing the predicted values of the dependent variable with the known values for the test set.

Now, it should be intuitively clear that the more complicated the function, the better it will fit the training data.

Question: Why?

Answer: Because complicated functions have more free parameters – for example, linear functions of a single (independent) variable have two parameters (slope and intercept), quadratics have three, cubics four and so on.  The mathematician John von Neumann is believed to have said, “With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.” See this post for a nice demonstration of the literal truth of his words.

Put another way, complex functions are wrigglier than simple ones, and – by suitable adjustment of parameters – their “wriggliness” can be adjusted to fit the training data better than functions that are less wriggly. Figure 2 illustrates this point well.

This may sound like you can have your cake and eat it too: choose a complicated enough function and you can fit both the training and test data well. Not so! Keep in mind that the resulting model (fitted function) is built using the training set alone, so a good fit to the test data is not guaranteed.  In fact, it is intuitively clear that a function that fits the training data perfectly (as in Figure 2) is likely to do a terrible job on the test data.

Question: Why?

Answer:  Remember, as far as the model is concerned, the test data is unknown. Hence, the greater the wriggliness in the trained model, the less likely it is to fit the test data well. Remember, once the model is fitted to the training data, you have no freedom to tweak parameters any further.

This tension between simplicity and complexity of models is one of the key principles of machine learning and is called the bias-variance tradeoff. Bias here refers to the error that comes from a model’s lack of flexibility, and variance to the error that comes from its sensitivity to the particular training data used. In general, simpler functions have greater bias and lower variance and complex functions, the opposite.  Much of the subtlety of machine learning lies in developing an understanding of how to arrive at the right level of complexity for the problem at hand –  that is, how to tweak parameters so that the resulting function fits the training data just well enough so as to generalise well to unknown data.
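
Here’s a small simulated illustration of the tradeoff: polynomial fits of increasing degree are trained on one part of the data and evaluated on the rest (the degrees, sample size and noise level are arbitrary choices made purely for illustration):

#illustrating the bias-variance tradeoff with polynomial fits of increasing flexibility
set.seed(42)
x <- runif(100,0,10)
y <- sin(x)+rnorm(100,sd=0.3) #a nonlinear signal plus noise
df <- data.frame(x=x,y=y)
train_idx <- sample(1:100,70) #70/30 train/test split
train <- df[train_idx,]
test <- df[-train_idx,]
rmse <- function(model,data) sqrt(mean((data$y-predict(model,data))^2))
for (deg in c(1,3,9,15)) {
  fit <- lm(y~poly(x,deg),data=train)
  cat("degree",deg,"train error:",round(rmse(fit,train),3),
      "test error:",round(rmse(fit,test),3),"\n")
}
#training error keeps falling as the degree increases, but the test error
#typically bottoms out and then rises again - that's overfitting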

Note: those who are curious to learn more about the bias-variance tradeoff may want to have a look at this piece.  For details on how to achieve an optimal tradeoff, search for articles on regularization in machine learning.

Unlocking unstructured data

The discussion thus far has focused primarily on quantitative or enumerable data (numbers and categories) that’s stored in a structured format, i.e. as columns and rows in a spreadsheet or database table. This is fine as far as it goes, but the fact is that much of the data in organisations is unstructured, the most common examples being text documents and audio-visual media. This data is virtually impossible to analyse computationally using the relational database technologies (such as SQL) commonly used by organisations.

The situation has changed dramatically in the last decade or so. Text analysis techniques that once required expensive software and high-end computers have now been implemented in open source languages such as Python and R, and can be run on personal computers.  For problems that require computing power and memory beyond that, cloud technologies make it possible to do so cheaply. In my opinion, the ability to analyse textual data is the most important advance in data technologies in the last decade or so. It unlocks a world of possibilities for the curious data analyst. Just think, all those comment fields in your survey data can now be analysed in a way that was never possible in the relational world!

There is a general impression that text analysis is hard.  Although some of the advanced techniques can take a little time to wrap one’s head around, the basics are simple enough. Yea, I really mean that – for proof, check out my tutorial on the topic.

Wrapping up

I could go on for a while. Indeed, I was planning to delve into a few algorithms of increasing complexity (from regression to trees and forests to neural nets) and then close with a brief peek at some of the more recent headline-grabbing developments like deep learning. However, I realised that such an exploration would be too long and (perhaps more importantly) defeat the main intent of this piece which is to give starting students an idea of what machine learning is about, and how it differs from preexisting techniques of data analysis. I hope I have succeeded, at least partially, in achieving that aim.

For those who are interested in learning more about machine learning algorithms, I can suggest having a look at my “Gentle Introduction to Data Science using R” series of articles. Start with the one on text analysis (link in last line of previous section) and then move on to clustering, topic modelling, naive Bayes, decision trees, random forests and support vector machines. I’m slowly adding to the list as I find the time, so please do check back again from time to time.

Note: This post is written as an introduction to the Data, Algorithms and Meaning subject that is part of the core curriculum of the Master of Data Science and Innovation program, run by the Connected Intelligence Centre at UTS. I’m coordinating the subject this semester, and will be co-teaching it with Stephan Curiskis.

Written by K

February 23, 2017 at 3:12 pm

 A gentle introduction to support vector machines using R


Introduction

Most machine learning algorithms involve minimising an error measure of some kind (this measure is often called an objective function or loss function).  For example, the error measure in linear regression problems is the famous mean squared error – i.e. the averaged sum of the squared differences between the predicted and actual values. Like the mean squared error, most objective functions depend on all points in the training dataset.  In this post, I describe the support vector machine (SVM) approach which focuses instead on finding the optimal separation boundary between datapoints that have different classifications.  I’ll elaborate on what this means in the next section.

Here’s the plan in brief. I’ll begin with the rationale behind SVMs using a simple case of a binary (two class) dataset with a simple separation boundary (I’ll clarify what “simple” means in a minute).  Following that, I’ll describe how this can be generalised to datasets with more complex boundaries. Finally, I’ll work through a couple of examples in R, illustrating the principles behind SVMs. In line with the general philosophy of my “Gentle Introduction to Data Science Using R” series, the focus is on developing an intuitive understanding of the algorithm along with a practical demonstration of its use through a toy example.

The rationale

The basic idea behind SVMs is best illustrated by considering a simple case:  a set of data points that belong to one of two classes, red and blue, as illustrated in figure 1 below. To make things simpler still, I have assumed that the boundary separating the two classes is a straight line, represented by the solid green line in the diagram.  In the technical literature, such datasets are called linearly separable.


Figure 1: Linearly separable data

In the linearly separable case, there is usually a fair amount of freedom in the way a separating line can be drawn. Figure 2 illustrates this point: the two broken green lines are also valid separation boundaries. Indeed, because there is a non-zero distance between the closest points belonging to different categories, there are an infinite number of possible separation lines. This, quite naturally, raises the question as to whether it is possible to choose a separation boundary that is optimal.

Figure 2: Illustrating multiple separation boundaries

The short answer is: yes, there is. One way to do this is to select a boundary line that maximises the margin, i.e. the distance between the separation boundary and the points that are closest to it.  Such an optimal boundary is illustrated by the black brace in Figure 3.  The really cool thing about this criterion is that the location of the separation boundary depends only on the points that are closest to it. This means that, unlike other classification methods, the classifier does not depend on any other points in the dataset. The directed lines between the boundary and the closest points on either side are called support vectors (these are the solid black lines in figure 3). A direct implication of this is that the fewer the support vectors, the better the generalizability of the boundary.

Figure 3: Optimal separation boundary in linearly separable case

Although the above sounds great, it is of limited practical value because real data sets are seldom (if ever) linearly separable.

So, what can we do when dealing with real (i.e. non linearly separable) data sets?

A simple approach to tackle small deviations from linear separability is to allow a small number of points (those that are close to the boundary) to be misclassified.  The number of possible misclassifications is governed by a free parameter C, which is called the cost.  The cost is essentially the penalty associated with making an error: the higher the value of C, the less likely it is that the algorithm will misclassify a point.

This approach – which is called soft margin classification – is illustrated in Figure 4. Note the points on the wrong side of the separation boundary.  We will demonstrate soft margin SVMs in the next section.  (Note: at the risk of belabouring the obvious, the purely linearly separable case discussed in the previous paragraph is simply a special case of the soft margin classifier.)

Figure 4: Soft margin classifier (linearly separable data)

Real life situations are much more complex and cannot be dealt with using soft margin classifiers. For example, as shown in Figure 5, one could have widely separated clusters of points that belong to the same classes. Such situations, which require the use of multiple (and nonlinear) boundaries, can sometimes be dealt with using a clever approach called the kernel trick.

Figure 5: Non-linearly separable data

The kernel trick

Recall that in the linearly separable (or soft margin) case, the SVM algorithm works by finding a separation boundary that maximises the margin, which is the distance between the boundary and the points closest to it. The distance here is the usual straight line distance between the boundary and the closest point(s). This is called the Euclidean distance in honour of the great geometer of antiquity. The point to note is that this process results in a separation boundary that is a straight line, which as Figure 5 illustrates, does not always work. In fact in most cases it won’t.

So what can we do? To answer this question, we have to take a bit of a detour…

What if we were able to generalize the notion of distance in a way that generates nonlinear separation boundaries? It turns out that this is possible. To see how, one has to first understand how the notion of distance can be generalized.

The key properties that any measure of distance must satisfy are:

  1. Non-negativity – a distance cannot be negative, a point that needs no further explanation I reckon 🙂
  2. Symmetry – that is, the distance between point A and point B is the same as the distance between point B and point A.
  3. Identity – the distance between a point and itself is zero.
  4. Triangle inequality – that is, the distance between point A and point C must be less than or equal to the sum of the distances between points A and B and points B and C (equality holds only if all three points lie along the same line).

Any mathematical object that displays the above properties is akin to a distance. Such generalized distances are called metrics and the mathematical space in which they live is called a metric space. In the SVM context, the special functions used to define such generalized notions of distance (or, more precisely, similarity) between pairs of points are known as kernels.

The essence of the kernel trick lies in mapping the classification problem to a  metric space in which the problem is rendered separable via a separation boundary that is simple in the new space, but complex – as it has to be – in the original one. Generally, the transformed space has a higher dimensionality, with each of the dimensions being (possibly complex) combinations of the original problem variables. However, this is not necessarily a problem because in practice one doesn’t actually mess around with transformations, one just tries different kernels (the transformation being implicit in the kernel) and sees which one does the job. The check is simple: we simply test the predictions resulting from using different kernels against a held out subset of the data (as one would for any machine learning algorithm).

It turns out that a particular function – called the radial basis function kernel (RBF kernel) – is very effective in many cases.  The RBF kernel is essentially a Gaussian (or Normal) function with the Euclidean distance between pairs of points as the variable (see equation 1 below).   The basic rationale behind the RBF kernel is that it tends to classify points that are close together (in the Euclidean sense) in the original space in the same way. This is reflected in the fact that the kernel decays (i.e. drops off to zero) as the Euclidean distance between points increases.

K(\mathbf{x},\mathbf{y}) = \exp (-\gamma \|\mathbf{x}-\mathbf{y}\|^{2})....(1)

The rate at which a kernel decays is governed by the parameter \gamma – the higher the value of \gamma, the more rapid the decay.  This serves to illustrate that the RBF kernel is extremely flexible….but the flexibility comes at a price – the danger of overfitting for large values of \gamma .  One should choose appropriate values of C and \gamma so as to ensure that the resulting kernel represents the best possible balance between flexibility and accuracy. We’ll discuss how this is done in practice later in this article.
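
To get a feel for how \gamma controls the decay, here is a quick sketch (my own addition, not part of the demos below) that evaluates the RBF kernel at a range of Euclidean distances for a small and a large value of \gamma:

#sketch: RBF kernel value as a function of Euclidean distance for two values of gamma
d <- seq(0, 3, by=0.1)
k_small_gamma <- exp(-0.1*d^2)  #gamma=0.1: slow decay, smoother boundaries
k_large_gamma <- exp(-10*d^2)   #gamma=10: rapid decay, danger of overfitting
plot(d, k_small_gamma, type="l", xlab="Euclidean distance", ylab="kernel value")
lines(d, k_large_gamma, lty=2)
legend("topright", legend=c("gamma=0.1","gamma=10"), lty=c(1,2))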

Finally, though it is probably obvious, it is worth mentioning that the separation boundaries for arbitrary kernels are also defined through support vectors as in Figure 3.  To reiterate a point made earlier, this means that a solution that has fewer support vectors is likely to be more robust than one with many. Why? Because the data points that define the support vectors are the ones most sensitive to noise – therefore the fewer, the better.

There are many other types of kernels, each with their own pros and cons. However, I’ll leave these for adventurous readers to explore by themselves.  Finally, for a much more detailed….and dare I say, better… explanation of the kernel trick, I highly recommend this article by Eric Kim.

Support vector machines in R

In this demo we’ll use the svm interface that is implemented in the e1071 R package. This interface provides R programmers access to the comprehensive libsvm library written by Chang and Lin. I’ll use two toy datasets: the famous iris dataset available with the base R package and the sonar dataset from the mlbench package. I won’t describe details of the datasets as they are discussed at length in the documentation that I have linked to. However, it is worth mentioning the reasons why I chose these datasets:

  1. As mentioned earlier, no real life dataset is linearly separable, but the iris dataset is almost so. Consequently, it is a good illustration of using linear SVMs. Although one almost never uses these in practice, I have illustrated their use primarily for pedagogical reasons.
  2. The sonar dataset is a good illustration of the benefits of using RBF kernels in cases where the dataset is hard to visualise (60 variables in this case!). In general, one would almost always use RBF (or other nonlinear) kernels in practice.

With that said, let's get right to it. I assume you have R and RStudio installed. For instructions on how to do this, have a look at the first article in this series. The processing preliminaries – loading libraries, data and creating training and test datasets – are much the same as in my previous articles so I won't dwell on these here. For completeness, however, I'll list all the code so you can run it directly in R or RStudio (a complete listing of the code can be found here):

#set working directory if needed (modify path as needed)
setwd("C:/Users/Kailash/Documents/svm")
#load required library
library(e1071)
#load built-in iris dataset
data(iris)
#set seed to ensure reproducible results
set.seed(42)
#split into training and test sets
iris[,"train"] <- ifelse(runif(nrow(iris))<0.8,1,0)
#separate training and test sets
trainset <- iris[iris$train==1,]
testset <- iris[iris$train==0,]
#get column index of train flag
trainColNum <- grep("train",names(trainset))
#remove train flag column from train and test sets
trainset <- trainset[,-trainColNum]
testset <- testset[,-trainColNum]
#get column index of predicted variable in dataset
typeColNum <- grep("Species",names(iris))
#build model – linear kernel and C-classification (soft margin) with default cost (C=1)
svm_model <- svm(Species~ ., data=trainset, method="C-classification", kernel="linear")
svm_model
Call:
svm(formula = Species ~ ., data = trainset, method = "C-classification", kernel = "linear")
Parameters:
SVM-Type: C-classification
SVM-Kernel: linear
cost: 1
gamma: 0.25
Number of Support Vectors: 24
#training set predictions
pred_train <-predict(svm_model,trainset)
mean(pred_train==trainset$Species)
[1] 0.9826087
#test set predictions
pred_test <-predict(svm_model,testset)
mean(pred_test==testset$Species)
[1] 0.9142857


The output from the SVM model shows that there are 24 support vectors. If desired, these can be examined using the SV variable in the model – i.e. via svm_model$SV.

The test prediction accuracy indicates that the linear kernel performs quite well on this dataset, confirming that it is indeed nearly linearly separable. To check performance by class, one can create a confusion matrix as described in my post on random forests. I'll leave this as an exercise for you.  Another point is that we have used a soft-margin classification scheme with a cost C=1. You can experiment with this by explicitly changing the value of C. Again, I'll leave this for you as an exercise.
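
If you'd like a starting point for these exercises, here is a minimal sketch (it assumes the trainset, testset and pred_test objects created above; the cost value of 100 is an arbitrary choice):

#confusion matrix for the test set (rows: predicted, columns: actual)
table(predicted=pred_test, actual=testset$Species)
#refit the linear SVM with a higher cost and compare test accuracy
svm_model_c100 <- svm(Species~ ., data=trainset, method="C-classification", kernel="linear", cost=100)
pred_test_c100 <- predict(svm_model_c100, testset)
mean(pred_test_c100==testset$Species)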

Before proceeding to the RBF kernel, I should mention a point that an alert reader may have noticed. The predicted variable, Species, can take on 3 values (setosa, versicolor and virginica). However, our discussion above dealt with a binary (2 valued) classification problem. This brings up the question as to how the algorithm deals with multiclass classification problems – i.e. those involving datasets with more than two classes. The libsvm algorithm (which svm uses) does this using a one-against-one classification strategy. Here's how it works:

  1. Divide the dataset (assumed to have N classes) into N(N-1)/2 datasets that have two classes each.
  2. Solve the binary classification problem for each of these subsets
  3. Use a simple voting mechanism to assign a class to each data point.

Basically, each data point is assigned the most frequent classification it receives from all the binary classification problems it figures in.
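
For the iris data, which has N=3 classes, this works out to three pairwise problems, as a quick check in R confirms (my own addition):

#number of pairwise binary classifiers for the 3-class iris problem (one-against-one)
#the three pairs are setosa/versicolor, setosa/virginica and versicolor/virginica
n_classes <- nlevels(iris$Species)
choose(n_classes, 2)
[1] 3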

Having dealt with the somewhat unrealistic case of a linear classifier, let's move on to the real world.  In the code below, I build SVM models using three different kernels:

  1.  Linear kernel (this is for comparison with the following 2 kernels).
  2. RBF kernel with default values for the parameters C and \gamma.
  3. RBF kernel with optimal values for C and \gamma. The optimal values are obtained using the tune.svm function (also available in e1071), which essentially builds models for multiple combinations of parameter values and selects the best.

OK, lets go:

#load required library (assuming e1071 is already loaded)
library(mlbench)
#load Sonar dataset
data(Sonar)
#set seed to ensure reproducible results
set.seed(42)
#split into training and test sets
Sonar[,"train"] <- ifelse(runif(nrow(Sonar))<0.8,1,0)
#separate training and test sets
trainset <- Sonar[Sonar$train==1,]
testset <- Sonar[Sonar$train==0,]
#get column index of train flag
trainColNum <- grep("train",names(trainset))
#remove train flag column from train and test sets
trainset <- trainset[,-trainColNum]
testset <- testset[,-trainColNum]
#get column index of predicted variable in dataset
typeColNum <- grep("Class",names(Sonar))
#build model – linear kernel and C-classification with default cost (C=1)
svm_model <- svm(Class~ ., data=trainset, method="C-classification", kernel="linear")
#training set predictions
pred_train <-predict(svm_model,trainset)
mean(pred_train==trainset$Class)
[1] 0.969697
#test set predictions
pred_test <-predict(svm_model,testset)
mean(pred_test==testset$Class)
[1] 0.6046512

I’ll leave you to examine the contents of the model. The important point to note here is that the performance of the model with the test set is quite dismal compared to the previous case. This simply indicates that the linear kernel is not appropriate here.  Let’s take a look at what happens if we use the RBF kernel with default values for the parameters:

#build model: radial kernel, default params
svm_model <- svm(Class~ ., data=trainset, method="C-classification", kernel="radial")
#print params
svm_model$cost
[1] 1
svm_model$gamma
[1] 0.01666667
#training set predictions
pred_train <-predict(svm_model,trainset)
mean(pred_train==trainset$Class)
[1] 0.9878788
#test set predictions
pred_test <-predict(svm_model,testset)
mean(pred_test==testset$Class)
[1] 0.7674419

That’s a pretty decent improvement from the linear kernel. Let’s see if we can do better by doing some parameter tuning. To do this we first invoke tune.svm and use the parameters it gives us in the call to svm:

#find optimal parameters in a specified range
tune_out <- tune.svm(x=trainset[,-typeColNum],y=trainset[,typeColNum],gamma=10^(-3:3),cost=c(0.01,0.1,1,10,100,1000),kernel="radial")
#print best values of cost and gamma
tune_out$best.parameters$cost
[1] 10
tune_out$best.parameters$gamma
[1] 0.01
#build model
svm_model <- svm(Class~ ., data=trainset, method="C-classification", kernel="radial",cost=tune_out$best.parameters$cost,gamma=tune_out$best.parameters$gamma)
#training set predictions
pred_train <-predict(svm_model,trainset)
mean(pred_train==trainset$Class)
[1] 1
#test set predictions
pred_test <-predict(svm_model,testset)
mean(pred_test==testset$Class)
[1] 0.8139535

This is a fairly decent improvement over the un-optimised case.
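
If you'd like to see how the other parameter combinations fared, e1071 provides summary and plot methods for tuning objects (this sketch assumes the tune_out object created above):

#examine the full grid of results from the tuning run
summary(tune_out)
#visualise performance (classification error) across the cost-gamma grid
plot(tune_out)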

Wrapping up

This brings us to the end of this introductory exploration of SVMs in R. To recap, the distinguishing feature of SVMs in contrast to most other techniques is that they attempt to construct optimal separation boundaries between different categories.

SVMs are quite versatile and have been applied to a wide variety of domains ranging from chemistry to pattern recognition. They are best used in binary classification scenarios. This brings up the question of when SVMs are to be preferred to other binary classification techniques such as logistic regression. The honest response is, "it depends", but one general point to keep in mind is that SVM algorithms tend to be expensive both in terms of memory and computation, issues that can start to hurt as the size of the dataset increases.

Given all the above caveats and considerations, the best way  to figure out whether an SVM approach will work for your problem may be to do what most machine learning practitioners do: try it out!

Written by K

February 7, 2017 at 8:27 pm

The dark side of data science

with 9 comments

Data scientists are sometimes blind to the possibility that the predictions of their algorithms can have unforeseen negative effects on people. Ethical or social implications are easy to overlook when one finds interesting new patterns in data, especially if they promise significant financial gains. The Centrelink debt recovery debacle, recently reported in the Australian media, is a case in point.

Here is the story in brief:

Centrelink is an Australian Government organisation responsible for administering welfare services and payments to those in need. A major challenge such organisations face is ensuring that their clients are paid no less and no more than what is due to them. This is difficult because it involves crosschecking client income details across multiple systems owned by different government departments, a process that necessarily involves many assumptions. In July 2016, Centrelink unveiled an automated compliance system that compares income self-reported by clients to information held by the taxation office.

The problem is that the algorithm is flawed: it makes strong (and incorrect!) assumptions regarding the distribution of income across a financial year and, as a consequence, unfairly penalizes a number of legitimate benefit recipients.  It is very likely that the designers and implementers of the algorithm did not fully understand the implications of their assumptions. Worse, from the errors made by the system, it appears they may not have adequately tested it either.  But this did not stop them (or, quite possibly, their managers) from unleashing their algorithm on an unsuspecting public, causing widespread stress and distress.  More on this a bit later.

Algorithms like the one described above are the subject of Cathy O’Neil’s aptly titled book, Weapons of Math Destruction.  In the remainder of this article I discuss the main themes of the book.  Just to be clear, this post is more riff than review. However, for those seeking an opinion, here’s my one-line version: I think the book should be read not only by data science practitioners, but also by those who use or are affected by their algorithms (which means pretty much everyone!).

Abstractions and assumptions

O'Neil begins with the observation that data algorithms are mathematical models of reality, and are necessarily incomplete because several simplifying assumptions are invariably baked into them. This point is important and often overlooked, so it is worth illustrating via an example.

When assessing a person's suitability for a loan, a bank will want to know whether the person is a good risk. It is impossible to model creditworthiness completely because we do not know all the relevant variables and those that are known may be hard to measure. To make up for their ignorance, data scientists typically use proxy variables, i.e. variables that are believed to be correlated with the variable of interest and are also easily measurable. In the case of creditworthiness, proxy variables might be things like gender, age, employment status, residential postcode etc.  Unfortunately many of these can be misleading, discriminatory or, worse, both.

The Centrelink algorithm provides a good example of such a "double-whammy" proxy. The key variable it uses is the difference between the client's annual income reported by the taxation office and self-reported annual income stated by the client. A large difference is taken to be indicative of an incorrect payment and hence an outstanding debt. This simplistic assumption overlooks the fact that most affected people are not in steady jobs and therefore do not earn regular incomes over the course of a financial year (see this article by Michael Griffin for a detailed example).  Worse, this crude proxy places an unfair burden on vulnerable individuals for whom casual and part time work is a fact of life.

Worse still, for those wrongly targeted with a recovery notice, getting the errors sorted out is not a straightforward process. This is typical of a WMD. As O'Neil states in her book, "The human victims of WMDs…are held to a far higher standard of evidence than the algorithms themselves."  Perhaps this is because the algorithms are often opaque. But that's a poor excuse.  This is the only technical field where practitioners are held to a lower standard of accountability than those affected by their products.

O'Neil sums it up rather nicely when she calls algorithms like the Centrelink one weapons of math destruction (WMDs).

Self-fulfilling prophecies and feedback loops

A characteristic of WMDs is that their predictions often become self-fulfilling prophecies. For example, a person denied a loan by a faulty risk model is more likely to be denied again when he or she applies elsewhere, simply because it is on their record that they have been refused credit before. This kind of destructive feedback loop is typical of a WMD.

An example that O'Neil dwells on at length is a popular predictive policing program. Designed for efficiency rather than nuanced judgment, such algorithms measure what can easily be measured and act on it, ignoring the subtle contextual factors that inform the actions of experienced officers on the beat. Worse, they can lead to actions that exacerbate the problem. For example, targeting young people of a certain demographic for stop and frisk actions can alienate them to a point where they might well turn to crime out of anger and exasperation.

As Goldratt famously said, “Tell me how you measure me and I’ll tell you how I’ll behave.”

This is not news: savvy managers have known about the dangers of managing by metrics for years. The problem is now exacerbated manyfold by our ability to implement and act on such metrics on an industrial scale, a trend that leads to a dangerous devaluation of human judgement in areas where it is most needed.

A related problem – briefly mentioned earlier – is that some of the important variables are known but hard to quantify in algorithmic terms. For example, it is known that community-oriented policing, where officers on the beat develop relationships with people in the community, leads to greater trust. The degree of trust is hard to quantify, but it is known that communities that have strong relationships with their police departments tend to have lower crime rates than similar communities that do not.  Such important but hard-to-quantify factors are typically missed by predictive policing programs.

Blackballed!

Ironically, although WMDs can cause destructive feedback loops, they are often not subjected to feedback themselves. O'Neil gives the example of algorithms that gauge the suitability of potential hires.  These programs often use proxy variables such as IQ test results, personality tests etc. to predict employability.  Candidates who are rejected often do not realise that they have been screened out by an algorithm. Further, it often happens that candidates who are thus rejected go on to successful careers elsewhere. However, this post-rejection information is never fed back to the algorithm because it is impossible to do so.

In such cases, the only way to avoid being blackballed is to understand the rules set by the algorithm and play according to them. As O'Neil so poignantly puts it, "our lives increasingly depend on our ability to make our case to machines." However, this can be difficult because it assumes that a) people know they are being assessed by an algorithm and b) they have knowledge of how the algorithm works. In most hiring scenarios neither of these holds.

Just to be clear, not all data science models ignore feedback. For example, sabermetric algorithms used to assess player performance in Major League Baseball are continually revised based on the latest player stats, thereby taking into account changes in performance.

Driven by data

In recent years, many workplaces have gradually seen the introduction of data-driven efficiency initiatives. Automated rostering, based on scheduling algorithms, is an example. These algorithms are based on operations research techniques that were developed for scheduling complex manufacturing processes. Although appropriate for driving efficiency in manufacturing, these techniques are inappropriate for optimising shift work because of the effect they have on people. As O'Neil states:

Scheduling software can be seen as an extension of just-in-time economy. But instead of lawn mower blades or cell phone screens showing up right on cue, it’s people, usually people who badly need money. And because they need money so desperately, the companies can bend their lives to the dictates of a mathematical model.

She correctly observes that an "oversupply of low wage labour is the problem." Employers know they can get away with treating people like machine parts because they have a large captive workforce.  What makes this seriously scary is that vested interests can make it difficult to outlaw such exploitative practices. As O'Neil mentions:

Following [a] New York Times report on Starbucks’ scheduling practices, Democrats in Congress promptly drew up bills to rein in scheduling software. But facing a Republican majority fiercely opposed to government regulations, the chances that their bill would become law were nil. The legislation died.

Commercial interests invariably trump social and ethical issues, so it is highly unlikely that industry or government will take steps to curb the worst excesses of such algorithms without significant pressure from the general public. A first step towards this is to educate ourselves on how these algorithms work and the downstream social effects of their predictions.

Messing with your mind

There is an even more insidious way that algorithms mess with us. Hot on the heels of the recent US presidential election, there were suggestions that fake news items on Facebook may have influenced the results.  Mark Zuckerberg denied this, but as Casey Newton noted in this trenchant tweet, the denial leaves Facebook in "the awkward position of having to explain why they think they drive purchase decisions but not voting decisions."

Be that as it may, the fact is that Facebook's own researchers have been conducting experiments to fine tune a tool they call the "voter megaphone". Here's what O'Neil says about it:

The idea was to encourage people to spread the word that they had voted. This seemed reasonable enough. By sprinkling people’s news feeds with “I voted” updates, Facebook was encouraging Americans – more that sixty-one million of them – to carry out their civic duty….by posting about people’s voting behaviour, the site was stoking peer pressure to vote. Studies have shown that the quiet satisfaction of carrying out a civic duty is less likely to move people than the possible judgement of friends and neighbours…The Facebook started out with a constructive and seemingly innocent goal to encourage people to vote. And it succeeded…researchers estimated that their campaign had increased turnout by 340,000 people. That’s a big enough crowd to swing entire states, and even national elections.

And if that’s not scary enough, try this:

For three months leading up to the election between President Obama and Mitt Romney, a researcher at the company….altered the news feed algorithm for about two million people, all of them politically engaged. The people got a higher proportion of hard news, as opposed to the usual cat videos, graduation announcements, or photos from Disney world….[the researcher] wanted to see  if getting more [political] news from friends changed people’s political behaviour. Following the election [he] sent out surveys. The self-reported results that voter participation in this group inched up from 64 to 67 percent.

This might not sound like much, but considering the thin margins of recent presidential elections, it could be enough to change a result.

But it’s even more insidious.  In a paper published in 2014, Facebook researchers showed that users’ moods can be influenced by the emotional content of their newsfeeds. Here’s a snippet from the abstract of the paper:

In an experiment with people who use Facebook, we test whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed. When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks.

As you might imagine, there was a media uproar following which  the lead researcher issued a clarification and  Facebook officials duly expressed regret (but, as far as I know, not an apology).  To be sure, advertisers have been exploiting this kind of “mind control” for years, but a public social media platform should (expect to) be held to a higher standard of ethics. Facebook has since reviewed its internal research practices, but the recent fake news affair shows that the story is to be continued.

Disarming weapons of math destruction

The Centrelink debt debacle, the Facebook mood contagion experiments and the other case studies mentioned in the book illustrate the myriad ways in which Big Data algorithms have a pernicious effect on our day-to-day lives. Quite often people remain unaware of their influence, wondering why a loan was denied or a job application didn't go their way. Just as often, they are aware of what is happening, but are powerless to change it – shift scheduling algorithms being a case in point.

This is not how it was meant to be. Technology was supposed to make life better for all, not just the few who wield it.

So what can be done? Here are some suggestions:

  • To begin with, education is the key. We must work to demystify data science and create a general awareness of data science algorithms and how they work. O'Neil's book is an excellent first step in this direction (although it is very thin on details of how the algorithms work).
  • Develop a code of ethics for data science practitioners. It is heartening to see that IEEE has recently come up with a discussion paper on ethical considerations for artificial intelligence and autonomous systems and ACM has proposed a set of principles for algorithmic transparency and accountability.  However, I should also tag this suggestion with the warning that codes of ethics are not very effective as they can be easily violated. One has to – somehow – embed ethics in the DNA of data scientists. I believe, one way to do this is through practice-oriented education in which data scientists-in-training grapple with ethical issues through data challenges and hackathons. It is as Wittgenstein famously said, “it is clear that ethics cannot be articulated.” Ethics must be practiced.
  • Put in place a system of reliable algorithmic audits within data science departments, particularly those that do work with significant social impact.
  • Increase transparency a) by publishing information on how algorithms predict what they predict and b) by making it possible for those affected by the algorithm to access the data used to classify them as well as their classification, how it will be used and by whom.
  • Encourage the development of algorithms that detect bias in other algorithms and correct it.
  • Inspire aspiring data scientists to build models for the good.

It is only right that the last word in this long riff should go to O'Neil, whose work inspired it. Towards the end of her book she writes:

Big Data processes codify the past. They do not invent the future. Doing that requires moral imagination, and that’s something that only humans can provide. We have to explicitly embed better values into our algorithms, creating Big Data models that follow our ethical lead. Sometimes that will mean putting fairness ahead of profit.

Excellent words for data scientists to live by.

Written by K

January 17, 2017 at 8:38 pm

The law of requisite variety and its implications for enterprise IT

with 3 comments

Introduction

There are two  facets to the operation of IT systems and processes in organisations:  governance, the standards and regulations associated with a system or process; and execution, which relates to steering the actual work of the system or process in specific situations.

An example might help clarify the difference:

The purpose of project management is to keep projects on track. There are two aspects to this: one pertaining to the project management office (PMO), which is responsible for standards and regulations associated with managing projects in general, and the other relating to the day-to-day work of steering a particular project.  The two sometimes work at cross-purposes. For example, successful project managers know that much of their work is about navigating their projects through the potentially treacherous terrain of their organisations, an activity that sometimes necessitates working around, or even breaking, rules set by the PMO.

Governance and steering share a common etymological root: the word kybernetes, which means steersman in Greek.  It also happens to be the root word of Cybernetics, which is the science of regulation or control.  In this post, I apply a key principle of cybernetics to a couple of areas of enterprise IT.

Cybernetic systems

An oft quoted example of a cybernetic system is a thermostat, a device that regulates temperature based on inputs from the environment.  Most cybernetic systems are way more complicated than a thermostat. Indeed, some argue that the Earth is a huge cybernetic system. A smaller scale example is a system consisting of a car + driver wherein a driver responds to changes in the environment thereby controlling the motion of the car.

Cybernetic systems vary widely not just in size, but also in complexity. A thermostat is concerned only with the ambient temperature whereas the driver of a car has to worry about a lot more (e.g. the weather, traffic, the condition of the road, kids squabbling in the back-seat etc.).  In general, the more complex the system and its processes, the larger the number of variables that are associated with it. Put another way, complex systems must be able to deal with a greater variety of disturbances than simple systems.

The law of requisite variety

It turns out there is a fundamental principle – the law of requisite variety – that governs the capacity of a system to respond to changes in its environment. The law is a quantitative statement about the different types of responses that a system needs to have in order to deal with the range of disturbances it might experience.

According to this paper, the law of requisite variety asserts that:

The larger the variety of actions available to a control system, the larger the variety of perturbations it is able to compensate.

Mathematically:

V(E) ≥ V(D) – V(R) – K

Where V represents variety, E represents the essential variable(s) to be controlled, D represents the disturbance, R the regulation and K the passive capacity of the system to absorb shocks. The terms are explained in brief below:

V(E) represents the set of  desired outcomes for the controlled environmental variable:  desired temperature range in the case of the thermostat,  successful outcomes (i.e. projects delivered on time and within budget) in the case of a project management office.

V(D) represents the variety of disturbances the system can be subjected to (the ways in which the temperature can change, the external and internal forces on a project)

V(R) represents the various ways in which a disturbance can be regulated (the regulator in a thermostat, the project tracking and corrective mechanisms prescribed by the PMO)

K represents the buffering capacity of the system – i.e. stored capacity to deal with unexpected disturbances.
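
As a toy numerical illustration (entirely my own, with made-up numbers), variety is commonly measured as the logarithm of the number of distinguishable states, so the inequality can be checked with a few lines of R:

#toy illustration with made-up numbers: variety as log2(number of distinguishable states)
V_D <- log2(64)  #64 distinguishable types of disturbance
V_R <- log2(16)  #16 distinct regulatory responses
K <- log2(2)     #buffering that absorbs a twofold range of disturbances
V_E_lower_bound <- V_D - V_R - K  #lower bound on the variety of outcomes
V_E_lower_bound  #1 bit: at least two distinguishable outcomes remain, however good the regulation
[1] 1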

I won't say any more about the law of requisite variety as it would take me too far afield; the interested and technically minded reader is referred to the link above or this paper for more (full pdf here).

Implications for enterprise IT

In plain English, the law of requisite variety states that "only variety can absorb variety."  As stated by Anthony Hodgson in an essay in this book, the law of requisite variety:

…leads to the somewhat counterintuitive observation that the regulator must have a sufficiently large variety of actions in order to ensure a sufficiently small variety of outcomes in the essential variables E. This principle has important implications for practical situations: since the variety of perturbations a system can potentially be confronted with is unlimited, we should always try maximize its internal variety (or diversity), so as to be optimally prepared for any foreseeable or unforeseeable contingency.

This is entirely consistent with our intuitive expectation that the best way to deal with the unexpected is to have a range of tools and approaches at one's disposal.

In the remainder of this piece, I’ll focus on the implications of the law for an issue that is high on the list of many corporate IT departments: the standardization of  IT systems and/or processes.

The main rationale behind standardizing an IT  process is to handle all possible demands (or use cases) via a small number of predefined responses.   When put this way, the connection to the law of requisite variety is clear: a request made upon a function such as a service desk or project management office (PMO) is a disturbance and the way they regulate or respond to it determines the outcome.

Requisite variety and the service desk

A service desk is a good example of a system that can be standardized. Although users may initially complain about having to log a ticket instead of calling Nathan directly, in time they get used to it, and may even start to see the benefits…particularly when Nathan goes on vacation.

The law of requisite variety tells us that successful standardization requires that all possible demands made on the system be known and regulated by the V(R) term in the equation above. In the case of a service desk this is dealt with by a hierarchy of support levels. 1st level support deals with routine calls (incidents and service requests in ITIL terminology) such as system access and simple troubleshooting. Calls that cannot be handled by this tier are escalated to the 2nd and 3rd levels as needed.  The assumption here is that, between them, the three support tiers should be able to handle the majority of calls.

Slack (the K term) relates to unexploited capacity.  Although needed in order to deal with unexpected surges in demand, slack is expensive to carry when one doesn't need it.  Given this, it makes sense to incorporate such scenarios into the repertoire of standard system responses (i.e. the V(R) term) whenever possible.  One way to do this is to anticipate surges in demand and hire temporary staff to handle them. Another way is to deal with infrequent scenarios outside the system – i.e. deem them out of scope for the service desk.

Service desk standardization is thus relatively straightforward to achieve provided:

  • The kinds of calls that come in are largely predictable.
  • The work can be routinized.
  • All non-routine work – such as an application enhancement request or a demand for a new system – is dealt with outside the system via (say) a change management process.

All this will be quite unsurprising and obvious to folks working in corporate IT. Now  let’s see what happens when we apply the law to a more complex system.

Requisite variety and the PMO

Many corporate IT leaders see the establishment of a PMO as a way to control costs and increase efficiency of project planning and execution.   PMOs attempt to do this by putting in place governance mechanisms. The underlying cause-effect assumption is that if appropriate rules and regulations are put in place, project execution will necessarily improve.  Although this sounds reasonable, it often does not work in practice: according to this article, a significant fraction of PMOs fail to deliver on the promise of improved project performance. Consider the following points quoted directly from the article:

  • “50% of project management offices close within 3 years (Association for Project Mgmt)”
  • “Since 2008, the correlated PMO implementation failure rate is over 50% (Gartner Project Manager 2014)”
  • “Only a third of all projects were successfully completed on time and on budget over the past year (Standish Group’s CHAOS report)”
  • “68% of stakeholders perceive their PMOs to be bureaucratic     (2013 Gartner PPM Summit)”
  • “Only 40% of projects met schedule, budget and quality goals (IBM Change Management Survey of 1500 execs)”

The article goes on to point out that the main reason for the statistics above is that there is a gap between what a PMO does and what the business expects it to do. For example, according to the Gartner review quoted in the article, over 60% of the stakeholders surveyed believe their PMOs are overly bureaucratic.  I can't vouch for the veracity of the numbers here as I cannot find the original paper. Nevertheless, anecdotal evidence (via various articles and informal conversations) suggests that a significant number of PMOs fail.

There is a curious contradiction between the case of the service desk and that of the PMO. In the former, process and methodology seem to work whereas in the latter they don’t.

Why?

The answer, as you might suspect, has to do with variety.  Projects and service requests are very different beasts. Among other things, they differ in:

  • Duration: A project typically runs over many months whereas a service request has a lifetime of days.
  • Technical complexity: A project involves many (initially ill-defined) technical tasks that have to be coordinated and whose outputs have to be integrated.  A service request typically consists of one (or a small number) of well-defined tasks.
  • Social complexity: A project involves many stakeholder groups, with diverse interests and opinions. A service request typically involves considerably fewer stakeholders, with limited conflicts of opinions/interests.

It is not hard to see that these differences increase variety in projects compared to service requests. The reason that standardization (usually) works for service desks but (often) fails for PMOs is that PMOs are subjected to a greater variety of disturbances than service desks.

The key point is that the increased variety in the case of the PMO precludes standardisation.  As the law of requisite variety tells us, there are two ways to deal with variety: regulate it or adapt to it. Most PMOs take the regulation route, leading to over-regulation and outcomes that are less than satisfactory. This is exactly what is reflected in the complaint about PMOs being overly bureaucratic. The simple and obvious solution is for PMOs to be more flexible – specifically, they must be able to adapt to the ever-changing demands made upon them by their organisations' projects.  In terms of the law of requisite variety, PMOs need to have the capacity to change the system response, V(R), on the fly. In practice this means recognising the uniqueness of requests by avoiding reflex, cookie cutter responses that characterise bureaucratic PMOs.

Wrapping up

The law of requisite variety is a general principle that applies to any regulated system.  In this post I applied the law to two areas of enterprise IT – service management and project governance – and  discussed why standardization works well  for the former but less satisfactorily for the latter. Indeed, in view of the considerable differences in the duration and complexity of service requests and projects, it is unreasonable to expect that standardization will work well for both.  The key takeaway from this piece is therefore a simple one: those who design IT functions should pay attention to the variety that the functions will have to cope with, and bear in mind that standardization works well only if variety is known and limited.

Written by K

December 12, 2016 at 9:00 pm
