Eight to Late

Sensemaking and Analytics for Organizations

Archive for the ‘Data Science’ Category

A gentle introduction to logistic regression and lasso regularisation using R

with 2 comments

In this day and age of artificial intelligence and deep learning, it is easy to forget that simple algorithms can work well for a surprisingly large range of practical business problems.  And the simplest place to start is with the granddaddy of data science algorithms: linear regression and its close cousin, logistic regression. Indeed, in his acclaimed MOOC and accompanying textbook, Yaser Abu-Mostafa spends a good portion of his time talking about linear methods, and with good reason too: linear methods are not only a good way to learn the key principles of machine learning, they can also be remarkably helpful in zeroing in on the most important predictors.

My main aim in this post is to provide a beginner level introduction to logistic regression using R and also introduce LASSO (Least Absolute Shrinkage and Selection Operator), a powerful feature selection technique that is very useful for regression problems. Lasso is essentially a regularisation method. If you’re unfamiliar with the term, think of it as a way to reduce overfitting by using less complicated functions (and if that means nothing to you, check out my prelude to machine learning).  One way to do this is to toss out variables that add little predictive value.  As we’ll discuss later, this can be done manually by examining the p-values of coefficients and discarding those variables whose coefficients are not significant. However, this can become tedious for classification problems with many independent variables.  In such situations, lasso offers a neat way to model the dependent variable while automagically selecting significant variables by shrinking the coefficients of unimportant predictors to zero.  All this without having to mess around with p-values or obscure information criteria. How good is that?

Why not linear regression?

In linear regression one attempts to model a dependent variable (i.e. the one being predicted) using the best straight line fit to a set of predictor variables.  The best fit is usually taken to be the one that minimises the sum of squared differences between the actual and predicted values of the dependent variable (or, equivalently, the root mean square error). One can think of logistic regression as the equivalent of linear regression for a classification problem.  In what follows we’ll look at binary classification – i.e. a situation where the dependent variable takes on one of two possible values (Yes/No, True/False, 0/1 etc.).

First up, you might be wondering why one can’t use linear regression for such problems. The main reason is that classification problems are about determining class membership rather than predicting variable values, and linear regression is more naturally suited to the latter than the former. One could, in principle, use linear regression for situations where there is a natural ordering of categories like High, Medium and Low for example. However, one then has to map sub-ranges of the predicted values to categories. Moreover, since predicted values are potentially unbounded (in data as yet unseen) there remains a degree of arbitrariness associated with such a mapping.

Logistic regression sidesteps the aforementioned issues by modelling class probabilities instead.  Any input to the model yields a number lying between 0 and 1, representing the probability of class membership. One is still left with the problem of determining the threshold probability, i.e. the probability at which the category flips from one to the other.  By default this is set to p=0.5, but in reality it should be settled based on how the model will be used.  For example, for a marketing model that identifies potentially responsive customers, the threshold for a positive event might be set low (much less than 0.5) because the client does not really care about mailouts going to a non-responsive customer (the negative event). Indeed they may be more than OK with it, as there’s always a chance – however small – that a non-responsive customer will actually respond.  As an opposing example, the cost of a false positive would be high in a machine learning application that grants access to sensitive information. In this case, one might want to set the threshold probability to a value closer to 1, say 0.9 or even higher. The point is, setting an appropriate threshold probability is a business issue, not a technical one.
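To make this concrete, here’s a minimal sketch using made-up labels and probabilities (not the dataset we’ll analyse below). A low threshold catches more of the positive class at the cost of more false positives, while a high threshold does the reverse:

#made-up class labels and predicted probabilities, purely for illustration
set.seed(1)
actual <- factor(sample(c("neg","pos"),100,replace=TRUE,prob=c(0.7,0.3)))
prob <- ifelse(actual=="pos",rbeta(100,4,2),rbeta(100,2,4))
#classify at two different thresholds and compare confusion matrices
for (threshold in c(0.3,0.7)) {
  predicted <- ifelse(prob>threshold,"pos","neg")
  cat("threshold =",threshold,"\n")
  print(table(pred=predicted,true=actual))
}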

Logistic regression in brief

So how does logistic regression work?

For the discussion let’s assume that the outcome (predicted variable) and predictors are denoted by Y and X respectively and the two classes of interest are denoted by + and – respectively.  We wish to model the conditional probability that the outcome Y is +, given that the input variables (predictors) are X. The conditional probability is denoted by p(Y=+|X)   which we’ll abbreviate as p(X) since we know we are referring to the positive outcome Y=+.

As mentioned earlier, we are after the probability of class membership so we must ensure that the hypothesis function (a fancy word for the model) always lies between 0 and 1. The function assumed in logistic regression is:

p(X) = \dfrac{e^{\beta_0+\beta_1 X}}{1+e^{\beta_0 + \beta_1 X}} .....(1)

You can verify that p(X) does indeed lie between 0 and 1 as X varies from -\infty to \infty.  Typically, however, the values of X that make sense are bounded, as in the example (stolen from Wikipedia) shown in Figure 1. The figure also illustrates the typical S-shaped curve characteristic of logistic regression.

Figure 1: Logistic function

As an aside, you might be wondering where the name logistic comes from. An equivalent way of expressing the above equation is:

\log(\dfrac{p(X)}{1-p(X)}) = \beta_0+\beta_1 X .....(2)

The quantity on the left is the logarithm of the odds. So, the model is a linear regression of the log-odds, sometimes called logit, and hence the name logistic.
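If you’d like to see this for yourself, here’s a quick sketch (with made-up values for \beta_0 and \beta_1) that plots the S-shaped curve of equation (1) and confirms that the log-odds of equation (2) is a straight line in X:

#made-up coefficients, purely for illustration
beta_0 <- -2
beta_1 <- 0.5
X <- seq(-10,20,by=0.1)
#equation (1): probability of the positive class
p <- exp(beta_0+beta_1*X)/(1+exp(beta_0+beta_1*X))
range(p) #stays strictly between 0 and 1
#plot the S-shaped curve and the linear log-odds side by side
par(mfrow=c(1,2))
plot(X,p,type="l",main="logistic function")
plot(X,log(p/(1-p)),type="l",main="log odds (a straight line)")
par(mfrow=c(1,1))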

The problem is to find the values of \beta_0 and \beta_1 that result in a p(X) that most accurately classifies all the observed data points – that is, those that belong to the positive class have a probability as close as possible to 1 and those that belong to the negative class have a probability as close as possible to 0. One way to frame this problem is to say that we wish to maximise the product of these probabilities, often referred to as the likelihood:

\displaystyle \prod_{i:Y_i=+} p(X_{i}) \prod_{j:Y_j=-}(1-p(X_{j}))

where \prod denotes a product, the first taken over the positively classed points (indexed by i) and the second over the negatively classed points (indexed by j). This approach, called maximum likelihood estimation, is quite common in many machine learning settings, especially those involving probabilities.

It should be noted that in practice one works with the log likelihood because it is easier to work with mathematically. Moreover, one minimises the negative  log likelihood which, of course, is the same as maximising the log likelihood.  The quantity one minimises is thus:

L = - \displaystyle\log ( {\prod_{i:Y_i=+} p(X_{i}) \prod_{j:Y_j=-}(1-p(X_{j}))}).....(3)

However, these are technical details that I mention only for completeness. As you will see next, they have little bearing on the practical use of logistic regression.
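That said, if you’re curious about what the quantity in equation (3) looks like in code, here’s a small sketch (with a made-up dataset and coefficients) that computes it directly – it’s the sort of calculation that glm does, far more cleverly, under the hood:

#made-up data: a single predictor X and class labels Y
set.seed(1)
X <- rnorm(50)
Y <- ifelse(X+rnorm(50)>0,"+","-")
#negative log likelihood of equation (3) for given beta_0 and beta_1
neg_log_lik <- function(beta_0,beta_1) {
  p <- exp(beta_0+beta_1*X)/(1+exp(beta_0+beta_1*X))
  -(sum(log(p[Y=="+"]))+sum(log(1-p[Y=="-"])))
}
#a poor guess versus a better one - the better fit should give a smaller value
neg_log_lik(0,0)
neg_log_lik(0,2)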

Logistic regression in R – an example

In this example, we’ll use the logistic regression option implemented within the glm function that comes with the base R installation. This function fits a class of models collectively known as generalized linear models. We’ll apply the function to the Pima Indian Diabetes dataset that comes with the mlbench package. The code is quite straightforward – particularly if you’ve read earlier articles in my “gentle introduction” series – so I’ll just list the code below  noting that the logistic regression option is invoked by setting family=”binomial”  in the glm function call.

Here we go (a pdf listing of the code can be found here):

#set working directory if needed (modify path as needed)
#setwd("C:/Users/Kailash/Documents/logistic")
#load required library
library(mlbench)
#load Pima Indian Diabetes dataset
data("PimaIndiansDiabetes")
#set seed to ensure reproducible results
set.seed(42)
#split into training and test sets
PimaIndiansDiabetes[,”train”] <- ifelse(runif(nrow(PimaIndiansDiabetes))<0.8,1,0)
#separate training and test sets
trainset <- PimaIndiansDiabetes[PimaIndiansDiabetes$train==1,]
testset <- PimaIndiansDiabetes[PimaIndiansDiabetes$train==0,]
#get column index of train flag
trainColNum <- grep("train",names(trainset))
#remove train flag column from train and test sets
trainset <- trainset[,-trainColNum]
testset <- testset[,-trainColNum]
#get column index of predicted variable in dataset
typeColNum <- grep("diabetes",names(PimaIndiansDiabetes))
#build model
glm_model <- glm(diabetes~.,data = trainset, family = binomial)
summary(glm_model)
Call:
glm(formula = diabetes ~ ., family = binomial, data = trainset)
<<output edited>>
Coefficients:
            Estimate  Std. Error z value Pr(>|z|)
(Intercept)-8.1485021 0.7835869 -10.399  < 2e-16 ***
pregnant    0.1200493 0.0355617   3.376  0.000736 ***
glucose     0.0348440 0.0040744   8.552  < 2e-16 ***
pressure   -0.0118977 0.0057685  -2.063  0.039158 *
triceps     0.0053380 0.0076523   0.698  0.485449
insulin    -0.0010892 0.0009789  -1.113  0.265872
mass        0.0775352 0.0161255   4.808  1.52e-06 ***
pedigree    1.2143139 0.3368454   3.605  0.000312 ***
age         0.0117270 0.0103418   1.134  0.256816
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#predict probabilities on testset
#type="response" gives predicted probabilities (the default, type="link", gives log-odds)
glm_prob <- predict.glm(glm_model,testset[,-typeColNum],type="response")
#which classes do these probabilities refer to? What are 1 and 0?
contrasts(PimaIndiansDiabetes$diabetes)
    pos
neg 0
pos 1
#make predictions
##…first create vector to hold predictions (we know 0 refers to neg now)
glm_predict <- rep("neg",nrow(testset))
glm_predict[glm_prob>.5] <- "pos"
#confusion matrix
table(pred=glm_predict,true=testset$diabetes)
pred  neg pos
        neg    90 22
        pos     8 33
#accuracy
mean(glm_predict==testset$diabetes)
[1] 0.8039216


Although this seems pretty good, we aren’t quite done because there is an issue that is lurking under the hood. To see this, let’s examine the information output from the model summary, in particular the coefficient estimates (i.e. estimates for \beta) and their significance. Here’s a summary of the information contained in the table:

  • Column 2 in the table lists coefficient estimates.
  • Column 3 lists the standard error of the estimates (the larger the standard error, the less confident we are about the estimate).
  • Column 4 lists the z statistic, which is the coefficient estimate (column 2) divided by its standard error (column 3), and
  • The last column (Pr(>|z|)) lists the p-value, which is the probability of obtaining an estimate at least as extreme as the one listed, assuming the predictor actually has no effect. In essence, the smaller the p-value, the more significant the estimate is likely to be (the sketch below shows how to pull these out programmatically).
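Incidentally, if you’d rather not eyeball the table, the p-values can be pulled straight out of the summary object. A minimal sketch:

#extract the coefficient table from the model summary
coef_table <- summary(glm_model)$coefficients
#keep predictors whose p-values are below 0.05, dropping the intercept
signif_vars <- rownames(coef_table)[coef_table[,"Pr(>|z|)"]<0.05]
setdiff(signif_vars,"(Intercept)")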

From the table we can conclude that only 4 predictors are significant – pregnant, glucose, mass and pedigree (and possibly a fifth – pressure). The other variables have little predictive power and worse, may contribute to overfitting.  They should, therefore, be eliminated and we’ll do that in a minute. However, there’s an important point to note before we do so…

In this case we have only 8 predictors, so we are able to identify the significant ones by a manual inspection of p-values.  As you can well imagine, such a process will quickly become tedious as the number of predictors increases. Wouldn’t it be nice if there were an algorithm that could somehow automatically shrink the coefficients of these variables or (better!) set them to zero altogether?  It turns out that this is precisely what lasso and its close cousin, ridge regression, do.

Ridge and Lasso

Recall that the values of the logistic regression coefficients \beta_0 and \beta_1 are found by minimising the negative log likelihood described in equation (3).  Ridge and lasso regularisation work by adding a penalty term to this quantity.  In the case of ridge regression, the penalty term is the sum of the squares of the coefficients and in the case of lasso, it is the sum of their absolute values (remember, \beta_1 is a vector, with as many components as there are predictors).  The quantity to be minimised in the two cases is thus:

L +\lambda \sum \beta_1^2.....(4) – for ridge regression,

and

L +\lambda \sum |\beta_1|.....(5) – for lasso regression.

Where \lambda is a free parameter which is usually selected in such a way that the resulting model minimises the out of sample error. Typically, the optimal value of \lambda is found using grid search with cross-validation, a process akin to the one described in my discussion on cost-complexity parameter  estimation in decision trees. Most canned algorithms provide methods to do this; the one we’ll use in the next section is no exception.

In the case of ridge regression, the effect of the penalty term is to shrink the coefficients, reducing the magnitude of those that contribute most to increasing L.  In contrast, in the case of lasso regression, the effect of the penalty term is to set some of these coefficients exactly to zero! This is cool because it means that lasso regression works like a feature selector that picks out the most important predictors, i.e. those that are most predictive (and have the lowest p-values).

Let’s illustrate this through an example. We’ll use the glmnet package which implements a combined version of ridge and lasso (called elastic net). Instead of minimising (4) or (5) above, glmnet minimises:

L +\lambda [(1-\alpha)\sum \beta_1^2 + \alpha\sum|\beta_1|].....(6)

where \alpha controls the “mix” of ridge and lasso regularisation, with \alpha=0 being “pure” ridge and  \alpha=1 being “pure” lasso.
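To get a feel for the role of \alpha before we tackle the real dataset, here’s a self-contained sketch on simulated data (so the numbers themselves mean nothing). Ridge should retain all the predictors, while lasso – and, to a lesser extent, the mix – will typically zero out most of the noise variables:

#load required library
library(glmnet)
#simulate 20 predictors, of which only the first 3 matter
set.seed(42)
x_sim <- matrix(rnorm(200*20),nrow=200)
y_sim <- ifelse(x_sim[,1]+x_sim[,2]-x_sim[,3]+rnorm(200)>0,1,0)
#fit ridge (alpha=0), an even mix (alpha=0.5) and lasso (alpha=1)
for (a in c(0,0.5,1)) {
  fit <- cv.glmnet(x_sim,y_sim,alpha=a,family="binomial")
  cf <- as.matrix(coef(fit,s="lambda.min"))
  cat("alpha =",a,"non-zero coefficients:",sum(cf[-1,1]!=0),"\n")
}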

Lasso regularisation using glmnet

Let’s reanalyse the Pima Indian Diabetes dataset using glmnet with \alpha=1 (pure lasso). Before diving into code, it is worth noting that glmnet:

  • does not have a formula interface, so one has to input the predictors as a matrix and the class labels as a vector.
  • does not accept categorical predictors, so one has to convert these to numeric values before passing them to glmnet.

The base R function model.matrix creates the required matrix and also converts categorical predictors to appropriate dummy variables.

Another important point to note is that we’ll use the function cv.glmnet, which automatically performs a grid search to find the optimal value of \lambda.

OK, enough said, here we go:

#load required library
library(glmnet)
#convert training data to matrix format
x <- model.matrix(diabetes~.,trainset)
#convert class to numerical variable
y <- ifelse(trainset$diabetes=="pos",1,0)
#perform grid search to find optimal value of lambda
#family= binomial => logistic regression, alpha=1 => lasso
# check docs to explore other type.measure options
cv.out <- cv.glmnet(x,y,alpha=1,family="binomial",type.measure = "mse")
#plot result
plot(cv.out)


The plot is shown in Figure 2 below:

Figure 2: Error as a function of lambda (select lambda that minimises error)

The plot shows that the log of the optimal value of lambda (i.e. the one that minimises the cross-validated error, the mean squared error in this case) is approximately -5. The exact value can be viewed by examining the variable min_lambda in the code below.


#min value of lambda
min_lambda <- cv.out$lambda.min
#regression coefficients
coef(cv.out)
10 x 1 sparse Matrix of class "dgCMatrix"
                      1
(Intercept) -4.61706681
(Intercept)  .
pregnant     0.03077434
glucose      0.02314107
pressure     .
triceps      .
insulin      .
mass         0.02779252
pedigree     0.20999511
age          .


The output shows that only those variables that we had determined to be significant on the basis of p-values have non-zero coefficients. The coefficients of all other variables have been set to zero by the algorithm! Lasso has reduced the complexity of the fitting function massively…and you are no doubt wondering what effect this  has on accuracy. Let’s see by running the model against our test data:


#get test data
x_test <- model.matrix(diabetes~.,testset)
#predict probabilities on the test set using the lasso model (s="lambda.min" picks the optimal lambda)
lasso_prob <- predict(cv.out,newx = x_test,s = "lambda.min",type="response")
#translate probabilities to predictions
lasso_predict <- rep("neg",nrow(testset))
lasso_predict[lasso_prob>.5] <- "pos"
#confusion matrix
table(pred=lasso_predict,true=testset$diabetes)
pred  neg pos
   neg 92 22
  pos  6 33
#accuracy
mean(lasso_predict==testset$diabetes)
[1] 0.8169935


This is almost exactly what we got with the more complex model – in fact, it is marginally better. So, we get essentially the same out-of-sample accuracy as we did before, and we do so using a far simpler function (4 predictors with non-zero coefficients) than the original one (all 8 predictors). What this means is that the simpler function does at least as good a job fitting the signal in the data as the more complicated one.  The bias-variance tradeoff tells us that the simpler function should be preferred because it is less likely to overfit the training data.

Paraphrasing William of Ockham: all other things being equal, a simple hypothesis should be preferred over a complex one.

Wrapping up

In this post I have tried to provide a detailed introduction to logistic regression, one of the simplest (and oldest) classification techniques in the machine learning practitioner’s arsenal. Despite its simplicity (or I should say, because of it!) logistic regression works well for many business applications which often have a simple decision boundary. Moreover, because of its simplicity it is less prone to overfitting than flexible methods such as decision trees. Further, as we have shown, variables that contribute to overfitting can be eliminated using lasso (or ridge) regularisation, without compromising out-of-sample accuracy. Given these advantages and its inherent simplicity, it isn’t surprising that logistic regression remains a workhorse for data scientists.

Written by K

July 11, 2017 at 10:00 pm

A prelude to machine learning

with 6 comments

What is machine learning?

The term machine learning gets a lot of airtime in the popular and trade press these days. As I started writing this article, I did a quick search for recent news headlines that contained this term. Here are the top three results with datelines within three days of the search:

http://venturebeat.com/2017/02/01/beyond-the-gimmick-implementing-effective-machine-learning-vb-live/

http://www.infoworld.com/article/3164249/artificial-intelligence/new-big-data-tools-for-machine-learning-spring-from-home-of-spark-and-mesos.html

http://www.infoworld.com/article/3163525/analytics/review-the-best-frameworks-for-machine-learning-and-deep-learning.html

The truth about hype usually tends to be quite prosaic and so it is in this case. Machine learning, as Professor Yaser Abu-Mostafa  puts it, is simply about “learning from data.”  And although the professor is referring to computers, this is so for humans too – we learn through patterns discerned from sensory data. As he states in the first few lines of his wonderful (but mathematically demanding!) book entitled, Learning From Data:

If you show a picture to a three-year-old and ask if there’s a tree in it, you will likely get a correct answer. If you ask a thirty year old what the definition of a tree is, you will likely get an inconclusive answer. We didn’t learn what a tree is by studying a [model] of what trees [are]. We learned by looking at trees. In other words, we learned from data.

In other words, the three-year-old forms a model of what constitutes a tree through a process of discerning a common pattern among all the objects that grown-ups around her label “trees” (the data). She can then “predict” that something is (or is not) a tree by applying this model to new instances presented to her.

This is exactly what happens in machine learning: the computer (or more correctly, the algorithm) builds a predictive model of a variable (like “treeness”) based on patterns it discerns in data.  The model can then be applied to predict the value of the variable (e.g. is it a tree  or not) in new instances.

With that said for an introduction, it is worth contrasting this machine-driven process of model building with the traditional approach of building mathematical models to predict phenomena as in, say,  physics and engineering.

What are models good for?

Physicists and engineers model phenomena using physical laws and mathematics. The aim of such modelling is both to understand and predict natural phenomena.  For example, a physical law such as Newton’s Law of Gravitation is itself a model – it helps us understand how gravity works and make predictions about (say) where Mars is going to be six months from now.  Indeed, all theories and laws of physics are but models that have wide applicability.

(Aside: Models are typically expressed via differential equations.  Most differential equations are hard to solve analytically (or exactly), so scientists use computers to solve them numerically.  It is important to note that in this case computers are used as calculation tools, they play no role in model-building.)

As mentioned earlier, the role of models in the sciences is twofold – understanding and prediction. In contrast, in machine learning the focus is usually on prediction rather than understanding.  The predictive successes of machine learning have led certain commentators to claim that scientific theory building is obsolete and science can advance by crunching data alone.  Such claims are overblown, not to mention hubristic, for although a data scientist may be able to predict with accuracy, he or she may not be able to tell you why a particular prediction is obtained. This lack of understanding can mislead and can even have harmful consequences, a point that’s worth unpacking in some detail…

Assumptions, assumptions

A model of a real world process or phenomenon is necessarily a simplification. This is essentially because it is impossible to isolate a process or phenomenon from the rest of the world. As a consequence it is impossible to know for certain that the model one has built has incorporated all the interactions that influence the process / phenomenon of interest. It is quite possible that potentially important variables have been overlooked.

The selection of variables that go into a model is based on assumptions. In the case of model building in physics, these assumptions are made upfront and are thus clear to anybody who takes the trouble to read the underlying theory. In machine learning, however, the assumptions are harder to see because they are implicit in the data and the algorithm. This can be a problem when data is biased or an algorithm opaque.

Problems of bias and opacity become more acute as datasets increase in size and algorithms become more complex, especially when applied to social issues that have serious human consequences. I won’t go into this here, but for examples the interested reader may want to have a look at Cathy O’Neil’s book, Weapons of Math Destruction, or my article on the dark side of data science.

As an aside, I should point out that although assumptions are usually obvious in traditional modelling, they are often overlooked out of sheer laziness or, more charitably, lack of awareness. This can have disastrous consequences. The global financial crisis of 2008 can – to some extent – be blamed on the failure of trading professionals to understand assumptions behind the model that was used to calculate the value of collateralised debt obligations.

It all starts with a straight line….

Now that we’ve taken a tour of some of the key differences between model building in the old and new worlds, we are all set to start talking about machine learning proper.

I should begin by admitting that I overstated the point about opacity: there are some machine learning algorithms that are as transparent as can possibly be. Indeed, chances are you know the algorithm I’m going to discuss next, either from an introductory statistics course in university or from plotting relationships between two variables in your favourite spreadsheet.  Yea, you may have guessed that I’m referring to linear regression.

In its simplest avatar, linear regression attempts to fit a straight line to a set of data points in two dimensions. The two dimensions correspond to a dependent variable (traditionally denoted by y) and an independent variable (traditionally denoted by x).    An example of such a fitted line is shown in Figure 1.  Once such a line is obtained, one can “predict” the value of the dependent variable for any value of the independent variable.  In terms of our earlier discussion, the line is the model.


Figure 1: Linear Regression

Figure 1 also serves to illustrate that linear models are going to be inappropriate in most real world situations (the straight line does not fit the data well). But it is not so hard to devise methods to fit more complicated functions.

The important point here is that since machine learning is about finding functions that accurately predict dependent variables for as yet unknown values of the independent variables, most algorithms make explicit or implicit choices about the form of these functions.

Complexity versus simplicity

At first sight it seems a no-brainer that complicated functions will work better than simple ones. After all, if we choose a nonlinear function with lots of parameters, we should be able to fit a complex data set better than a linear function can (see Figure 2 – the complicated function fits the datapoints better than the straight line).   But there’s a catch: although the ability to fit a dataset increases with the flexibility of the fitting function, increasing complexity beyond a point will invariably reduce predictive power.  Put another way, a complex enough function may fit the known data points perfectly but, as a consequence, will inevitably perform poorly on unknown data. This is an important point so let’s look at it in greater detail.


Figure 2: Simple and complex fitting function (courtesy: Wikimedia)

Recall that the aim of machine learning is to predict values of the dependent variable for as yet unknown values of the independent variable(s).  Given a finite (and usually, very limited) dataset, how do we build a model that we can have some confidence in? The usual strategy is to partition the dataset into two subsets, one containing 60 to 80% of the data (called the training set) and the other containing the remainder (called the test set). The model is then built – i.e. an appropriate function fitted – using the training data and verified against the test data. The verification process consists of comparing the predicted values of the dependent variable with the known values for the test set.
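In R, such a split takes only a couple of lines. Here’s a minimal sketch, with df standing in for whatever dataset you happen to be working with (the 80/20 proportion is just a common choice):

#a random 80/20 train/test split (df is a placeholder data frame)
set.seed(42)
train_flag <- runif(nrow(df))<0.8
trainset <- df[train_flag,]
testset <- df[!train_flag,]
#a model would then be fitted on trainset and its predictions checked against testset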

Now, it should be intuitively clear that the more complicated the function, the better it will fit the training data.

Question: Why?

Answer: Because complicated functions have more free parameters – for example, linear functions of a single (independent) variable have two parameters (slope and intercept), quadratics have three, cubics four and so on.  The mathematician John von Neumann is believed to have said, “With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.” See this post for a nice demonstration of the literal truth of his words.

Put another way, complex functions are wrigglier than simple ones, and – by suitable adjustment of parameters – their “wriggliness” can be adjusted to fit the training data better than functions that are less wriggly. Figure 2 illustrates this point well.

This may sound like you can have your cake and eat it too: choose a complicated enough function and you can fit both the training and test data well. Not so! Keep in mind that the resulting model (fitted function) is built using the training set alone, so a good fit to the test data is not guaranteed.  In fact, it is intuitively clear that a function that fits the training data perfectly (as in Figure 2) is likely to do a terrible job on the test data.

Question: Why?

Answer:  Remember, as far as the model is concerned, the test data is unknown. Hence, the greater the wriggliness in the trained model, the less likely it is to fit the test data well. Remember, once the model is fitted to the training data, you have no freedom to tweak parameters any further.
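Here’s a small sketch that makes the point with simulated data: a straight line and a degree-9 polynomial are fitted to the same training set, and their errors compared on a held-out test set. The exact numbers will vary, but the polynomial will typically win on the training data and lose – often badly – on the test data:

#simulate data that is essentially linear plus noise
set.seed(42)
x <- runif(30,0,10)
y <- 0.5*x+rnorm(30,sd=1)
#hold out 10 of the 30 points as a test set
train <- sample(1:30,20)
#a simple model (straight line) and a complex one (degree-9 polynomial)
fit_simple <- lm(y~x,subset=train)
fit_complex <- lm(y~poly(x,9),subset=train)
#root mean square error on a given set of points
rmse <- function(fit,idx) sqrt(mean((y[idx]-predict(fit,data.frame(x=x[idx])))^2))
#training error: the complex fit typically looks better...
c(simple=rmse(fit_simple,train),complex=rmse(fit_complex,train))
#...test error: the complex fit typically does worse
c(simple=rmse(fit_simple,-train),complex=rmse(fit_complex,-train))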

This tension between simplicity and complexity of models is one of the key principles of machine learning and is called the bias-variance tradeoff. Bias here refers to the error that arises from a lack of flexibility in the model, and variance to the error that arises from the model’s sensitivity to the particular training dataset used. In general, simpler functions have greater bias and lower variance and complex functions, the opposite.  Much of the subtlety of machine learning lies in developing an understanding of how to arrive at the right level of complexity for the problem at hand – that is, how to tweak parameters so that the resulting function fits the training data just well enough so as to generalise well to unknown data.

Note: those who are curious to learn more about the bias-variance tradeoff may want to have a look at this piece.  For details on how to achieve an optimal tradeoff, search for articles on regularization in machine learning.

Unlocking unstructured data

The discussion thus far has focused primarily on quantitative or enumerable data (numbers and categories) that’s stored in a structured format – i.e. as columns and rows in a spreadsheet or database table. This is fine as far as it goes, but the fact is that much of the data in organisations is unstructured, the most common examples being text documents and audio-visual media. This data is virtually impossible to analyse computationally using the relational database technologies (such as SQL) that are commonly used by organisations.

The situation has changed dramatically in the last decade or so. Text analysis techniques that once required expensive software and high-end computers have now been implemented in open source languages such as Python and R, and can be run on personal computers.  For problems that require computing power and memory beyond that, cloud technologies make it possible to do so cheaply. In my opinion, the ability to analyse textual data is the most important advance in data technologies in the last decade or so. It unlocks a world of possibilities for the curious data analyst. Just think, all those comment fields in your survey data can now be analysed in a way that was never possible in the relational world!

There is a general impression that text analysis is hard.  Although some of the advanced techniques can take a little time to wrap one’s head around, the basics are simple enough. Yea, I really mean that – for proof, check out my tutorial on the topic.

Wrapping up

I could go on for a while. Indeed, I was planning to delve into a few algorithms of increasing complexity (from regression to trees and forests to neural nets) and then close with a brief peek at some of the more recent headline-grabbing developments like deep learning. However, I realised that such an exploration would be too long and (perhaps more importantly) defeat the main intent of this piece which is to give starting students an idea of what machine learning is about, and how it differs from preexisting techniques of data analysis. I hope I have succeeded, at least partially, in achieving that aim.

For those who are interested in learning more about machine learning algorithms, I can suggest having a look at my “Gentle Introduction to Data Science using R” series of articles. Start with the one on text analysis (link in last line of previous section) and then move on to clustering, topic modelling, naive Bayes, decision trees, random forests and support vector machines. I’m slowly adding to the list as I find the time, so please do check back again from time to time.

Note: This post is written as an introduction to the Data, Algorithms and Meaning subject that is part of the core curriculum of the Master of Data Science and Innovation program, run by the Connected Intelligence Centre at UTS. I’m coordinating the subject this semester, and will be co-teaching it with Stephan Curiskis.

Written by K

February 23, 2017 at 3:12 pm

 A gentle introduction to support vector machines using R

with 5 comments

Introduction

Most machine learning algorithms involve minimising an error measure of some kind (this measure is often called an objective function or loss function).  For example, the error measure in linear regression problems is the famous mean squared error – i.e. the average of the squared differences between the predicted and actual values. Like the mean squared error, most objective functions depend on all points in the training dataset.  In this post, I describe the support vector machine (SVM) approach which focuses instead on finding the optimal separation boundary between datapoints that have different classifications.  I’ll elaborate on what this means in the next section.

Here’s the plan in brief. I’ll begin with the rationale behind SVMs using a simple case of a binary (two class) dataset with a simple separation boundary (I’ll clarify what “simple” means in a minute).  Following that, I’ll describe how this can be generalised to datasets with more complex boundaries. Finally, I’ll work through a couple of examples in R, illustrating the principles behind SVMs. In line with the general philosophy of my “Gentle Introduction to Data Science Using R” series, the focus is on developing an intuitive understanding of the algorithm along with a practical demonstration of its use through a toy example.

The rationale

The basic idea behind SVMs is best illustrated by considering a simple case:  a set of data points that belong to one of two classes, red and blue, as illustrated in figure 1 below. To make things simpler still, I have assumed that the boundary separating the two classes is a straight line, represented by the solid green line in the diagram.  In the technical literature, such datasets are called linearly separable.


Figure 1: Linearly separable data

In the linearly separable case, there is usually a fair amount of freedom in the way a separating line can be drawn. Figure 2 illustrates this point: the two broken green lines are also valid separation boundaries. Indeed, because there is a non-zero distance between the closest points of the two categories, there are an infinite number of possible separation lines. This, quite naturally, raises the question as to whether it is possible to choose a separation boundary that is optimal.


Figure 2: Illustrating multiple separation boundaries

The short answer is, yes there is. One way to do this is to select a boundary line that maximises the margin, i.e. the distance between the separation boundary and the points that are closest to it.  Such an optimal boundary is illustrated by the black brace in Figure 3.  The really cool thing about this criterion is that the location of the separation boundary depends only on the points that are closest to it. This means that, unlike other classification methods, the classifier does not depend on any other points in the dataset. The closest points on either side of the boundary are called support vectors (they are connected to the boundary by the solid black lines in figure 3). A direct implication of this is that the fewer the support vectors, the better the generalizability of the boundary.

Figure 3: Optimal separation boundary in linearly separable case

Although the above sounds great, it is of limited practical value because real data sets are seldom (if ever) linearly separable.

So, what can we do when dealing with real (i.e. non linearly separable) data sets?

A simple approach to tackle small deviations from linear separability is to allow a small number of points (those that are close to the boundary) to be misclassified.  The number of possible misclassifications is governed by a free parameter C, which is called the cost.  The cost is essentially the penalty associated with making an error: the higher the value of C, the less likely it is that the algorithm will misclassify a point.

This approach – which is called soft margin classification – is illustrated in Figure 4. Note the points on the wrong side of the separation boundary.  We will demonstrate soft margin SVMs in the next section.  (Note:  At the risk of belabouring the obvious, the purely linearly separable case discussed in the previous para is simply a special case of the soft margin classifier.)


Figure 4: Soft margin classifier (linearly separable data)

Real life situations are much more complex and cannot be dealt with using soft margin classifiers. For example, as shown in Figure 5, one could have widely separated clusters of points that belong to the same classes. Such situations, which require the use of multiple (and nonlinear) boundaries, can sometimes be dealt with using a clever approach called the kernel trick.


Figure 5: Non-linearly separable data

The kernel trick

Recall that in the linearly separable (or soft margin) case, the SVM algorithm works by finding a separation boundary that maximises the margin, which is the distance between the boundary and the points closest to it. The distance here is the usual straight line distance between the boundary and the closest point(s). This is called the Euclidean distance in honour of the great geometer of antiquity. The point to note is that this process results in a separation boundary that is a straight line, which as Figure 5 illustrates, does not always work. In fact in most cases it won’t.

So what can we do? To answer this question, we have to take a bit of a detour…

What if we were able to generalize the notion of distance in a way that generates nonlinear separation boundaries? It turns out that this is possible. To see how, one has to first understand how the notion of distance can be generalized.

The key properties that any measure of distance must satisfy are:

  1. Non-negativity – a distance cannot be negative, a point that needs no further explanation I reckon 🙂
  2. Symmetry – that is, the distance between point A and point B is the same as the distance between point B and point A.
  3. Identity – the distance between a point and itself is zero.
  4. Triangle inequality – that is, the distance between point A and point C must be less than or equal to the sum of the distances between points A and B and points B and C (equality holds only if all three points lie along the same line).

Any mathematical object that displays the above properties is akin to a distance. Such generalized distances are called metrics and the mathematical space in which they live is called a metric space. Metrics are defined using special mathematical functions designed to satisfy the above conditions. These functions are known as kernels.

The essence of the kernel trick lies in mapping the classification problem to a  metric space in which the problem is rendered separable via a separation boundary that is simple in the new space, but complex – as it has to be – in the original one. Generally, the transformed space has a higher dimensionality, with each of the dimensions being (possibly complex) combinations of the original problem variables. However, this is not necessarily a problem because in practice one doesn’t actually mess around with transformations, one just tries different kernels (the transformation being implicit in the kernel) and sees which one does the job. The check is simple: we simply test the predictions resulting from using different kernels against a held out subset of the data (as one would for any machine learning algorithm).

It turns out that a particular function – called the radial basis function kernel  (RBF kernel) – is very effective in many cases.  The RBF kernel is essentially a Gaussian (or Normal) function with the Euclidean distance between pairs of points as the variable (see equation 1 below).   The basic rationale behind the RBF kernel is that it creates separation boundaries that tend to classify points that are close together (in the Euclidean sense) in the original space in the same way. This is reflected in the fact that the kernel decays (i.e. drops off to zero) as the Euclidean distance between points increases.

\exp (-\gamma |\mathbf{x-y}|^{2}).....(1)

The rate at which a kernel decays is governed by the parameter \gamma – the higher the value of \gamma, the more rapid the decay.  This serves to illustrate that the RBF kernel is extremely flexible….but the flexibility comes at a price – the danger of overfitting for large values of \gamma .  One should choose appropriate values of C and \gamma so as to ensure that the resulting kernel represents the best possible balance between flexibility and accuracy. We’ll discuss how this is done in practice later in this article.
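To see the decay in action, here’s a tiny sketch of the kernel in equation (1): its value drops towards zero as the distance between two points grows, and it drops faster for larger values of \gamma:

#RBF kernel as in equation (1): a Gaussian in the Euclidean distance
rbf_kernel <- function(x,y,gamma) exp(-gamma*sum((x-y)^2))
#kernel values for pairs of 2-d points at increasing distances
p0 <- c(0,0)
sapply(1:5,function(d) rbf_kernel(p0,c(d,0),gamma=0.1))
sapply(1:5,function(d) rbf_kernel(p0,c(d,0),gamma=1)) #decays much faster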

Finally, though it is probably obvious, it is worth mentioning that the separation boundaries for arbitrary kernels are also defined through support vectors as in Figure 3.  To reiterate a point made earlier, this means that a solution that has fewer support vectors is likely to be more robust than one with many. Why? Because the data points defining support vectors are ones that are most sensitive to noise- therefore the fewer, the better.

There are many other types of kernels, each with their own pros and cons. However, I’ll leave these for adventurous readers to explore by themselves.  Finally, for a much more detailed….and dare I say, better… explanation of the kernel trick, I highly recommend this article by Eric Kim.

Support vector machines in R

In this demo we’ll use the svm interface that is implemented in the e1071 R package. This interface provides R programmers access to the comprehensive libsvm library written by Chang and Lin. I’ll use two toy datasets: the famous iris dataset available with the base R package and the sonar dataset from the mlbench package. I won’t describe details of the datasets as they are discussed at length in the documentation that I have linked to. However, it is worth mentioning the reasons why I chose these datasets:

  1. As mentioned earlier, no real life dataset is linearly separable, but the iris dataset is almost so. Consequently, it is a good illustration of using linear SVMs. Although one almost never uses these in practice, I have illustrated their use primarily for pedagogical reasons.
  2. The sonar dataset is a good illustration of the benefits of using RBF kernels in cases where the dataset is hard to visualise (60 variables in this case!). In general, one would almost always use RBF (or other nonlinear) kernels in practice.

With that said, let’s get right to it. I assume you have R and RStudio installed. For instructions on how to do this, have a look at the first article in this series. The processing preliminaries – loading libraries, data and creating training and test datasets are much the same as in my previous articles so I won’t dwell on these here. For completeness, however, I’ll list all the code so you can run it directly in R or R studio (a complete listing of the code can be found here):

#set working directory if needed (modify path as needed)
setwd("C:/Users/Kailash/Documents/svm")
#load required library
library(e1071)
#load built-in iris dataset
data(iris)
#set seed to ensure reproducible results
set.seed(42)
#split into training and test sets
iris[,"train"] <- ifelse(runif(nrow(iris))<0.8,1,0)
#separate training and test sets
trainset <- iris[iris$train==1,]
testset <- iris[iris$train==0,]
#get column index of train flag
trainColNum <- grep("train",names(trainset))
#remove train flag column from train and test sets
trainset <- trainset[,-trainColNum]
testset <- testset[,-trainColNum]
#get column index of predicted variable in dataset
typeColNum <- grep("Species",names(iris))
#build model – linear kernel and C-classification (soft margin) with default cost (C=1)
svm_model <- svm(Species~ ., data=trainset, method="C-classification", kernel="linear")
svm_model
Call:
svm(formula = Species ~ ., data = trainset, method = "C-classification", kernel = "linear")
Parameters:
SVM-Type: C-classification
SVM-Kernel: linear
cost: 1
gamma: 0.25
Number of Support Vectors: 24
#training set predictions
pred_train <-predict(svm_model,trainset)
mean(pred_train==trainset$Species)
[1] 0.9826087
#test set predictions
pred_test <-predict(svm_model,testset)
mean(pred_test==testset$Species)
[1] 0.9142857


The output from the SVM model shows that there are 24 support vectors. If desired, these can be examined using the SV variable in the model – i.e. via svm_model$SV.

The test prediction accuracy indicates that the linear kernel performs quite well on this dataset, confirming that it is indeed near linearly separable. To check performance by class, one can create a confusion matrix as described in my post on random forests. I’ll leave this as an exercise for you.  Another point is that we have used a soft-margin classification scheme with a cost C=1. You can experiment with this by explicitly changing the value of C. Again, I’ll leave this as an exercise.
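If you want a head start on that last exercise, here’s a sketch that refits the model for a few values of the cost and compares test accuracies (the best value will depend on the data and the random split):

#refit the linear SVM for a range of cost values and compare test accuracy
for (c_val in c(0.01,1,100)) {
  m <- svm(Species~., data=trainset, method="C-classification", kernel="linear", cost=c_val)
  acc <- mean(predict(m,testset)==testset$Species)
  cat("cost =",c_val,"test accuracy =",round(acc,4),"\n")
}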

Before proceeding to the RBF kernel, I should mention a point that an alert reader may have noticed. The predicted variable, Species, can take on 3 values (setosa, versicolor and virginica). However, our discussion above dealt with a binary (2 valued) classification problem. This brings up the question as to how the algorithm deals with multiclass classification problems – i.e. those involving datasets with more than two classes. The libsvm algorithm (which svm uses) does this using a one-against-one classification strategy. Here’s how it works:

  1. Divide the dataset (assumed to have N classes) into N(N-1)/2 datasets that have two classes each.
  2. Solve the binary classification problem for each of these subsets
  3. Use a simple voting mechanism to assign a class to each data point.

Basically, each data point is assigned the most frequent classification it receives from all the binary classification problems it figures in.
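For the 3-class iris problem this means 3 binary classifiers, and the voting step can be illustrated in a couple of lines (the labels below are made up for illustration):

#number of pairwise (one-against-one) classifiers for N classes
N <- 3
choose(N,2)
#suppose the three binary classifiers return these labels for a single data point
votes <- c("setosa","versicolor","setosa")
#the majority vote wins
names(which.max(table(votes)))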

With that said for the unrealistic linear classifier, let’s move to the real world.  In the code below, I build SVM models using three different kernels:

  1.  Linear kernel (this is for comparison with the following 2 kernels).
  2. RBF kernel with default values for the parameters C and \gamma.
  3. RBF kernel with optimal values for C and \gamma. The optimal values are obtained using the tune.svm function (also available in e1071), which essentially builds models for multiple combinations of parameter values and selects the best.

OK, lets go:

#load required library (assuming e1071 is already loaded)
library(mlbench)
#load Sonar dataset
data(Sonar)
#set seed to ensure reproducible results
set.seed(42)
#split into training and test sets
Sonar[,"train"] <- ifelse(runif(nrow(Sonar))<0.8,1,0)
#separate training and test sets
trainset <- Sonar[Sonar$train==1,]
testset <- Sonar[Sonar$train==0,]
#get column index of train flag
trainColNum <- grep("train",names(trainset))
#remove train flag column from train and test sets
trainset <- trainset[,-trainColNum]
testset <- testset[,-trainColNum]
#get column index of predicted variable in dataset
typeColNum <- grep("Class",names(Sonar))
#build model – linear kernel and C-classification with default cost (C=1)
svm_model <- svm(Class~ ., data=trainset, method="C-classification", kernel="linear")
#training set predictions
pred_train <-predict(svm_model,trainset)
mean(pred_train==trainset$Class)
[1] 0.969697
#test set predictions
pred_test <-predict(svm_model,testset)
mean(pred_test==testset$Class)
[1] 0.6046512

I’ll leave you to examine the contents of the model. The important point to note here is that the performance of the model with the test set is quite dismal compared to the previous case. This simply indicates that the linear kernel is not appropriate here.  Let’s take a look at what happens if we use the RBF kernel with default values for the parameters:

#build model: radial kernel, default params
svm_model <- svm(Class~ ., data=trainset, method="C-classification", kernel="radial")
#print params
svm_model$cost
[1] 1
svm_model$gamma
[1] 0.01666667
#training set predictions
pred_train <-predict(svm_model,trainset)
mean(pred_train==trainset$Class)
[1] 0.9878788
#test set predictions
pred_test <-predict(svm_model,testset)
mean(pred_test==testset$Class)
[1] 0.7674419

That’s a pretty decent improvement from the linear kernel. Let’s see if we can do better by doing some parameter tuning. To do this we first invoke tune.svm and use the parameters it gives us in the call to svm:

#find optimal parameters in a specified range
tune_out <- tune.svm(x=trainset[,-typeColNum],y=trainset[,typeColNum],gamma=10^(-3:3),cost=c(0.01,0.1,1,10,100,1000),kernel="radial")
#print best values of cost and gamma
tune_out$best.parameters$cost
[1] 10
tune_out$best.parameters$gamma
[1] 0.01
#build model
svm_model <- svm(Class~ ., data=trainset, method="C-classification", kernel="radial",cost=tune_out$best.parameters$cost,gamma=tune_out$best.parameters$gamma)
#training set predictions
pred_train <-predict(svm_model,trainset)
mean(pred_train==trainset$Class)
[1] 1
#test set predictions
pred_test <-predict(svm_model,testset)
mean(pred_test==testset$Class)
[1] 0.8139535

Which is a fairly decent improvement on the un-optimised case.

Wrapping up

This brings us to the end of this introductory exploration of SVMs in R. To recap, the distinguishing feature of SVMs in contrast to most other techniques is that they attempt to construct optimal separation boundaries between different categories.

SVMs are quite versatile and have been applied to a wide variety of domains ranging from chemistry to pattern recognition. They are best used in binary classification scenarios. This brings up the question as to when SVMs are to be preferred to other binary classification techniques such as logistic regression. The honest response is, “it depends” – but a general point to keep in mind is that SVM algorithms tend to be expensive both in terms of memory and computation, issues that can start to hurt as the size of the dataset increases.

Given all the above caveats and considerations, the best way  to figure out whether an SVM approach will work for your problem may be to do what most machine learning practitioners do: try it out!

Written by K

February 7, 2017 at 8:27 pm

The dark side of data science

with 9 comments

Data scientists are sometimes blind to the possibility that the predictions of their algorithms can have unforeseen negative effects on people. Ethical or social implications are easy to overlook when one finds interesting new patterns in data, especially if they promise significant financial gains. The Centrelink debt recovery debacle, recently reported in the Australian media, is a case in point.

Here is the story in brief:

Centrelink is an Australian Government organisation responsible for administering welfare services and payments to those in need. A major challenge such organisations face is ensuring that their clients are paid no less and no more than what is due to them. This is difficult because it involves crosschecking client income details across multiple systems owned by different government departments, a process that necessarily involves many assumptions. In July 2016, Centrelink unveiled an automated compliance system that compares income self-reported by clients to information held by the taxation office.

The problem is that the algorithm is flawed: it makes strong (and incorrect!) assumptions regarding the distribution of income across a financial year and, as a consequence, unfairly penalizes a number of legitimate benefit recipients.  It is very likely that the designers and implementers of the algorithm did not fully understand the implications of their assumptions. Worse, from the errors made by the system, it appears they may not have adequately tested it either.  But this did not stop them (or, quite possibly, their managers) from unleashing their algorithm on an unsuspecting public, causing widespread stress and distress.  More on this a bit later.

Algorithms like the one described above are the subject of Cathy O’Neil’s aptly titled book, Weapons of Math Destruction.  In the remainder of this article I discuss the main themes of the book.  Just to be clear, this post is more riff than review. However, for those seeking an opinion, here’s my one-line version: I think the book should be read not only by data science practitioners, but also by those who use or are affected by their algorithms (which means pretty much everyone!).

Abstractions and assumptions

O’Neil begins with the observation that data algorithms are mathematical models of reality, and are necessarily incomplete because several simplifying assumptions are invariably baked into them. This point is important and often overlooked so it is worth illustrating via an example.

When assessing a person’s suitability for a loan, a bank will want to know whether the person is a good risk. It is impossible to model creditworthiness completely because we do not know all the relevant variables and those that are known may be hard to measure. To make up for their ignorance, data scientists typically use proxy variables, i.e. variables that are believed to be correlated with the variable of interest and are also easily measurable. In the case of creditworthiness, proxy variables might be things like gender, age, employment status, residential postcode etc.  Unfortunately many of these can be misleading, discriminatory or worse, both.

The Centrelink algorithm provides a good example of such a “double-whammy” proxy. The key variable it uses is the difference between the client’s annual income reported by the taxation office and the self-reported annual income stated by the client. A large difference is taken to be indicative of an incorrect payment and hence an outstanding debt. This simplistic assumption overlooks the fact that most affected people are not in steady jobs and therefore do not earn regular incomes over the course of a financial year (see this article by Michael Griffin, for a detailed example).  Worse, this crude proxy places an unfair burden on vulnerable individuals for whom casual and part time work is a fact of life.

Worse still, for those wrongly targeted with a recovery notice, getting the errors sorted out is not a straightforward process. This is typical of a WMD. As O’Neil states in her book, “The human victims of WMDs…are held to a far higher standard of evidence than the algorithms themselves.”  Perhaps this is because the algorithms are often opaque. But that’s a poor excuse.  This is the only technical field where practitioners are held to a lower standard of accountability than those affected by their products.

O’Neil sums it up rather nicely when she calls algorithms like the Centrelink one weapons of math destruction (WMDs).

Self-fulfilling prophecies and feedback loops

A characteristic of WMDs is that their predictions often become self-fulfilling prophecies. For example, a person denied a loan by a faulty risk model is more likely to be denied again when he or she applies elsewhere, simply because it is on their record that they have been refused credit before. This kind of destructive feedback loop is typical of a WMD.

An example that O’Neil dwells on at length is a popular predictive policing program. Designed for efficiency rather than nuanced judgment, such algorithms measure what can easily be measured and act on it, ignoring the subtle contextual factors that inform the actions of experienced officers on the beat. Worse, they can lead to actions that exacerbate the problem. For example, targeting young people of a certain demographic for stop and frisk actions can alienate them to a point where they might well turn to crime out of anger and exasperation.

As Goldratt famously said, “Tell me how you measure me and I’ll tell you how I’ll behave.”

This is not news: savvy managers have known about the dangers of managing by metrics for years. The problem is now exacerbated manyfold by our ability to implement and act on such metrics on an industrial scale, a trend that leads to a dangerous devaluation of human judgement in areas where it is most needed.

A related problem – briefly mentioned earlier – is that some of the important variables are known but hard to quantify in algorithmic terms. For example, it is known that community-oriented policing, where officers on the beat develop relationships with people in the community, leads to greater trust. The degree of trust is hard to quantify, but it is known that communities that have strong relationships with their police departments tend to have lower crime rates than similar communities that do not.  Such important but hard-to-quantify factors are typically missed by predictive policing programs.

Blackballed!

Ironically, although WMDs can cause destructive feedback loops, they are often not subjected to feedback themselves. O’Neil gives the example of algorithms that gauge the suitability of potential hires. These programs often use proxy variables such as IQ test results, personality tests etc. to predict employability. Candidates who are rejected often do not realise that they have been screened out by an algorithm. Further, it often happens that candidates who are thus rejected go on to successful careers elsewhere. However, this post-rejection information is never fed back to the algorithm because it is impossible to do so.

In such cases, the only way to avoid being blackballed is to understand the rules set by the algorithm and play according to them. As O’Neil so poignantly puts it, “our lives increasingly depend on our ability to make our case to machines.” However, this can be difficult because it assumes that a) people know they are being assessed by an algorithm and b) they have knowledge of how the algorithm works. In most hiring scenarios neither of these holds.

Just to be clear, not all data science models ignore feedback. For example, sabermetric algorithms used to assess player performance in Major League Baseball are continually revised based on the latest player stats, thereby taking into account changes in performance.

Driven by data

In recent years, many workplaces have gradually seen the introduction of data-driven efficiency initiatives. Automated rostering, based on scheduling algorithms, is an example. These algorithms are based on operations research techniques that were developed for scheduling complex manufacturing processes. Although appropriate for driving efficiency in manufacturing, these techniques are inappropriate for optimising shift work because of the effect they have on people. As O’Neil states:

Scheduling software can be seen as an extension of just-in-time economy. But instead of lawn mower blades or cell phone screens showing up right on cue, it’s people, usually people who badly need money. And because they need money so desperately, the companies can bend their lives to the dictates of a mathematical model.

She correctly observes that an “oversupply of low wage labour is the problem.” Employers know they can get away with treating people like machine parts because they have a large captive workforce. What makes this seriously scary is that vested interests can make it difficult to outlaw such exploitative practices. As O’Neil mentions:

Following [a] New York Times report on Starbucks’ scheduling practices, Democrats in Congress promptly drew up bills to rein in scheduling software. But facing a Republican majority fiercely opposed to government regulations, the chances that their bill would become law were nil. The legislation died.

Commercial interests invariably trump social and ethical issues, so it is highly unlikely that industry or government will take steps to curb the worst excesses of such algorithms without significant pressure from the general public. A first step towards this is to educate ourselves on how these algorithms work and the downstream social effects of their predictions.

Messing with your mind

There is an even more insidious way that algorithms mess with us. Hot on the heels of the recent US presidential election, there were suggestions that fake news items on Facebook may have influenced the results. Mark Zuckerberg denied this, but as Casey Newton noted in a trenchant tweet, the denial leaves Facebook in “the awkward position of having to explain why they think they drive purchase decisions but not voting decisions.”

Be that as it may, the fact is that Facebook’s own researchers have been conducting experiments to fine tune a tool they call the “voter megaphone”. Here’s what O’Neil says about it:

The idea was to encourage people to spread the word that they had voted. This seemed reasonable enough. By sprinkling people’s news feeds with “I voted” updates, Facebook was encouraging Americans – more than sixty-one million of them – to carry out their civic duty….by posting about people’s voting behaviour, the site was stoking peer pressure to vote. Studies have shown that the quiet satisfaction of carrying out a civic duty is less likely to move people than the possible judgement of friends and neighbours…The Facebook campaign started out with a constructive and seemingly innocent goal to encourage people to vote. And it succeeded…researchers estimated that their campaign had increased turnout by 340,000 people. That’s a big enough crowd to swing entire states, and even national elections.

And if that’s not scary enough, try this:

For three months leading up to the election between President Obama and Mitt Romney, a researcher at the company….altered the news feed algorithm for about two million people, all of them politically engaged. The people got a higher proportion of hard news, as opposed to the usual cat videos, graduation announcements, or photos from Disney World….[the researcher] wanted to see if getting more [political] news from friends changed people’s political behaviour. Following the election [he] sent out surveys. The self-reported results showed that voter participation in this group inched up from 64 to 67 percent.

This might not sound like much, but considering the thin margins of recent presidential elections, it could be enough to change a result.

But it’s even more insidious.  In a paper published in 2014, Facebook researchers showed that users’ moods can be influenced by the emotional content of their newsfeeds. Here’s a snippet from the abstract of the paper:

In an experiment with people who use Facebook, we test whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed. When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks.

As you might imagine, there was a media uproar, following which the lead researcher issued a clarification and Facebook officials duly expressed regret (but, as far as I know, not an apology). To be sure, advertisers have been exploiting this kind of “mind control” for years, but a public social media platform should (expect to) be held to a higher standard of ethics. Facebook has since reviewed its internal research practices, but the recent fake news affair shows that the story is to be continued.

Disarming weapons of math destruction

The Centrelink debt debacle, the Facebook mood contagion experiments and the other case studies mentioned in the book illustrate the myriad ways in which Big Data algorithms have a pernicious effect on our day-to-day lives. Quite often people remain unaware of their influence, wondering why a loan was denied or a job application didn’t go their way. Just as often, they are aware of what is happening, but are powerless to change it – shift scheduling algorithms being a case in point.

This is not how it was meant to be. Technology was supposed to make life better for all, not just the few who wield it.

So what can be done? Here are some suggestions:

  • To begin with, education is the key. We must work to demystify data science and create a general awareness of data science algorithms and how they work. O’Neil’s book is an excellent first step in this direction (although it is very thin on details of how the algorithms work)
  • Develop a code of ethics for data science practitioners. It is heartening to see that IEEE has recently come up with a discussion paper on ethical considerations for artificial intelligence and autonomous systems and ACM has proposed a set of principles for algorithmic transparency and accountability. However, I should also tag this suggestion with the warning that codes of ethics are not very effective as they can be easily violated. One has to – somehow – embed ethics in the DNA of data scientists. I believe one way to do this is through practice-oriented education in which data scientists-in-training grapple with ethical issues through data challenges and hackathons. As Wittgenstein famously said, “it is clear that ethics cannot be articulated.” Ethics must be practiced.
  • Put in place a system of reliable algorithmic audits within data science departments, particularly those that do work with significant social impact.
  • Increase transparency a) by publishing information on how algorithms predict what they predict and b) by making it possible for those affected by the algorithm to access the data used to classify them as well as their classification, how it will be used and by whom.
  • Encourage the development of algorithms that detect bias in other algorithms and correct it.
  • Inspire aspiring data scientists to build models for the good.

It is only right that the last word in this long riff should go to O’Neil, whose work inspired it. Towards the end of her book she writes:

Big Data processes codify the past. They do not invent the future. Doing that requires moral imagination, and that’s something that only humans can provide. We have to explicitly embed better values into our algorithms, creating Big Data models that follow our ethical lead. Sometimes that will mean putting fairness ahead of profit.

Excellent words for data scientists to live by.

Written by K

January 17, 2017 at 8:38 pm

Data science and sensemaking – tales from two hackathons

with 5 comments

“It isn’t that they can’t see the solution. It is that they can’t see the problem” – GK Chesterton

Introduction

Examples of vendor-generated hype about data science are not hard to find; I found one on the very first site I visited: a large technology and services vendor who, in their own words, claim their analytics solutions help organisations “engage with data to answer the toughest business questions, uncover patterns and pursue breakthrough ideas.” I’ve deliberately avoided linking to the guilty party because there are many others that spout similar rhetoric.

Unfortunately it seems to work: according to Gartner, “by 2020, predictive and prescriptive analytics will attract 40% of enterprises’ net new investment in business intelligence and analytics.” This trend is accompanied by a concomitant increase in demand for data science education, fuelled by claims that data science is “The Sexiest Job of the 21st Century.”

By and large, data science education tends to focus on algorithms and technology, but its practice involves much more. The vendor who claims that technology can help organisations grapple with the “toughest business questions” and “pursue breakthrough ideas” is singularly silent about where these questions or ideas come from. Data is meaningless without a meaningful hypothesis. The problem is that in the real world, questions and hypotheses aren’t obvious; one has to work to formulate them. As the management icon Russell Ackoff once said, “Outside of school, problems are seldom given; they have to be taken, extracted from complex situations…”

The art of taking problems is what sensemaking is all about.

Unfortunately, it is a skill that is typically ignored by data science educators.

Why?

Probably because it is hard to teach…but the good news is that it can be learnt. Like most tacit skills, sensemaking is best learnt by doing, that is, by formulating problems in real-world situations.  Before I get to that, however, let’s take a brief detour.

Real world problems are characterised by ambiguity

An important aspect of real-world problems – as opposed to classroom ones – is that they are invariably fraught with ambiguity. For example, a customer’s requirements may be vague or the available data incomplete and messy. What this means is that there is no guarantee one will be able to formulate a well-posed problem, let alone get a useful answer.   Worse, unlike a risk-based situation in which uncertainty can be quantified, one cannot even figure out the odds of success.

The human brain processes quantifiable uncertainty (aka risk) and ambiguity very differently. The former, which can be calculated, is dealt with by the prefrontal cortex which is responsible for decision making and goal-oriented thinking. Ambiguity, on the other hand, is processed by the amygdala, which deals with emotions.  The upshot of this is that ambiguity evokes an emotional response, the most common one being anxiety.

Although some people are innately better at coping with anxiety than others, it is possible to get better at it by repeatedly putting oneself in high-pressure (yet safe) situations that are ambiguous.  For data science students, hackathons provide a perfect opportunity to do this.

Ambiguity in data science – tales from two hackathons

Over the last two months, I’ve had the privilege of being a part of the Master of Data Science Innovation (MDSI) program run by the Connected Intelligence Centre at UTS.   The course director, Theresa Anderson, sees hackathons as a great way for students to learn how to handle ambiguity.  So, apart from regular coursework assignments, students are encouraged to participate in external hackathons sponsored by industry and government organisations.   This gives them opportunities to gain practical experience in formulating problems in ambiguous and high-pressure environments.

Datacake at GovHack

A few MDSI student teams participated in a GovHack event earlier this year. Here’s what William Azevedo, a member of a team that called themselves Datacake, wrote about the team’s problem formulation journey at the event:

The challenge is simple: the competitors should form teams, identify a problem and use data from government agencies from Australia and New Zealand to present a solution to the problem. Naturally, this solution should bring some benefit to the society.

I’m not sure I’d use the word simple…but the importance of problem formulation comes through quite clearly. Here’s how he and his team went about it:

 As a starting point, our team published an online survey to understand how safe people feel when walking on the streets, especially at night. As we didn’t have much time, we spread the message via social networks. In a couple of hours, we received 44 answers. It gave us enough information to back our idea.

Notice the process used in defining the problem – the team realised they did not know enough to define a meaningful problem so they went and got relevant data. Following this:

Our team analysed the answers of the survey, engaged in passionate discussions, took tips from the mentors, had lots of coffee and designed some cool diagrams on the blackboard.

…and then his description of the Aha moment when a good idea emerged:

Then the magic happened. We had this idea of merging information about crime, demographics, weather, land zoning and street illumination to provide a map of the safe and unsafe areas within a suburb.

An important point is that sensemaking is best done collaboratively. Since the problem is ambiguous or even undefined (as in this case), no individual has privileged access to the “truth.” It is therefore important to bring diverse perspectives to bear on the problem. Indeed, sensemaking may be thought of as collaborative problem formulation and solving. In view of this, it is interesting to hear what other members of Team Datacake had to say about their problem formulation process. Here’s a comment from Anthony So:

During the whole weekend we really forced ourselves to go deep and asked “Why is it happening? Why is it happening? Why is it happening?” every time we found an interesting pattern. We really wanted to understand the true root causes of those accidents. We didn’t want to stay at a descriptive level. We knew the answers were behavioural. We knew there were multiple problems and therefore require different answers and solutions. We did different techniques to do so: machine learning, stats, data visualisation. It didn’t matter which we used the only important point was how can we get to answers of those questions.

The specific area they looked at was pedestrian safety. They found that obvious variables, such as driver fatigue and hazards, were not significant, so they started looking for other potential factors. Here’s how Anthony put it:

For instance we built a classification model on the severity of the accidents involving children but we didn’t use it to make predictions. We used it to identify the important features (and unimportant) for those cases. We found out that some of the variables related to the environment (Primary_hazardous_feature, Surface_condition, Weather…) and to the drivers (Fatigue_involved_in_crash…) were not important. This gave us a good indication that those accidents are mostly related directly to the behaviour of the children. So we kept diving further and further and found 3 postcodes with higher numbers of accidents than others. We focused on those 3 areas and we kept going deeper and deeper…

In the end Datacake came up with a few suggestions for improving pedestrian safety. They were awarded a prize for their efforts, so the problem they formulated and solved was clearly valuable to the sponsors.

Peppermoney Hackathon

A couple of weekends ago, Pepper Money, Australia’s largest non-bank lender, sponsored a day-long internal hackathon for MDSI students, with a hefty winner-take-all prize as an incentive. The challenge was quite open-ended, and had to do with helping the organisation develop a consistent brand voice. Participants were given a small corpus of text files from the organisation’s public and social media sites and were given very general guidelines on how to proceed. Details were left entirely to the teams.

As one might expect, most teams spent the first few hours struggling to define a relevant and tractable problem – relevance being paramount for the client and tractability for the teams. Being a mentor at the event, I was able to observe how different teams handled this. Among other things, I was particularly impressed by how some teams with very little text mining experience were able to – in a few hours – come up with a good problem, an approach to solve it…and, most importantly, make decent progress by day’s end.

I won’t go into details except to say that the approaches were diverse, ranging from the somewhat philosophical to the very technical.

I was amazed at the diversity of solutions the groups came up with, and so were the other mentors and the sponsor. Blair Hudson, Innovation Portfolio Manager at Pepper Money, summed the day up very well when he said:

#PepxUTS was our first hackathon event, challenging students to build data science solutions in a day to allow everyone at Pepper to communicate using a consistent brand voice. Our Co-Group CEOs both joined in for judging and awarded the winners. It was a rewarding day for all involved

(For some vignettes from the day, check out the #PepxUTS hashtag on Twitter.)

The day’s experiences left me ever more convinced that hackathons are an excellent vehicle for learning and demonstrating the practical utility of sensemaking skills.

Wrapping up

The two case studies highlight the benefits of sensemaking skills, both for students and organisations.  On the one hand,  students who participated got valuable experience in formulating problems collaboratively in high-pressure, high-ambiguity situations. This is a skill that cannot be learnt in classrooms, MOOCs or even in online data challenges (like Kaggle) where problems tend to be clearly defined. On the other hand, sponsoring organisations have benefited from new insights into longstanding problems.

Finally, it should be clear that although I’ve focused on educational settings,  what I’ve said for students applies to organisational settings too: there’s nothing to stop organisations from using hackathons as a means to help their employees learn sensemaking skills.

To conclude, the main point I want to make is that the most important situations we encounter at work (and even in our personal lives) are usually fraught with ambiguity. Our first reaction is to jump into problem solving mode because it feels like the right thing to do. In reality, one is generally better off stepping back and taking the time to think the situation through, preferably with a group of diversely skilled individuals. All too often this sensemaking step is neglected, and teams end up solving an irrelevant problem.

To paraphrase Chesterton, in order to see the right solution, one must first see the right problem.

Acknowledgements

Many thanks to Blair Hudson, William Azevedo and Anthony So for their contributions to this piece.

Written by K

October 18, 2016 at 6:23 pm

A gentle introduction to random forests using R

with 6 comments

Introduction

In a previous post, I described how decision tree algorithms work and demonstrated their use via the rpart library in R. Decision trees work by splitting a dataset recursively. That is, subsets arising from a split are further split until a predetermined termination criterion is reached.  At each step, a split is made based on the independent variable that results in the largest possible reduction in heterogeneity of the dependent  variable.

(Note:  readers unfamiliar with decision trees may want to read that post before proceeding)

The main drawback of decision trees is that they are prone to overfitting.   The  reason for this is that trees, if grown deep, are able to fit  all kinds of variations in the data, including noise. Although it is possible to address this partially by pruning, the result often remains less than satisfactory. This is because the algorithm makes a locally optimal choice at each split without any regard to whether the choice made is the best one overall.  A poor split made in the initial stages can thus doom the model, a problem that cannot be fixed by post-hoc pruning.

In this post I describe random forests, a tree-based algorithm that addresses the above shortcoming of decision trees. I’ll first describe the intuition behind the algorithm via an analogy and then do a demo using the R randomForest library.

Motivating random forests

One of the reasons for the popularity of decision trees is that they reflect the way humans make decisions: by weighing up options at each stage and choosing the best one available.  The analogy is particularly useful because it also suggests how decision trees can be improved.

One of the lifelines in the game show, Who Wants to be A Millionaire, is “Ask The Audience” wherein a contestant can ask the audience to vote on the answer to a question.  The rationale here is that the majority response from a large number of independent decision makers is more likely to yield a correct answer than one from a randomly chosen person.  There are two factors at play here:

  1. People have different experiences and will therefore draw upon different “data” to answer the question.
  2. People have different knowledge bases and preferences and will therefore draw upon different “variables” to make their choices at each stage in their decision process.

Taking a cue from the above, it seems reasonable to build many decision trees using:

  1. Different sets of training data.
  2. Randomly selected subsets of variables at each split of every decision tree.

Predictions can then be made by taking the majority vote over all trees (for classification problems) or averaging results over all trees (for regression problems).  This is essentially how the random forest algorithm works.

The net effect of the two strategies is to reduce overfitting by a) averaging over trees created from different samples of the dataset and b) decreasing the likelihood of a small set of strong predictors dominating the splits.  The price paid is reduced interpretability as well as increased computational complexity. But then, there is no such thing as a free lunch.
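To see why this “wisdom of the crowd” effect works at all, here’s a small simulation in R (my own illustration, not part of the original argument). It compares the accuracy of a single voter who is right 60% of the time with that of a majority vote over 501 such voters, under the assumption that the voters are independent:

#toy illustration of the ensemble principle: majority vote over independent voters
#(assumed: each voter is independently correct with probability p)
set.seed(42)
p <- 0.6            #accuracy of an individual voter
B <- 501            #number of voters (odd, to avoid ties)
n_questions <- 10000
#matrix of votes: 1 = correct, 0 = incorrect
votes <- matrix(rbinom(B*n_questions, 1, p), nrow = n_questions)
#accuracy of a single voter
mean(votes[,1])
#accuracy of the majority vote
mean(rowSums(votes) > B/2)

With independent voters the majority is right virtually every time. The catch, of course, is independence – and the two strategies above (different training sets and random predictor subsets) are precisely what random forests use to approximate it.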

The mechanics of the algorithm

Although we will not delve into the mathematical details of the algorithm, it is important to understand how two points made above are implemented in the algorithm.

Bootstrap aggregating… and a (rather cool) error estimate

A key feature of the algorithm is the use of multiple datasets for training individual decision trees.  This is done via a neat statistical trick called bootstrap aggregating (also called bagging).

Here’s how bagging works:

Assume you have a dataset of size N.  From this you create a sample (i.e. a subset) of size n (n less than or equal to N) by choosing n data points randomly with replacement.  “Randomly” means every point in the dataset is equally likely to be chosen and   “with replacement” means that a specific data point can appear more than once in the subset. Do this M times to create M equally-sized samples of size n each.  It can be shown that this procedure, which statisticians call bootstrapping, is legit when samples are created from large datasets – that is, when N is large.
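Here’s a minimal sketch of the sampling step in R, purely to illustrate the idea (the randomForest library does all of this internally; I’ve taken n equal to N, which is the usual choice):

#illustrative sketch of bagging - create M bootstrapped samples of row indices
set.seed(42)
N <- 214   #size of the Glass dataset used later in this post
M <- 25    #number of bagged samples
#each sample is a vector of N row indices drawn with replacement
bagged_indices <- lapply(1:M, function(i) sample(1:N, N, replace = TRUE))
#fraction of distinct data points appearing in the first sample
length(unique(bagged_indices[[1]]))/N
#this comes out close to 1 - exp(-1), i.e. roughly two-thirds (see below)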

Because a bagged sample is created by selection with replacement, there will generally be some points that are not selected.  In fact, it can be shown that, on the average, each sample will use about two-thirds of the available data points. This gives us a clever way to estimate the error as part of the process of model building.

Here’s how:

For every data point, obtain predictions from the trees for which the point was out of bag. From the result mentioned above, this will yield approximately M/3 predictions per data point (because each point is out of bag in roughly a third of the trees).  Take the majority vote of these predictions as the predicted value for the data point. One can do this for the entire dataset. From these out of bag predictions for the whole dataset, we can estimate the overall error by computing the classification error (count of incorrect predictions divided by N) for classification problems or the root mean squared error for regression problems.  This means there is no need to have a separate test data set, which is kind of cool.  However, if you have enough data, it is worth holding out some data for use as an independent test set. This is what we’ll do in the demo later.
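For the curious, here’s a rough sketch of how one might implement bagged trees with an out of bag error estimate by hand, using rpart on the Glass dataset that we’ll meet in the demo. This is purely to illustrate the idea – it ignores the random predictor selection discussed next, and the randomForest library does all of this for us far more efficiently:

#hand-rolled bagging with an out of bag (OOB) error estimate - illustration only
library(rpart)
library(mlbench)
data("Glass")
set.seed(42)
N <- nrow(Glass)
M <- 100  #number of bagged trees
#store each tree's predictions for the points that were out of bag for it
oob_pred <- matrix(NA_character_, nrow = N, ncol = M)
for (m in 1:M) {
  in_bag <- sample(1:N, N, replace = TRUE)
  out_bag <- setdiff(1:N, in_bag)
  tree <- rpart(Type ~ ., data = Glass[in_bag, ], method = "class")
  oob_pred[out_bag, m] <- as.character(predict(tree, Glass[out_bag, ], type = "class"))
}
#majority vote over the trees for which each point was out of bag
majority_vote <- apply(oob_pred, 1, function(x) {
  x <- x[!is.na(x)]
  if (length(x) == 0) return(NA_character_)
  names(which.max(table(x)))
})
#out of bag estimate of the classification error
mean(majority_vote != as.character(Glass$Type), na.rm = TRUE)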

Using subsets of predictor variables

Although bagging reduces overfitting somewhat, it does not address the issue completely. The reason is that in most datasets a small number of predictors tend to dominate the others.  These predictors tend to be selected in early splits and thus influence the shapes and sizes of a significant fraction of trees in the forest.  That is, strong predictors increase the correlation between trees, which gets in the way of variance reduction.

A simple way to get around this problem is to use a random subset of variables at each split. This avoids over-representation of dominant variables and thus creates a more diverse forest. This is precisely what the random forest algorithm does.
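In conceptual terms, the only change at each split is something along these lines (a sketch of the idea, not the actual randomForest internals):

#conceptual sketch: at each split, only a random subset of predictors competes
predictors <- paste0("x", 1:9)            #say we have 9 candidate predictors
mtry <- floor(sqrt(length(predictors)))   #a common default for classification
set.seed(42)
sample(predictors, mtry)                  #only these 3 variables are considered for this split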

Random forests in R

In what follows, I use the famous Glass dataset from the mlbench library.  The dataset has 214 data points covering six types of glass with varying metal oxide content and refractive indexes. I’ll first build a decision tree model based on the data using the rpart library (recursive partitioning) that I covered in an earlier article, and then show how one can build a random forest model using the randomForest library. The rationale behind this is to compare the two models – single decision tree vs random forest. In the interests of space, I won’t explain the details of rpart here as I’ve covered them at length in the previous article. However, for completeness, I’ll list the demo code for it before getting into random forests.

Decision trees using rpart

Here’s the code listing for building a decision tree using rpart on the Glass dataset (please see my previous article for a full explanation of each step). Note that I have not used pruning as there is little benefit to be gained from it (Exercise for the reader: try this for yourself!).

#set working directory if needed (modify path as needed)
setwd("C:/Users/Kailash/Documents/rf")
#load required libraries - rpart for classification and regression trees
library(rpart)
#mlbench for Glass dataset
library(mlbench)
#load Glass
data("Glass")
#set seed to ensure reproducible results
set.seed(42)
#split into training and test sets
Glass[,"train"] <- ifelse(runif(nrow(Glass))<0.8,1,0)
#separate training and test sets
trainGlass <- Glass[Glass$train==1,]
testGlass <- Glass[Glass$train==0,]
#get column index of train flag
trainColNum <- grep("train",names(trainGlass))
#remove train flag column from train and test sets
trainGlass <- trainGlass[,-trainColNum]
testGlass <- testGlass[,-trainColNum]
#get column index of predicted variable in dataset
typeColNum <- grep("Type",names(Glass))
#build model
rpart_model <- rpart(Type ~.,data = trainGlass, method="class")
#plot tree
plot(rpart_model);text(rpart_model)
#...and the moment of reckoning
rpart_predict <- predict(rpart_model,testGlass[,-typeColNum],type="class")
mean(rpart_predict==testGlass$Type)
[1] 0.6744186

Now, we know that decision tree algorithms tend to display high variance so the hit rate from any one tree is likely to be misleading. To address this we’ll generate a bunch of trees using different training sets (via random sampling) and calculate an average hit rate and spread (or standard deviation).

#function to do multiple runs
multiple_runs <- function(train_fraction,n,dataset){
  #vector to store the fraction of correct predictions for each run
  fraction_correct <- rep(NA,n)
  set.seed(42)
  for (i in 1:n){
    #create a new random train/test split for each run
    dataset[,"train"] <- ifelse(runif(nrow(dataset))<train_fraction,1,0)
    trainColNum <- grep("train",names(dataset))
    typeColNum <- grep("Type",names(dataset))
    trainset <- dataset[dataset$train==1,-trainColNum]
    testset <- dataset[dataset$train==0,-trainColNum]
    rpart_model <- rpart(Type~.,data = trainset, method="class")
    rpart_test_predict <- predict(rpart_model,testset[,-typeColNum],type="class")
    fraction_correct[i] <- mean(rpart_test_predict==testset$Type)
  }
  return(fraction_correct)
}
#50 runs with an 80/20 train/test split, no pruning
n_runs <- multiple_runs(0.8,50,Glass)
mean(n_runs)
[1] 0.6874315
sd(n_runs)
[1] 0.0530809

The decision tree algorithm gets it right about 69% of the time with a variation of about 5%. The variation isn’t too bad here, but the average accuracy is barely better than that of the single tree we built earlier (Exercise for the reader: why?). Let’s see if we can do better using random forests.

Random forests

As discussed earlier, a random forest algorithm works by averaging over multiple trees using bootstrapped samples. Also, it reduces the correlation between trees by splitting on a random subset of predictors at each node in tree construction. The key parameters for the randomForest algorithm are the number of trees (ntree) and the number of variables to be considered for splitting (mtry).  The algorithm sets a default of 500 for ntree and sets mtry to the square root of the number of predictors for classification problems and one-third the number of predictors for regression.  These defaults can be overridden by explicitly providing values for these parameters.

The preliminary stuff – the creation of training and test datasets etc. – is much the same as for decision trees but I’ll list the code for completeness.

#load required library - randomForest
library(randomForest)
#mlbench for Glass dataset - load if not already loaded
#library(mlbench)
#load Glass
data("Glass")
#set seed to ensure reproducible results
set.seed(42)
#split into training and test sets
Glass[,"train"] <- ifelse(runif(nrow(Glass))<0.8,1,0)
#separate training and test sets
trainGlass <- Glass[Glass$train==1,]
testGlass <- Glass[Glass$train==0,]
#get column index of train flag
trainColNum <- grep("train",names(trainGlass))
#remove train flag column from train and test sets
trainGlass <- trainGlass[,-trainColNum]
testGlass <- testGlass[,-trainColNum]
#get column index of predicted variable in dataset
typeColNum <- grep("Type",names(Glass))
#build model
Glass.rf <- randomForest(Type ~.,data = trainGlass, importance=TRUE, xtest=testGlass[,-typeColNum],ntree=1000)
#Get summary info
Glass.rf
Call:
 randomForest(formula = Type ~ ., data = trainGlass, importance = TRUE, xtest = testGlass[, -typeColNum], ntree = 1000)
               Type of random forest: classification
                     Number of trees: 1000
No. of variables tried at each split: 3

        OOB estimate of error rate: 23.98%
Confusion matrix:
   1  2 3  5 6  7 class.error
1 40  7 2  0 0  0   0.1836735
2  8 49 1  2 2  1   0.2222222
3  6  3 6  0 0  0   0.6000000
5  0  1 0 11 0  1   0.1538462
6  1  2 0  1 6  0   0.5000000
7  1  2 0  1 0 21   0.1600000

The first thing to note is the out of bag error estimate is ~ 24%.  Equivalently the hit rate is 76%, which is better than the 69% for decision trees. Secondly, you’ll note that the algorithm does a terrible job identifying type 3 and 6 glasses correctly. This could possibly be improved by a technique called boosting, which works by  iteratively improving poor predictions made in earlier stages. I plan to look at boosting in a future post, but if you’re curious, check out the gbm package in R.

Finally, for completeness, let’s see how the test set does:

#accuracy for test set
mean(Glass.rf$test$predicted==testGlass$Type)
[1] 0.8372093
#confusion matrix
table(Glass.rf$test$predicted,testGlass$Type)
     1 2 3 5 6 7
  1 19 2 0 0 0 0
  2  1 9 1 0 0 0
  3  1 1 1 0 0 0
  5  0 1 0 0 0 0
  6  0 0 0 0 3 0
  7  0 0 0 0 0 4

The test accuracy is better than the out of bag accuracy and there are some differences in the class errors as well. However, overall the two compare quite well and are significantly better than the results of the decision tree algorithm.
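As an aside, the default mtry is not sacred. A quick and dirty way to compare a few values is to loop over them and look at the resulting out of bag error rates (this sketch assumes the trainGlass set created earlier; the randomForest package also provides a tuneRF function for a more automated search):

#compare OOB error rates for different values of mtry (assumes trainGlass exists)
set.seed(42)
oob_err <- sapply(1:8, function(m) {
  rf <- randomForest(Type ~ ., data = trainGlass, mtry = m, ntree = 1000)
  #the last row of err.rate holds the OOB error after all trees have been grown
  rf$err.rate[nrow(rf$err.rate), "OOB"]
})
oob_err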

Variable importance

Random forest algorithms also give measures of variable importance. Computation of these is enabled by setting  importance, a boolean parameter, to TRUE. The algorithm computes two measures of variable importance: mean decrease in Gini and mean decrease in accuracy. Brief explanations of these follow.

Mean decrease in Gini

When determining splits in individual trees, the algorithm looks for the largest class (in terms of population) and attempts to isolate it first. If this is not possible, it tries to do the best it can, always focusing on isolating the largest remaining class in every split. This is called the Gini splitting rule (see this article for a good explanation of the rule).

The “goodness of split” is measured by the Gini Impurity, I_{G}. For a set containing K categories this is given by:

I_{G} = \sum_{i=1}^{K} f_{i}(1-f_{i})

where f_{i} is the fraction of the set that belongs to the ith category. Clearly, I_{G}  is 0 when the set is homogeneous or pure (1 class only) and is maximum when classes are equiprobable (for example, in a two class set the maximum occurs when f_{1} and f_{2} are 0.5). At each stage the algorithm chooses to split on the predictor that leads to the largest decrease in I_{G}. The algorithm tracks this decrease for each predictor for all splits and all trees in the forest. The average is reported  as the mean decrease in Gini.
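To make the formula concrete, here’s a tiny R function that computes the Gini impurity of a vector of class labels (my own illustration):

#Gini impurity of a vector of class labels
gini_impurity <- function(labels) {
  f <- table(labels)/length(labels)  #class fractions f_i
  sum(f*(1 - f))
}
gini_impurity(c("A","A","A","A"))             #pure set: 0
gini_impurity(c("A","A","B","B"))             #two equiprobable classes: 0.5
gini_impurity(c("A","A","A","B","B","C"))     #mixed set: somewhere in between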

Mean decrease in accuracy

The mean decrease in accuracy is calculated using the out of bag data points for each tree. The procedure goes as follows: when a particular tree is grown, the out of bag points are passed down the tree and the prediction accuracy (based on all out of bag points) is recorded. Each predictor’s values are then randomly permuted in turn and the out of bag prediction accuracy recalculated. The decrease in accuracy for a given predictor is the difference between the accuracy obtained with the original (unpermuted) out of bag data and that obtained after the predictor’s values are permuted. As the algorithm progresses, this decrease can be computed and tracked for every predictor and every tree.  These values can then be averaged by predictor to yield the mean decrease in accuracy.
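Here’s a rough sketch of the permutation idea for a single predictor. For simplicity it uses a held-out test set and a freshly built forest rather than the per-tree out of bag data the algorithm actually uses, so treat it only as an illustration (it assumes the trainGlass and testGlass sets created earlier):

#rough illustration of permutation importance using a held-out test set
set.seed(42)
rf_small <- randomForest(Type ~ ., data = trainGlass, ntree = 500)
baseline_acc <- mean(predict(rf_small, testGlass) == testGlass$Type)
permuted <- testGlass
permuted$Mg <- sample(permuted$Mg)   #scramble one predictor, say Mg
permuted_acc <- mean(predict(rf_small, permuted) == testGlass$Type)
baseline_acc - permuted_acc          #decrease in accuracy attributable to Mg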

Variable importance plot

From the above, it would seem that the mean decrease in accuracy is a more global measure as it uses fully constructed trees, in contrast to the Gini measure which is based on individual splits. In practice, however, there could be other reasons for choosing one over the other…but that is neither here nor there: if you set importance to TRUE, you’ll get both. The numerical measures of importance are returned in the randomForest object (Glass.rf in our case), but I won’t list them here. Instead, I’ll just print out the variable importance plots for the two measures as these give a good visual overview of the relative importance of variables. The code is a simple one-liner:

#variable importance plot
varImpPlot(Glass.rf)

The plot is shown in Figure 1 below.

Figure 1: Variable importance plots

In this case the two measures are pretty consistent so it doesn’t really matter which one you choose.

Wrapping up

Random forests are an example of a general class of techniques called ensemble methods. These techniques are based on the principle that averaging over a large number of not-so-good models  yields a more reliable prediction than a single model. This is true only if models in the group are independent of  each other, which is precisely what bootstrap aggregation and predictor subsetting are intended to achieve.

Although  considerably more complex than decision trees, the logic behind random forests is not hard to understand. Indeed, the intuitiveness of the algorithm together with its ease of use and accuracy have made it very popular in the machine learning community.

Written by K

September 20, 2016 at 9:44 pm

The story before the story – a data science fable

with 5 comments

It is well-known that data-driven stories are a great way to convey results of data science initiatives. What is perhaps not as well-known is that data science projects often have to begin with stories too. Without this “story before the story” there will be no project, no results and no data-driven stories to tell….

 

For those who prefer to read, here’s a transcript of the video in full:

In the beginning there is no data, let alone results…but there are ideas. So, long before we tell stories about data or results, we have to tell stories about our ideas. The aim of these stories is to get people to care about our ideas as much as we do and, more important, invest in them. Without their interest or investment there will be no results and no further stories to tell.

So one of the first things one has to do is craft a story about the idea…or the story before the story.

Once upon a time there was a CRM system. The system captured every customer interaction that occurred, whether it was by phone, email or face to face conversation. Many quantitative details of interactions were recorded, time, duration, type. And if the interaction led to a sale, the details of the sale were recorded too.

Almost as an aside, the system also gave sales people the opportunity to record their qualitative impressions as free text notes. As you might imagine, this information, though potentially valuable, was never analysed. Sure, managers looked at notes in isolation from time to time when referring to specific customer interactions, but there was no systematic analysis of the corpus as a whole. Nobody had thought it worthwhile to do this, possibly because it is difficult, if not impossible, to analyse unstructured information in the world of relational databases and SQL.

One day, an analyst was browsing data randomly in the system, as good analysts sometimes do. He came across a note that to him seemed like the epitome of a good note…it described what the interaction was about, the customer’s reactions and potential next steps all in a logical fashion.

This gave him an idea. Wouldn’t it be cool, he thought, if we could measure the quality of notes? Not only would this tell us something about the customer and the interaction, it may tell us something about the sales person as well.

The analyst was mega excited…but he realised he’d need help. He was an IT guy and as we all know, business folks in big corporations stopped listening to their IT guys long ago. So our IT guy had his work cut out for him.

After much cogitation, he decided to enlist the help of his friend, a strategic business analyst in the marketing department. She had the trust of the head of marketing, so if she liked the idea, she might be able to help sell it to him.

As it turned out, the business analyst loved the idea…more importantly, since she knew what the sales people do on a day-to-day basis, she could give the IT guy more ideas on how he could build quantitative measures of the quality of notes. For example, she suggested looking for emotion-laden words or mentions of competitors’ products and so on. The IT guy now had some concrete things to work on.  The initial results gave them even more ideas, and soon they had more than enough to make a convincing pitch to the head of marketing.

It would take us too far afield to discuss details of the pitch, but what we will say is this: they avoided technical details, instead focusing on the strategic and innovative aspects of the work.

The marketing head liked the idea…what was there not to like? He agreed to support the effort, and the idea became a project….

…and yes, within months the project resulted in new insights into customer behaviour. But that is another story.

Written by K

June 15, 2016 at 10:00 pm
