Eight to Late

Sensemaking and Analytics for Organizations

Archive for the ‘Predictive Analytics’ Category

A prelude to machine learning

with 5 comments

What is machine learning?

The term machine learning gets a lot of airtime in the popular and trade press these days. As I started writing this article, I did a quick search for recent news headlines that contained this term. Here are the top three results with datelines within three days of the search:

http://venturebeat.com/2017/02/01/beyond-the-gimmick-implementing-effective-machine-learning-vb-live/

http://www.infoworld.com/article/3164249/artificial-intelligence/new-big-data-tools-for-machine-learning-spring-from-home-of-spark-and-mesos.html

http://www.infoworld.com/article/3163525/analytics/review-the-best-frameworks-for-machine-learning-and-deep-learning.html

The truth about hype usually tends to be quite prosaic, and so it is in this case. Machine learning, as Professor Yaser Abu-Mostafa puts it, is simply about “learning from data.” And although the professor is referring to computers, this is true of humans too – we learn through patterns discerned from sensory data. As he states in the first few lines of his wonderful (but mathematically demanding!) book, Learning From Data:

If you show a picture to a three-year-old and ask if there’s a tree in it, you will likely get a correct answer. If you ask a thirty year old what the definition of a tree is, you will likely get an inconclusive answer. We didn’t learn what a tree is by studying a [model] of what trees [are]. We learned by looking at trees. In other words, we learned from data.

That is, the three-year-old forms a model of what constitutes a tree by discerning a common pattern across all the objects that grown-ups around her label “trees” (the data). She can then “predict” that something is (or is not) a tree by applying this model to new instances presented to her.

This is exactly what happens in machine learning: the computer (or more correctly, the algorithm) builds a predictive model of a variable (like “treeness”) based on patterns it discerns in data.  The model can then be applied to predict the value of the variable (e.g. is it a tree  or not) in new instances.

With that by way of introduction, it is worth contrasting this machine-driven process of model building with the traditional approach of building mathematical models to predict phenomena, as in, say, physics and engineering.

What are models good for?

Physicists and engineers model phenomena using physical laws and mathematics. The aim of such modelling is both to understand and predict natural phenomena.  For example, a physical law such as Newton’s Law of Gravitation is itself a model – it helps us understand how gravity works and make predictions about (say) where Mars is going to be six months from now.  Indeed, all theories and laws of physics are but models that have wide applicability.

(Aside: Models are typically expressed via differential equations.  Most differential equations are hard to solve analytically (or exactly), so scientists use computers to solve them numerically.  It is important to note that in this case computers are used as calculation tools, they play no role in model-building.)

As mentioned earlier, the role of models in the sciences is twofold – understanding and prediction. In contrast, in machine learning the focus is usually on prediction rather than understanding. The predictive successes of machine learning have led certain commentators to claim that scientific theory building is obsolete and that science can advance by crunching data alone. Such claims are overblown, not to mention hubristic: although a data scientist may be able to predict with accuracy, he or she may not be able to tell you why a particular prediction is obtained. This lack of understanding can mislead, and can even have harmful consequences – a point that’s worth unpacking in some detail…

Assumptions, assumptions

A model of a real world process or phenomenon is necessarily a simplification. This is essentially because it is impossible to isolate a process or phenomenon from the rest of the world. As a consequence it is impossible to know for certain that the model one has built has incorporated all the interactions that influence the process / phenomenon of interest. It is quite possible that potentially important variables have been overlooked.

The selection of variables that go into a model is based on assumptions. In the case of model building in physics, these assumptions are made upfront and are thus clear to anybody who takes the trouble to read the underlying theory. In machine learning, however, the assumptions are harder to see because they are implicit in the data and the algorithm. This can be a problem when data is biased or an algorithm opaque.

Problems of bias and opacity become more acute as datasets increase in size and algorithms become more complex, especially when they are applied to social issues that have serious human consequences. I won’t go into this here, but for examples the interested reader may want to have a look at Cathy O’Neil’s book, Weapons of Math Destruction, or my article on the dark side of data science.

As an aside, I should point out that although assumptions are usually obvious in traditional modelling, they are often overlooked out of sheer laziness or, more charitably, lack of awareness. This can have disastrous consequences. The global financial crisis of 2008 can – to some extent – be blamed on the failure of trading professionals to understand the assumptions behind the model that was used to calculate the value of collateralised debt obligations.

It all starts with a straight line….

Now that we’ve taken a tour of some of the key differences between model building in the old and new worlds, we are all set to start talking about machine learning proper.

I should begin by admitting that I overstated the point about opacity: there are some machine learning algorithms that are as transparent as can possibly be. Indeed, chances are you know the algorithm I’m going to discuss next, either from an introductory statistics course in university or from plotting relationships between two variables in your favourite spreadsheet. Yea, you may have guessed that I’m referring to linear regression.

In its simplest avatar, linear regression attempts to fit a straight line to a set of data points in two dimensions. The two dimensions correspond to a dependent variable (traditionally denoted by y) and an independent variable (traditionally denoted by x).    An example of such a fitted line is shown in Figure 1.  Once such a line is obtained, one can “predict” the value of the dependent variable for any value of the independent variable.  In terms of our earlier discussion, the line is the model.

Figure 1: Linear Regression

Figure 1 also serves to illustrate that linear models are going to be inappropriate in most real world situations (the straight line does not fit the data well). But it is not so hard to devise methods to fit more complicated functions.

The important point here is that since machine learning is about finding functions that accurately predict dependent variables for as yet unknown values of the independent variables, most algorithms make explicit or implicit choices about the form of these functions.
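Before moving on, here is a minimal sketch of simple linear regression in R using made-up data (the numbers are invented purely for illustration): the fitted straight line is the model, and predict() applies it to new instances.

#made-up data with a roughly linear relationship
set.seed(42)
x <- 1:50
y <- 10 + 2*x + rnorm(50,sd=10)
#fit a straight line – this is the model
fit <- lm(y~x)
#slope and intercept of the fitted line
coef(fit)
#“predict” the dependent variable for new values of the independent variable
predict(fit,newdata=data.frame(x=c(55,60)))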

Complexity versus simplicity

At first sight it seems a no-brainer that complicated functions will work better than simple ones. After all, if we choose a nonlinear function with lots of parameters, we should be able to fit a complex data set better than a linear function can (see Figure 2 – the complicated function fits the datapoints better than the straight line). But there’s a catch: although the ability to fit a dataset increases with the flexibility of the fitting function, increasing complexity beyond a point will invariably reduce predictive power. Put another way, a complex enough function may fit the known data points perfectly but, as a consequence, will inevitably perform poorly on unknown data. This is an important point, so let’s look at it in greater detail.

Figure 2: Simple and complex fitting functions (courtesy: Wikimedia)

Recall that the aim of machine learning is to predict values of the dependent variable for as yet unknown values of the independent variable(s).  Given a finite (and usually, very limited) dataset, how do we build a model that we can have some confidence in? The usual strategy is to partition the dataset into two subsets, one containing 60 to 80% of the data (called the training set) and the other containing the remainder (called the test set). The model is then built – i.e. an appropriate function fitted – using the training data and verified against the test data. The verification process consists of comparing the predicted values of the dependent variable with the known values for the test set.
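As an illustration, here is one simple way of doing such a split in R, using the built-in iris dataset (the 80/20 proportion is a common choice, not a hard rule):

#flag roughly 80% of rows as training data
set.seed(42)
train_flag <- runif(nrow(iris)) < 0.8
trainset <- iris[train_flag,]
testset <- iris[!train_flag,]
#a model is then fitted on trainset and verified against testset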

Now, it should be intuitively clear that the more complicated the function, the better it will fit the training data.

Question: Why?

Answer: Because complicated functions have more free parameters – for example, linear functions of a single (independent) variable have two parameters (slope and intercept), quadratics have three, cubics four and so on.  The mathematician John von Neumann is believed to have said, “With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.” See this post for a nice demonstration of the literal truth of his words.

Put another way, complex functions are wrigglier than simple ones, and – by suitable adjustment of parameters – their “wriggliness” can be adjusted to fit the training data better than functions that are less wriggly. Figure 2 illustrates this point well.

This may sound like you can have your cake and eat it too: choose a complicated enough function and you can fit both the training and test data well. Not so! Keep in mind that the resulting model (fitted function) is built using the training set alone, so a good fit to the test data is not guaranteed.  In fact, it is intuitively clear that a function that fits the training data perfectly (as in Figure 2) is likely to do a terrible job on the test data.

Question: Why?

Answer:  Remember, as far as the model is concerned, the test data is unknown. Hence, the greater the wriggliness in the trained model, the less likely it is to fit the test data well. Remember, once the model is fitted to the training data, you have no freedom to tweak parameters any further.

This tension between simplicity and complexity of models is one of the key principles of machine learning and is called the bias-variance tradeoff. Bias here refers to the error arising from a model’s lack of flexibility, and variance to the error arising from its sensitivity to the particular training data used. In general, simpler functions have greater bias and lower variance, and complex functions the opposite.  Much of the subtlety of machine learning lies in developing an understanding of how to arrive at the right level of complexity for the problem at hand – that is, how to tweak parameters so that the resulting function fits the training data just well enough to generalise well to unknown data.
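To see the tradeoff in action, here is a small sketch in R using made-up data (the linear trend, the noise level and the degree 10 polynomial are arbitrary choices for illustration):

#made-up data: a linear trend plus noise
set.seed(42)
df <- data.frame(x=runif(30,0,10))
df$y <- 2*df$x + rnorm(30,sd=3)
#first 20 points for training, remaining 10 for testing
trainset <- df[1:20,]
testset <- df[21:30,]
#a simple model (straight line) and a complex one (degree 10 polynomial)
simple_fit <- lm(y~x,data=trainset)
complex_fit <- lm(y~poly(x,10),data=trainset)
#root mean squared prediction error on training and test sets
rmse <- function(fit,data) sqrt(mean((data$y-predict(fit,newdata=data))^2))
rmse(simple_fit,trainset); rmse(simple_fit,testset)
rmse(complex_fit,trainset); rmse(complex_fit,testset)
#the wrigglier model typically fits the training set better but predicts the test set worse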

Note: those who are curious to learn more about the bias-variance tradeoff may want to have a look at this piece.  For details on how to achieve an optimal tradeoff, search for articles on regularization in machine learning.

Unlocking unstructured data

The discussion thus far has focused primarily on quantitative or enumerable data (numbers and categories) stored in a structured format – i.e. as columns and rows in a spreadsheet or database table. This is fine as far as it goes, but the fact is that much of the data in organisations is unstructured, the most common examples being text documents and audio-visual media. Such data is virtually impossible to analyse computationally using the relational database technologies (such as SQL) that are commonly used by organisations.

The situation has changed dramatically in the last decade or so. Text analysis techniques that once required expensive software and high-end computers have now been implemented in open source languages such as Python and R, and can be run on personal computers.  For problems that require computing power and memory beyond that, cloud technologies make it possible to do so cheaply. In my opinion, the ability to analyse textual data is the most important advance in data technologies in the last decade or so. It unlocks a world of possibilities for the curious data analyst. Just think, all those comment fields in your survey data can now be analysed in a way that was never possible in the relational world!

There is a general impression that text analysis is hard.  Although some of the advanced techniques can take a little time to wrap one’s head around, the basics are simple enough. Yea, I really mean that – for proof, check out my tutorial on the topic.
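As a small taste of how simple the basics can be, here is a minimal sketch that counts word frequencies in a toy “corpus” using nothing but base R (a real analysis would typically use a text mining package, but the underlying idea is the same):

#a toy corpus of two documents
docs <- c("the cat sat on the mat","the dog sat on the log")
#split into words, convert to lower case and count frequencies
words <- tolower(unlist(strsplit(docs,"\\s+")))
sort(table(words),decreasing=TRUE)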

Wrapping up

I could go on for a while. Indeed, I was planning to delve into a few algorithms of increasing complexity (from regression to trees and forests to neural nets) and then close with a brief peek at some of the more recent headline-grabbing developments like deep learning. However, I realised that such an exploration would be too long and (perhaps more importantly) defeat the main intent of this piece which is to give starting students an idea of what machine learning is about, and how it differs from preexisting techniques of data analysis. I hope I have succeeded, at least partially, in achieving that aim.

For those who are interested in learning more about machine learning algorithms, I can suggest having a look at my “Gentle Introduction to Data Science using R” series of articles. Start with the one on text analysis (link in last line of previous section) and then move on to clustering, topic modelling, naive Bayes, decision trees, random forests and support vector machines. I’m slowly adding to the list as I find the time, so please do check back again from time to time.

Note: This post is written as an introduction to the Data, Algorithms and Meaning subject that is part of the core curriculum of the Master of Data Science and Innovation program, run by the Connected Intelligence Centre at UTS. I’m coordinating the subject this semester, and will be co-teaching it with Stephan Curiskis.

Written by K

February 23, 2017 at 3:12 pm

 A gentle introduction to support vector machines using R

with 5 comments

Introduction

Most machine learning algorithms involve minimising an error measure of some kind (this measure is often called an objective function or loss function).  For example, the error measure in linear regression problems is the famous mean squared error – i.e. the averaged sum of the squared differences between the predicted and actual values. Like the mean squared error, most objective functions depend on all points in the training dataset.  In this post, I describe the support vector machine (SVM) approach which focuses instead on finding the optimal separation boundary between datapoints that have different classifications.  I’ll elaborate on what this means in the next section.

Here’s the plan in brief. I’ll begin with the rationale behind SVMs using a simple case of a binary (two class) dataset with a simple separation boundary (I’ll clarify what “simple” means in a minute).  Following that, I’ll describe how this can be generalised to datasets with more complex boundaries. Finally, I’ll work through a couple of examples in R, illustrating the principles behind SVMs. In line with the general philosophy of my “Gentle Introduction to Data Science Using R” series, the focus is on developing an intuitive understanding of the algorithm along with a practical demonstration of its use through a toy example.

The rationale

The basic idea behind SVMs is best illustrated by considering a simple case:  a set of data points that belong to one of two classes, red and blue, as illustrated in figure 1 below. To make things simpler still, I have assumed that the boundary separating the two classes is a straight line, represented by the solid green line in the diagram.  In the technical literature, such datasets are called linearly separable.

Figure 1: Linearly separable data

In the linearly separable case, there is usually a fair amount of freedom in the way a separating line can be drawn. Figure 2 illustrates this point: the two broken green lines are also valid separation boundaries. Indeed, because there is a non-zero distance between the closest points of the two categories, there are an infinite number of possible separation lines. This, quite naturally, raises the question as to whether it is possible to choose a separation boundary that is optimal.

Figure 2: Illustrating multiple separation boundaries

The short answer is, yes there is. One way to do this is to select a boundary line that maximises the margin, i.e. the distance between the separation boundary and the points that are closest to it.  Such an optimal boundary is illustrated by the black brace in Figure 3.  The really cool thing about this criterion is that the location of the separation boundary depends only on the points that are closest to it. This means that, unlike other classification methods, the classifier does not depend on any other points in the dataset. The directed lines between the boundary and the closest points on either side are called support vectors (these are the solid black lines in figure 3). A direct implication of this is that the fewer the support vectors, the better the generalizability of the boundary.

Figure 3: Optimal separation boundary in linearly separable case

Although the above sounds great, it is of limited practical value because real data sets are seldom (if ever) linearly separable.

So, what can we do when dealing with real (i.e. non linearly separable) data sets?

A simple approach to tackle small deviations from linear separability is to allow a small number of points (those that are close to the boundary) to be misclassified.  The number of possible misclassifications is governed by a free parameter C, which is called the cost.  The cost is essentially the penalty associated with making an error: the higher the value of C, the less likely it is that the algorithm will misclassify a point.

This approach – which is called soft margin classification – is illustrated in Figure 4. Note the points on the wrong side of the separation boundary.  We will demonstrate soft margin SVMs in the next section.  (Note: at the risk of belabouring the obvious, the purely linearly separable case discussed in the previous paragraph is simply a special case of the soft margin classifier.)

Figure 4: Soft margin classifier (linearly separable data)

Real life situations are much more complex and cannot be dealt with using soft margin classifiers. For example, as shown in Figure 5, one could have widely separated clusters of points that belong to the same classes. Such situations, which require the use of multiple (and nonlinear) boundaries, can sometimes be dealt with using a clever approach called the kernel trick.

Figure 5: Non-linearly separable data

The kernel trick

Recall that in the linearly separable (or soft margin) case, the SVM algorithm works by finding a separation boundary that maximises the margin, which is the distance between the boundary and the points closest to it. The distance here is the usual straight line distance between the boundary and the closest point(s). This is called the Euclidean distance in honour of the great geometer of antiquity. The point to note is that this process results in a separation boundary that is a straight line, which as Figure 5 illustrates, does not always work. In fact in most cases it won’t.

So what can we do? To answer this question, we have to take a bit of a detour…

What if we were able to generalize the notion of distance in a way that generates nonlinear separation boundaries? It turns out that this is possible. To see how, one has to first understand how the notion of distance can be generalized.

The key properties that any measure of distance must satisfy are:

  1. Non-negativity – a distance cannot be negative, a point that needs no further explanation I reckon 🙂
  2. Symmetry – that is, the distance between point A and point B is the same as the distance between point B and point A.
  3. Identity – the distance between a point and itself is zero.
  4. Triangle inequality – that is, the distance between points A and C must be less than or equal to the sum of the distances between A and B and between B and C (equality holds only when B lies on the line segment joining A and C).

Any mathematical object that displays the above properties is akin to a distance. Such generalized distances are called metrics and the mathematical space in which they live is called a metric space. Metrics are defined using special mathematical functions designed to satisfy the above conditions. These functions are known as kernels.

The essence of the kernel trick lies in mapping the classification problem to a  metric space in which the problem is rendered separable via a separation boundary that is simple in the new space, but complex – as it has to be – in the original one. Generally, the transformed space has a higher dimensionality, with each of the dimensions being (possibly complex) combinations of the original problem variables. However, this is not necessarily a problem because in practice one doesn’t actually mess around with transformations, one just tries different kernels (the transformation being implicit in the kernel) and sees which one does the job. The check is simple: we simply test the predictions resulting from using different kernels against a held out subset of the data (as one would for any machine learning algorithm).

It turns out that a particular function – called the radial basis function kernel (RBF kernel) – is very effective in many cases.  The RBF kernel is essentially a Gaussian (or Normal) function of the Euclidean distance between pairs of points (see equation 1 below).  The basic rationale behind the RBF kernel is that it tends to classify points that are close together (in the Euclidean sense) in the original space in the same way. This is reflected in the fact that the kernel decays (i.e. drops off to zero) as the Euclidean distance between points increases.

\exp (-\gamma |\mathbf{x-y}|^{2})....(1)

The rate at which a kernel decays is governed by the parameter \gamma – the higher the value of \gamma, the more rapid the decay.  This serves to illustrate that the RBF kernel is extremely flexible….but the flexibility comes at a price – the danger of overfitting for large values of \gamma .  One should choose appropriate values of C and \gamma so as to ensure that the resulting kernel represents the best possible balance between flexibility and accuracy. We’ll discuss how this is done in practice later in this article.
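To get a feel for equation (1), here is a small R sketch that computes the RBF kernel value for a pair of points (the function and the numbers are purely illustrative):

#RBF kernel value for two numeric vectors
rbf_kernel <- function(x,y,gamma) exp(-gamma*sum((x-y)^2))
x <- c(1,2)
#the kernel value decays as the second point moves away from x…
rbf_kernel(x,c(1.1,2.1),gamma=1)
rbf_kernel(x,c(3,4),gamma=1)
#…and decays faster for larger gamma
rbf_kernel(x,c(3,4),gamma=10)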

Finally, though it is probably obvious, it is worth mentioning that the separation boundaries for arbitrary kernels are also defined through support vectors, as in Figure 3.  To reiterate a point made earlier, this means that a solution that has fewer support vectors is likely to be more robust than one with many. Why? Because the data points defining the support vectors are the ones most sensitive to noise – so the fewer of them there are, the better.

There are many other types of kernels, each with their own pros and cons. However, I’ll leave these for adventurous readers to explore by themselves.  Finally, for a much more detailed….and dare I say, better… explanation of the kernel trick, I highly recommend this article by Eric Kim.

Support vector machines in R

In this demo we’ll use the svm interface that is implemented in the e1071 R package. This interface provides R programmers access to the comprehensive libsvm library written by Chang and Lin. I’ll use two toy datasets: the famous iris dataset available with the base R package and the sonar dataset from the mlbench package. I won’t describe details of the datasets as they are discussed at length in the documentation that I have linked to. However, it is worth mentioning the reasons why I chose these datasets:

  1. As mentioned earlier, no real life dataset is linearly separable, but the iris dataset is almost so. Consequently, it is a good illustration of using linear SVMs. Although one almost never uses these in practice, I have illustrated their use primarily for pedagogical reasons.
  2. The sonar dataset is a good illustration of the benefits of using RBF kernels in cases where the dataset is hard to visualise (60 variables in this case!). In general, one would almost always use RBF (or other nonlinear) kernels in practice.

With that said, let’s get right to it. I assume you have R and RStudio installed. For instructions on how to do this, have a look at the first article in this series. The processing preliminaries – loading libraries and data, and creating training and test datasets – are much the same as in my previous articles, so I won’t dwell on these here. For completeness, however, I’ll list all the code so you can run it directly in R or RStudio (a complete listing of the code can be found here):

#set working directory if needed (modify path as needed)
setwd("C:/Users/Kailash/Documents/svm")
#load required library
library(e1071)
#load built-in iris dataset
data(iris)
#set seed to ensure reproducible results
set.seed(42)
#split into training and test sets
iris[,"train"] <- ifelse(runif(nrow(iris))<0.8,1,0)
#separate training and test sets
trainset <- iris[iris$train==1,]
testset <- iris[iris$train==0,]
#get column index of train flag
trainColNum <- grep("train",names(trainset))
#remove train flag column from train and test sets
trainset <- trainset[,-trainColNum]
testset <- testset[,-trainColNum]
#get column index of predicted variable in dataset
typeColNum <- grep("Species",names(iris))
#build model – linear kernel and C-classification (soft margin) with default cost (C=1)
svm_model <- svm(Species~ ., data=trainset, method="C-classification", kernel="linear")
svm_model
Call:
svm(formula = Species ~ ., data = trainset, method = "C-classification", kernel = "linear")
Parameters:
SVM-Type: C-classification
SVM-Kernel: linear
cost: 1
gamma: 0.25
Number of Support Vectors: 24
#training set predictions
pred_train <-predict(svm_model,trainset)
mean(pred_train==trainset$Species)
[1] 0.9826087
#test set predictions
pred_test <-predict(svm_model,testset)
mean(pred_test==testset$Species)
[1] 0.9142857

 

The output from the SVM model shows that there are 24 support vectors. If desired, these can be examined using the SV variable in the model – i.e. via svm_model$SV.

The test prediction accuracy indicates that the linear kernel performs quite well on this dataset, confirming that it is indeed nearly linearly separable. To check performance by class, one can create a confusion matrix as described in my post on random forests. Another point to note is that we have used a soft-margin classification scheme with the default cost C=1; you can experiment with this by explicitly changing the value of C. I’ll leave both of these as exercises for you.
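For those who would like a head start on these exercises, here is a minimal sketch of both – a confusion matrix via table() and a model with an explicitly set cost (the value of 10 is an arbitrary choice):

#confusion matrix: predicted vs actual species for the test set
table(predicted=pred_test,actual=testset$Species)
#rebuild the model with a higher cost and compare test set accuracy
svm_model_c10 <- svm(Species~ ., data=trainset, method="C-classification", kernel="linear", cost=10)
mean(predict(svm_model_c10,testset)==testset$Species)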

Before proceeding to the RBF kernel, I should mention a point that an alert reader may have noticed. The predicted variable, Species, can take on 3 values (setosa, versicolor and virginica). However, our discussion above dealt with a binary (2 valued) classification problem. This brings up the question of how the algorithm deals with multiclass classification problems – i.e. those involving datasets with more than two classes. The libsvm algorithm (which svm uses) does this using a one-against-one classification strategy. Here’s how it works:

  1. Divide the dataset (assumed to have N classes) into N(N-1)/2 datasets that have two classes each.
  2. Solve the binary classification problem for each of these subsets
  3. Use a simple voting mechanism to assign a class to each data point.

Basically, each data point is assigned the most frequent classification it receives from all the binary classification problems it figures in.
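As a quick illustration of the strategy (a toy sketch of the vote counting, not how libsvm implements it internally):

#number of binary classifiers needed for the 3 classes in the iris dataset
choose(3,2)
[1] 3
#toy example of majority voting: classes assigned to one point by the 3 binary classifiers
votes <- c("setosa","versicolor","setosa")
names(which.max(table(votes)))
[1] "setosa"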

With that said for the unrealistic linear classifier, let’s move on to the real world.  In the code below, I build SVM models using three different kernels:

  1.  Linear kernel (this is for comparison with the following 2 kernels).
  2. RBF kernel with default values for the parameters C and \gamma.
  3. RBF kernel with optimal values for C and \gamma. The optimal values are obtained using the tune.svm function (also available in e1071), which essentially builds models for multiple combinations of parameter values and selects the best.

OK, let’s go:

#load required library (assuming e1071 is already loaded)
library(mlbench)
#load Sonar dataset
data(Sonar)
#set seed to ensure reproducible results
set.seed(42)
#split into training and test sets
Sonar[,"train"] <- ifelse(runif(nrow(Sonar))<0.8,1,0)
#separate training and test sets
trainset <- Sonar[Sonar$train==1,]
testset <- Sonar[Sonar$train==0,]
#get column index of train flag
trainColNum <- grep("train",names(trainset))
#remove train flag column from train and test sets
trainset <- trainset[,-trainColNum]
testset <- testset[,-trainColNum]
#get column index of predicted variable in dataset
typeColNum <- grep("Class",names(Sonar))
#build model – linear kernel and C-classification with default cost (C=1)
svm_model <- svm(Class~ ., data=trainset, method="C-classification", kernel="linear")
#training set predictions
pred_train <-predict(svm_model,trainset)
mean(pred_train==trainset$Class)
[1] 0.969697
#test set predictions
pred_test <-predict(svm_model,testset)
mean(pred_test==testset$Class)
[1] 0.6046512

I’ll leave you to examine the contents of the model. The important point to note here is that the performance of the model with the test set is quite dismal compared to the previous case. This simply indicates that the linear kernel is not appropriate here.  Let’s take a look at what happens if we use the RBF kernel with default values for the parameters:

#build model: radial kernel, default params
svm_model <- svm(Class~ ., data=trainset, method="C-classification", kernel="radial")
#print params
svm_model$cost
[1] 1
svm_model$gamma
[1] 0.01666667
#training set predictions
pred_train <-predict(svm_model,trainset)
mean(pred_train==trainset$Class)
[1] 0.9878788
#test set predictions
pred_test <-predict(svm_model,testset)
mean(pred_test==testset$Class)
[1] 0.7674419

That’s a pretty decent improvement from the linear kernel. Let’s see if we can do better by doing some parameter tuning. To do this we first invoke tune.svm and use the parameters it gives us in the call to svm:

#find optimal parameters in a specified range
tune_out <- tune.svm(x=trainset[,-typeColNum],y=trainset[,typeColNum],gamma=10^(-3:3),cost=c(0.01,0.1,1,10,100,1000),kernel="radial")
#print best values of cost and gamma
tune_out$best.parameters$cost
[1] 10
tune_out$best.parameters$gamma
[1] 0.01
#build model
svm_model <- svm(Class~ ., data=trainset, method="C-classification", kernel="radial",cost=tune_out$best.parameters$cost,gamma=tune_out$best.parameters$gamma)
#training set predictions
pred_train <-predict(svm_model,trainset)
mean(pred_train==trainset$Class)
[1] 1
#test set predictions
pred_test <-predict(svm_model,testset)
mean(pred_test==testset$Class)
[1] 0.8139535

That’s a fairly decent improvement on the un-optimised case.

Wrapping up

This brings us to the end of this introductory exploration of SVMs in R. To recap, the distinguishing feature of SVMs, in contrast to most other techniques, is that they attempt to construct optimal separation boundaries between different categories.

SVMs are quite versatile and have been applied to a wide variety of domains, ranging from chemistry to pattern recognition. They are best used in binary classification scenarios. This brings up the question of when SVMs are to be preferred to other binary classification techniques such as logistic regression. The honest response is, “it depends” – but one general point to keep in mind is that SVM algorithms tend to be expensive both in terms of memory and computation, issues that start to hurt as the size of the dataset increases.

Given all the above caveats and considerations, the best way  to figure out whether an SVM approach will work for your problem may be to do what most machine learning practitioners do: try it out!

Written by K

February 7, 2017 at 8:27 pm

The dark side of data science

with 7 comments

Data scientists are sometimes blind to the possibility that the predictions of their algorithms can have unforeseen negative effects on people. Ethical or social implications are easy to overlook when one finds interesting new patterns in data, especially if they promise significant financial gains. The Centrelink debt recovery debacle, recently reported in the Australian media, is a case in point.

Here is the story in brief:

Centrelink is an Australian Government organisation responsible for administering welfare services and payments to those in need. A major challenge such organisations face is ensuring that their clients are paid no less and no more than what is due to them. This is difficult because it involves crosschecking client income details across multiple systems owned by different government departments, a process that necessarily involves many assumptions. In July 2016, Centrelink unveiled an automated compliance system that compares income self-reported by clients to information held by the taxation office.

The problem is that the algorithm is flawed: it makes strong (and incorrect!) assumptions regarding the distribution of income across a financial year and, as a consequence, unfairly penalizes a number of legitimate benefit recipients.  It is very likely that the designers and implementers of the algorithm did not fully understand the implications of their assumptions. Worse, from the errors made by the system, it appears they may not have adequately tested it either.  But this did not stop them (or, quite possibly, their managers) from unleashing their algorithm on an unsuspecting public, causing widespread stress and distress.  More on this a bit later.

Algorithms like the one described above are the subject of Cathy O’Neil’s aptly titled book, Weapons of Math Destruction.  In the remainder of this article I discuss the main themes of the book.  Just to be clear, this post is more riff than review. However, for those seeking an opinion, here’s my one-line version: I think the book should be read not only by data science practitioners, but also by those who use or are affected by their algorithms (which means pretty much everyone!).

Abstractions and assumptions

O’Neil begins with the observation that data algorithms are mathematical models of reality, and are necessarily incomplete because several simplifying assumptions are invariably baked into them. This point is important and often overlooked, so it is worth illustrating via an example.

When assessing a person’s suitability for a loan, a bank will want to know whether the person is a good risk. It is impossible to model creditworthiness completely because we do not know all the relevant variables and those that are known may be hard to measure. To make up for their ignorance, data scientists typically use proxy variables, i.e. variables that are believed to be correlated with the variable of interest and are also easily measurable. In the case of creditworthiness, proxy variables might be things like gender, age, employment status, residential postcode etc.  Unfortunately many of these can be misleading, discriminatory or worse, both.

The Centrelink algorithm provides a good example of such a “double-whammy” proxy. The key variable it uses is the difference between the client’s annual income as reported by the taxation office and the annual income self-reported by the client. A large difference is taken to be indicative of an incorrect payment and hence an outstanding debt. This simplistic assumption overlooks the fact that most affected people are not in steady jobs and therefore do not earn regular incomes over the course of a financial year (see this article by Michael Griffin for a detailed example).  Worse, this crude proxy places an unfair burden on vulnerable individuals for whom casual and part time work is a fact of life.

Worse still, for those wrongly targeted with a recovery notice, getting the errors sorted out is not a straightforward process. This is typical of a WMD. As O’Neil states in her book, “The human victims of WMDs…are held to a far higher standard of evidence than the algorithms themselves.”  Perhaps this is because the algorithms are often opaque. But that’s a poor excuse: this is the only technical field where practitioners are held to a lower standard of accountability than those affected by their products.

O’Neil sums it up rather nicely when she calls algorithms like the Centrelink one weapons of math destruction (WMDs).

Self-fulfilling prophecies and feedback loops

A characteristic of WMDs is that their predictions often become self-fulfilling prophecies. For example, a person denied a loan by a faulty risk model is more likely to be denied again when he or she applies elsewhere, simply because it is on their record that they have been refused credit before. This kind of destructive feedback loop is typical of a WMD.

An example that O’Neil dwells on at length is a popular predictive policing program. Designed for efficiency rather than nuanced judgment, such algorithms measure what can easily be measured and act on it, ignoring the subtle contextual factors that inform the actions of experienced officers on the beat. Worse, they can lead to actions that exacerbate the problem. For example, targeting young people of a certain demographic for stop and frisk actions can alienate them to a point where they might well turn to crime out of anger and exasperation.

As Goldratt famously said, “Tell me how you measure me and I’ll tell you how I’ll behave.”

This is not news: savvy managers have known about the dangers of managing by metrics for years. The problem is now exacerbated manyfold by our ability to implement and act on such metrics on an industrial scale, a trend that leads to a dangerous devaluation of human judgement in areas where it is most needed.

A related problem – briefly mentioned earlier – is that some of the important variables are known but hard to quantify in algorithmic terms. For example, it is known that community-oriented policing, where officers on the beat develop relationships with people in the community, leads to greater trust. The degree of trust is hard to quantify, but it is known that communities that have strong relationships with their police departments tend to have lower crime rates than similar communities that do not.  Such important but hard-to-quantify factors are typically missed by predictive policing programs.

Blackballed!

Ironically, although WMDs can cause destructive feedback loops, they are often not subjected to feedback themselves. O’Neil gives the example of algorithms that gauge the suitability of potential hires.  These programs often use proxy variables such as IQ test results, personality tests etc. to predict employability.  Candidates who are rejected often do not realise that they have been screened out by an algorithm. Further, it often happens that candidates who are thus rejected go on to successful careers elsewhere. However, this post-rejection information is never fed back to the algorithm because it is impossible to do so.

In such cases, the only way to avoid being blackballed is to understand the rules set by the algorithm and play according to them. As O’Neil so poignantly puts it, “our lives increasingly depend on our ability to make our case to machines.” However, this can be difficult because it assumes that a) people know they are being assessed by an algorithm and b) they have knowledge of how the algorithm works. In most hiring scenarios neither of these holds.

Just to be clear, not all data science models ignore feedback. For example, sabermetric algorithms used to assess player performance in Major League Baseball are continually revised based on latest player stats, thereby taking into account changes in performance.

Driven by data

In recent years, many workplaces have gradually seen the introduction of data-driven efficiency initiatives. Automated rostering, based on scheduling algorithms, is an example. These algorithms are based on operations research techniques that were developed for scheduling complex manufacturing processes. Although appropriate for driving efficiency in manufacturing, these techniques are inappropriate for optimising shift work because of the effect they have on people. As O’Neil states:

Scheduling software can be seen as an extension of just-in-time economy. But instead of lawn mower blades or cell phone screens showing up right on cue, it’s people, usually people who badly need money. And because they need money so desperately, the companies can bend their lives to the dictates of a mathematical model.

She correctly observes that an “oversupply of low wage labour is the problem.” Employers know they can get away with treating people like machine parts because they have a large captive workforce.  What makes this seriously scary is that vested interests can make it difficult to outlaw such exploitative practices. As O’Neil mentions:

Following [a] New York Times report on Starbucks’ scheduling practices, Democrats in Congress promptly drew up bills to rein in scheduling software. But facing a Republican majority fiercely opposed to government regulations, the chances that their bill would become law were nil. The legislation died.

Commercial interests invariably trump social and ethical issues, so it is highly unlikely that industry or government will take steps to curb the worst excesses of such algorithms without significant pressure from the general public. A first step towards this is to educate ourselves on how these algorithms work and the downstream social effects of their predictions.

Messing with your mind

There is an even more insidious way that algorithms mess with us. Hot on the heels of the recent US presidential election, there were suggestions that fake news items on Facebook may have influenced the results.  Mark Zuckerberg denied this but, as Casey Newton noted in a trenchant tweet, the denial leaves Facebook in “the awkward position of having to explain why they think they drive purchase decisions but not voting decisions.”

Be that as it may, the fact is that Facebook’s own researchers have been conducting experiments to fine tune a tool they call the “voter megaphone”. Here’s what O’Neil says about it:

The idea was to encourage people to spread the word that they had voted. This seemed reasonable enough. By sprinkling people’s news feeds with “I voted” updates, Facebook was encouraging Americans – more that sixty-one million of them – to carry out their civic duty….by posting about people’s voting behaviour, the site was stoking peer pressure to vote. Studies have shown that the quiet satisfaction of carrying out a civic duty is less likely to move people than the possible judgement of friends and neighbours…The Facebook started out with a constructive and seemingly innocent goal to encourage people to vote. And it succeeded…researchers estimated that their campaign had increased turnout by 340,000 people. That’s a big enough crowd to swing entire states, and even national elections.

And if that’s not scary enough, try this:

For three months leading up to the election between President Obama and Mitt Romney, a researcher at the company….altered the news feed algorithm for about two million people, all of them politically engaged. The people got a higher proportion of hard news, as opposed to the usual cat videos, graduation announcements, or photos from Disney world….[the researcher] wanted to see  if getting more [political] news from friends changed people’s political behaviour. Following the election [he] sent out surveys. The self-reported results that voter participation in this group inched up from 64 to 67 percent.

This might not sound like much, but considering the thin margins of recent presidential elections, it could be enough to change a result.

But it’s even more insidious.  In a paper published in 2014, Facebook researchers showed that users’ moods can be influenced by the emotional content of their newsfeeds. Here’s a snippet from the abstract of the paper:

In an experiment with people who use Facebook, we test whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed. When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks.

As you might imagine, there was a media uproar following which  the lead researcher issued a clarification and  Facebook officials duly expressed regret (but, as far as I know, not an apology).  To be sure, advertisers have been exploiting this kind of “mind control” for years, but a public social media platform should (expect to) be held to a higher standard of ethics. Facebook has since reviewed its internal research practices, but the recent fake news affair shows that the story is to be continued.

Disarming weapons of math destruction

The Centrelink debt debacle, the Facebook mood contagion experiments and the other case studies mentioned in the book illustrate the myriad ways in which Big Data algorithms can have a pernicious effect on our day-to-day lives. Quite often people remain unaware of their influence, wondering why a loan was denied or a job application didn’t go their way. Just as often, they are aware of what is happening, but are powerless to change it – shift scheduling algorithms being a case in point.

This is not how it was meant to be. Technology was supposed to make life better for all, not just the few who wield it.

So what can be done? Here are some suggestions:

  • To begin with, education is the key. We must work to demystify data science and create a general awareness of data science algorithms and how they work. O’Neil’s book is an excellent first step in this direction (although it is very thin on details of how the algorithms work).
  • Develop a code of ethics for data science practitioners. It is heartening to see that IEEE has recently come up with a discussion paper on ethical considerations for artificial intelligence and autonomous systems, and ACM has proposed a set of principles for algorithmic transparency and accountability.  However, I should also tag this suggestion with the warning that codes of ethics are not very effective as they can be easily violated. One has to – somehow – embed ethics in the DNA of data scientists. I believe one way to do this is through practice-oriented education in which data scientists-in-training grapple with ethical issues through data challenges and hackathons. As Wittgenstein famously said, “it is clear that ethics cannot be articulated.” Ethics must be practiced.
  • Put in place a system of reliable algorithmic audits within data science departments, particularly those that do work with significant social impact.
  • Increase transparency a) by publishing information on how algorithms predict what they predict and b) by making it possible for those affected by the algorithm to access the data used to classify them as well as their classification, how it will be used and by whom.
  • Encourage the development of algorithms that detect bias in other algorithms and correct it.
  • Inspire aspiring data scientists to build models for the good.

It is only right that the last word in this long riff should go to O’Neil, whose work inspired it. Towards the end of her book she writes:

Big Data processes codify the past. They do not invent the future. Doing that requires moral imagination, and that’s something that only humans can provide. We have to explicitly embed better values into our algorithms, creating Big Data models that follow our ethical lead. Sometimes that will mean putting fairness ahead of profit.

Excellent words for data scientists to live by.

Written by K

January 17, 2017 at 8:38 pm

A gentle introduction to random forests using R

with 6 comments

Introduction

In a previous post, I described how decision tree algorithms work and demonstrated their use via the rpart library in R. Decision trees work by splitting a dataset recursively. That is, subsets arising from a split are further split until a predetermined termination criterion is reached.  At each step, a split is made based on the independent variable that results in the largest possible reduction in heterogeneity of the dependent  variable.

(Note:  readers unfamiliar with decision trees may want to read that post before proceeding)

The main drawback of decision trees is that they are prone to overfitting.   The  reason for this is that trees, if grown deep, are able to fit  all kinds of variations in the data, including noise. Although it is possible to address this partially by pruning, the result often remains less than satisfactory. This is because the algorithm makes a locally optimal choice at each split without any regard to whether the choice made is the best one overall.  A poor split made in the initial stages can thus doom the model, a problem that cannot be fixed by post-hoc pruning.

In this post I describe random forests, a tree-based algorithm that addresses the above shortcoming of decision trees. I’ll first describe the intuition behind the algorithm via an analogy and then do a demo using the R randomForest library.

Motivating random forests

One of the reasons for the popularity of decision trees is that they reflect the way humans make decisions: by weighing up options at each stage and choosing the best one available.  The analogy is particularly useful because it also suggests how decision trees can be improved.

One of the lifelines in the game show, Who Wants to be A Millionaire, is “Ask The Audience” wherein a contestant can ask the audience to vote on the answer to a question.  The rationale here is that the majority response from a large number of independent decision makers is more likely to yield a correct answer than one from a randomly chosen person.  There are two factors at play here:

  1. People have different experiences and will therefore draw upon different “data” to answer the question.
  2. People have different knowledge bases and preferences and will therefore draw upon different “variables” to make their choices at each stage in their decision process.

Taking a cue from the above, it seems reasonable to build many decision trees using:

  1. Different sets of training data.
  2. Randomly selected subsets of variables at each split of every decision tree.

Predictions can then be made by taking the majority vote over all trees (for classification problems) or by averaging results over all trees (for regression problems).  This is essentially how the random forest algorithm works.

The net effect of the two strategies is to reduce overfitting by a) averaging over trees created from different samples of the dataset and b) decreasing the likelihood of a small set of strong predictors dominating the splits.  The price paid is reduced interpretability as well as increased computational complexity. But then, there is no such thing as a free lunch.

The mechanics of the algorithm

Although we will not delve into the mathematical details of the algorithm, it is important to understand how the two points made above are implemented in it.

Bootstrap aggregating… and a (rather cool) error estimate

A key feature of the algorithm is the use of multiple datasets for training individual decision trees.  This is done via a neat statistical trick called bootstrap aggregating (also called bagging).

Here’s how bagging works:

Assume you have a dataset of size N.  From this you create a sample (i.e. a subset) of size n (n less than or equal to N) by choosing n data points randomly with replacement.  “Randomly” means every point in the dataset is equally likely to be chosen and   “with replacement” means that a specific data point can appear more than once in the subset. Do this M times to create M equally-sized samples of size n each.  It can be shown that this procedure, which statisticians call bootstrapping, is legit when samples are created from large datasets – that is, when N is large.

Because a bagged sample is created by selection with replacement, there will generally be some points that are not selected.  In fact, it can be shown that, on the average, each sample will use about two-thirds of the available data points. This gives us a clever way to estimate the error as part of the process of model building.

Here’s how:

For every data point, obtain predictions from the trees for which the point was out of bag. From the result mentioned above, this will yield approximately M/3 predictions per data point (because each point is out of bag for roughly a third of the trees).  Take the majority vote of these predictions as the predicted value for the data point. One can do this for the entire dataset. From these out of bag predictions for the whole dataset, we can estimate the overall error by computing the misclassification rate (count of incorrect predictions divided by N) for classification problems, or the root mean squared error for regression problems.  This means there is no need to have a separate test data set, which is kind of cool.  However, if you have enough data, it is worth holding out some data for use as an independent test set. This is what we’ll do in the demo later.
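As a quick check of the two-thirds claim, here is a small sketch in R (the dataset size of 1000 is arbitrary):

#draw a bootstrap sample: N points chosen randomly with replacement
set.seed(42)
N <- 1000
boot_sample <- sample(1:N,size=N,replace=TRUE)
#fraction of distinct points that appear in the sample – roughly 0.63 on average
length(unique(boot_sample))/N
#the points that do not appear are "out of bag" for this sample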

Using subsets of predictor variables

Although bagging reduces overfitting somewhat, it does not address the issue completely. The reason is that in most datasets a small number of predictors tend to dominate the others.  These predictors tend to be selected in early splits and thus influence the shapes and sizes of a significant fraction of trees in the forest.  That is, strong predictors enhance correlations between trees which tends to come in the way of variance reduction.

A simple way to get around this problem is to use a random subset of variables at each split. This avoids over-representation of dominant variables and thus creates a more diverse forest. This is precisely what the random forest algorithm does.

Random forests in R

In what follows, I use the famous Glass dataset from the mlbench library.  The dataset has 214 data points of six types of glass with varying metal oxide content and refractive indexes. I’ll first build a decision tree model based on the data using the rpart library (recursive partitioning) that I covered in an earlier article, and then show how one can build a random forest model using the randomForest library. The rationale behind this is to compare the two models – single decision tree vs random forest. In the interests of space, I won’t explain the details of rpart here as I’ve covered it at length in the previous article. However, for completeness, I’ll list the demo code for it before getting into random forests.

Decision trees using rpart

Here’s the code listing for building a decision tree using rpart on the Glass dataset (please see my previous article for a full explanation of each step). Note that I have not used pruning as there is little benefit to be gained from it (Exercise for the reader: try this for yourself!).

#set working directory if needed (modify path as needed)
setwd("C:/Users/Kailash/Documents/rf")
#load required libraries – rpart for classification and regression trees
library(rpart)
#mlbench for Glass dataset
library(mlbench)
#load Glass
data("Glass")
#set seed to ensure reproducible results
set.seed(42)
#split into training and test sets
Glass[,"train"] <- ifelse(runif(nrow(Glass))<0.8,1,0)
#separate training and test sets
trainGlass <- Glass[Glass$train==1,]
testGlass <- Glass[Glass$train==0,]
#get column index of train flag
trainColNum <- grep("train",names(trainGlass))
#remove train flag column from train and test sets
trainGlass <- trainGlass[,-trainColNum]
testGlass <- testGlass[,-trainColNum]
#get column index of predicted variable in dataset
typeColNum <- grep("Type",names(Glass))
#build model
rpart_model <- rpart(Type ~.,data = trainGlass, method="class")
#plot tree
plot(rpart_model);text(rpart_model)
#…and the moment of reckoning
rpart_predict <- predict(rpart_model,testGlass[,-typeColNum],type="class")
mean(rpart_predict==testGlass$Type)
[1] 0.6744186

Now, we know that decision tree algorithms tend to display high variance so the hit rate from any one tree is likely to be misleading. To address this we’ll generate a bunch of trees using different training sets (via random sampling) and calculate an average hit rate and spread (or standard deviation).

#function to do multiple runs
multiple_runs <- function(train_fraction,n,dataset){
  fraction_correct <- rep(NA,n)
  set.seed(42)
  for (i in 1:n){
    #use the train_fraction argument rather than a hard-coded split
    dataset[,"train"] <- ifelse(runif(nrow(dataset))<train_fraction,1,0)
    trainColNum <- grep("train",names(dataset))
    typeColNum <- grep("Type",names(dataset))
    trainset <- dataset[dataset$train==1,-trainColNum]
    testset <- dataset[dataset$train==0,-trainColNum]
    rpart_model <- rpart(Type~.,data = trainset, method="class")
    rpart_test_predict <- predict(rpart_model,testset[,-typeColNum],type="class")
    fraction_correct[i] <- mean(rpart_test_predict==testset$Type)
  }
  return(fraction_correct)
}
#50 runs, no pruning
n_runs <- multiple_runs(0.8,50,Glass)
mean(n_runs)
[1] 0.6874315
sd(n_runs)
[1] 0.0530809

The decision tree algorithm gets it right about 69% of the time, with a spread (standard deviation) of about 5 percentage points. The spread isn’t too bad here, but the average accuracy is hardly better than the single run we did earlier (Exercise for the reader: why?). Let’s see if we can do better using random forests.

Random forests

As discussed earlier, a random forest algorithm works by averaging over multiple trees built on bootstrapped samples. It also reduces the correlation between trees by splitting on a random subset of predictors at each node in tree construction. The key parameters for the randomForest algorithm are the number of trees (ntree) and the number of variables to be considered at each split (mtry). The algorithm sets a default of 500 for ntree, and sets mtry to the square root of the number of predictors for classification problems and one-third the number of predictors for regression. These defaults can be overridden by explicitly providing values for these parameters.
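For instance, overriding the defaults might look like this (a hedged sketch; the values below are arbitrary and assume the training set created in the decision tree demo above):

#illustrative only: explicitly setting ntree and mtry (values are arbitrary, not tuned)
Glass.rf_custom <- randomForest(Type ~ ., data = trainGlass, ntree = 2000, mtry = 4)
#the tuneRF() helper in the randomForest package can also be used to search for
#a reasonable mtry, e.g. tuneRF(trainGlass[,-typeColNum], trainGlass$Type)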

The preliminary stuff – the creation of training and test datasets etc. – is much the same as for decision trees but I’ll list the code for completeness.

#load required library – randomForest
library(randomForest)
#mlbench for Glass dataset – load if not already loaded
#library(mlbench)
#load Glass
data("Glass")
#set seed to ensure reproducible results
set.seed(42)
#split into training and test sets
Glass[,"train"] <- ifelse(runif(nrow(Glass))<0.8,1,0)
#separate training and test sets
trainGlass <- Glass[Glass$train==1,]
testGlass <- Glass[Glass$train==0,]
#get column index of train flag
trainColNum <- grep("train",names(trainGlass))
#remove train flag column from train and test sets
trainGlass <- trainGlass[,-trainColNum]
testGlass <- testGlass[,-trainColNum]
#get column index of predicted variable in dataset
typeColNum <- grep("Type",names(Glass))
#build model
Glass.rf <- randomForest(Type ~.,data = trainGlass, importance=TRUE, xtest=testGlass[,-typeColNum],ntree=1000)
#Get summary info
Glass.rf
Call:
 randomForest(formula = Type ~ ., data = trainGlass, importance = TRUE, xtest = testGlass[, -typeColNum], ntree = 1000)
               Type of random forest: classification
                     Number of trees: 1000
No. of variables tried at each split: 3

        OOB estimate of error rate: 23.98%
Confusion matrix:
   1  2 3  5 6  7 class.error
1 40  7 2  0 0  0   0.1836735
2  8 49 1  2 2  1   0.2222222
3  6  3 6  0 0  0   0.6000000
5  0  1 0 11 0  1   0.1538462
6  1  2 0  1 6  0   0.5000000
7  1  2 0  1 0 21   0.1600000

The first thing to note is the out of bag error estimate is ~ 24%.  Equivalently the hit rate is 76%, which is better than the 69% for decision trees. Secondly, you’ll note that the algorithm does a terrible job identifying type 3 and 6 glasses correctly. This could possibly be improved by a technique called boosting, which works by  iteratively improving poor predictions made in earlier stages. I plan to look at boosting in a future post, but if you’re curious, check out the gbm package in R.
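If you’d like a head start on boosting, here’s a hedged sketch of what a gbm call might look like for this dataset — the parameter values are purely illustrative and the multinomial interface may differ across gbm versions:

#a hedged sketch of boosting with gbm (parameters are illustrative, not tuned)
library(gbm)
gbm_model <- gbm(Type ~ ., data = trainGlass, distribution = "multinomial",
                 n.trees = 1000, interaction.depth = 3, shrinkage = 0.01, cv.folds = 5)
#predicted class probabilities for the test set; converting these to class labels
#requires picking the class with the highest probability for each row
gbm_probs <- predict(gbm_model, testGlass[,-typeColNum], n.trees = 1000, type = "response")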

Finally, for completeness, let’s see how the test set does:

#accuracy for test set
mean(Glass.rf$test$predicted==testGlass$Type)
[1] 0.8372093
#confusion matrix
table(Glass.rf$test$predicted,testGlass$Type)
     1  2  3  5  6  7
  1 19  2  0  0  0  0
  2  1  9  1  0  0  0
  3  1  1  1  0  0  0
  5  0  1  0  0  0  0
  6  0  0  0  0  3  0
  7  0  0  0  0  0  4

The test accuracy is better than the out of bag accuracy and there are some differences in the class errors as well. However, overall the two compare quite well and are significantly better than the results of the decision tree algorithm.

Variable importance

Random forest algorithms also give measures of variable importance. Computation of these is enabled by setting  importance, a boolean parameter, to TRUE. The algorithm computes two measures of variable importance: mean decrease in Gini and mean decrease in accuracy. Brief explanations of these follow.

Mean decrease in Gini

When determining splits in individual trees, the algorithm looks for the largest class (in terms of population) and attempts to isolate it first. If this is not possible, it tries to do the best it can, always focusing on isolating the largest remaining class in every split. This is called the Gini splitting rule (see this article for a good explanation of the rule).

The “goodness of split” is measured by the Gini Impurity, I_{G}. For a set containing K categories this is given by:

I_{G} = \sum_{i=1}^{K} f_{i}(1-f_{i})

where f_{i} is the fraction of the set that belongs to the ith category. Clearly, I_{G}  is 0 when the set is homogeneous or pure (1 class only) and is maximum when classes are equiprobable (for example, in a two class set the maximum occurs when f_{1} and f_{2} are 0.5). At each stage the algorithm chooses to split on the predictor that leads to the largest decrease in I_{G}. The algorithm tracks this decrease for each predictor for all splits and all trees in the forest. The average is reported  as the mean decrease in Gini.
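To make the formula concrete, here’s a small helper (hypothetical, not part of the randomForest package) that computes the Gini impurity of a vector of class labels:

#Gini impurity of a vector of class labels
gini_impurity <- function(labels){
  f <- table(labels)/length(labels)  #class fractions f_i
  sum(f*(1-f))
}
#a pure set has impurity 0; a 50/50 two-class set attains the two-class maximum of 0.5
gini_impurity(c("a","a","a"))      #returns 0
gini_impurity(c("a","a","b","b"))  #returns 0.5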

Mean decrease in accuracy

The mean decrease in accuracy is calculated using the out of bag data points for each tree. The procedure goes as follows: when a particular tree is grown, the out of bag points are passed down the tree and the prediction accuracy (based on all out of bag points) is recorded. The values of each predictor are then randomly permuted, one predictor at a time, and the out of bag prediction accuracy is recalculated. The decrease in accuracy for a given predictor is the difference between the accuracy obtained with the original (unpermuted) out of bag data and that obtained after permuting the predictor’s values. This decrease is tracked for each predictor as the algorithm progresses and then averaged across trees to yield a mean decrease in accuracy per predictor.
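The following sketch conveys the idea behind the permutation step. It is purely illustrative: it permutes a predictor in the test set and uses a forest refit without the xtest argument (so that the trees are retained for prediction), whereas randomForest itself does this per tree on the out of bag points:

#illustrative sketch of permutation importance (not how randomForest computes it internally)
rf_fit <- randomForest(Type ~ ., data = trainGlass, ntree = 1000)
baseline_acc <- mean(predict(rf_fit, testGlass) == testGlass$Type)
#permute the values of one predictor, say RI (refractive index), in the test set
permuted_test <- testGlass
permuted_test$RI <- sample(permuted_test$RI)
permuted_acc <- mean(predict(rf_fit, permuted_test) == testGlass$Type)
#the drop in accuracy is a measure of the importance of RI
baseline_acc - permuted_acc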

Variable importance plot

From the above, it would seem that the mean decrease in accuracy is a more global measure as it uses fully constructed trees, in contrast to the Gini measure, which is based on individual splits. In practice, however, there could be other reasons for choosing one over the other…but that is neither here nor there: if you set importance to TRUE, you’ll get both. The numerical measures of importance are returned in the randomForest object (Glass.rf in our case), but I won’t list them here. Instead, I’ll just print out the variable importance plots for the two measures as these give a good visual overview of the relative importance of variables. The code is a simple one-liner:

#variable importance plot
varImpPlot(Glass.rf)

The plot is shown in Figure 1 below.

Figure 1: Variable importance plots

In this case the two measures are pretty consistent so it doesn’t really matter which one you choose.
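If you do want the numbers behind the plots, they can be retrieved from the fitted object via the importance() accessor:

#numerical importance measures (one column per measure)
importance(Glass.rf)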

Wrapping up

Random forests are an example of a general class of techniques called ensemble methods. These techniques are based on the principle that averaging over a large number of not-so-good models yields a more reliable prediction than a single model. This works only if the models in the group are reasonably independent of each other, which is precisely what bootstrap aggregation and predictor subsetting are intended to achieve.
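A toy simulation illustrates why correlation matters (it has nothing to do with the Glass data; the numbers are purely illustrative):

#toy simulation: averaging M predictions reduces variance a lot when the
#predictions are independent, much less when they share a common error
set.seed(42)
M <- 100      #number of models
n_sim <- 5000 #number of simulated repetitions
avg_uncorrelated <- replicate(n_sim, mean(rnorm(M)))
avg_correlated <- replicate(n_sim, {shared <- rnorm(1); mean(sqrt(0.5)*shared + sqrt(0.5)*rnorm(M))})
var(avg_uncorrelated) #roughly 1/M = 0.01
var(avg_correlated)   #stays close to the shared-error variance of 0.5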

Although  considerably more complex than decision trees, the logic behind random forests is not hard to understand. Indeed, the intuitiveness of the algorithm together with its ease of use and accuracy have made it very popular in the machine learning community.

Written by K

September 20, 2016 at 9:44 pm

A gentle introduction to decision trees using R

with 5 comments

Introduction

Most techniques of predictive analytics have their origins in probability or statistical theory (see my post on Naïve Bayes, for example). In this post I’ll look at one that has a more commonplace origin: the way in which humans make decisions. When making decisions, we typically identify the options available and then evaluate them based on criteria that are important to us. The intuitive appeal of such a procedure is in no small measure due to the fact that it can be easily explained through a visual. Consider the following graphic, for example:

Figure 1: Example of a simple decision tree (Courtesy: Duncan Hull)

(Original image: https://www.flickr.com/photos/dullhunk/7214525854, Credit: Duncan Hull)

The tree structure depicted here provides a neat, easy-to-follow description of the issue under consideration and its resolution. The decision procedure is based on asking a series of questions, each of which serves to further reduce the domain of possibilities. The predictive technique I discuss in this post, classification and regression trees (CART), works in much the same fashion. It was invented by Leo Breiman and his colleagues in the 1970s.

In what follows, I will use the open source software, R. If you are new to R, you may want to follow this link for more on the basics of setting up and installing it. Note that the R implementation of the CART algorithm is called RPART (Recursive Partitioning And Regression Trees). This is essentially because Breiman and Co. trademarked the term CART. As some others have pointed out, it is somewhat ironic that the algorithm is now commonly referred to as RPART rather than by the term coined by its inventors.

A bit about the algorithm

The rpart algorithm works by splitting the dataset recursively, which means that the subsets that arise from a split are further split until a predetermined termination criterion is reached.  At each step, the split is made based on the independent variable that results in the largest possible reduction in heterogeneity of the dependent (predicted) variable.

Splitting rules can be constructed in many different ways, all of which are based on the notion of impurity – a measure of the degree of heterogeneity of the leaf nodes. Put another way, a leaf node that contains a single class is homogeneous and has impurity = 0. There are three popular impurity quantification methods: Entropy (aka information gain), Gini Index and Classification Error. Check out this article for a simple explanation of the three methods.

The rpart algorithm offers the entropy  and Gini index methods as choices. There is a fair amount of fact and opinion on the Web about which method is better. Here are some of the better articles I’ve come across:

https://www.quora.com/Are-gini-index-entropy-or-classification-error-measures-causing-any-difference-on-Decision-Tree-classification

http://stats.stackexchange.com/questions/130155/when-to-use-gini-impurity-and-when-to-use-information-gain

https://www.garysieling.com/blog/sklearn-gini-vs-entropy-criteria

http://www.salford-systems.com/resources/whitepapers/114-do-splitting-rules-really-matter

The answer as to which method is the best is: it depends.  Given this, it may be prudent to try out a couple of methods and pick the one that works best for your problem.

Regardless of the method chosen, the splitting rules partition the decision space (a fancy word for the entire dataset) into rectangular regions each of which correspond to a split. Consider the following simple example with two predictors x1 and x2. The first split is at x1=1 (which splits the decision space into two regions x1<1 and x1>1), the second at x2=2, which splits the (x1>1) region into 2 sub-regions, and finally x1=1.5 which splits the (x1>1,x2>2) sub-region further.

Figure 2: Example of partitioning

It is important to note that the algorithm works by making the best possible choice at each particular stage, without any consideration of whether those choices remain optimal in future stages. That is, the algorithm makes a locally optimal decision at each stage. It is thus quite possible that such a choice at one stage turns out to be sub-optimal in the overall scheme of things.  In other words,  the algorithm does not find a globally optimal tree.

Another important point relates to the well-known bias-variance tradeoff in machine learning, which in simple terms is a tradeoff between the degree to which a model fits the training data and its predictive accuracy on unseen data. This refers to the general rule that beyond a point, it is counterproductive to improve the fit of a model to the training data as this increases the likelihood of overfitting. It is easy to see that deep trees are more likely to overfit the data than shallow ones. One obvious way to control such overfitting is to construct shallower trees by stopping the algorithm at an appropriate point based on whether a split significantly improves the fit. Another is to grow a tree unrestricted and then prune it back using an appropriate criterion. The rpart algorithm takes the latter approach.

Here is how it works in brief:

Essentially one minimises the cost, C_{\alpha}(T), a quantity that is a linear combination of the error R(T) (essentially, the fraction of misclassified instances, or the variance in the case of a continuous variable) and the number of leaf nodes in the tree, |\tilde{T}|:

C_{\alpha}(T) = R(T) + \alpha |\tilde{T} |

First, we note that when \alpha = 0, this simply returns the original fully grown tree. As \alpha increases, we incur a penalty that is proportional to the number of leaf nodes.  This tends to cause the minimum cost to occur for a tree that is a subtree of the original one (since a subtree will have a smaller number of leaf nodes). In practice we vary \alpha and pick the value that gives the subtree that results in the smallest cross-validated prediction error.  One does not have to worry about programming this because the rpart algorithm actually computes the errors for different values of \alpha for us. All we need to do is pick the value of the coefficient that gives the lowest cross-validated error. I will illustrate this in detail in the next section.

An implication of their tendency to overfit data is that decision trees tend to be sensitive to relatively minor changes in the training datasets. Indeed, small differences can lead to radically different looking trees. Pruning addresses this to an extent, but does not resolve it completely.  A better resolution is offered by the so-called ensemble methods that average over many differently constructed trees. I’ll discuss one such method at length in a future post.

Finally, I should also mention that decision trees can be used for both classification and regression problems (i.e. those in which the predicted variable is discrete and continuous respectively).  I’ll demonstrate both types of problems in the next two sections.

Classification trees using rpart

To demonstrate classification trees, we’ll use the Ionosphere dataset available in the mlbench package in R. I have chosen this dataset because it nicely illustrates the points I wish to make in this post. In general, you will almost always find that algorithms that work fine on classroom datasets do not work so well in the real world…but of course, you know that already!

We begin by setting the working directory, loading the required packages (rpart and mlbench) and then loading the Ionosphere dataset.

#set working directory if needed (modify path as needed)
setwd("C:/Users/Kailash/Documents/decisiontrees")
#load required libraries – rpart for classification and regression trees
library(rpart)
#mlbench for Ionosphere dataset
library(mlbench)
#load Ionosphere
data("Ionosphere")

Next we separate the data into training and test sets. We’ll use the former to build the model and the latter to test it. To do this, I use a simple scheme wherein I randomly select 80% of the data for the training set and assign the remainder to the test data set. This is easily done in a single R statement that invokes the uniform distribution (runif) and the vectorised function, ifelse. Before invoking runif, I set the seed to my favourite integer in order to ensure reproducibility of results.

#set seed to ensure reproducible results
set.seed(42)
#split into training and test sets
Ionosphere[,"train"] <- ifelse(runif(nrow(Ionosphere))<0.8,1,0)
#separate training and test sets
trainset <- Ionosphere[Ionosphere$train==1,]
testset <- Ionosphere[Ionosphere$train==0,]
#get column index of train flag
trainColNum <- grep("train",names(trainset))
#remove train flag column from train and test sets
trainset <- trainset[,-trainColNum]
testset <- testset[,-trainColNum]

In the above, I have also removed the training flag from the training and test datasets.

Next we invoke rpart. I strongly recommend you take some time to go through the documentation and understand the parameters and their default values. Note that we need to remove the predicted variable from the dataset before passing the latter on to the algorithm, which is why we need to find the column index of the predicted variable (first line below). Also note that we set the method parameter to “class“, which simply tells the algorithm that the predicted variable is discrete. Finally, rpart uses the Gini rule for splitting by default, and we’ll stick with this option.

#get column index of predicted variable in dataset
typeColNum <- grep("Class",names(Ionosphere))
#build model
rpart_model <- rpart(Class~.,data = trainset, method="class")
#plot tree
plot(rpart_model);text(rpart_model)

 

The resulting plot is shown in Figure 3 below.  It is  quite self-explanatory so I  won’t dwell on it here.

Figure 3: A classification tree for Ionosphere dataset

Next we see how good the model is by seeing how it fares against the test data.

#…and the moment of reckoning
rpart_predict <- predict(rpart_model,testset[,-typeColNum],type="class")
mean(rpart_predict==testset$Class)
[1] 0.8450704
#confusion matrix
table(pred=rpart_predict,true=testset$Class)
      true
pred   bad good
  bad   17    2
  good   9   43

 

Note that we need to verify the above results by doing multiple runs, each using different training and test sets. I will  do this later, after discussing pruning.

Next, we prune the tree using the cost complexity criterion. Basically, the intent is to see if a shallower subtree can give us comparable results. If so, we’d be better off choosing the shallower tree because it reduces the likelihood of overfitting.

As described earlier, we choose the appropriate pruning parameter (aka cost-complexity parameter) \alpha by picking the value that results in the lowest prediction error. Note that all relevant computations have already been carried out by R when we built the original tree (the call to rpart in the code above). All that remains now is to pick the value of \alpha:

#cost-complexity pruning
printcp(rpart_model)
    CP nsplit rel error xerror     xstd
1 0.57      0      1.00   1.00 0.080178
2 0.20      1      0.43   0.46 0.062002
3 0.02      2      0.23   0.26 0.048565
4 0.01      4      0.19   0.35

It is clear from the above that the lowest cross-validation error (xerror in the table) occurs for \alpha = 0.02 (this is the CP column in the table above). One can find the optimal CP programmatically like so:

# get index of CP with lowest xerror
opt <- which.min(rpart_model$cptable[,"xerror"])
#get its value
cp <- rpart_model$cptable[opt, "CP"]

Next, we prune the tree based on this value of CP:

#prune tree
pruned_model <- prune(rpart_model,cp)
#plot tree
plot(pruned_model);text(pruned_model)

Note that rpart will use a default CP value of 0.01 if you don’t specify one in prune.

The pruned tree is shown in Figure 4 below.

Figure 4: A pruned classification tree for Ionosphere dataset

Let’s see how this tree stacks up against the fully grown one shown in Fig 3.

#find proportion of correct predictions using test set
rpart_pruned_predict <- predict(pruned_model,testset[,-typeColNum],type="class")
mean(rpart_pruned_predict==testset$Class)
[1] 0.8873239

This seems like an improvement over the unpruned tree, but one swallow does not a summer make. We need to check that this holds up for different training and test sets. This is easily done by creating multiple random partitions of the dataset and checking the efficacy of pruning for each. To do this efficiently, I’ll create a function that takes the training fraction, number of runs (partitions) and the name of the dataset as inputs and outputs the proportion of correct predictions for each run. It also optionally prunes the tree. Here’s the code:

#function to do multiple runs
multiple_runs_classification <- function(train_fraction,n,dataset,prune_tree=FALSE){
  fraction_correct <- rep(NA,n)
  set.seed(42)
  for (i in 1:n){
    #use the train_fraction argument rather than a hard-coded split
    dataset[,"train"] <- ifelse(runif(nrow(dataset))<train_fraction,1,0)
    trainColNum <- grep("train",names(dataset))
    typeColNum <- grep("Class",names(dataset))
    trainset <- dataset[dataset$train==1,-trainColNum]
    testset <- dataset[dataset$train==0,-trainColNum]
    rpart_model <- rpart(Class~.,data = trainset, method="class")
    if(prune_tree==FALSE) {
      rpart_test_predict <- predict(rpart_model,testset[,-typeColNum],type="class")
      fraction_correct[i] <- mean(rpart_test_predict==testset$Class)
    }else{
      opt <- which.min(rpart_model$cptable[,"xerror"])
      cp <- rpart_model$cptable[opt, "CP"]
      pruned_model <- prune(rpart_model,cp)
      rpart_pruned_predict <- predict(pruned_model,testset[,-typeColNum],type="class")
      fraction_correct[i] <- mean(rpart_pruned_predict==testset$Class)
    }
  }
  return(fraction_correct)
}

Note that in the above, I have set the default value of prune_tree to FALSE, so the function will execute the first branch of the if statement unless the default is overridden.

OK, so let’s do 50 runs with and without pruning, and check the mean and variance of the results for both sets of runs.

#50 runs, no pruning
unpruned_set <- multiple_runs_classification(0.8,50,Ionosphere)
mean(unpruned_set)
[1] 0.8772763
sd(unpruned_set)
[1] 0.03168975
#50 runs, with pruning
pruned_set <- multiple_runs_classification(0.8,50,Ionosphere,prune_tree=TRUE)
mean(pruned_set)
[1] 0.9042914
sd(pruned_set)
[1] 0.02970861

So we see that there is an improvement of about 3% with pruning. Also, if you were to plot the trees as we did earlier, you would see that this improvement is achieved with shallower trees. Again, I point out that this is not always the case. In fact, it often happens that pruning results in worse predictions, albeit with better reliability – a classic illustration of the bias-variance tradeoff.

Regression trees using rpart

In the previous section we saw how one can build decision trees for situations in which the predicted variable is discrete.  Let’s now look at the case in which the predicted variable is continuous. We’ll use the Boston Housing dataset from the mlbench package.  Much of the discussion of the earlier section applies here, so I’ll just display the code, explaining only the differences.

#load Boston Housing dataset
data("BostonHousing")
#set seed to ensure reproducible results
set.seed(42)
#split into training and test sets
BostonHousing[,"train"] <- ifelse(runif(nrow(BostonHousing))<0.8,1,0)
#separate training and test sets
trainset <- BostonHousing[BostonHousing$train==1,]
testset <- BostonHousing[BostonHousing$train==0,]
#get column index of train flag
trainColNum <- grep("train",names(trainset))
#remove train flag column from train and test sets
trainset <- trainset[,-trainColNum]
testset <- testset[,-trainColNum]

Next we invoke rpart, noting that the predicted variable is medv (median value of owner-occupied homes in $1000 units) and that we need to set the method parameter to “anova“. The latter tells rpart that the predicted variable is continuous (i.e that this is a regression problem).

#build model
rpart_model <- rpart(medv~.,data = trainset, method="anova")
#plot tree
plot(rpart_model);text(rpart_model)

The plot of the tree is shown in Figure 5 below.

Figure 5: A regression tree for Boston Housing dataset

Next, we need to see how good the predictions are. Since the dependent variable is continuous, we cannot compare the predictions directly against the test set. Instead, we calculate the root mean square (RMS) error. To do this, we request rpart to output the predictions as a vector – one prediction per record in the test dataset. The RMS error can then easily be calculated by comparing this vector with the medv column in the test dataset.

Here is the relevant code:

#…the moment of reckoning
#get column index of the predicted variable (medv)
resultColNum <- grep("medv",names(testset))
rpart_test_predict <- predict(rpart_model,testset[,-resultColNum],type = "vector")
#calculate RMS error
rmsqe <- sqrt(mean((rpart_test_predict-testset$medv)^2))
rmsqe
[1] 4.586388

Again, we need to do multiple runs to check on the  reliability of the predictions. However, you already know how to do that so I will leave it to you.

Moving on, we prune the tree using the cost complexity criterion as before.  The code is exactly the same as in the classification problem.

# get index of CP with lowest xerror
opt <- which.min(rpart_model$cptable[,"xerror"])
#get its value
cp <- rpart_model$cptable[opt, "CP"]
#prune tree
pruned_model <- prune(rpart_model,cp)
#plot tree
plot(pruned_model);text(pruned_model)

The tree is unchanged so I won’t show it here. This means that, as far as cost complexity pruning is concerned, the optimal subtree is the same as the original tree. To confirm this, we’d need to do multiple runs as before – something that I’ve already left as an exercise for you :). Basically, you’ll need to write a function analogous to the one above that computes the root mean square error instead of the proportion of correct predictions; a possible sketch is shown below.
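This is hypothetical code (not from the original demo), with the predicted variable medv hard-coded for the BostonHousing dataset:

#a possible sketch of the regression analogue of the earlier function
multiple_runs_regression <- function(train_fraction,n,dataset,prune_tree=FALSE){
  rms_error <- rep(NA,n)
  set.seed(42)
  for (i in 1:n){
    dataset[,"train"] <- ifelse(runif(nrow(dataset))<train_fraction,1,0)
    trainColNum <- grep("train",names(dataset))
    trainset <- dataset[dataset$train==1,-trainColNum]
    testset <- dataset[dataset$train==0,-trainColNum]
    rpart_model <- rpart(medv~.,data = trainset, method="anova")
    if(prune_tree==TRUE){
      opt <- which.min(rpart_model$cptable[,"xerror"])
      rpart_model <- prune(rpart_model,rpart_model$cptable[opt,"CP"])
    }
    predictions <- predict(rpart_model,testset,type="vector")
    rms_error[i] <- sqrt(mean((predictions-testset$medv)^2))
  }
  return(rms_error)
}
#example usage: 50 runs without pruning
#rmse_runs <- multiple_runs_regression(0.8,50,BostonHousing)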

Wrapping up

This brings us to the end of my introduction to classification and regression trees using R.  Unlike some articles on the topic I have attempted to describe each of the steps in detail and provide at least some kind of a rationale for them. I hope you’ve found the description and code snippets useful.

I’ll end by reiterating a couple of points I made early in this piece. The nice thing about decision trees is that they are easy to explain to the users of our predictions. This is primarily because they reflect the way we think about how decisions are made in real life – via a set of binary choices based on appropriate criteria. That said, in many practical situations decision trees turn out to be unstable: small changes in the dataset can lead to wildly different trees. It turns out that this limitation can be addressed by building a variety of trees using different starting points and then averaging over them. This is the domain of the so-called random forest algorithm. We’ll make the journey from decision trees to random forests in a future post.

Postscript, 20th September 2016: I finally got around to finishing my article on random forests.

Written by K

February 16, 2016 at 6:33 pm

A gentle introduction to Naïve Bayes classification using R

with 14 comments

Preamble

One of the key problems of predictive analytics is to classify entities or events based on a knowledge of their attributes. An example: one might want to classify customers into two categories, say, ‘High Value’ or ‘Low Value,’ based on a knowledge of their buying patterns. Another example: to figure out the party allegiances of representatives based on their voting records. And yet another: to predict the species of a particular plant or animal specimen based on a list of its characteristics. Incidentally, if you haven’t been there already, it is worth having a look at Kaggle to get an idea of some of the real world classification problems that people tackle using techniques of predictive analytics.

Given the importance of classification-related problems, it is no surprise that analytics tools offer a range of options. My favourite (free!) tool, R, is no exception: it has a plethora of state of the art packages designed to handle a wide range of problems. One of the problems with this diversity of choice is that it is often confusing for beginners to figure out which one to use in a particular situation. Over the next several months, I intend to write up tutorial articles covering many of the common algorithms, with a particular focus on their strengths and weaknesses; explaining where they work well and where they don’t. I’ll kick-off this undertaking with a simple yet surprisingly effective algorithm – the Naïve Bayes classifier.

Just enough theory

I’m going to assume you have R and RStudio installed on your computer. If you need help with this, please follow the instructions here.

To introduce the Naive Bayes algorithm, I will use the HouseVotes84 dataset, which contains US congressional voting records for 1984. The data set is in the mlbench package which is not part of the base R installation. You will therefore need to install it if you don’t have it already.  Package installation is a breeze in RStudio – just go to Tools > Install Packages and follow the prompts.

The HouseVotes84 dataset describes how 435 representatives voted – yes (y), no (n) or unknown (NA) – on 16 key issues presented to Congress.  The dataset also provides the party affiliation of each representative – democrat or republican.

Let’s begin by exploring the dataset. To do this, we load mlbench, fetch the dataset and get some summary stats on it. (Note: a complete listing of the code in this article can be found here)

#load mlbench library
library(mlbench)
#set working directory if needed (modify path as needed)
setwd("C:/Users/Kailash/Documents/NaiveBayes")
#load HouseVotes84 dataset
data("HouseVotes84")

It is good to begin by exploring the data visually.  To this end, let’s do some bar plots using the basic graphic capabilities of R:

#barplots for specific issue
plot(as.factor(HouseVotes84[,2]))
title(main="Votes cast for issue", xlab="vote", ylab="# reps")
#by party
plot(as.factor(HouseVotes84[HouseVotes84$Class=='republican',2]))
title(main="Republican votes cast for issue 1", xlab="vote", ylab="# reps")
plot(as.factor(HouseVotes84[HouseVotes84$Class=='democrat',2]))
title(main="Democrat votes cast for issue 1", xlab="vote", ylab="# reps")

The plots are shown in Figures 1 through 3.

Fig 1: y and n votes for issue 1

Fig 2: Republican votes for issue 1.

Fig 3: Democrat votes for issue 1.

Among other things, such plots give us a feel for the probabilities associated with how representatives from parties tend to vote on specific issues.

The classification problem at hand is to figure out the party affiliation from a knowledge of voting patterns. For simplicity let us assume that there are only 3 issues voted on instead of the 16 in the actual dataset. In concrete terms we wish to answer the question, “what is the probability that a representative is, say, a democrat (D) given that he or she has voted, say,  (v1 = y, v2=n,v3 = y) on the three issues?” To keep things simple I’m assuming there are no NA values.

In the notation of conditional probability this can be written as,

P(D|v1=y, v2=n,v3=y)

(Note:  If you need a refresher on conditional probability, check out this post for a simple explanation.)

By Bayes theorem, which I’ve explained at length in this post, this can be recast as,

P(D|v1=y, v2=n,v3=y) = \displaystyle \frac{p(D) p(v1=y, v2=n,v3=y|D)}{p(v1=y, v2=n,v3=y)}......(1)

We’re interested only in relative probabilities of the representative being a democrat or republican because the predicted party affiliation depends only on which of the two probabilities is larger (the actual value of the probability is not important). This being the case, we can factor out any terms that are constant.  As it happens, the denominator of the above equation – the probability of a particular voting pattern – is a constant because it depends on the total number of representatives (from both parties)  who voted a particular way.

Now, using the chain rule of conditional probability, we can rewrite the numerator as:

p(D) p(v1=y, v2=n,v3=y|D)

= p(D)p(v1=y|D) p(v2=n,v3=y|D,v1=y)

Basically, the second factor on the left hand side, p(v1=y, v2=n,v3=y|D), is the probability of getting a particular voting pattern (y,n,y) assuming the rep is a Democrat (D). The definition of conditional probability allows us to factor out the probability of a y vote on issue v1, leaving the probability of getting an n vote for issue v2 and a y vote for issue v3 given that the rep is a Democrat who has voted y on issue v1. Again, this is simply a consequence of the definition of conditional probability.

Another application of the chain rule gives:

p(D) p(v1=y, v2=n,v3=y|D)

= p(D)p(v1=y|D) p(v2=n|D,v1=y) p(v3=y|D,v1=y,v2=n)

Where we have now factored out the n vote on the second issue.

The key assumption of Naïve Bayes is that the conditional probability of each feature given the class is independent of all other features. In mathematical terms this means that,

p(v2=n|D,v1=y) = p(v2=n|D)

and

p(v3=y|D,v1=y,v2=n) = p(v3=y|D)

The quantity of interest, the numerator of equation (1) can then be written as:

p(D) p(v1=y, v2=n,v3=y|D)

= p(D)p(v1=y|D)p(v2=n|D)p(v3=y|D).......(2)
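To make equation (2) concrete, here’s a toy calculation in R with made-up probabilities — the numbers are purely illustrative:

#toy illustration of equation (2) with made-up probabilities
p_D <- 0.6; p_R <- 0.4          #priors
p_v_given_D <- c(0.9, 0.2, 0.8) #p(v1=y|D), p(v2=n|D), p(v3=y|D)
p_v_given_R <- c(0.3, 0.7, 0.4) #p(v1=y|R), p(v2=n|R), p(v3=y|R)
#unnormalised scores - the class with the larger score is the prediction
score_D <- p_D*prod(p_v_given_D)
score_R <- p_R*prod(p_v_given_R)
ifelse(score_D > score_R, "democrat", "republican")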

The assumption of independent conditional probabilities is a drastic one. What it is saying is that the features are completely independent of each other, given the class. This is clearly not the case in the situation above: how representatives vote on a particular issue is coloured by their beliefs and values. For example, the conditional probabilities of voting patterns on socially progressive issues are definitely not independent of each other. However, as we shall see in the next section, the Naïve Bayes assumption works well for this problem, as it does in many other situations where we know upfront that it is grossly incorrect.

Another good example of the unreasonable efficacy of Naive Bayes is in spam filtering.  In the case of spam, the features are individual words in an email.  It is clear that certain word combinations tend to show up consistently in spam – for example, “online”, “meds”, “Viagra” and “pharmacy.” In other words, we know upfront that their occurrences are definitely not independent of each other. Nevertheless, Naïve Bayes based spam detectors which assume mutual independence of features do remarkably well in distinguishing spam from ham.

Why is this so?

To explain why, I return to a point I mentioned earlier: to figure out the affiliation associated with a particular voting pattern (say, v1=y, v2=n, v3=y) one only needs to know which of the two probabilities p(R|v1=y, v2=n, v3=y) and p(D|v1=y, v2=n, v3=y) is greater. That is, the values of these probabilities are not important in determining the party affiliations.

This hints at why the independence assumption might not be quite so idiotic. Since the prediction depends only on the maximum, the algorithm will get it right even if there are dependencies between features, provided the dependencies do not change which class has the maximum probability (once again, note that only the maximal class is important here, not the value of the maximum).

Yet another reason for the surprising success of Naïve Bayes is that dependencies often cancel out across a large set of features. But, of course, there is no guarantee that this will always happen.

In general, Naïve Bayes algorithms work better for problems in which the dependent (predicted) variable is discrete, even when there are dependencies between features (spam detection is a good example).  They work less well for regression problems – i.e those in  which predicted variables are continuous.

I hope the above has given you an intuitive feel for how Naïve Bayes algorithms work. I don’t know about you, but my head’s definitely spinning after writing out all that mathematical notation.

It’s time to clear our heads by doing some computation.

Naïve Bayes in action

There are a couple of well-known implementations of Naïve Bayes in R. One of them is the naiveBayes method in the e1071 package and the other is NaiveBayes method in  the klaR package.  I’ll use the former for no other reason than it seems to be more popular. That said, I have used the latter too and can confirm that it works just as well.

We’ve already loaded and explored the HouseVotes84 dataset. One of the things you may have noticed when summarising the data is  that there are a fair number of NA values. Naïve Bayes algorithms typically handle NA values either by ignoring records that contain any NA values or by ignoring just the NA values. These choices are indicated by the value of the variable na.action in the naiveBayes algorithm, which is set to na.omit (to ignore the record) or na.pass (to ignore the value).

Just for fun, we’ll take a different approach. We’ll impute NA values for a given issue and party by looking at how other representatives from the same party voted on the issue. This is very much in keeping with the Bayesian spirit: we infer unknowns based on a justifiable belief – that is, belief based on the evidence.

To do this I write two functions: one to  compute the number of NA values for a given issue (vote) and class (party affiliation), and the other to calculate the fraction of yes votes for a given issue (column) and class (party affiliation).

#Functions needed for imputation
#function to return number of NAs by vote and class (democrat or republican)
na_by_col_class <- function (col,cls){return(sum(is.na(HouseVotes84[,col]) & HouseVotes84$Class==cls))}
#function to compute the conditional probability that a member of a party will cast a 'yes' vote for
#a particular issue. The probability is based on all members of the party who actually cast a vote
#on the issue (ignores NAs).
p_y_col_class <- function(col,cls){
  sum_y <- sum(HouseVotes84[,col]=='y' & HouseVotes84$Class==cls,na.rm = TRUE)
  sum_n <- sum(HouseVotes84[,col]=='n' & HouseVotes84$Class==cls,na.rm = TRUE)
  return(sum_y/(sum_y+sum_n))
}
#Check that functions work!
> p_y_col_class(2,'democrat')
[1] 0.6046512
> p_y_col_class(2,'republican')
[1] 0.1878788
> na_by_col_class(2,'democrat')
[1] 9
> na_by_col_class(2,'republican')
[1] 3

Before proceeding,  you might want to go back to the data and convince yourself that these values are sensible.

We can now impute the NA values based on the above. We do this by randomly assigning values ( y or n) to NAs, based on the proportion of members of a party who have voted y or n. In practice, we do this by invoking the uniform distribution and setting an NA value to y if the random number returned is less than the probability of a yes vote and to n otherwise. This is not as complicated as it sounds; you should be able to figure the logic out from the code below.

#impute missing values.
for (i in 2:ncol(HouseVotes84)) {
  if(sum(is.na(HouseVotes84[,i]))>0) {
    c1 <- which(is.na(HouseVotes84[,i]) & HouseVotes84$Class=='democrat',arr.ind = TRUE)
    c2 <- which(is.na(HouseVotes84[,i]) & HouseVotes84$Class=='republican',arr.ind = TRUE)
    HouseVotes84[c1,i] <-
      ifelse(runif(na_by_col_class(i,'democrat'))<p_y_col_class(i,'democrat'),'y','n')
    HouseVotes84[c2,i] <-
      ifelse(runif(na_by_col_class(i,'republican'))<p_y_col_class(i,'republican'),'y','n')
  }
}

Note that the which function filters  indices by the criteria specified in the arguments and ifelse is a vectorised conditional function which enables us to apply logical criteria to multiple elements of a vector.

At this point it is a good idea to check that the NAs in each column have been set according to the voting patterns of non-NAs for a given party. You can use the p_y_col_class() function to check that the new probabilities are close to the old ones. You might want to do this before you proceed any further.
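For example (your numbers will differ slightly from run to run because the imputation is random):

#sanity check: recompute the proportion of yes votes after imputation;
#these should be close to the pre-imputation values computed earlier
p_y_col_class(2,'democrat')
p_y_col_class(2,'republican')
#and there should be no NAs left in this column
sum(is.na(HouseVotes84[,2]))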

The next step is to divide the available data into training and test datasets. The former will be used to train the algorithm and produce a predictive model. The effectiveness of the model will then be tested using the test dataset. There is a great deal of science and art behind the creation of training and testing datasets. An important consideration is that both sets must contain records that are representative of the entire dataset. This can be difficult to do, especially when data is scarce and there are predictors that do not vary much…or vary wildly for that matter. On the other hand, problems can also arise when there are redundant predictors. Indeed, much of the art of successful prediction lies in figuring out which predictors are likely to lead to better predictions, an area known as feature selection. However, that’s a topic for another time. Our current dataset does not suffer from any of these complications, so we’ll simply divide it in an 80/20 proportion, assigning the larger number of records to the training set.

#divide into test and training sets
#create new col "train" and assign 1 or 0 in 80/20 proportion via random uniform dist
HouseVotes84[,"train"] <- ifelse(runif(nrow(HouseVotes84))<0.80,1,0)
#get col number of train / test indicator column (needed later)
trainColNum <- grep("train",names(HouseVotes84))
#separate training and test sets and remove training column before modeling
trainHouseVotes84 <- HouseVotes84[HouseVotes84$train==1,-trainColNum]
testHouseVotes84 <- HouseVotes84[HouseVotes84$train==0,-trainColNum]

Now we’re finally good to build our Naive Bayes model (machine learning folks call this model training rather than model building – and I have to admit, it does sound a lot cooler).

The code to train the model is anticlimactically simple:

#load e1071 library and invoke naiveBayes method
library(e1071)
nb_model <- naiveBayes(Class~.,data = trainHouseVotes84)

Here we’ve invoked the naiveBayes method from the e1071 package. The first argument uses R’s formula notation. In this notation, the dependent variable (to be predicted) appears on the left hand side of the ~ and the independent variables (predictors or features) are on the right hand side. The dot (.) is simply shorthand for “all variables other than the dependent one.” The second argument is the dataframe that contains the training data. Check out the documentation for the other arguments of naiveBayes; it will take me too far afield to cover them here. Incidentally, you can take a look at the model using the summary() or str() functions, or even just by entering the model name in the R console:

nb_model
summary(nb_model)
str(nb_model)

Note that I’ve suppressed the output above.

Now that we have a model, we can do some predicting. We do this by feeding our test data into our model and comparing the predicted party affiliations with the known ones. The latter is done via the wonderfully named confusion matrix – a table in which true and predicted values for each of the predicted classes are displayed in a matrix format. This again is just a couple of lines of code:

#…and the moment of reckoning
nb_test_predict <- predict(nb_model,testHouseVotes84[,-1])
#confusion matrix
table(pred=nb_test_predict,true=testHouseVotes84$Class)
            true
pred         democrat republican
  democrat         38          3
  republican        5         22

The numbers you get will be different because your training/test sets are almost certainly different from mine.

In the confusion matrix (as defined above), the true values are in columns and the predicted values in rows. So, the algorithm has correctly classified 38 out of 43 (i.e. 38+5) Democrats and 22 out of 25 (i.e. 22+3) Republicans. That’s pretty decent. However, we need to keep in mind that this could well be a quirk of the particular choice of training and test sets. To address this, we should get a numerical measure of the efficacy of the algorithm for different training and testing datasets. A simple measure of efficacy would be the fraction of predictions that the algorithm gets right. For the training/testing set above, this is simply 60/68 (see the confusion matrix above). The simplest way to calculate this in R is:

#fraction of correct predictions
mean(nb_test_predict==testHouseVotes84$Class)
[1] 0.8823529

A natural question to ask at this point is: how good is this prediction? This question cannot be answered with only a single run of the model; we need to do many runs and look at the spread of the results. To do this, we’ll create a function which takes the training fraction and the number of times the model should be run as inputs and spits out a vector containing the proportion of correct predictions for each run. Here’s the function:

#function to create, run and record model results
nb_multiple_runs <- function(train_fraction,n){
  fraction_correct <- rep(NA,n)
  for (i in 1:n){
    HouseVotes84[,"train"] <- ifelse(runif(nrow(HouseVotes84))<train_fraction,1,0)
    trainColNum <- grep("train",names(HouseVotes84))
    trainHouseVotes84 <- HouseVotes84[HouseVotes84$train==1,-trainColNum]
    testHouseVotes84 <- HouseVotes84[HouseVotes84$train==0,-trainColNum]
    nb_model <- naiveBayes(Class~.,data = trainHouseVotes84)
    nb_test_predict <- predict(nb_model,testHouseVotes84[,-1])
    fraction_correct[i] <- mean(nb_test_predict==testHouseVotes84$Class)
  }
  return(fraction_correct)
}

I’ve not commented the above code as it is essentially a repeat of the steps described earlier. Also, note that I have not made any effort to make the code generic or efficient.

Let’s do 20 runs with the same training fraction (0.8) as before:

#20 runs, 80% of data randomly selected for training set in each run
fraction_correct_predictions <- nb_multiple_runs(0.8,20)
fraction_correct_predictions
[1] 0.9417476 0.9036145 0.9294118 0.9302326 0.9213483 0.9404762 0.8777778 0.9102564
[9] 0.9102564 0.9080460 0.9139785 0.9200000 0.9090909 0.9239130 0.9605263 0.9333333
[17] 0.9052632 0.8977273 0.9642857 0.8518519
#summary of results
summary(fraction_correct_predictions)
Min. 1st Qu. Median Mean 3rd Qu. Max.
0.8519 0.9074 0.9170 0.9177 0.9310 0.9643
#standard deviation
sd(fraction_correct_predictions)
[1] 0.02582419

We see that the outcomes of the runs are quite close together, in the 0.85 to 0.96 range, with a standard deviation of about 0.026. This tells us that Naive Bayes does a pretty decent job with this data.

Wrapping up

I originally intended to cover a few more case studies in this post, a couple of which highlight the shortcomings of the Naive Bayes algorithm. However, I realize that doing so would make this post unreasonably long, so I’ll stop here with a few closing remarks, and a promise to write up the rest of the story in a subsequent post.

To sum up: I have illustrated the use of a popular Naive Bayes implementation in R and attempted to convey an intuition for how the algorithm works.  As we have seen, the algorithm works quite well in the example case, despite the violation of the assumption of independent conditional probabilities.

The reason for the unreasonable effectiveness of the algorithm is two-fold. Firstly, the algorithm picks the predicted class based on the largest predicted probability, so ordering is more important than the actual value of the probability. Secondly, in many cases, a bias one way for a particular vote may well be counteracted by a bias the other way for another vote. That is, biases tend to cancel out, particularly if there are a large number of features.

That said, there are many cases in which the algorithm fails miserably – and we’ll look at some of these in a future post.  However, despite its well known shortcomings, Naive Bayes is often the first port of call in prediction problems simply because it is easy to set up and is fast compared to many of the iterative algorithms we will explore later in this series of articles.

Endnote

Thanks for reading! If you liked this piece, you might enjoy the other articles in my “Gentle introduction to analytics using R” series. Here are the links:

A gentle introduction to text mining using R

A gentle introduction to cluster analysis using R

A gentle introduction to topic modeling using R

Written by K

November 6, 2015 at 7:33 am
