
An intuitive introduction to support vector machines using R – Part 1


About a year ago, I wrote a piece on support vector machines as a part of my gentle introduction to data science R series. So it is perhaps appropriate to begin this piece with a few words about my motivations for writing yet another article on the topic.

Late last year, a curriculum lead at DataCamp got in touch to ask whether I’d be interested in developing a course on SVMs for them.

My answer was, obviously, an enthusiastic “Yes!”

Instead of rehashing what I had done in my previous article, I thought it would be interesting to try an approach that focuses on building an intuition for how the algorithm works using examples of increasing complexity, supported by visualisation rather than math.  This post is the first part of a two-part series based on this approach.

The article builds up some basic intuitions about support vector machines (abbreviated henceforth as SVM) and then focuses on linearly separable problems. Part 2 (to be released at a future date) will deal with radially separable and more complex data sets. The focus throughout is on developing an understanding of what the algorithm does rather than the technical details of how it does it.

Prerequisites for this series are a basic knowledge of R and some familiarity with the ggplot package. However, even if you don’t have the latter, you should be able to follow much of what I cover so I encourage you to press on regardless.

<advertisement> if you have a DataCamp account, you may want to check out my course on support vector machines using R. Chapters 1 and 2 of the course closely follow the path I take in this article. </advertisement>

A one dimensional example

A soft drink manufacturer has two brands of their flagship product: Choke (sugar content of 11g/100ml) and Choke-R (sugar content 8g/100 ml). The actual sugar content can vary quite a bit in practice so it can sometimes be hard to figure out the brand given the sugar content.  Given sugar content data for 25 samples taken randomly from both populations (see file sugar_content.xls), our task is to come up with a decision rule for determining the brand.

Since this is a one-variable problem, the simplest way to discern whether the samples fall into distinct groups is through visualisation. Here's one way to do this using ggplot:

#load required library
library(ggplot2)
#load data from sugar_content.csv (remember to save the Excel file as a csv!)
drink_samples <- read.csv(file="sugar_content.csv")
#create plot
p <- ggplot(data=drink_samples, aes(x=drink_samples$sugar_content, y=c(0)))
p <- p + geom_point() + geom_text(label=drink_samples$sugar_content,size=2.5, vjust=2, hjust=0.5)
#display it
p

…and here’s the resulting plot:


Figure 1: Sugar content of samples

Note that we’ve simulated a one-dimensional plot by setting all the y values to 0.

From the plot, it is evident that the samples fall into distinct groups: low sugar content, bounded above by the 8.8 g/100ml sample and high sugar content, bounded below by the 10 g/100ml sample.

Clearly, any point that lies between these two extremes is an acceptable decision boundary. We could, for example, pick 9.1 g/100ml and 9.7 g/100ml. Here's the R code with those points added in. Note that we've made the points a bit bigger and coloured them red to distinguish them from the sample points.

#p created in previous code block!
#dataframe to hold separators
d_bounds <- data.frame(sep=c(9.1,9.7))
#add layer containing decision boundaries to previous plot
p <- p + geom_point(data=d_bounds, aes(x=d_bounds$sep, y=c(0)), colour="red", size=3)
#add labels for candidate decision boundaries
p <- p + geom_text(data=d_bounds, aes(x=d_bounds$sep, y=c(0)),
label=d_bounds$sep, size=2.5,
vjust=2, hjust=0.5, colour="red")
#display plot
p

And here’s the plot:

Figure 2: Plot showing example decision boundaries (in red)

Now, a bit about the decision rule. Say we pick the first point, 9.1 g/100ml, as the decision boundary; our classifier (in R) would then be:

ifelse(drink_samples$sugar_content < 9.1, "Choke-R", "Choke")

The other one is left for you as an exercise.

Now, it is pretty clear that although either of these points defines an acceptable decision boundary, neither of them is the best. Let's try to formalise our intuitive notion as to why this is so.

The margin is the distance between the points in the two classes that are closest to the decision boundary. In the case at hand, the margin is 1.2 g/100ml, which is the difference between the two extreme points at 8.8 g/100ml (Choke-R) and 10 g/100ml (Choke). It should be clear that the best separator is the one that lies halfway between the two extreme points. This is called the maximum margin separator. The maximum margin separator in the case at hand is simply the average of the two extreme points:

#dataframe to hold max margin separator
mm_sep <- data.frame(sep=c((8.8+10)/2))
#add layer containing max margin separator to previous plot
p <- p +
geom_point(data=mm_sep,aes(x=mm_sep$sep, y=c(0)), colour="blue", size=4)
#display plot
p

And here’s the plot:

Figure 3: Plot showing maximum margin separator (in blue)

We are dealing with a one dimensional problem here so the decision boundary is a point. In a moment we will generalise this to a two dimensional case in which the boundary is a straight line.

Let’s close this section with some general points.

Remember this is a sample not the entire population, so it is quite possible (indeed likely) that there will be as yet unseen samples of Choke-R and Choke that have a sugar content greater than 8.8 and less than 10 respectively.  So, the best classifier is one that lies at the greatest possible distance from both classes. The maximum margin separator is that classifier.
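
For example, the decision rule based on the maximum margin separator (9.4 g/100ml) follows the same pattern as the classifier we wrote earlier:

#decision rule using the maximum margin separator
ifelse(drink_samples$sugar_content < (8.8+10)/2, "Choke-R", "Choke")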

This toy example serves to illustrate the main aim of SVMs, which is to find an optimal separation boundary in the sense described here. However, doing this for real life problems is not so simple because life is not one dimensional. In the remainder of this article and its yet-to-be-written sequel, we will work through examples of increasing complexity so as to develop a good understanding of how SVMs work in addition to practical experience with using the popular SVM implementation in R.

<Advertisement> Again, for those of you who have DataCamp premium accounts, here is a course that covers pretty much the entire territory of this two-part series. </Advertisement>

Linearly separable case

The next level of complexity is a two dimensional case (2 predictors) in which the classes are separated by a straight line. We’ll create such a dataset next.

Let’s begin by generating 200 points with attributes x1 and x2, randomly distributed between 0 and 1. Here’s the R code:

#number of datapoints
n <- 200
#Generate dataframe with 2 uniformly distributed predictors x1 and x2 in (0,1)
df <- data.frame(x1=runif(n),x2=runif(n))

Let’s visualise the generated data using a scatter plot:

#load ggplot
library(ggplot2)
#build scatter plot
p <- ggplot(data=df, aes(x=x1,y=x2)) + geom_point()
#display it
p

And here’s the plot

Figure 4: scatter plot of uniformly distributed datapoints

Now let’s classify the points that lie above the line x1=x2 as belonging to the class +1 and those that lie below it as belonging to class -1 (the class values are arbitrary choices, I could have chosen them to be anything at all). Here’s the R code:

# if x1>x2 then -1, else +1
df$y <- factor(ifelse(df$x1-df$x2>0,-1,1),levels=c(-1,1))

Let's modify the plot in Figure 4, colouring the points classified as +1 in blue and those classified as -1 in red. For good measure, let's also add in the decision boundary. Here's the R code:

#load ggplot if not loaded
library(ggplot2)
#build scatter plot, distinguishing classes by colour
p <- ggplot(data=df, aes(x=x1,y=x2,colour=y)) + geom_point() + scale_colour_manual(values=c("-1"="red","1"="blue"))
#add decision boundary
p <- p + geom_abline(slope=1,intercept=0)
#display plot
p

Note that the parameters in geom_abline()  are derived from the fact that the line x1=x2 has slope 1 and y intercept 0.

Here’s the resulting plot:

Figure 5: Linearly separable dataset with boundary.

Next let's introduce a margin into the dataset. To do this, we need to exclude points that lie within a specified distance of the boundary. A simple way to approximate this is to exclude points whose x1 and x2 values differ by less than a pre-specified value, delta. Here's the code to do this with delta set to 0.05 units.

#create a margin of 0.05 in dataset
delta <- 0.05
# retain only those points that lie outside the margin
df1 <- df[abs(df$x1-df$x2)>delta,]
#check number of datapoints remaining
nrow(df1)

The check on the number of datapoints tells us that a number of points have been excluded.

Running the previous ggplot code block yields the following plot which clearly shows the reduced dataset with the depopulated region near the decision boundary:

Figure 6: Dataset with margin (note depleted areas on either side of boundary)

Let’s add the margin boundaries to the plot. We know that these are parallel to the decision boundary and lie delta units on either side of it. In other words, the margin boundaries have slope=1 and y intercepts delta and –delta. Here’s the ggplot code:

#add margins to plot object created earlier
p <- p + geom_abline(slope=1,intercept = delta, linetype="dashed") + geom_abline(slope=1,intercept = -delta, linetype="dashed")
#display plot
p

And here’s the plot with the margins:

Figure 7: Linearly separable dataset with margin and decision boundary displayed

 

OK, so we have constructed a dataset that is linearly separable, which is just shorthand for saying that the classes can be separated by a straight line. Further, the dataset has a margin, i.e. there is a "gap", so to speak, between the classes. Let's save the dataset so that we can use it in the next section, where we'll take a first look at the svm() function in the e1071 package.

write.csv(df1,file="linearly_separable_with_margin.csv",row.names = FALSE)

That done,  we can now move on to…

Linear SVMs

Let’s begin  by reading in the datafile we created in the previous section:

#read in dataset for linearly separable data with margin
df <- read.csv(file = "linearly_separable_with_margin.csv")
#set y to factor explicitly
df$y <- as.factor(df$y)

We then split the data into training and test sets using an 80/20 random split. There are many ways to do this. Here’s one:

#set seed for random number generation
set.seed(1)
#split train and test data 80/20
df[,"train"] <- ifelse(runif(nrow(df))<0.8,1,0)
trainset <- df[df$train==1,]
testset <- df[df$train==0,]
#find "train" column index
trainColNum <- grep("train",names(trainset))
#remove column from train and test sets
trainset <- trainset[,-trainColNum]
testset <- testset[,-trainColNum]

The next step is to build an SVM classifier model. We will do this using the svm() function, which is available in the e1071 package. The svm() function has a range of parameters. I explain some of the key ones below – in particular, type, cost, kernel and scale. It is worth having a browse of the documentation for more details.

The type parameter specifies the algorithm to be invoked by the function. The algorithm is capable of doing both classification and regression. We’ll focus on classification in this article. Note that there are two types of classification algorithms, nu and C classification. They essentially differ in the way that they penalise margin and boundary violations, but can be shown to lead to equivalent results. We will stick with C classification as it is more commonly used.  The “C” refers to the cost which we discuss next.
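
If you'd like to check this equivalence empirically, here's a minimal sketch that fits both variants on the training set we created above and compares their predictions (default cost and nu values are assumed; on a cleanly separated dataset like ours the two should agree on virtually all points):

#load library
library(e1071)
#fit the same linear SVM using both formulations (default cost / nu)
model_c <- svm(y ~ ., data=trainset, type="C-classification", kernel="linear", scale=FALSE)
model_nu <- svm(y ~ ., data=trainset, type="nu-classification", kernel="linear", scale=FALSE)
#fraction of training points on which the two models agree
mean(predict(model_c, trainset)==predict(model_nu, trainset))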

The cost parameter specifies the penalty to be applied for boundary violations.  This parameter  can vary from 0 to infinity (in practice a large number compared to 0, say 10^6 or 10^8). We will explore the effect of varying cost later in this piece. To begin with, however, we will leave it at its default value of 1.

The kernel parameter specifies the kind of function to be used to construct the decision boundary. The options are linear, polynomial and radial.  In this article we’ll focus on linear kernels as we know the decision boundary is a straight line.

The scale parameter is a Boolean that tells the algorithm whether or not the datapoints should be scaled to have zero mean and unit variance (i.e. shifted by the mean and scaled by the standard deviation).  Scaling is generally good practice to avoid undue influence of attributes that have unduly large numeric values. However, in this case we will avoid scaling as we know the attributes are bounded and (more important) we would like to plot the boundary obtained from the algorithm manually.

Building the model is a simple one-line call, setting appropriate values for the parameters:

#load library
library(e1071)
#build model using parameter settings discussed earlier
svm_model <- svm(y ~ ., data=trainset, type="C-classification", kernel="linear", scale=FALSE)

We expect a linear model to perform well here since the dataset is linearly separable by construction. Let's confirm this by calculating training and test accuracy. Here's the code:

#training accuracy
pred_train <- predict(svm_model,trainset)
mean(pred_train==trainset$y)
[1] 1
#test accuracy
pred_test <- predict(svm_model,testset)
mean(pred_test==testset$y)
[1] 1

The perfect accuracies confirm our expectation. However, accuracies by themselves are misleading because the story is somewhat more nuanced. To understand why, let’s plot the predicted decision boundary and margins using ggplot. To do this, we have to first extract information regarding these from the svm model object. One can obtain summary information for the model by typing in the model name like so:

svm_model
Call:
svm(formula = y ~ ., data = trainset, type = "C-classification",
kernel = "linear", scale = FALSE)
Parameters:
SVM-Type: C-classification
SVM-Kernel: linear
cost: 1
gamma: 0.5
Number of Support Vectors: 55

The output shows the function call, SVM type, kernel and cost (which is set to its default). In case you are wondering about gamma, although it's set to 0.5 here, it plays no role in linear SVMs. We'll say more about it in the sequel to this article, in which we'll cover more complex kernels. More interesting are the support vectors. In a nutshell, these are training dataset points that specify the location of the decision boundary. We can develop a better understanding of their role by visualising them. To do this, we need to know their coordinates and indices (positions within the dataset). This information is stored in the SVM model object. Specifically, the index element of svm_model contains the indices of the training dataset points that are support vectors, and the SV element lists the coordinates of these points. The following R code lists these explicitly (note that I've not shown the outputs in the code snippet below):

#index of support vectors in training dataset
svm_model$index
#Support vectors
svm_model$SV

Let’s use the indices to visualise these points in the training dataset. Here’s the ggplot code to do that:

#load library
library(ggplot2)
#build plot of training set, distinguishing classes by colour as before
p <- ggplot(data=trainset, aes(x=x1,y=x2,colour=y)) + geom_point()+ scale_colour_manual(values=c("red","blue"))
#identify support vectors in training set
df_sv <- trainset[svm_model$index,]
#add layer marking out support vectors with semi-transparent purple blobs
p <- p + geom_point(data=df_sv,aes(x=x1,y=x2),colour="purple",size = 4,alpha=0.5)
#display plot
p

And here is the plot:

Figure 8: Training dataset showing support vectors

We now see that the support vectors are clustered around the boundary and, in a sense, serve to define it. We will see this more clearly by plotting the predicted decision boundary. To do this, we need its slope and intercept. These aren't directly available in svm_model, but they can be extracted from the coefs, SV and rho elements of the object.

The first step is to use coefs and the support vectors to build what's called the weight vector. The weight vector is given by the product of the coefs matrix with the matrix containing the SVs. Note that the fact that only the support vectors play a role in defining the boundary is consistent with our expectation that the boundary should be fully specified by them. Indeed, this is often touted as a feature of SVMs: it is one of the few classifiers that depends on only a small subset of the training data, i.e. the datapoints closest to the boundary, rather than the entire dataset.

#build weight vector
w <- t(svm_model$coefs) %*% svm_model$SV

Once we have the weight vector, we can calculate the slope and intercept of the predicted decision boundary as follows:

#calculate slope
slope_1 <- -w[1]/w[2]
slope_1
[1] 0.9272129
#calculate intercept
intercept_1 <- svm_model$rho/w[2]
intercept_1
[1] 0.02767938

Note that the slope and intercept are quite different from the correct values of 1 and 0 (reminder: the actual decision boundary is the line x1=x2 by construction).  We’ll see how to improve on this shortly, but before we do that, let’s plot the decision boundary using the  slope and intercept we have just calculated. Here’s the code:

# augment Figure 8 with decision boundary using calculated slope and intercept
p <- p + geom_abline(slope=slope_1,intercept = intercept_1)
# display plot
p

And here’s the augmented plot:

 

Figure 9: Training dataset showing support vectors and decision boundary

The plot clearly shows how the support vectors "support" the boundary – indeed, if one draws line segments from each of the points to the boundary in such a way that they intersect the boundary at right angles, the segments can be thought of as "holding the boundary in place". Hence the term support vector.

This is a good time to mention that the e1071 library provides a built-in plot method for the svm function. This is invoked as follows:

#plot using function provided in e1071
#Note: no need to specify plane as there are only 2 predictors
plot(x=svm_model, data=trainset)

The svm plot function takes a formula specifying the plane on which the boundary is to be plotted. This is not necessary here as we have only two predictors (x1 and x2) which automatically define a plane.

Here is the plot generated by the above code:

Figure 10: Decision boundary for linearly separable dataset visualised using svm.plot()

Note that the axes are switched (x1 is on the y axis). Aside from that, the plot is reassuringly similar to our ggplot version in Figure 9. Also note that the support vectors are marked by "x". Unfortunately the built-in function does not display the margin boundaries, but this is something we can easily add to our home-brewed plot. Here's how. We know that the margin boundaries are parallel to the decision boundary, so all we need to find out is their intercepts. It turns out that the intercepts are offset by an amount 1/w[2] on either side of the decision boundary's intercept. With that information in hand we can now write the code to add the margins to the plot shown in Figure 9. Here it is:

#margins are offset 1/w[2] on either side of decision boundary
#p created in earlier code block
p <- p + geom_abline(slope=slope_1,intercept = intercept_1-1/w[2], linetype="dashed")+
geom_abline(slope=slope_1,intercept = intercept_1+1/w[2], linetype="dashed")
#display plot
p

And here is the plot with the margins added in:

Figure 11: Training dataset showing support vectors  + decision and margin boundaries

Note that the predicted margins are much wider than the actual ones (compare with Figure 7). As a consequence, many of the support vectors lie within the predicted margin – that is, they violate it. The upshot of the wide margin is that the decision boundary is not tightly specified. This is why we get a significant difference between the slope and intercept of the predicted decision boundary and the actual one. We can sharpen the boundary by narrowing the margin. How do we do this? We make margin violations more expensive by increasing the cost. Let's see this margin-narrowing effect in action by building a model with cost = 100 on the same training dataset as before. Here is the code:

#build cost=100 model using parameter settings discussed earlier
svm_model <- svm(y ~ ., data=trainset, type="C-classification", kernel="linear", cost=100, scale=FALSE)

I’ll leave you to calculate the training and test accuracies (as one might expect, these will be perfect).
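
For completeness, here's the accuracy calculation, which follows exactly the same pattern as before:

#training accuracy for the cost=100 model
pred_train <- predict(svm_model,trainset)
mean(pred_train==trainset$y)
#test accuracy for the cost=100 model
pred_test <- predict(svm_model,testset)
mean(pred_test==testset$y)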

Let’s inspect the cost=100 model:

svm_model
Call:
svm(formula = y ~ ., data = trainset, type = "C-classification",
kernel = "linear", cost=100, scale = FALSE)
Parameters:
SVM-Type: C-classification
SVM-Kernel: linear
cost: 100
gamma: 0.5
Number of Support Vectors: 6

The number of support vectors is reduced from 55 to 6! We can plot these and the boundary / margin lines using ggplot as before. The code is identical to the previous case (see code block preceding Figure 8). If you run it, you will get the plot shown in Figure 12.

Figure 12: Training dataset showing support vectors for cost=100 case

 

Since the boundary is more tightly specified, we would expect the slope and intercept of the predicted boundary to be considerably closer to their actual values of 1 and 0 respectively (as compared to the default cost case). Let’s confirm that this is so by calculating the slope and intercept as we did in the code snippets preceding Figure 9. Here’s the code:

#build weight vector
w <- t(svm_model$coefs) %*% svm_model$SV
#calculate slope
slope_100 <- -w[1]/w[2]
slope_100
[1] 0.9732495
#calculate intercept
intercept_100 <- svm_model$rho/w[2]
intercept_100
[1] 0.01472426

This nicely confirms our expectation.

The decision boundary and margins for the high cost case can also be plotted with the code shown earlier. Here it is for completeness:

# augment Figure 12 with decision boundary using calculated slope and intercept
p <- p + geom_abline(slope=slope_100,intercept = intercept_100)
#margins are offset 1/w[2] on either side of decision boundary
p <- p + geom_abline(slope=slope_100,intercept = intercept_100-1/w[2], linetype="dashed")+
geom_abline(slope=slope_100,intercept = intercept_100+1/w[2], linetype="dashed")
#display plot
p

And here’s the plot:

Figure 13: Training dataset with support vectors, predicted decision boundary and margins for cost=100

SVMs that allow margin violations are called soft margin classifiers and those that do not are called hard margin classifiers. In this case, the hard margin classifier does a better job because it specifies the boundary more accurately than its soft counterpart. However, this does not mean that hard margin classifiers are to be preferred in all situations. Indeed, in real life, where we usually do not know the shape of the decision boundary upfront, soft margin classifiers allow for a greater degree of uncertainty in the decision boundary, thus improving the generalizability of the classifier.

OK, so now we have a good feel for what the SVM  algorithm does in the linearly separable case. We will round out this article by looking at a real world dataset that fortuitously turns out to be almost linearly separable: the famous (notorious?) iris dataset. It is instructive to look at this dataset because it serves to illustrate another feature of the e1071 SVM algorithm – its capability to handle classification problems that have more than 2 classes.

A multiclass problem

The iris dataset is well-known in the machine learning community as it features in many introductory courses and tutorials. It consists of 150 observations of 3 species of the iris flower – setosa, versicolor and virginica.  Each observation consists of numerical values for 4 independent variables (predictors): petal length, petal width, sepal length and sepal width.  The dataset is available in a standard installation of R as a built in dataset. Let’s read it in and examine its structure:

# read data
data(iris)
str(iris)
'data.frame': 150 obs. of 5 variables:
$ Sepal.Length: num 5.1 4.9 4.7 4.6 5 5.4 4.6 5 4.4 4.9 …
$ Sepal.Width : num 3.5 3 3.2 3.1 3.6 3.9 3.4 3.4 2.9 3.1 …
$ Petal.Length: num 1.4 1.4 1.3 1.5 1.4 1.7 1.4 1.5 1.4 1.5 …
$ Petal.Width : num 0.2 0.2 0.2 0.2 0.2 0.4 0.3 0.2 0.2 0.1 …
$ Species : Factor w/ 3 levels "setosa","versicolor",..: 1 1 1 1 1 1 1 1 1 1 …

Now, as it turns out, petal length and petal width are the key determinants of species. So let's create a scatterplot of the datapoints as a function of these two variables (i.e. project each data point onto the petal length-petal width plane). We will also distinguish between species using different colours. Here's the ggplot code to do this:

# plot petal width vs petal length
library(ggplot2)
p <- ggplot(data=iris, aes(x=Petal.Width,y=Petal.Length,colour=Species)) + geom_point()
p

And here’s the plot:

Figure 15: iris dataset, petal width vs petal length

 

On this plane we see a clear linear boundary between setosa and the other two species, versicolor and virginica. The boundary between the latter two is almost linear. Since there are four predictors, one would have to plot the other combinations to get a better feel for the data. I’ll leave this as an exercise for you and move on with the assumption that the data is nearly linearly separable. If the assumption is grossly incorrect, a linear SVM will not work well.
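
To get you started on that exercise, here's one of the other combinations (sepal width vs sepal length):

# scatterplot of another predictor combination, coloured by species
ggplot(data=iris, aes(x=Sepal.Width,y=Sepal.Length,colour=Species)) + geom_point()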

Up until now, we have discussed binary classification problems, i.e. those in which the predicted variable can take on only two values. In this case, however, the predicted variable, Species, can take on 3 values (setosa, versicolor and virginica). This brings up the question as to how the algorithm deals with multiclass classification problems – i.e. those involving datasets with more than two classes. The SVM algorithm does this using a one-against-one classification strategy. Here's how it works:

  • Divide the dataset (assumed to have N classes) into N(N-1)/2 datasets that have two classes each.
  • Solve the binary classification problem for each of these subsets
  • Use a simple voting mechanism to assign a class to each data point.

Basically, each data point is assigned the most frequent classification it receives from all the binary classification problems it figures in.
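
The e1071 implementation handles all of this internally, so there is nothing extra for us to do. That said, here's a purely illustrative sketch of the voting idea, done manually on the iris data with three pairwise linear SVMs (the object names are mine, and this is not how the library implements it internally):

#load library
library(e1071)
data(iris)
#the three species and the 3 pairwise class combinations
classes <- levels(iris$Species)
class_pairs <- combn(classes, 2, simplify=FALSE)
#fit one binary linear SVM per pair of classes
pair_models <- lapply(class_pairs, function(pr){
pair_data <- droplevels(iris[iris$Species %in% pr,])
svm(Species ~ ., data=pair_data, type="C-classification", kernel="linear")
})
#each pairwise model casts a vote for every datapoint...
votes <- sapply(pair_models, function(m) as.character(predict(m, iris)))
#...and the most frequent vote wins (ties, which are rare, are broken arbitrarily)
voted_class <- apply(votes, 1, function(v) names(which.max(table(v))))
#agreement of the manual vote with the actual species
mean(voted_class==iris$Species)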

With that said, let’s get on with building the classifier. As before, we begin by splitting the data into training and test sets using an 80/20 random split. Here is the code to do this:

#set seed for random number generation
set.seed(10)
#split train and test data 80/20
iris[,"train"] <- ifelse(runif(nrow(iris))<0.8,1,0)
trainset <- iris[iris$train==1,]
testset <- iris[iris$train==0,]
#find "train" column index
trainColNum <- grep("train",names(trainset))
#remove column from train and test sets
trainset <- trainset[,-trainColNum]
testset <- testset[,-trainColNum]

Then we build the model (default cost) and examine it:

#build default cost model
svm_model<- svm(Species~ ., data=trainset,type="C-classification", kernel="linear")

The main thing to note is that the function call is identical to the binary classification case. We get some basic information about the model by typing in the model name as before:

svm_model
Call:
svm(formula = Species ~ ., data = trainset, type = "C-classification",
kernel = "linear")
Parameters:
SVM-Type: C-classification
SVM-Kernel: linear
cost: 1
gamma: 0.25
Number of Support Vectors: 26

And the train and test accuracies are computed in the usual way:

#training accuracy
pred_train <- predict(svm_model,trainset)
mean(pred_train==trainset$Species)
[1] 0.9770992
#test accuracy
pred_test <- predict(svm_model,testset)
mean(pred_test==testset$Species)
[1] 0.9473684

This looks good, but is potentially misleading because it is for a particular train/test split. Remember, in this case, unlike the earlier example, we do not know the shape of the actual decision boundary. So, to get a robust measure of accuracy, we should calculate the average test accuracy over a number of train/test partitions. Here’s some code to do that:

#load data, set seed and initialise vector to hold calculated accuracies
data(iris)
set.seed(10)
accuracy <- rep(NA,100)
#calculate test accuracy for 100 different partitions
for (i in 1:100){
iris[,"train"] <- ifelse(runif(nrow(iris))<0.8,1,0)
trainColNum <- grep("train",names(iris))
trainset <- iris[iris$train==1,-trainColNum]
testset <- iris[iris$train==0,-trainColNum]
svm_model <- svm(Species~ ., data=trainset, type="C-classification", kernel="linear")
pred_test <- predict(svm_model,testset)
accuracy[i] <- mean(pred_test==testset$Species)
}
mean(accuracy)
[1] 0.9620757
sd(accuracy)
[1] 0.03443983

This is not too bad at all, indicating that the dataset is indeed nearly linearly separable. If you try different values of cost you will see that it does not make much difference to the average accuracy (a sketch of this experiment is shown below).
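
Here's a rough sketch of that experiment (the cost values below are arbitrary choices, and I've used 50 partitions per value to keep the runtime down):

#load library
library(e1071)
#candidate cost values (arbitrary choices)
costs <- c(0.01,1,100)
#average test accuracy over 50 train/test partitions for each cost
avg_accuracy <- sapply(costs, function(cst){
acc <- rep(NA,50)
for (i in 1:50){
iris[,"train"] <- ifelse(runif(nrow(iris))<0.8,1,0)
trainColNum <- grep("train",names(iris))
trainset <- iris[iris$train==1,-trainColNum]
testset <- iris[iris$train==0,-trainColNum]
svm_model <- svm(Species~ ., data=trainset, type="C-classification", kernel="linear", cost=cst)
acc[i] <- mean(predict(svm_model,testset)==testset$Species)
}
mean(acc)
})
avg_accuracy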

This is a good note to close this piece on. Those who have access to DataCamp premium courses will find that the content above is covered in chapters 1 and 2 of the course on support vector machines in R. The next article in this two-part series will cover chapters 3 and 4.

Summarising

My main objective in this article was to help develop an intuition for how SVMs work in simple cases. We illustrated the basic principles and terminology with a simple 1 dimensional example and then worked our way to linearly separable binary classification problems with multiple predictors. We saw how the latter can be solved using a popular svm implementation available in R.  We also saw that the algorithm can handle multiclass problems. All through, we used visualisations to see what the algorithm does and how the key parameters affect the decision boundary and margins.

In the next part (yet to be written) we will see how SVMs can be generalised to deal with complex, nonlinear decision boundaries. In essence, they use a mathematical trick to "linearise" these boundaries. We'll delve into the details of this trick in an intuitive, visual way, as we have done here.

Many thanks for reading!

Written by K

June 6, 2018 at 9:56 pm

A gentle introduction to data visualisation using R


Data science students often focus on machine learning algorithms, overlooking some of the more routine but important skills of the profession.  I’ve lost count of the number of times I have advised students working on projects for industry clients to curb their keenness to code and work on understanding the data first.  This is important because, as people (ought to) know, data doesn’t speak for itself, it has to be given a voice; and as data-scarred professionals know from hard-earned experience, one of the best ways to do this is through visualisation.

Data visualisation is sometimes (often?) approached as a bag of tricks to be learnt individually, with little or no reference to any underlying principles. Reading Hadley Wickham's paper on the grammar of graphics was an epiphany; it showed me how different types of graphics can be constructed in a consistent way using common elements. Among other things, the grammar makes visualisation a logical affair rather than a set of tricks. This post is a brief – and hopefully logical – introduction to visualisation using ggplot2, Wickham's implementation of a grammar of graphics.

In keeping with the practical bent of this series we’ll  focus on worked examples, illustrating elements of the grammar as we go along. We’ll first briefly describe the elements of the grammar and then show how these are used to build different types of visualisations.

A grammar of graphics

Most visualisations are constructed from common elements that are pieced together in prescribed ways.  The elements can be grouped into the following categories:

  • Data – this is obvious, without data there is no story to tell and definitely no plot!
  • Mappings – these are correspondences between data and display elements such as spatial location, shape or colour. Mappings are referred to as aesthetics in Wickham’s grammar.
  • Scales – these are transformations (conversions) of data values to numbers that can be displayed on-screen. There should be one scale per mapping. ggplot typically does the scaling transparently, without users having to worry about it. One situation in which you might need to mess with default scales is when you want to zoom in on a particular range of values. We’ll see an example or two of this later in this article.
  • Geometric objects – these specify the geometry of the visualisation. For example, in ggplot2 a scatter plot is specified via a point geometry whereas a fitting curve is represented by a smooth geometry. ggplot2 has a range of geometries available of which we will illustrate just a few.
  • Coordinate system – this specifies the system used to position data points on the graphic. Examples of coordinate systems are Cartesian and polar. We’ll deal with Cartesian systems in this tutorial. See this post for a nice illustration of how one can use polar plots creatively.
  • Facets – a facet specifies how data can be split over multiple plots to improve clarity. We’ll look at this briefly towards the end of this article.

The basic idea of a layered grammar of graphics is that each of these elements can be combined – literally added layer by layer – to achieve a desired visual result. Exactly how this is done will become clear as we work through some examples. So without further ado, let’s get to it.

Hatching (gg)plots

In what follows we’ll use the NSW Government Schools dataset,  made available via the state government’s open data initiative. The data is in csv format. If you cannot access the original dataset from the aforementioned link, you can download an Excel file with the data here (remember to save it as a csv before running the code!).

The first task – assuming that you have a working R/RStudio environment –  is to load the data into R. To keep things simple we’ll delete a number of columns (as shown in the code) and keep only  rows that are complete, i.e. those that have no missing values. Here’s the code:

#set working directory if needed (modify path as needed)
#setwd("C:/Users/Kailash/Documents/ggplot")
#load required library
library(ggplot2)
#load dataset (ensure datafile is in directory!)
nsw_schools <- read.csv("NSW-Public-Schools-Master-Dataset-07032017.csv")
#build expression for columns to delete
colnames_for_deletion <- paste0("AgeID|","street|","indigenous_pct|",
"lbote_pct|","opportunity_class|","school_specialty_type|",
"school_subtype|","support_classes|","preschool_ind|",
"distance_education|","intensive_english_centre|","phone|",
"school_email|","fax|","late_opening_school|",
"date_1st_teacher|","lga|","electorate|","fed_electorate|",
"operational_directorate|","principal_network|",
"facs_district|","local_health_district|","date_extracted")
#get indexes of cols for deletion
cols_for_deletion <- grep(colnames_for_deletion,colnames(nsw_schools))
#delete them
nsw_schools <- nsw_schools[,-cols_for_deletion]
#structure and number of rows
str(nsw_schools)
nrow(nsw_schools)
#remove rows with NAs
nsw_schools <- nsw_schools[complete.cases(nsw_schools),]
#rowcount
nrow(nsw_schools)
#convert student number to numeric datatype.
#Need to convert factor to character first…
#…alternately, load data with StringsAsFactors set to FALSE
nsw_schools$student_number <- as.numeric(as.character(nsw_schools$student_number))
#a couple of character strings have been coerced to NA. Remove these
nsw_schools <- nsw_schools[complete.cases(nsw_schools),]

A note regarding the last line of code above: a couple of schools have "np" entered for the student_number variable. These are coerced to NA in the numeric conversion. The last line removes these two schools from the dataset.

Apart from student numbers and location data, we have retained level of schooling (primary, secondary etc.) and ICSEA ranking. The location information includes attributes such as suburb, postcode, region, remoteness as well as latitude and longitude. We’ll use only remoteness in this post.

The first thing that caught my eye in the data was the ICSEA ranking. Before going any further, I should mention that the Australian Curriculum Assessment and Reporting Authority (the organisation responsible for developing the ICSEA system) emphasises that the score is not a school ranking, but a measure of socio-educational advantage of the student population in a school. Among other things, this is related to family background and geographic location. The ICSEA score is set to an average of 1000, which can be used as a reference level.

I thought a natural first step would be to see how ICSEA varies as a function of the other variables in the dataset such as student numbers and location (remoteness, for example). To begin with, let’s plot ICSEA rank as a function of student number. As it is our first plot, let’s take it step by step to understand how the layered grammar works. Here we go:

#specify data layer
p <- ggplot(data=nsw_schools)
#display plot
p

This displays a blank plot because we have not specified a mapping and geometry to go with the data. To get a plot we need to specify both. Let's start with a scatterplot, which is specified via a point geometry. Within the geometry function, variables are mapped to visual properties of the plot using aesthetic mappings. Here's the code:

#specify a point geometry (geom_point)
p <- p + geom_point(mapping = aes(x=student_number,y=ICSEA_Value))
#…lo and behold our first plot
p

The resulting plot is shown in Figure 1.

Figure 1: Scatterplot of ICSEA score versus student numbers

At first sight there are two points that stand out: 1) there are fewer large schools, which we'll look into in more detail later, and 2) larger schools seem to have a higher ICSEA score on average. To dig a little deeper into the latter, let's add a linear trend line. We do that by adding another layer (geometry) to the scatterplot like so:

#add a trendline using geom_smooth
p <- p + geom_smooth(mapping= aes(x=student_number,y=ICSEA_Value),method="lm")
#scatter plot with trendline
p

The result is shown in Figure 2.

Figure 2: scatterplot of ICSEA vs student number with linear trendline

The lm method does a linear regression on the data. The shaded area around the line is the 95% confidence interval of the regression line (i.e. it is 95% certain that the true regression line lies in the shaded region). Note that geom_smooth provides a range of smoothing functions including generalised linear and local regression (loess) models.

You may have noted that we've specified the aesthetic mappings in both geom_point and geom_smooth. To avoid this duplication, we can simply specify the mapping once, in the top level ggplot call (the first layer), like so:

#rewrite the above, specifying the mapping in the ggplot call instead of geom
p <- ggplot(data=nsw_schools,mapping= aes(x=student_number,y=ICSEA_Value)) +
geom_point()+
geom_smooth(method="lm")
#display plot, same as Fig 2
p

From Figure 2, one can see a clear positive correlation between student numbers and ICSEA scores. Let's zoom in around the average value (1000) to see this more clearly…

#set display to 900 < y < 1100
p <- p + coord_cartesian(ylim =c(900,1100))
#display plot
p

The coord_cartesian function is used to zoom in on the plot without changing any other settings. The result is shown in Figure 3.

Figure 3: Zoomed view of Figure 2 for 900 < ICSEA <1100

To  make things clearer, let’s add a reference line at the average:

#add horizontal reference line at the avg ICSEA score
p <- p + geom_hline(yintercept=1000)
#display plot
p

The result, shown in Figure 4, indicates quite clearly that larger schools tend to have higher ICSEA scores. That said, there is a twist in the tale which we’ll come to a bit later.

Figure 4: Zoomed view with reference line at average value of ICSEA

As a side note, you would use geom_vline to add a vertical reference line at a specific x value and geom_abline to add a reference line with a specified slope and intercept (to zoom in on a specific range of x values, pass xlim to coord_cartesian). See this article on ggplot reference lines for more.
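
For instance (the values here are arbitrary, purely to illustrate the syntax):

#vertical reference line at 500 students and an arbitrary sloped reference line
p + geom_vline(xintercept=500) + geom_abline(slope=0.05, intercept=950)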

OK, now that we have seen how ICSEA scores vary with student numbers let’s switch tack and incorporate another variable in the mix.  An obvious one is remoteness. Let’s do a scatterplot as in Figure 1, but now colouring each point according to its remoteness value. This is done using the colour aesthetic as shown below:

#Map ASGS_remoteness to colour aesthetic
p <- ggplot(data=nsw_schools, aes(x=student_number,y=ICSEA_Value,  colour=ASGS_remoteness)) +
geom_point()
#display plot
p

The resulting plot is shown in Figure 5.

Figure 5: ICSEA score as a function of student number and remoteness category

Aha, a couple of things become apparent. First up, large schools tend to be in metro areas, which makes good sense. Secondly, it appears that metro area schools have a distinct socio-educational advantage over regional and remote area schools. Let’s add trendlines by remoteness category as well to confirm that this is indeed so:

#add reference line at avg + trendlines for each remoteness category
p <- p + geom_hline(yintercept=1000) + geom_smooth(method="lm")
#display plot
p

The plot, which is shown in Figure 6, indicates clearly that  ICSEA scores decrease on the average as we move away from metro areas.

Figure 6: ICSEA scores vs student numbers and remoteness, with trendlines for each remoteness category

Moreover, larger schools in metropolitan areas tend to have higher than average scores (above 1000), regional areas tend to have lower than average scores overall, and remote areas are markedly more disadvantaged than both metro and regional areas. This is no surprise, but the visualisations show just how stark the differences are.

It is also interesting that, in contrast to metro and (to some extent) regional areas, there is a negative correlation between student numbers and scores for remote schools. One can also use local regression to get a better picture of how ICSEA varies with student numbers and remoteness. To do this, we simply use the loess method instead of lm:

#redo plot using loess smoothing instead of lm
p <- ggplot(data=nsw_schools, aes(x=student_number,y=ICSEA_Value, colour=ASGS_remoteness)) +
geom_point() + geom_hline(yintercept=1000) + geom_smooth(method="loess")
#display plot
p

The result, shown in Figure 7,  has  a number  of interesting features that would have been worth pursuing further were we analysing this dataset in a real life project.  For example, why do small schools tend to have lower than benchmark scores?

Figure 7: ICSEA scores vs student numbers and remoteness with loess regression curves.

From even a casual look at Figures 6 and 7, it is clear that the confidence intervals for remote areas are huge. This suggests that the datapoints for these regions are a) few in number and b) very scattered. Let's quantify the numbers by getting counts using the table function (I know, we could plot this too…and we will do so a little later). We'll also transpose the results using data.frame to make them more readable:

#get school counts per remoteness category
data.frame(table(nsw_schools$ASGS_remoteness))
Var1 Freq
1 0
2 Inner Regional Australia 561
3 Major Cities of Australia 1077
4 Outer Regional Australia 337
5 Remote Australia 33
6 Very Remote Australia 14

The number of datapoints for remote regions is much smaller than for metro and regional areas. Let's repeat the loess plot with only the two remote regions. Here's the code:

#create vector containing desired categories
remote_regions <- c('Remote Australia','Very Remote Australia')
#redo loess plot with only remote regions included
p <- ggplot(data=nsw_schools[nsw_schools$ASGS_remoteness %in% remote_regions,], aes(x=student_number,y=ICSEA_Value, colour=ASGS_remoteness)) +
geom_point() + geom_hline(yintercept=1000) + geom_smooth(method="loess")
#display plot
p

The plot, shown in Figure 8, shows that there is indeed a huge variation in the (small number) of datapoints, and the confidence intervals reflect that. An interesting feature is that some small remote schools have above average scores. If we were doing a project on this data, this would be a feature worth pursuing further as it would likely be of interest to education policymakers.

Figure 8: Loess plots as in Figure 7 for remote region schools

Note that there is a difference in the x axis scale between Figures 7 and 8 – the former goes from 0 to 2000 whereas the latter goes up to 400 only. So, for a fair comparison between remote and other areas, you may want to re-plot Figure 7, zooming in on student numbers between 0 and 400 (or even less). This will also enable you to see the complicated dependence of scores on student numbers more clearly across all regions.
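
Here's a minimal sketch of that comparison (I've picked 400 as the cutoff, as suggested above, and p_zoom is just a name I've made up for the new plot object):

#rebuild the Figure 7 plot, zooming the x axis to schools with fewer than 400 students
p_zoom <- ggplot(data=nsw_schools, aes(x=student_number,y=ICSEA_Value, colour=ASGS_remoteness)) +
geom_point() + geom_hline(yintercept=1000) + geom_smooth(method="loess") +
coord_cartesian(xlim=c(0,400))
#display plot
p_zoom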

We’ll leave the scores vs student numbers story there and move on to another  geometry – the well-loved bar chart. The first one is a visualisation of the remoteness category count that we did earlier. The relevant geometry function is geom_bar, and the code is as easy as:

#frequency plot
p <- ggplot(data=nsw_schools, aes(x=ASGS_remoteness)) + geom_bar()
#display plot
p

The plot is shown in Figure 9.

Figure 9: School count by remoteness categories

The category labels on the x axis are too long and look messy. This can be fixed by tilting them to a 45 degree angle so that they don’t run into each other as they most likely did when you ran the code on your computer. This is done by modifying the axis.text element of the plot theme. Additionally, it would be nice to get counts on top of each category bar. The way to do that is using another geometry function, geom_text. Here’s the code incorporating the two modifications:

#frequency plot
p <- p + geom_text(stat="count",aes(label= ..count..),vjust=-1)+
theme(axis.text.x=element_text(angle=45, hjust=1))
#display plot
p

The result is shown in Figure 10.

Figure 10: Bar plot of remoteness with counts and angled x labels

Some things to note: stat="count" tells ggplot to compute counts by category and the aesthetic label=..count.. tells ggplot to access the internal variable that stores those counts. The vertical justification setting, vjust=-1, tells ggplot to display the counts on top of the bars. Play around with different values of vjust to see how it works. The code to adjust label angles is self explanatory.

It would be nice to reorder the bars by frequency. This is easily done via the fct_infreq function in the forcats package, like so:

#use factor tools
library(forcats)
#descending
p <- ggplot(data=nsw_schools) +
geom_bar(mapping = aes(x=fct_infreq(ASGS_remoteness)))+
theme(axis.text.x=element_text(angle=45, hjust=1))
#display plot
p

The result is shown in Figure 11.

Figure 11: Barplot of Figure 10 sorted by descending count

To sort in ascending order instead, wrap fct_infreq in fct_rev, which reverses the sort order:

#reverse sort order to ascending
p <- ggplot(data=nsw_schools) +
geom_bar(mapping = aes(x=fct_rev(fct_infreq(ASGS_remoteness))))+
theme(axis.text.x=element_text(angle=45, hjust=1))
#display plot
p

The resulting plot is shown in Figure 12.

Figure 12: Bar plot of Figure 10 sorted by ascending count

If this is all too grey for us, we can always add some colour. This is done using the fill aesthetic as follows:

#add colour using the fill aesthetic
p <- ggplot(data=nsw_schools) +
geom_bar(mapping = aes(x=ASGS_remoteness, fill=ASGS_remoteness))+
theme(axis.text.x=element_text(angle=45, hjust=1))
#display plot
p

The resulting plot is shown in Figure 13.

Figure 13: Coloured bar plot of school count by remoteness

Note that, in the above, we have mapped fill and x to the same variable, remoteness, which makes the legend superfluous. I will leave it to you to figure out how to suppress the legend – Google is your friend (one option is sketched below).
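
For the impatient, one option (of several) is to blank out the legend via the plot theme:

#suppress the legend
p + theme(legend.position="none")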

We could also map fill to another variable, which effectively adds another dimension to the plot. Here’s how:

#map fill to another variable
p <- ggplot(data=nsw_schools) +
geom_bar(mapping = aes(x=ASGS_remoteness, fill=level_of_schooling))+
theme(axis.text.x=element_text(angle=45, hjust=1))
#display plot
p

The plot is shown in Figure 14. The new variable, level of schooling, is displayed via proportionate coloured segments stacked up in each bar. The default stacking is one on top of the other.

Figure 14: Bar plot of school counts as a function of remoteness and school level

Alternatively, one can place the bars side by side by setting the position argument to dodge as follows:

#stack side by side
p <- ggplot(data=nsw_schools) +
geom_bar(mapping = aes(x=ASGS_remoteness,fill=level_of_schooling),position ="dodge")+
theme(axis.text.x=element_text(angle=45, hjust=1))
#display plot
p

The plot is shown in Figure 15.

Figure 15: Same data as in Figure 14 stacked side-by-side

Finally, setting the  position argument to fill  normalises the bar heights and gives us the proportions of level of schooling for each remoteness category. That sentence will  make more sense when you see Figure 16 below. Here’s the code, followed by the figure:

#proportion plot
p <- ggplot(data=nsw_schools) +
geom_bar(mapping = aes(x=ASGS_remoteness,fill=level_of_schooling),position = "fill")+
theme(axis.text.x=element_text(angle=45, hjust=1))
#display plot
p

Obviously,  we lose frequency information since the bar heights are normalised.

Figure 16: Proportions of school levels for remoteness categories

An interesting feature here is that the proportion of central and community schools increases with remoteness. Unlike primary and secondary schools, central / community schools provide education from Kindergarten through Year 12. As remote areas have smaller numbers of students, it makes sense to consolidate educational resources in institutions that provide schooling at all levels.

Finally, to close the loop so to speak,  let’s revisit our very first plot in Figure 1 and try to simplify it in another way. We’ll use faceting to  split it out into separate plots, one per remoteness category. First, we’ll organise the subplots horizontally using facet_grid:

#faceting – subplots laid out horizontally (faceted variable on right of formula)
p <- ggplot(data=nsw_schools) + geom_point(mapping = aes(x=student_number,y=ICSEA_Value))+
facet_grid(~ASGS_remoteness)
#display plot
p

The plot is shown in Figure 17, in which the different remoteness categories are presented in separate plots (facets) against a common y axis. It shows the sharp difference in student numbers between remote and other regions.

Figure 17: Horizontally laid out facet plots of ICSEA scores for different remoteness categories

To get a vertically laid out plot, switch the faceted variable to the other side of the formula (left as an exercise for you; a minimal sketch is shown below in case you get stuck).
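
#faceting – subplots laid out vertically (faceted variable on left of formula)
p <- ggplot(data=nsw_schools) + geom_point(mapping = aes(x=student_number,y=ICSEA_Value))+
facet_grid(ASGS_remoteness~.)
#display plot
p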

If one has too many categories to fit into a single row, one can wrap the facets using facet_wrap like so:

#faceting – wrapping facets in 2 columns
p <- ggplot(data=nsw_schools) +
geom_point(mapping = aes(x=student_number,y=ICSEA_Value))+
facet_wrap(~ASGS_remoteness, ncol= 2)
#display plot
p

The resulting plot is shown in Figure 18.

Figure 18: Same data as in Figure 17, with facets wrapped in a 2 column format

One can specify the number of rows instead of columns. I won’t illustrate that as the change in syntax is quite obvious.

…and I think that’s a good place to stop.

Wrapping up

Data visualisation has a reputation of being a dark art, masterable only by the visually gifted. This may have been partially true some years ago, but in this day and age it definitely isn't. Versatile packages such as ggplot, which use a consistent syntax, have made the art much more accessible to visually ungifted folks like myself. In this post I have attempted to provide a brief and (hopefully) logical introduction to ggplot. In closing I note that although some of the illustrative examples violate the principles of good data visualisation, I hope this article will serve its primary purpose, which is pedagogic rather than artistic.

Further reading:

Where to go for more? Two of the best known references are Hadley Wickham’s books:

I highly recommend his R for Data Science, available online here. Apart from providing a good overview of ggplot, it is an excellent introduction to R for data scientists. If you haven't read it, do yourself a favour and buy it now.

People tell me his ggplot book is an excellent book for those wanting to learn the ins and outs of ggplot . I have not read it myself, but if his other book is anything to go by, it should be pretty damn good.

Written by K

October 10, 2017 at 8:17 pm

A gentle introduction to logistic regression and lasso regularisation using R


In this day and age of artificial intelligence and deep learning, it is easy to forget that simple algorithms can work well for a surprisingly large range of practical business problems.  And the simplest place to start is with the granddaddy of data science algorithms: linear regression and its close cousin, logistic regression. Indeed, in his acclaimed MOOC and accompanying textbook, Yaser Abu-Mostafa spends a good portion of his time talking about linear methods, and with good reason too: linear methods are not only a good way to learn the key principles of machine learning, they can also be remarkably helpful in zeroing in on the most important predictors.

My main aim in this post is to provide a beginner level introduction to logistic regression using R and also introduce LASSO (Least Absolute Shrinkage and Selection Operator), a powerful feature selection technique that is very useful for regression problems. Lasso is essentially a regularization method. If you’re unfamiliar with the term, think of it as a way to reduce overfitting using less complicated functions (and if that means nothing to you, check out my prelude to machine learning).  One way to do this is to toss out less important variables, after checking that they aren’t important.  As we’ll discuss later, this can be done manually by examining p-values of coefficients and discarding those variables whose coefficients are not significant. However, this can become tedious for classification problems with many independent variables.  In such situations, lasso offers a neat way to model the dependent variable while automagically selecting significant variables by shrinking the coefficients of unimportant predictors to zero.  All this without having to mess around with p-values or obscure information criteria. How good is that?

Why not linear regression?

In linear regression one attempts to model a dependent variable (i.e. the one being predicted) using the best straight line fit to a set of predictor variables. The best fit is usually taken to be the one that minimises the root mean square error, which is the square root of the mean of the squared differences between the actual and predicted values of the dependent variable. One can think of logistic regression as the equivalent of linear regression for a classification problem. In what follows we'll look at binary classification – i.e. a situation where the dependent variable takes on one of two possible values (Yes/No, True/False, 0/1 etc.).

First up, you might be wondering why one can’t use linear regression for such problems. The main reason is that classification problems are about determining class membership rather than predicting variable values, and linear regression is more naturally suited to the latter than the former. One could, in principle, use linear regression for situations where there is a natural ordering of categories like High, Medium and Low for example. However, one then has to map sub-ranges of the predicted values to categories. Moreover, since predicted values are potentially unbounded (in data as yet unseen) there remains a degree of arbitrariness associated with such a mapping.

Logistic regression sidesteps the aforementioned issues by modelling class probabilities instead.  Any input to the model yields a number lying between 0 and 1, representing the probability of class membership. One is still left with the problem of determining the threshold probability, i.e. the probability at which the category flips from one to the other.  By default this is set to p=0.5, but in reality it should be settled based on how the model will be used.  For example, for a marketing model that identifies potentially responsive customers, the threshold for a positive event might be set low (much less than 0.5) because the client does not really care about mailouts going to a non-responsive customer (the negative event). Indeed they may be more than OK with it as there’s always a chance – however small – that a non-responsive customer will actually respond.  As an opposing example, the cost of a false positive would be high in a machine learning application that grants access to sensitive information. In this case, one might want to set the threshold probability to a value closer to 1, say 0.9 or even higher. The point is, the setting an appropriate threshold probability is a business issue, not a technical one.
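
To make the thresholding idea concrete, here's a tiny sketch in R (the probabilities and threshold below are made up purely for illustration):

#hypothetical predicted probabilities of the positive event
prob <- c(0.15, 0.55, 0.92, 0.38)
#a strict threshold, as in the access-control example above
threshold <- 0.9
#convert probabilities to class labels
ifelse(prob > threshold, "positive", "negative")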

Logistic regression in brief

So how does logistic regression work?

For the discussion let’s assume that the outcome (predicted variable) and predictors are denoted by Y and X respectively and the two classes of interest are denoted by + and – respectively.  We wish to model the conditional probability that the outcome Y is +, given that the input variables (predictors) are X. The conditional probability is denoted by p(Y=+|X)   which we’ll abbreviate as p(X) since we know we are referring to the positive outcome Y=+.

As mentioned earlier, we are after the probability of class membership so we must ensure that the hypothesis function (a fancy word for the model) always lies between 0 and 1. The function assumed in logistic regression is:

p(X) = \dfrac{e^{\beta_0+\beta_1 X}}{1+e^{\beta_0 + \beta_1 X}} .....(1)

You can verify that p(X) does indeed lie between 0 and 1 as X varies from -\infty to \infty.  Typically, however, the values of X that make sense are bounded, as in the example (stolen from Wikipedia) shown in Figure 1. The figure also illustrates the typical S-shaped curve characteristic of logistic regression.

Figure 1: Logistic function
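If you’d like to draw a curve like the one in Figure 1 yourself, here’s a quick base R sketch using illustrative coefficient values (\beta_0 = 0 and \beta_1 = 1):

#plot the logistic function for beta_0 = 0 and beta_1 = 1 (illustrative values)
x <- seq(-10, 10, by = 0.1)
prob <- exp(x)/(1 + exp(x))
plot(x, prob, type = "l", xlab = "X", ylab = "p(X)")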

As an aside, you might be wondering where the name logistic comes from. An equivalent way of expressing the above equation is:

\log(\dfrac{p(X)}{1-p(X)}) = \beta_0+\beta_1 X .....(2)

The quantity on the left is the logarithm of the odds. So, the model is a linear regression of the log-odds, sometimes called logit, and hence the name logistic.
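To see why (1) and (2) are equivalent, divide p(X) by 1-p(X) using equation (1):

\dfrac{p(X)}{1-p(X)} = \dfrac{e^{\beta_0+\beta_1 X}/(1+e^{\beta_0+\beta_1 X})}{1/(1+e^{\beta_0+\beta_1 X})} = e^{\beta_0+\beta_1 X}

Taking logarithms of both sides then gives equation (2).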

The problem is to find the values of \beta_0 and \beta_1 that result in a p(X) that most accurately classifies all the observed data points – that is, points that belong to the positive class should have a probability as close as possible to 1 and those that belong to the negative class should have a probability as close as possible to 0. One way to frame this problem is to say that we wish to maximise the product of these probabilities, often referred to as the likelihood:

\displaystyle{\prod_{i:Y_i=+} p(X_{i}) \prod_{j:Y_j=-}(1-p(X_{j}))}

where \prod represents the product, and the indices i and j run over the points belonging to the +ve and –ve classes respectively. This approach, called maximum likelihood estimation, is quite common in many machine learning settings, especially those involving probabilities.

It should be noted that in practice one works with the log likelihood because it is easier to work with mathematically. Moreover, one minimises the negative  log likelihood which, of course, is the same as maximising the log likelihood.  The quantity one minimises is thus:

L = - \displaystyle\log ( {\prod_{i:Y_i=+} p(X_{i}) \prod_{j:Y_j=-}(1-p(X_{j}))}).....(3)

However, these are technical details that I mention only for completeness. As you will see next, they have little bearing on the practical use of logistic regression.
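That said, if you’d like to see the connection between equation (3) and the fitting procedure, here’s a minimal sketch for a single predictor. The toy data (x_toy and y_toy) are purely illustrative, and glm – which we use in the next section – does this minimisation for us:

#negative log likelihood of equation (3) for a single predictor
neg_log_lik <- function(beta, x, y) {
  p <- 1/(1 + exp(-(beta[1] + beta[2]*x)))
  -sum(y*log(p) + (1 - y)*log(1 - p))
}
#toy data (purely illustrative)
x_toy <- c(1, 2, 3, 4, 5, 6)
y_toy <- c(0, 0, 1, 0, 1, 1)
#minimise the negative log likelihood directly...
optim(c(0, 0), neg_log_lik, x = x_toy, y = y_toy)$par
#...and compare with the coefficients found by glm (they should roughly agree)
coef(glm(y_toy ~ x_toy, family = binomial))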

Logistic regression in R – an example

In this example, we’ll use the logistic regression option implemented within the glm function that comes with the base R installation. This function fits a class of models collectively known as generalized linear models. We’ll apply the function to the Pima Indian Diabetes dataset that comes with the mlbench package. The code is quite straightforward – particularly if you’ve read earlier articles in my “gentle introduction” series – so I’ll just list the code below  noting that the logistic regression option is invoked by setting family=”binomial”  in the glm function call.

Here we go:

#set working directory if needed (modify path as needed)
#setwd("C:/Users/Kailash/Documents/logistic")
#load required library
library(mlbench)
#load Pima Indian Diabetes dataset
data("PimaIndiansDiabetes")
#set seed to ensure reproducible results
set.seed(42)
#split into training and test sets
PimaIndiansDiabetes[,"train"] <- ifelse(runif(nrow(PimaIndiansDiabetes))<0.8,1,0)
#separate training and test sets
trainset <- PimaIndiansDiabetes[PimaIndiansDiabetes$train==1,]
testset <- PimaIndiansDiabetes[PimaIndiansDiabetes$train==0,]
#get column index of train flag
trainColNum <- grep("train",names(trainset))
#remove train flag column from train and test sets
trainset <- trainset[,-trainColNum]
testset <- testset[,-trainColNum]
#get column index of predicted variable in dataset
typeColNum <- grep("diabetes",names(PimaIndiansDiabetes))
#build model
glm_model <- glm(diabetes~.,data = trainset, family = binomial)
summary(glm_model)
Call:
glm(formula = diabetes ~ ., family = binomial, data = trainset)
<<output edited>>
Coefficients:
            Estimate  Std. Error z value Pr(>|z|)
(Intercept)-8.1485021 0.7835869 -10.399  < 2e-16 ***
pregnant    0.1200493 0.0355617   3.376  0.000736 ***
glucose     0.0348440 0.0040744   8.552  < 2e-16 ***
pressure   -0.0118977 0.0057685  -2.063  0.039158 *
triceps     0.0053380 0.0076523   0.698  0.485449
insulin    -0.0010892 0.0009789  -1.113  0.265872
mass        0.0775352 0.0161255   4.808  1.52e-06 ***
pedigree    1.2143139 0.3368454   3.605  0.000312 ***
age         0.0117270 0.0103418   1.134  0.256816
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#predict probabilities on testset
#type="response" gives predicted probabilities
glm_prob <- predict.glm(glm_model,testset[,-typeColNum],type="response")
#which classes do these probabilities refer to? What are 1 and 0?
contrasts(PimaIndiansDiabetes$diabetes)
    pos
neg 0
pos 1
#make predictions
##…first create vector to hold predictions (we know 0 refers to neg now)
glm_predict <- rep("neg",nrow(testset))
glm_predict[glm_prob>.5] <- "pos"
#confusion matrix
table(pred=glm_predict,true=testset$diabetes)
     true
pred  neg pos
  neg  90  22
  pos   8  33
#accuracy
mean(glm_predict==testset$diabetes)
[1] 0.8039216

 

Although this seems pretty good, we aren’t quite done because there is an issue that is lurking under the hood. To see this, let’s examine the information output from the model summary, in particular the coefficient estimates (i.e. estimates for \beta) and their significance. Here’s a summary of the information contained in the table:

  • Column 2 lists the coefficient estimates.
  • Column 3 lists the standard error of each estimate (the larger the standard error, the less confident we are about the estimate).
  • Column 4 lists the z statistic, which is the coefficient estimate (column 2) divided by its standard error (column 3).
  • The last column (Pr(>|z|)) lists the p-value, which is the probability of getting an estimate at least as large as the listed one if the predictor actually had no effect. In essence, the smaller the p-value, the more significant the estimate is likely to be.

From the table we can conclude that only 4 predictors are significant – pregnant, glucose, mass and pedigree (and possibly a fifth – pressure). The other variables have little predictive power and worse, may contribute to overfitting.  They should, therefore, be eliminated and we’ll do that in a minute. However, there’s an important point to note before we do so…

In this case we have only 8 predictor variables, so we are able to identify the significant ones by a manual inspection of p-values.  As you can well imagine, such a process will quickly become tedious as the number of predictors increases. Wouldn’t it be nice if there were an algorithm that could somehow automatically shrink the coefficients of these variables or (better!) set them to zero altogether?  It turns out that this is precisely what lasso and its close cousin, ridge regression, do.

Ridge and Lasso

Recall that the values of the logistic regression coefficients \beta_0 and \beta_1 are found by minimising the negative log likelihood described in equation (3).  Ridge and lasso regularization work by adding a penalty term to this quantity.  In the case of ridge regression, the penalty term is the sum of the squares of the coefficients, \sum \beta_1^2, and in the case of lasso it is the sum of their absolute values, \sum |\beta_1| (remember, \beta_1 is a vector, with as many components as there are predictors).  The quantity to be minimised in the two cases is thus:

L +\lambda \sum \beta_1^2.....(4) – for ridge regression,

and

L +\lambda \sum |\beta_1|.....(5) – for lasso regression.

Where \lambda is a free parameter which is usually selected in such a way that the resulting model minimises the out of sample error. Typically, the optimal value of \lambda is found using grid search with cross-validation, a process akin to the one described in my discussion on cost-complexity parameter  estimation in decision trees. Most canned algorithms provide methods to do this; the one we’ll use in the next section is no exception.

In the case of ridge regression, the effect of the penalty term is to shrink the coefficients: it reduces the magnitude of the coefficients that contribute to increasing L, but does not set any of them exactly to zero.  In contrast, in the case of lasso regression, the effect of the penalty term is to set the coefficients of unimportant predictors exactly to zero! This is cool because it means that lasso regression works like a feature selector that picks out the most important coefficients, i.e. those that are most predictive (and have the lowest p-values).

Let’s illustrate this through an example. We’ll use the glmnet package which implements a combined version of ridge and lasso (called elastic net). Instead of minimising (4) or (5) above, glmnet minimises:

L +\lambda[(1-\alpha)\sum \beta_1^2 + \alpha\sum|\beta_1|]....(6)

where \alpha controls the “mix” of ridge and lasso regularisation, with \alpha=0 being “pure” ridge and  \alpha=1 being “pure” lasso.

Lasso regularisation using glmnet

Let’s reanalyse the Pima Indian Diabetes dataset using glmnet with \alpha=1 (pure lasso). Before diving into code, it is worth noting that glmnet:

  • does not have a formula interface, so one has to input the predictors as a matrix and the class labels as a vector.
  • does not accept categorical predictors, so one has to convert these to numeric values before passing them to glmnet.

The model.matrix function (from base R) creates the required matrix and also converts categorical predictors to appropriate dummy variables.

Another important point to note is that we’ll use the function cv.glmnet, which automatically performs a grid search to find the optimal value of \lambda.

OK, enough said, here we go:

#load required library
library(glmnet)
#convert training data to matrix format
x <- model.matrix(diabetes~.,trainset)
#convert class to numerical variable
y <- ifelse(trainset$diabetes=="pos",1,0)
#perform grid search to find optimal value of lambda
#family= binomial => logistic regression, alpha=1 => lasso
# check docs to explore other type.measure options
cv.out <- cv.glmnet(x,y,alpha=1,family="binomial",type.measure = "mse" )
#plot result
plot(cv.out)

 

The plot is shown in Figure 2 below:

Figure 2: Error as a function of lambda (select lambda that minimises error)

The plot shows that the log of the optimal value of lambda (i.e. the one that minimises the mean squared error) is approximately -5. The exact value can be viewed by examining the variable lambda_min in the code below. In general though, the objective of regularisation is to balance accuracy and simplicity. In the present context, this means a model with the smallest number of coefficients that also gives a good accuracy.  To this end, the cv.glmnet function finds the value of lambda that gives the simplest model whose error lies within one standard error of the minimum. This value of lambda (lambda.1se) is what we’ll use in the rest of the computation. Interested readers should have a look at this article for more on lambda.1se vs lambda.min.

#min value of lambda
lambda_min <- cv.out$lambda.min
#best value of lambda
lambda_1se <- cv.out$lambda.1se
#regression coefficients
coef(cv.out,s=lambda_1se)
10 x 1 sparse Matrix of class "dgCMatrix"
                      1
(Intercept) -4.61706681
(Intercept)  .
pregnant     0.03077434
glucose      0.02314107
pressure     .
triceps      .
insulin      .
mass         0.02779252
pedigree     0.20999511
age          .

 

The output shows that only those variables that we had determined to be significant on the basis of p-values have non-zero coefficients. The coefficients of all other variables have been set to zero by the algorithm! Lasso has reduced the complexity of the fitting function massively…and you are no doubt wondering what effect this  has on accuracy. Let’s see by running the model against our test data:

 

#get test data
x_test <- model.matrix(diabetes~.,testset)
#predict probabilities on the test data, type="response"
lasso_prob <- predict(cv.out,newx = x_test,s=lambda_1se,type="response")
#translate probabilities to predictions
lasso_predict <- rep("neg",nrow(testset))
lasso_predict[lasso_prob>.5] <- "pos"
#confusion matrix
table(pred=lasso_predict,true=testset$diabetes)
     true
pred  neg pos
  neg  94  28
  pos   4  27
#accuracy
mean(lasso_predict==testset$diabetes)
[1] 0.7908497

 

This is a bit less than what we got with the more complex model, but not by much. So, we get a similar out-of-sample accuracy to the one we got before, and we do so using a much simpler function (4 non-zero predictor coefficients) than the original one (8). What this means is that the simpler function does just about as good a job of fitting the signal in the data as the more complicated one.  The bias-variance tradeoff tells us that the simpler function should be preferred because it is less likely to overfit the training data.

Paraphrasing William of Ockham: all other things being equal, a simple hypothesis should be preferred over a complex one.

Wrapping up

In this post I have tried to provide a detailed introduction to logistic regression, one of the simplest (and oldest) classification techniques in the machine learning practitioner’s arsenal. Despite its simplicity (or should I say, because of it!) logistic regression works well for many business applications, which often have a simple decision boundary. Moreover, because of its simplicity it is less prone to overfitting than flexible methods such as decision trees. Further, as we have shown, variables that contribute to overfitting can be eliminated using lasso (or ridge) regularisation, without compromising out-of-sample accuracy. Given these advantages and its inherent simplicity, it isn’t surprising that logistic regression remains a workhorse for data scientists.

Written by K

July 11, 2017 at 10:00 pm

 A gentle introduction to support vector machines using R

with 6 comments

Introduction

Most machine learning algorithms involve minimising an error measure of some kind (this measure is often called an objective function or loss function).  For example, the error measure in linear regression problems is the famous mean squared error – i.e. the average of the squared differences between the predicted and actual values. Like the mean squared error, most objective functions depend on all points in the training dataset.  In this post, I describe the support vector machine (SVM) approach which focuses instead on finding the optimal separation boundary between datapoints that have different classifications.  I’ll elaborate on what this means in the next section.

Here’s the plan in brief. I’ll begin with the rationale behind SVMs using a simple case of a binary (two class) dataset with a simple separation boundary (I’ll clarify what “simple” means in a minute).  Following that, I’ll describe how this can be generalised to datasets with more complex boundaries. Finally, I’ll work through a couple of examples in R, illustrating the principles behind SVMs. In line with the general philosophy of my “Gentle Introduction to Data Science Using R” series, the focus is on developing an intuitive understanding of the algorithm along with a practical demonstration of its use through a toy example.

The rationale

The basic idea behind SVMs is best illustrated by considering a simple case:  a set of data points that belong to one of two classes, red and blue, as illustrated in figure 1 below. To make things simpler still, I have assumed that the boundary separating the two classes is a straight line, represented by the solid green line in the diagram.  In the technical literature, such datasets are called linearly separable.

Figure 1: Linearly separable data

In the linearly separable case, there is usually a fair amount of freedom in the way a separating line can be drawn. Figure 2 illustrates this point: the two broken green lines are also valid separation boundaries. Indeed, because there is a non-zero distance between the two closest points between categories, there are an infinite number of possible separation lines. This, quite naturally, raises the question as to whether it is possible to choose a separation boundary that is optimal.

Figure 2: Illustrating multiple separation boundaries

The short answer is, yes there is. One way to do this is to select a boundary line that maximises the margin, i.e. the distance between the separation boundary and the points that are closest to it.  Such an optimal boundary is illustrated by the black brace in Figure 3.  The really cool thing about this criterion is that the location of the separation boundary depends only on the points that are closest to it. This means that, unlike other classification methods, the classifier does not depend on any other points in the dataset. The points closest to the boundary are called support vectors (in figure 3 the solid black lines connect them to the boundary). A direct implication of this is that the fewer the support vectors, the better the generalizability of the boundary.

Figure 3: Optimal separation boundary in linearly separable case

Although the above sounds great, it is of limited practical value because real data sets are seldom (if ever) linearly separable.

So, what can we do when dealing with real (i.e. non linearly separable) data sets?

A simple approach to tackle small deviations from linear separability is to allow a small number of points (those that are close to the boundary) to be misclassified.  The number of possible misclassifications is governed by a free parameter C, which is called the cost.  The cost is essentially the penalty associated with making an error: the higher the value of C, the less likely it is that the algorithm will misclassify a point.

This approach – which is called soft margin classification – is illustrated in Figure 4. Note the points on the wrong side of the separation boundary.  We will demonstrate soft margin SVMs in the next section.  (Note:  At the risk of belabouring the obvious, the purely linearly separable case discussed in the previous para is simply a special case of the soft margin classifier.)

Figure 4: Soft margin classifier (linearly separable data)

Real life situations are much more complex and cannot be dealt with using soft margin classifiers. For example, as shown in Figure 5, one could have widely separated clusters of points that belong to the same classes. Such situations, which require the use of multiple (and nonlinear) boundaries, can sometimes be dealt with using a clever approach called the kernel trick.

Figure 5: Non-linearly separable data

The kernel trick

Recall that in the linearly separable (or soft margin) case, the SVM algorithm works by finding a separation boundary that maximises the margin, which is the distance between the boundary and the points closest to it. The distance here is the usual straight line distance between the boundary and the closest point(s). This is called the Euclidean distance in honour of the great geometer of antiquity. The point to note is that this process results in a separation boundary that is a straight line, which as Figure 5 illustrates, does not always work. In fact in most cases it won’t.

So what can we do? To answer this question, we have to take a bit of a detour…

What if we were able to generalize the notion of distance in a way that generates nonlinear separation boundaries? It turns out that this is possible. To see how, one has to first understand how the notion of distance can be generalized.

The key properties that any measure of distance must satisfy are:

  1. Non-negativity – a distance cannot be negative, a point that needs no further explanation I reckon 🙂
  2. Symmetry – that is, the distance between point A and point B is the same as the distance between point B and point A.
  3. Identity– the distance between a point and itself is zero.
  4. Triangle inequality – that is, the distance between point A and point C must be less than or equal to the sum of the distances between A and B and between B and C (equality holds only when B lies on the line segment joining A and C).

Any mathematical object that displays the above properties is akin to a distance. Such generalized distances are called metrics and the mathematical space in which they live is called a metric space. Metrics are defined using special mathematical functions designed to satisfy the above conditions. These functions are known as kernels.

The essence of the kernel trick lies in mapping the classification problem to a  metric space in which the problem is rendered separable via a separation boundary that is simple in the new space, but complex – as it has to be – in the original one. Generally, the transformed space has a higher dimensionality, with each of the dimensions being (possibly complex) combinations of the original problem variables. However, this is not necessarily a problem because in practice one doesn’t actually mess around with transformations, one just tries different kernels (the transformation being implicit in the kernel) and sees which one does the job. The check is simple: we simply test the predictions resulting from using different kernels against a held out subset of the data (as one would for any machine learning algorithm).

It turns out that a particular function – called the radial basis function kernel  (RBF kernel) – is very effective in many cases.  The RBF kernel is essentially a Gaussian (or Normal) function with the squared Euclidean distance between pairs of points as the variable (see equation 1 below).   The basic rationale behind the RBF kernel is that it tends to classify points that are close together (in the Euclidean sense) in the original space in the same way. This is reflected in the fact that the kernel decays (i.e. drops off to zero) as the Euclidean distance between points increases.

\exp (-\gamma |\mathbf{x-y}|^{2})....(1)

The rate at which a kernel decays is governed by the parameter \gamma – the higher the value of \gamma, the more rapid the decay.  This serves to illustrate that the RBF kernel is extremely flexible….but the flexibility comes at a price – the danger of overfitting for large values of \gamma .  One should choose appropriate values of C and \gamma so as to ensure that the resulting kernel represents the best possible balance between flexibility and accuracy. We’ll discuss how this is done in practice later in this article.
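To get a feel for this decay, here is a minimal sketch of the kernel in equation (1); the points and the values of gamma are purely illustrative:

#RBF kernel: exp(-gamma * squared Euclidean distance between x and y)
rbf_kernel <- function(x, y, gamma) exp(-gamma*sum((x - y)^2))
x1 <- c(1, 2); x2 <- c(1.5, 2.5); x3 <- c(5, 7)
rbf_kernel(x1, x2, gamma = 1)   #nearby points: value close to 1
rbf_kernel(x1, x3, gamma = 1)   #distant points: value close to 0
rbf_kernel(x1, x2, gamma = 10)  #larger gamma: faster decay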

Finally, though it is probably obvious, it is worth mentioning that the separation boundaries for arbitrary kernels are also defined through support vectors as in Figure 3.  To reiterate a point made earlier, this means that a solution that has fewer support vectors is likely to be more robust than one with many. Why? Because the support vectors are the data points most sensitive to noise: the fewer of them there are, the better.

There are many other types of kernels, each with their own pros and cons. However, I’ll leave these for adventurous readers to explore by themselves.  Finally, for a much more detailed….and dare I say, better… explanation of the kernel trick, I highly recommend this article by Eric Kim.

Support vector machines in R

In this demo we’ll use the svm interface that is implemented in the e1071 R package. This interface provides R programmers access to the comprehensive libsvm library written by Chang and Lin. I’ll use two toy datasets: the famous iris dataset available with the base R package and the sonar dataset from the mlbench package. I won’t describe details of the datasets as they are discussed at length in the documentation that I have linked to. However, it is worth mentioning the reasons why I chose these datasets:

  1. As mentioned earlier, no real life dataset is linearly separable, but the iris dataset is almost so. Consequently, it is a good illustration of using linear SVMs. Although one almost never uses these in practice, I have illustrated their use primarily for pedagogical reasons.
  2. The sonar dataset is a good illustration of the benefits of using RBF kernels in cases where the dataset is hard to visualise (60 variables in this case!). In general, one would almost always use RBF (or other nonlinear) kernels in practice.

With that said, let’s get right to it. I assume you have R and RStudio installed. For instructions on how to do this, have a look at the first article in this series. The processing preliminaries – loading libraries, data and creating training and test datasets are much the same as in my previous articles so I won’t dwell on these here. For completeness, however, I’ll list all the code so you can run it directly in R or R studio (a complete listing of the code can be found here):

#set working directory if needed (modify path as needed)
setwd("C:/Users/Kailash/Documents/svm")
#load required library
library(e1071)
#load built-in iris dataset
data(iris)
#set seed to ensure reproducible results
set.seed(42)
#split into training and test sets
iris[,"train"] <- ifelse(runif(nrow(iris))<0.8,1,0)
#separate training and test sets
trainset <- iris[iris$train==1,]
testset <- iris[iris$train==0,]
#get column index of train flag
trainColNum <- grep("train",names(trainset))
#remove train flag column from train and test sets
trainset <- trainset[,-trainColNum]
testset <- testset[,-trainColNum]
#get column index of predicted variable in dataset
typeColNum <- grep("Species",names(iris))
#build model – linear kernel and C-classification (soft margin) with default cost (C=1)
svm_model <- svm(Species~ ., data=trainset, method="C-classification", kernel="linear")
svm_model
Call:
svm(formula = Species ~ ., data = trainset, method = "C-classification", kernel = "linear")
Parameters:
SVM-Type: C-classification
SVM-Kernel: linear
cost: 1
gamma: 0.25
Number of Support Vectors: 24
#training set predictions
pred_train <-predict(svm_model,trainset)
mean(pred_train==trainset$Species)
[1] 0.9826087
#test set predictions
pred_test <-predict(svm_model,testset)
mean(pred_test==testset$Species)
[1] 0.9142857

 

The output from the SVM model shows that there are 24 support vectors. If desired, these can be examined using the SV variable in the model – i.e. via svm_model$SV.

The test prediction accuracy indicates that the linear kernel performs quite well on this dataset, confirming that it is indeed nearly linearly separable. To check performance by class, one can create a confusion matrix as described in my post on random forests. I’ll leave this as an exercise for you.  Another point to note is that we have used a soft-margin classification scheme with a cost C=1. You can experiment with this by explicitly changing the value of C. Again, I’ll leave this to you as an exercise.

Before proceeding to the RBF kernel, I should mention a point that an alert reader may have noticed. The predicted variable, Species, can take on 3 values (setosa, versicolor and virginica). However, our discussion above dealt with a binary (2 valued) classification problem. This brings up the question as to how the algorithm deals with multiclass classification problems – i.e. those involving datasets with more than two classes. The libsvm algorithm (which svm uses) does this using a one-against-one classification strategy. Here’s how it works:

  1. Divide the dataset (assumed to have N classes) into N(N-1)/2 datasets that have two classes each.
  2. Solve the binary classification problem for each of these subsets
  3. Use a simple voting mechanism to assign a class to each data point.

Basically, each data point is assigned the most frequent classification it receives from all the binary classification problems it figures in.
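As a quick sanity check of the arithmetic, here is a two-line sketch of how many binary classifiers are built for the three classes in iris:

#number of pairwise (one-against-one) classifiers for N classes is N(N-1)/2
n_classes <- length(levels(iris$Species))
n_classes*(n_classes - 1)/2  #gives 3 for the 3 species in iris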

With that said for the unrealistic linear classifier, let’s move on to the real world.  In the code below, I build SVM models using three different kernels:

  1.  Linear kernel (this is for comparison with the following 2 kernels).
  2. RBF kernel with default values for the parameters C and \gamma.
  3. RBF kernel with optimal values for C and \gamma. The optimal values are obtained using the tune.svm function (also available in e1071), which essentially builds models for multiple combinations of parameter values and selects the best.

OK, lets go:

#load required library (assuming e1071 is already loaded)
library(mlbench)
#load Sonar dataset
data(Sonar)
#set seed to ensure reproducible results
set.seed(42)
#split into training and test sets
Sonar[,"train"] <- ifelse(runif(nrow(Sonar))<0.8,1,0)
#separate training and test sets
trainset <- Sonar[Sonar$train==1,]
testset <- Sonar[Sonar$train==0,]
#get column index of train flag
trainColNum <- grep("train",names(trainset))
#remove train flag column from train and test sets
trainset <- trainset[,-trainColNum]
testset <- testset[,-trainColNum]
#get column index of predicted variable in dataset
typeColNum <- grep("Class",names(Sonar))
#build model – linear kernel and C-classification with default cost (C=1)
svm_model <- svm(Class~ ., data=trainset, method="C-classification", kernel="linear")
#training set predictions
pred_train <-predict(svm_model,trainset)
mean(pred_train==trainset$Class)
[1] 0.969697
#test set predictions
pred_test <-predict(svm_model,testset)
mean(pred_test==testset$Class)
[1] 0.6046512

I’ll leave you to examine the contents of the model. The important point to note here is that the performance of the model with the test set is quite dismal compared to the previous case. This simply indicates that the linear kernel is not appropriate here.  Let’s take a look at what happens if we use the RBF kernel with default values for the parameters:

#build model: radial kernel, default params
svm_model <- svm(Class~ ., data=trainset, method="C-classification", kernel="radial")
#print params
svm_model$cost
[1] 1
svm_model$gamma
[1] 0.01666667
#training set predictions
pred_train <-predict(svm_model,trainset)
mean(pred_train==trainset$Class)
[1] 0.9878788
#test set predictions
pred_test <-predict(svm_model,testset)
mean(pred_test==testset$Class)
[1] 0.7674419

That’s a pretty decent improvement from the linear kernel. Let’s see if we can do better by doing some parameter tuning. To do this we first invoke tune.svm and use the parameters it gives us in the call to svm:

#find optimal parameters in a specified range
tune_out <- tune.svm(x=trainset[,-typeColNum],y=trainset[,typeColNum],gamma=10^(-3:3),cost=c(0.01,0.1,1,10,100,1000),kernel="radial")
#print best values of cost and gamma
tune_out$best.parameters$cost
[1] 10
tune_out$best.parameters$gamma
[1] 0.01
#build model
svm_model <- svm(Class~ ., data=trainset, method="C-classification", kernel="radial",cost=tune_out$best.parameters$cost,gamma=tune_out$best.parameters$gamma)
#training set predictions
pred_train <-predict(svm_model,trainset)
mean(pred_train==trainset$Class)
[1] 1
#test set predictions
pred_test <-predict(svm_model,testset)
mean(pred_test==testset$Class)
[1] 0.8139535

That’s a fairly decent improvement on the un-optimised case.

Wrapping up

This brings us to the end of this introductory exploration of SVMs in R. To recap, the distinguishing feature of SVMs in contrast to most other techniques is that they attempt to construct optimal separation boundaries between different categories.

SVMs are quite versatile and have been applied to a wide variety of domains ranging from chemistry to pattern recognition. They are best used in binary classification scenarios. This brings up the question as to when SVMs are to be preferred to other binary classification techniques such as logistic regression. The honest response is, “it depends” – but one general point to keep in mind when choosing between the two is that SVM algorithms tend to be expensive both in terms of memory and computation, issues that can start to hurt as the size of the dataset increases.

Given all the above caveats and considerations, the best way  to figure out whether an SVM approach will work for your problem may be to do what most machine learning practitioners do: try it out!

Written by K

February 7, 2017 at 8:27 pm

A gentle introduction to random forests using R

with 7 comments

Introduction

In a previous post, I described how decision tree algorithms work and demonstrated their use via the rpart library in R. Decision trees work by splitting a dataset recursively. That is, subsets arising from a split are further split until a predetermined termination criterion is reached.  At each step, a split is made based on the independent variable that results in the largest possible reduction in heterogeneity of the dependent  variable.

(Note:  readers unfamiliar with decision trees may want to read that post before proceeding)

The main drawback of decision trees is that they are prone to overfitting.   The  reason for this is that trees, if grown deep, are able to fit  all kinds of variations in the data, including noise. Although it is possible to address this partially by pruning, the result often remains less than satisfactory. This is because the algorithm makes a locally optimal choice at each split without any regard to whether the choice made is the best one overall.  A poor split made in the initial stages can thus doom the model, a problem that cannot be fixed by post-hoc pruning.

In this post I describe random forests, a tree-based algorithm that addresses the above shortcoming of decision trees. I’ll first describe the intuition behind the algorithm via an analogy and then do a demo using the R randomForest library.

Motivating random forests

One of the reasons for the popularity of decision trees is that they reflect the way humans make decisions: by weighing up options at each stage and choosing the best one available.  The analogy is particularly useful because it also suggests how decision trees can be improved.

One of the lifelines in the game show, Who Wants to be A Millionaire, is “Ask The Audience” wherein a contestant can ask the audience to vote on the answer to a question.  The rationale here is that the majority response from a large number of independent decision makers is more likely to yield a correct answer than one from a randomly chosen person.  There are two factors at play here:

  1. People have different experiences and will therefore draw upon different “data” to answer the question.
  2. People have different knowledge bases and preferences and will therefore draw upon different “variables” to make their choices at each stage in their decision process.

Taking a cue from the above, it seems reasonable to build many decision trees using:

  1. Different sets of training data.
  2. Randomly selected subsets of variables at each split of every decision tree.

Predictions can then made by taking the majority vote over all trees (for classification problems) or averaging results over all trees (for regression problems).  This is essentially how the random forest algorithm works.

The net effect of the two strategies is to reduce overfitting by a) averaging over trees created from different samples of the dataset and b) decreasing the likelihood of a small set of strong predictors dominating the splits.  The price paid is reduced interpretability as well as increased computational complexity. But then, there is no such thing as a free lunch.

The mechanics of the algorithm

Although we will not delve into the mathematical details of the algorithm, it is important to understand how the two points made above are implemented in it.

Bootstrap aggregating… and a (rather cool) error estimate

A key feature of the algorithm is the use of multiple datasets for training individual decision trees.  This is done via a neat statistical trick called bootstrap aggregating (also called bagging).

Here’s how bagging works:

Assume you have a dataset of size N.  From this you create a sample (i.e. a subset) of size n (n less than or equal to N) by choosing n data points randomly with replacement.  “Randomly” means every point in the dataset is equally likely to be chosen and   “with replacement” means that a specific data point can appear more than once in the subset. Do this M times to create M equally-sized samples of size n each.  It can be shown that this procedure, which statisticians call bootstrapping, is legit when samples are created from large datasets – that is, when N is large.

Because a bagged sample is created by selection with replacement, there will generally be some points that are not selected.  In fact, it can be shown that, on the average, each sample will use about two-thirds of the available data points. This gives us a clever way to estimate the error as part of the process of model building.
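If you’d like to convince yourself of the two-thirds result, here is a minimal sketch using a hypothetical dataset of 10,000 points:

#fraction of distinct points that appear in a bootstrap sample of size N
set.seed(42)
N <- 10000
in_bag <- unique(sample(1:N, N, replace = TRUE))
length(in_bag)/N  #roughly 0.63, i.e. about two-thirds of the available points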

Here’s how:

For every data point, obtain predictions from the trees in which the point was out of bag. From the result mentioned above, this will yield approximately M/3 predictions per data point (because roughly a third of the data points are out of bag for any given tree).  Take the majority vote of these M/3 predictions as the predicted value for the data point. One can do this for the entire dataset. From these out of bag predictions for the whole dataset, we can estimate the overall error by computing the classification error (count of incorrect predictions divided by N) for classification problems or the root mean squared error for regression problems.  This means there is no need to have a separate test data set, which is kind of cool.  However, if you have enough data, it is worth holding out some data for use as an independent test set. This is what we’ll do in the demo later.

Using subsets of predictor variables

Although bagging reduces overfitting somewhat, it does not address the issue completely. The reason is that in most datasets a small number of predictors tend to dominate the others.  These predictors tend to be selected in early splits and thus influence the shapes and sizes of a significant fraction of trees in the forest.  That is, strong predictors enhance correlations between trees, which tends to get in the way of variance reduction.

A simple way to get around this problem is to use a random subset of variables at each split. This avoids over-representation of dominant variables and thus creates a more diverse forest. This is precisely what the random forest algorithm does.

Random forests in R

In what follows, I use the famous Glass dataset from the mlbench library.  The dataset has 214 data points of six types of glass with varying metal oxide content and refractive indexes. I’ll first build a decision tree model based on the data using the rpart library (recursive partitioning) that I covered in an earlier article, and then show how one can build a random forest model using the randomForest library. The rationale behind this is to compare the two models – single decision tree vs random forest. In the interests of space, I won’t explain the details of rpart here as I’ve covered it at length in the previous article. However, for completeness, I’ll list the demo code for it before getting into random forests.

Decision trees using rpart

Here’s the code listing for building a decision tree using rpart on the Glass dataset (please see my previous article for a full explanation of each step). Note that I have not used pruning as there is little benefit to be gained from it (Exercise for the reader: try this for yourself!).

#set working directory if needed (modify path as needed)
setwd("C:/Users/Kailash/Documents/rf")
#load required libraries – rpart for classification and regression trees
library(rpart)
#mlbench for Glass dataset
library(mlbench)
#load Glass
data("Glass")
#set seed to ensure reproducible results
set.seed(42)
#split into training and test sets
Glass[,"train"] <- ifelse(runif(nrow(Glass))<0.8,1,0)
#separate training and test sets
trainGlass <- Glass[Glass$train==1,]
testGlass <- Glass[Glass$train==0,]
#get column index of train flag
trainColNum <- grep("train",names(trainGlass))
#remove train flag column from train and test sets
trainGlass <- trainGlass[,-trainColNum]
testGlass <- testGlass[,-trainColNum]
#get column index of predicted variable in dataset
typeColNum <- grep("Type",names(Glass))
#build model
rpart_model <- rpart(Type ~.,data = trainGlass, method="class")
#plot tree
plot(rpart_model);text(rpart_model)
#…and the moment of reckoning
rpart_predict <- predict(rpart_model,testGlass[,-typeColNum],type="class")
mean(rpart_predict==testGlass$Type)
[1] 0.6744186

Now, we know that decision tree algorithms tend to display high variance so the hit rate from any one tree is likely to be misleading. To address this we’ll generate a bunch of trees using different training sets (via random sampling) and calculate an average hit rate and spread (or standard deviation).

#function to do multiple runs
multiple_runs <- function(train_fraction,n,dataset){
  fraction_correct <- rep(NA,n)
  set.seed(42)
  for (i in 1:n){
    #split using the specified training fraction
    dataset[,"train"] <- ifelse(runif(nrow(dataset))<train_fraction,1,0)
    trainColNum <- grep("train",names(dataset))
    typeColNum <- grep("Type",names(dataset))
    trainset <- dataset[dataset$train==1,-trainColNum]
    testset <- dataset[dataset$train==0,-trainColNum]
    rpart_model <- rpart(Type~.,data = trainset, method="class")
    rpart_test_predict <- predict(rpart_model,testset[,-typeColNum],type="class")
    fraction_correct[i] <- mean(rpart_test_predict==testset$Type)
  }
  return(fraction_correct)
}
#50 runs, no pruning
n_runs <- multiple_runs(0.8,50,Glass)
mean(n_runs)
[1] 0.6874315
sd(n_runs)
[1] 0.0530809

The decision tree algorithm gets it right about 69% of the time with a variation of about 5%. The variation isn’t too bad here, but the accuracy has hardly improved at all (Exercise for the reader: why?). Let’s see if we can do better using random forests.

Random forests

As discussed earlier, a random forest algorithm works by averaging over multiple trees using bootstrapped samples. Also, it reduces the correlation between trees by splitting on a random subset of predictors at each node in tree construction. The key parameters for the randomForest algorithm are the number of trees (ntree) and the number of variables to be considered for splitting (mtry).  The algorithm sets a default of 500 for ntree and sets mtry to the square root of the number of predictors for classification problems, or one-third the total number of predictors for regression.   These defaults can be overridden by explicitly providing values for these variables.
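As a quick illustration of the mtry default, here is the arithmetic for the Glass dataset used below, which has 9 predictors (a sketch, not something you need to run):

#default number of variables tried at each split
n_predictors <- 9
floor(sqrt(n_predictors))  #classification default: 3
floor(n_predictors/3)      #regression default: 3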

The preliminary stuff – the creation of training and test datasets etc. – is much the same as for decision trees but I’ll list the code for completeness.

#load required library – randomForest
library(randomForest)
#mlbench for Glass dataset – load if not already loaded
#library(mlbench)
#load Glass
data("Glass")
#set seed to ensure reproducible results
set.seed(42)
#split into training and test sets
Glass[,"train"] <- ifelse(runif(nrow(Glass))<0.8,1,0)
#separate training and test sets
trainGlass <- Glass[Glass$train==1,]
testGlass <- Glass[Glass$train==0,]
#get column index of train flag
trainColNum <- grep("train",names(trainGlass))
#remove train flag column from train and test sets
trainGlass <- trainGlass[,-trainColNum]
testGlass <- testGlass[,-trainColNum]
#get column index of predicted variable in dataset
typeColNum <- grep("Type",names(Glass))
#build model
Glass.rf <- randomForest(Type ~.,data = trainGlass, importance=TRUE, xtest=testGlass[,-typeColNum],ntree=1000)
#Get summary info
Glass.rf
Call:
randomForest(formula = Type ~ ., data = trainGlass, importance = TRUE, xtest = testGlass[, -typeColNum], ntree = 1000)
Type of random forest: classification
Number of trees: 1000
No. of variables tried at each split: 3
OOB estimate of error rate: 23.98%
Confusion matrix:
   1  2 3  5 6  7 class.error
1 40  7 2  0 0  0   0.1836735
2  8 49 1  2 2  1   0.2222222
3  6  3 6  0 0  0   0.6000000
5  0  1 0 11 0  1   0.1538462
6  1  2 0  1 6  0   0.5000000
7  1  2 0  1 0 21   0.1600000

The first thing to note is that the out of bag error estimate is ~24%.  Equivalently, the hit rate is 76%, which is better than the 69% we got from the decision trees. Secondly, you’ll note that the algorithm does a terrible job of identifying type 3 and 6 glasses correctly. This could possibly be improved by a technique called boosting, which works by iteratively improving poor predictions made in earlier stages. I plan to look at boosting in a future post, but if you’re curious, check out the gbm package in R.

Finally, for completeness, let’s see how the test set does:

#accuracy for test set
mean(Glass.rf$test$predicted==testGlass$Type)
[1] 0.8372093
#confusion matrix
table(Glass.rf$test$predicted,testGlass$Type)
      1 2 3 5 6 7
  1  19 2 0 0 0 0
  2   1 9 1 0 0 0
  3   1 1 1 0 0 0
  5   0 1 0 0 0 0
  6   0 0 0 0 3 0
  7   0 0 0 0 0 4

The test accuracy is better than the out of bag accuracy and there are some differences in the class errors as well. However, overall the two compare quite well and are significantly better than the results of the decision tree algorithm.

Variable importance

Random forest algorithms also give measures of variable importance. Computation of these is enabled by setting  importance, a boolean parameter, to TRUE. The algorithm computes two measures of variable importance: mean decrease in Gini and mean decrease in accuracy. Brief explanations of these follow.

Mean decrease in Gini

When determining splits in individual trees, the algorithm looks for the largest class (in terms of population) and attempts to isolate it first. If this is not possible, it tries to do the best it can, always focusing on isolating the largest remaining class in every split. This is called the Gini splitting rule (see this article for a good explanation of the rule).

The “goodness of split” is measured by the Gini Impurity, I_{G}. For a set containing K categories this is given by:

I_{G} = \sum_{i=1}^{K} f_{i}(1-f_{i})

where f_{i} is the fraction of the set that belongs to the ith category. Clearly, I_{G}  is 0 when the set is homogeneous or pure (1 class only) and is maximum when classes are equiprobable (for example, in a two class set the maximum occurs when f_{1} and f_{2} are 0.5). At each stage the algorithm chooses to split on the predictor that leads to the largest decrease in I_{G}. The algorithm tracks this decrease for each predictor for all splits and all trees in the forest. The average is reported  as the mean decrease in Gini.
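Here is a minimal sketch of the impurity calculation for a few vectors of made-up class labels:

#Gini impurity for a vector of class labels
gini_impurity <- function(labels) {
  f <- table(labels)/length(labels)
  sum(f*(1 - f))
}
gini_impurity(c("a", "a", "a", "a"))       #pure node: 0
gini_impurity(c("a", "a", "b", "b"))       #two equiprobable classes: 0.5
gini_impurity(c("a", "a", "b", "b", "c"))  #three classes: 0.64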

Mean decrease in accuracy

The mean decrease in accuracy is calculated using the out of bag data points for each tree. The procedure goes as follows: when a particular tree is grown, the out of bag points are passed down the tree and the prediction accuracy (based on all out of bag points) is recorded. The values of each predictor are then randomly permuted in turn and the out of bag prediction accuracy recalculated. The decrease in accuracy for a given predictor is the difference between the accuracy obtained with the original (unpermuted) data and that obtained when the values of the predictor are permuted. As in the previous case, the decrease in accuracy for each predictor can be computed and tracked as the algorithm progresses.  These can then be averaged by predictor to yield a mean decrease in accuracy.
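To make the permutation idea concrete, here is a simplified sketch that uses the held out test set rather than the per-tree out of bag points that randomForest actually uses; the refitted forest (built without the xtest argument so that the trees are retained for prediction) and the choice of the Mg predictor are purely illustrative:

#refit a forest that keeps its trees, so we can call predict on new data
rf_model <- randomForest(Type ~ ., data = trainGlass, ntree = 1000)
#baseline accuracy on the test set
baseline <- mean(predict(rf_model, testGlass) == testGlass$Type)
#permute a single predictor (Mg) and recompute accuracy
permuted <- testGlass
permuted$Mg <- sample(permuted$Mg)
#the drop in accuracy is a rough measure of the importance of Mg
baseline - mean(predict(rf_model, permuted) == testGlass$Type)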

Variable importance plot

From the above, it would seem that the mean decrease in accuracy is a more global measure as it uses fully constructed trees, in contrast to the Gini measure which is based on individual splits. In practice, however, there could be other reasons for choosing one over the other…but that is neither here nor there: if you set importance to TRUE, you’ll get both. The numerical measures of importance are returned in the randomForest object (Glass.rf in our case), but I won’t list them here. Instead, I’ll just print out the variable importance plots for the two measures as these give a good visual overview of the relative importance of variables. The code is a simple one-liner:

#variable importance plot
varImpPlot(Glass.rf)

The plot is shown in Figure 1 below.

Figure 1: Variable importance plots

Figure 1: Variable importance plots

In this case the two measures are pretty consistent so it doesn’t really matter which one you choose.

Wrapping up

Random forests are an example of a general class of techniques called ensemble methods. These techniques are based on the principle that averaging over a large number of not-so-good models  yields a more reliable prediction than a single model. This is true only if models in the group are independent of  each other, which is precisely what bootstrap aggregation and predictor subsetting are intended to achieve.

Although  considerably more complex than decision trees, the logic behind random forests is not hard to understand. Indeed, the intuitiveness of the algorithm together with its ease of use and accuracy have made it very popular in the machine learning community.

Written by K

September 20, 2016 at 9:44 pm

A gentle introduction to decision trees using R

with 7 comments

Introduction

Most techniques of predictive analytics have their origins in probability or statistical theory (see my post on Naïve Bayes, for example).  In this post I’ll look at one that has more a commonplace origin: the way in which humans make decisions.  When making decisions, we typically identify the options available and then evaluate them based on criteria that are important to us.  The intuitive appeal of such a procedure is in no small measure due to the fact that it can be easily explained through a visual. Consider the following graphic, for example:

Figure 1: Example of a simple decision tree (Courtesy: Duncan Hull)

Figure 1: Example of a simple decision tree (Courtesy: Duncan Hull)

(Original image: https://www.flickr.com/photos/dullhunk/7214525854, Credit: Duncan Hull)

The tree structure depicted here provides a neat, easy-to-follow description of the issue under consideration and its resolution. The decision procedure is based on asking a series of questions, each of which serves to further reduce the domain of possibilities. The predictive technique I discuss in this post, classification and regression trees (CART), works in much the same fashion. It was invented by Leo Breiman and his colleagues in the 1970s.

In what follows, I will use the open source software, R. If you are new to R,   you may want to follow this link for more on the basics of setting up and installing it. Note that the R implementation of the CART algorithm is called RPART (Recursive Partitioning And Regression Trees). This is essentially because Breiman and Co. trademarked the term CART. As some others have pointed out, it is somewhat ironical that the algorithm is now commonly referred to as RPART rather than by the term coined by its inventors.

A bit about the algorithm

The rpart algorithm works by splitting the dataset recursively, which means that the subsets that arise from a split are further split until a predetermined termination criterion is reached.  At each step, the split is made based on the independent variable that results in the largest possible reduction in heterogeneity of the dependent (predicted) variable.

Splitting rules can be constructed in many different ways, all of which are based on the notion of impurity-  a measure of the degree of heterogeneity of the leaf nodes. Put another way, a leaf node that contains a single class is homogeneous and has impurity=0.   There are three popular impurity quantification methods: Entropy (aka information gain), Gini Index and Classification Error.  Check out this article for a simple explanation of the three methods.

The rpart algorithm offers the entropy  and Gini index methods as choices. There is a fair amount of fact and opinion on the Web about which method is better. Here are some of the better articles I’ve come across:

https://www.quora.com/Are-gini-index-entropy-or-classification-error-measures-causing-any-difference-on-Decision-Tree-classification

http://stats.stackexchange.com/questions/130155/when-to-use-gini-impurity-and-when-to-use-information-gain

https://www.garysieling.com/blog/sklearn-gini-vs-entropy-criteria

http://www.salford-systems.com/resources/whitepapers/114-do-splitting-rules-really-matter

The answer as to which method is the best is: it depends.  Given this, it may be prudent to try out a couple of methods and pick the one that works best for your problem.

Regardless of the method chosen, the splitting rules partition the decision space (a fancy word for the entire dataset) into rectangular regions, each of which corresponds to a split. Consider the following simple example with two predictors x1 and x2. The first split is at x1=1 (which splits the decision space into two regions, x1<1 and x1>1), the second is at x2=2, which splits the (x1>1) region into 2 sub-regions, and the final split is at x1=1.5, which splits the (x1>1, x2>2) sub-region further.

Figure 2: Example of partitioning
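For concreteness, here is a small base R sketch that draws the partitioning just described (the splits at x1=1, x2=2 and x1=1.5 are the illustrative values from the example above):

#empty plotting region
plot(1, type = "n", xlim = c(0, 3), ylim = c(0, 4), xlab = "x1", ylab = "x2")
#first split: x1 = 1
abline(v = 1)
#second split: x2 = 2, within the x1 > 1 region
segments(1, 2, 3, 2)
#third split: x1 = 1.5, within the (x1 > 1, x2 > 2) sub-region
segments(1.5, 2, 1.5, 4)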

It is important to note that the algorithm works by making the best possible choice at each particular stage, without any consideration of whether those choices remain optimal in future stages. That is, the algorithm makes a locally optimal decision at each stage. It is thus quite possible that such a choice at one stage turns out to be sub-optimal in the overall scheme of things.  In other words,  the algorithm does not find a globally optimal tree.

Another important point relates to well-known bias-variance tradeoff in machine learning, which in simple terms is a tradeoff between the degree to which a model fits the training data and its predictive accuracy.  This refers to the general rule that beyond a point, it is counterproductive to improve the fit of a model to the training data as this increases the likelihood of overfitting.  It is easy to see that deep trees are more likely to overfit the data than shallow ones. One obvious way to control such overfitting is to construct shallower trees by stopping the algorithm at an appropriate point based on whether a split significantly improves the fit.  Another is to grow a tree unrestricted and then prune it back using an appropriate criterion. The rpart algorithm takes the latter approach.

Here is how it works in brief:

Essentially one minimises the cost,  C_{\alpha}(T), a quantity that is a  linear combination of the error (essentially, the fraction of misclassified instances, or variance in the case of a continuous variable), R(T)  and the number of leaf nodes in the tree, |\tilde{T} |:

C_{\alpha}(T) = R(T) + \alpha |\tilde{T} |

First, we note that when \alpha = 0, this simply returns the original fully grown tree. As \alpha increases, we incur a penalty that is proportional to the number of leaf nodes.  This tends to cause the minimum cost to occur for a tree that is a subtree of the original one (since a subtree will have a smaller number of leaf nodes). In practice we vary \alpha and pick the value that gives the subtree that results in the smallest cross-validated prediction error.  One does not have to worry about programming this because the rpart algorithm actually computes the errors for different values of \alpha for us. All we need to do is pick the value of the coefficient that gives the lowest cross-validated error. I will illustrate this in detail in the next section.

An implication of their tendency to overfit data is that decision trees tend to be sensitive to relatively minor changes in the training datasets. Indeed, small differences can lead to radically different looking trees. Pruning addresses this to an extent, but does not resolve it completely.  A better resolution is offered by the so-called ensemble methods that average over many differently constructed trees. I’ll discuss one such method at length in a future post.

Finally, I should also mention that decision trees can be used for both classification and regression problems (i.e. those in which the predicted variable is discrete and continuous respectively).  I’ll demonstrate both types of problems in the next two sections.

Classification trees using rpart

To demonstrate classification trees, we’ll use the Ionosphere dataset available in the mlbench package in R. I have chosen this dataset because it nicely illustrates the points I wish to make in this post. In general, you will almost always find that algorithms that work fine on classroom datasets do not work so well in the real world…but of course, you know that already!

We begin by setting the working directory, loading the required packages (rpart and mlbench) and then loading the Ionosphere dataset.

#set working directory if needed (modify path as needed)
setwd("C:/Users/Kailash/Documents/decisiontrees")
#load required libraries – rpart for classification and regression trees
library(rpart)
#mlbench for Ionosphere dataset
library(mlbench)
#load Ionosphere
data("Ionosphere")

Next we separate the data into training and test sets. We'll use the former to build the model and the latter to test it. To do this, I use a simple scheme wherein I randomly select 80% of the data for the training set and assign the remainder to the test data set. This is easily done in a single R statement that invokes the uniform distribution (runif) and the vectorised function, ifelse. Before invoking runif, I set the seed to my favourite integer in order to ensure reproducibility of results.

#set seed to ensure reproducible results
set.seed(42)
#split into training and test sets
Ionosphere[,"train"] <- ifelse(runif(nrow(Ionosphere))<0.8,1,0)
#separate training and test sets
trainset <- Ionosphere[Ionosphere$train==1,]
testset <- Ionosphere[Ionosphere$train==0,]
#get column index of train flag
trainColNum <- grep("train",names(trainset))
#remove train flag column from train and test sets
trainset <- trainset[,-trainColNum]
testset <- testset[,-trainColNum]

In the above, I have also removed the training flag from the training and test datasets.

Next we invoke rpart. I strongly recommend you take some time to go through the documentation and understand the parameters and their default values.  Note that we need to remove the predicted variable from the dataset before passing the latter on to the algorithm, which is why we need to find the column index of the predicted variable (first line below). Also note that we set the method parameter to "class", which simply tells the algorithm that the predicted variable is discrete.  Finally, rpart uses the Gini index for splitting by default, and we'll stick with this option.
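As an aside, rpart also lets you choose the information (entropy) criterion instead of Gini via its parms argument. A minimal sketch, purely for the curious (we won't use this model further):

#sketch: use the information (entropy) splitting criterion instead of the default Gini index
rpart_model_info <- rpart(Class~., data = trainset, method = "class",
                          parms = list(split = "information"))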

#get column index of predicted variable in dataset
typeColNum <- grep("Class",names(Ionosphere))
#build model
rpart_model <- rpart(Class~.,data = trainset, method="class")
#plot tree
plot(rpart_model);text(rpart_model)

 

The resulting plot is shown in Figure 3 below.  It is  quite self-explanatory so I  won’t dwell on it here.

Figure 3: A classification tree for Ionosphere dataset

Next we see how good the model is by seeing how it fares against the test data.

#…and the moment of reckoning
rpart_predict <- predict(rpart_model,testset[,-typeColNum],type="class")
mean(rpart_predict==testset$Class)
[1] 0.8450704
#confusion matrix
table(pred=rpart_predict,true=testset$Class)
       true
pred    bad good
  bad    17    2
  good    9   43

 

Note that we need to verify the above results by doing multiple runs, each using different training and test sets. I will  do this later, after discussing pruning.

Next, we prune the tree using the cost complexity criterion. Basically, the intent is to see if a shallower subtree can give us comparable results. If so, we'd be better off choosing the shallower tree because it reduces the likelihood of overfitting.

As described earlier, we choose the appropriate pruning parameter (aka cost-complexity parameter) \alpha by picking the value that results in the lowest prediction error. Note that all relevant computations have already been carried out by R when we built the original tree (the call to rpart in the code above). All that remains now is to pick the value of \alpha:

#cost-complexity pruning
printcp(rpart_model)
    CP nsplit rel error xerror     xstd
1 0.57      0      1.00   1.00 0.080178
2 0.20      1      0.43   0.46 0.062002
3 0.02      2      0.23   0.26 0.048565
4 0.01      4      0.19   0.35

It is clear from the above that the lowest cross-validation error (xerror in the table) occurs for \alpha =0.02 (this is the CP column in the table above). One can find this value of CP programmatically like so:

# get index of CP with lowest xerror
opt <- which.min(rpart_model$cptable[,"xerror"])
#get its value
cp <- rpart_model$cptable[opt, "CP"]

Next, we prune the tree based on this value of CP:

#prune tree
pruned_model <- prune(rpart_model,cp)
#plot tree
plot(pruned_model);text(pruned_model)

Note that rpart will use a default CP value of 0.01 if you don’t specify one in prune.

The pruned tree is shown in Figure 4 below.

Figure 4: A pruned classification tree for Ionosphere dataset

Let’s see how this tree stacks up against the fully grown one shown in Fig 3.

#find proportion of correct predictions using test set
rpart_pruned_predict <- predict(pruned_model,testset[,-typeColNum],type="class")
mean(rpart_pruned_predict==testset$Class)
[1] 0.8873239

This seems like an improvement over the unpruned tree, but one swallow does not a summer make. We need to check that this holds up for different training and test sets. This is easily done by creating multiple random partitions of the dataset and checking the efficacy of pruning for each. To do this efficiently, I’ll create a function that takes the training fraction, number of runs (partitions) and the name of the dataset as inputs and outputs the proportion of correct predictions for each run. It also optionally prunes the tree. Here’s the code:

#function to do multiple runs
multiple_runs_classification <- function(train_fraction,n,dataset,prune_tree=FALSE){
  fraction_correct <- rep(NA,n)
  set.seed(42)
  for (i in 1:n){
    #create a fresh random train/test split for each run
    dataset[,"train"] <- ifelse(runif(nrow(dataset))<train_fraction,1,0)
    trainColNum <- grep("train",names(dataset))
    typeColNum <- grep("Class",names(dataset))
    trainset <- dataset[dataset$train==1,-trainColNum]
    testset <- dataset[dataset$train==0,-trainColNum]
    rpart_model <- rpart(Class~.,data = trainset, method="class")
    if(prune_tree==FALSE) {
      #unpruned: predict with the fully grown tree
      rpart_test_predict <- predict(rpart_model,testset[,-typeColNum],type="class")
      fraction_correct[i] <- mean(rpart_test_predict==testset$Class)
    }else{
      #pruned: pick the CP with the lowest cross-validated error and prune
      opt <- which.min(rpart_model$cptable[,"xerror"])
      cp <- rpart_model$cptable[opt, "CP"]
      pruned_model <- prune(rpart_model,cp)
      rpart_pruned_predict <- predict(pruned_model,testset[,-typeColNum],type="class")
      fraction_correct[i] <- mean(rpart_pruned_predict==testset$Class)
    }
  }
  return(fraction_correct)
}

Note that in the above, I have set the default value of the prune_tree parameter to FALSE, so the function will execute the first branch of the if statement unless this default is overridden.

OK, so let’s do 50 runs with and without pruning, and check the mean and variance of the results for both sets of runs.

#50 runs, no pruning
unpruned_set <- multiple_runs_classification(0.8,50,Ionosphere)
mean(unpruned_set)
[1] 0.8772763
sd(unpruned_set)
[1] 0.03168975
#50 runs, with pruning
pruned_set <- multiple_runs_classification(0.8,50,Ionosphere,prune_tree=TRUE)
mean(pruned_set)
[1] 0.9042914
sd(pruned_set)
[1] 0.02970861

So we see that there is an improvement of about 3% with pruning. Also, if you were to plot the trees as we did earlier, you would see that this improvement is achieved with shallower trees. Again, I point out that this is not always the case. In fact, it often happens that pruning results in worse predictions, albeit with better reliability – a classic illustration of the bias-variance tradeoff.
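If you'd like something a little more formal than comparing means, note that because both sets of runs use the same seed, run i of the pruned and unpruned experiments is built on the same train/test partition, so a paired comparison is reasonable. A quick sketch using the unpruned_set and pruned_set vectors from above:

#visual comparison of the two sets of runs
boxplot(unpruned_set, pruned_set, names = c("unpruned","pruned"),
        ylab = "proportion of correct predictions")
#paired t-test: run i uses the same partition in both experiments
t.test(pruned_set, unpruned_set, paired = TRUE)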

Regression trees using rpart

In the previous section we saw how one can build decision trees for situations in which the predicted variable is discrete.  Let’s now look at the case in which the predicted variable is continuous. We’ll use the Boston Housing dataset from the mlbench package.  Much of the discussion of the earlier section applies here, so I’ll just display the code, explaining only the differences.

#load Boston Housing dataset
data("BostonHousing")
#set seed to ensure reproducible results
set.seed(42)
#split into training and test sets
BostonHousing[,"train"] <- ifelse(runif(nrow(BostonHousing))<0.8,1,0)
#separate training and test sets
trainset <- BostonHousing[BostonHousing$train==1,]
testset <- BostonHousing[BostonHousing$train==0,]
#get column index of train flag
trainColNum <- grep("train",names(trainset))
#remove train flag column from train and test sets
trainset <- trainset[,-trainColNum]
testset <- testset[,-trainColNum]

Next we invoke rpart, noting that the predicted variable is medv (median value of owner-occupied homes in $1000 units) and that we need to set the method parameter to “anova“. The latter tells rpart that the predicted variable is continuous (i.e that this is a regression problem).

#build model
rpart_model <- rpart(medv~.,data = trainset, method="anova")
#plot tree
plot(rpart_model);text(rpart_model)

The plot of the tree is shown in Figure 5 below.

Figure 5: A regression tree for Boston Housing dataset

Next, we need to see how good the predictions are. Since the dependent variable is continuous, we cannot compare the predictions directly against the test set. Instead, we calculate the root mean square (RMS) error. To do this, we request rpart to output the predictions as a vector – one prediction per record in the test dataset. The RMS error can then easily be calculated by comparing this vector with the medv column in the test dataset.

Here is the relevant code:

#get column index of the predicted variable (medv)
resultColNum <- grep("medv",names(testset))
#…the moment of reckoning
rpart_test_predict <- predict(rpart_model,testset[,-resultColNum],type = "vector")
#calculate RMS error
rmsqe <- sqrt(mean((rpart_test_predict-testset$medv)^2))
rmsqe
[1] 4.586388

Again, we need to do multiple runs to check on the  reliability of the predictions. However, you already know how to do that so I will leave it to you.

Moving on, we prune the tree using the cost complexity criterion as before.  The code is exactly the same as in the classification problem.

# get index of CP with lowest xerror
opt <- which.min(rpart_model$cptable[,"xerror"])
#get its value
cp <- rpart_model$cptable[opt, "CP"]
#prune tree
pruned_model <- prune(rpart_model,cp)
#plot tree
plot(pruned_model);text(pruned_model)

The tree is unchanged so I won't show it here. This means that, as far as cost complexity pruning is concerned, the optimal subtree is the same as the original tree. To confirm this, we'd need to do multiple runs as before – something that I've already left as an exercise for you :).  Basically, you'll need to write a function analogous to the one above that computes the root mean square error instead of the proportion of correct predictions.
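In case you'd like a starting point for that exercise, here's a minimal sketch of such a function. It is essentially the classification version above with the accuracy calculation swapped for an RMS error, and with the predicted variable hard-coded to medv (the function name and that choice are mine):

#function to do multiple runs for regression trees, returning RMS errors
multiple_runs_regression <- function(train_fraction,n,dataset,prune_tree=FALSE){
  rms_error <- rep(NA,n)
  set.seed(42)
  for (i in 1:n){
    dataset[,"train"] <- ifelse(runif(nrow(dataset))<train_fraction,1,0)
    trainColNum <- grep("train",names(dataset))
    trainset <- dataset[dataset$train==1,-trainColNum]
    testset <- dataset[dataset$train==0,-trainColNum]
    rpart_model <- rpart(medv~.,data = trainset, method="anova")
    if(prune_tree==TRUE){
      #pick the CP with the lowest cross-validated error and prune
      opt <- which.min(rpart_model$cptable[,"xerror"])
      cp <- rpart_model$cptable[opt, "CP"]
      rpart_model <- prune(rpart_model,cp)
    }
    predictions <- predict(rpart_model,testset,type="vector")
    rms_error[i] <- sqrt(mean((predictions-testset$medv)^2))
  }
  return(rms_error)
}
#example usage: 50 runs with and without pruning
mean(multiple_runs_regression(0.8,50,BostonHousing))
mean(multiple_runs_regression(0.8,50,BostonHousing,prune_tree=TRUE))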

Wrapping up

This brings us to the end of my introduction to classification and regression trees using R.  Unlike some articles on the topic I have attempted to describe each of the steps in detail and provide at least some kind of a rationale for them. I hope you’ve found the description and code snippets useful.

I'll end by reiterating a couple of points I made early in this piece. The nice thing about decision trees is that they are easy to explain to the users of our predictions. This is primarily because they reflect the way we think about how decisions are made in real life – via a set of binary choices based on appropriate criteria. That said, in many practical situations decision trees turn out to be unstable: small changes in the dataset can lead to wildly different trees. It turns out that this limitation can be addressed by building a variety of trees using different starting points and then averaging over them. This is the domain of the so-called random forest algorithm. We'll make the journey from decision trees to random forests in a future post.

Postscript, 20th September 2016: I finally got around to finishing my article on random forests.

Written by K

February 16, 2016 at 6:33 pm

A gentle introduction to network graphs using R and Gephi

with 7 comments

Introduction

Graph theory is the area of mathematics that analyses relationships between pairs of objects. Typically graphs consist of nodes (points representing objects) and edges (lines depicting relationships between objects). As one might imagine, graphs are extremely useful in visualizing relationships between objects. In this post, I provide a detailed introduction to network graphs using R, the premier open source statistics package, for the calculations and the excellent Gephi software for visualization.

The article is organised as follows: I begin by defining the problem and then spend some time developing the concepts used in constructing the graph. Following this, I do the data preparation in R and then finally build the network graph using Gephi.

The problem

In an introductory article on cluster analysis, I provided an in-depth introduction to a couple of algorithms that can be used to categorise documents automatically.  Although these techniques are useful, they do not provide a feel for the relationships between different documents in the collection of interest.  In the present piece I show how network graphs can be used to visualise similarity-based relationships within a corpus.

Document similarity

There are many ways to quantify similarity between documents. A popular method is to use the notion of distance between documents. The basic idea is simple: documents that have many words in common are “closer” to each other than those that share fewer words. The problem with distance, however, is that it can be skewed by word count: documents that have an unusually high word  count will show up as outliers even though they may be similar (in terms of words used) to other documents in the corpus. For this reason, we will use another related measure of similarity that does not suffer from this problem – more about this in a minute.

Representing documents mathematically

As I explained in my article on cluster analysis, a document can be represented as a point in a conceptual space that has dimensionality equal to the number of distinct words in the collection of documents. I revisit and build on that explanation below.

Say one has a simple document consisting of the words "five plus six"; one can represent it mathematically as a point in a 3 dimensional space in which the individual words are represented by the three axes (see Figure 1). Here each word is a coordinate axis (or dimension).  Now, if one connects the point representing the document (point A in the figure) to the origin of the word-space, one has a vector, which in this case is a directed line connecting the point in question to the origin.  Specifically, the point A can be represented by the coordinates (1, 1, 1) in this space. This is a nice quantitative representation of the fact that the words five, plus and six each appear in the document exactly once. Note, however, that we've assumed the order of words does not matter. This is a reasonable assumption in some cases, but not always so.

Figure 1

As another example consider document B, which consists of only two words: "five plus" (see Fig 2). Clearly this document shares some similarity with document A, but it is not identical.  Indeed, this becomes evident when we note that document (or point) B is simply the point (1, 1, 0) in this space, which tells us that it has two coordinates (words/frequencies) in common with document (or point) A.

Figure 2

To be sure, in a realistic collection of documents we would have a large number of distinct words, so we’d have to work in a very high dimensional space. Nevertheless, the same principle holds: every document in the corpus can be represented as a vector consisting of a directed line from the origin to the point to which the document corresponds.

Cosine similarity

Now it is easy to see that two documents are identical if they correspond to the same point. In other words, if their vectors coincide. On the other hand, if they are completely dissimilar (no words in common), their vectors will be at right angles to each other.  What we need, therefore, is a quantity that varies from 0 to 1 depending on whether two documents (vectors) are dissimilar (at right angles to each other) or similar (coincide, or are parallel to each other).

Now here's the ultra-cool thing: from your high school maths class, you know there is a trigonometric ratio which has exactly this property – the cosine!

What's even cooler is that the cosine of the angle between two vectors is simply the dot product of the two vectors (the sum of the products of their corresponding elements) divided by the product of the lengths of the two vectors. In three dimensions this can be expressed mathematically as:

\cos(\theta)= \displaystyle \frac{x_1 x_2+y_1 y_2+z_1 z_2}{\sqrt{x_1^2+y_1^2+z_1^2}\sqrt{x_2^2+y_2^2+z_2^2}}...(1)

where the two vectors are (x_{1},y_{1},z_{1}) and (x_{2},y_{2},z_{2}) , and \theta is the angle between the two vectors (see Fig 2).

The upshot of the above is that the cosine of the angle between the vector representation of two documents is a reasonable measure of similarity between them. This quantity, sometimes referred to as cosine similarity, is what we’ll take as our similarity measure in the rest of this article.
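Before moving on, here's a quick sanity check of equation (1) in R, using the two toy documents from earlier (A = "five plus six" and B = "five plus"):

#toy document vectors in the word space (five, plus, six)
A <- c(1,1,1)
B <- c(1,1,0)
#cosine similarity: dot product divided by the product of the vector lengths
cosine_sim <- sum(A*B)/(sqrt(sum(A^2))*sqrt(sum(B^2)))
cosine_sim
#roughly 0.816, i.e. similar but not identical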

The adjacency matrix

If we have a collection of N documents, we can calculate the similarity between every pair of documents as we did for A and B in the previous section. This would give us a set of N^2 numbers between 0 and 1, which can be conveniently represented as a matrix.  This is sometimes called the adjacency matrix. Beware, though, this term has many different meanings in the math literature. I use it in the sense specified above.

Since every document is identical to itself, the diagonal elements of the matrix will all be 1. These similarities are trivial (we know that every document is identical to itself!)  so we’ll set the diagonal elements to zero.

Another important practical point is that visualizing every relationship is going to make a very messy graph. There would be up to N(N-1)/2 edges in such a graph (one for every pair of documents), which would make it impossible to make sense of if we have more than a handful of documents. For this reason, it is normal practice to choose a cutoff value of similarity below which it is set to zero.

Building the adjacency matrix using R

We now have enough background to get down to the main point of this article – visualizing relationships between documents.

The first step is to build the adjacency matrix.  In order to do this, we have to build the document term matrix (DTM) for the collection of documents,  a process which I have dealt with at length in my  introductory pieces on text mining and topic modeling. In fact, the steps are actually identical to those detailed in the second piece. I will therefore avoid lengthy explanations here. However,  I’ve listed all the code below with brief comments (for those who are interested in trying this out, the document corpus can be downloaded here and a pdf listing of the R code can be obtained here.)

OK, so here’s the code listing:

#load text mining library
library(tm)
#set working directory (modify path as needed)
setwd("C:\\Users\\Kailash\\Documents\\TextMining")
#load files into corpus
#get listing of .txt files in directory
filenames <- list.files(getwd(),pattern="*.txt")
#read files into a character vector
files <- lapply(filenames,readLines)
#create corpus from vector
docs <- Corpus(VectorSource(files))
#inspect a particular document in corpus
writeLines(as.character(docs[[30]]))
#start preprocessing
#Transform to lower case
docs <-tm_map(docs,content_transformer(tolower))
#remove potentially problematic symbols
toSpace <- content_transformer(function(x, pattern) { return (gsub(pattern, " ", x))})
docs <- tm_map(docs, toSpace, "-")
docs <- tm_map(docs, toSpace, "’")
docs <- tm_map(docs, toSpace, "‘")
docs <- tm_map(docs, toSpace, "•")
docs <- tm_map(docs, toSpace, "”")
docs <- tm_map(docs, toSpace, "“")
#remove punctuation
docs <- tm_map(docs, removePunctuation)
#Strip digits
docs <- tm_map(docs, removeNumbers)
#remove stopwords
docs <- tm_map(docs, removeWords, stopwords("english"))
#remove whitespace
docs <- tm_map(docs, stripWhitespace)
#Good practice to check every now and then
writeLines(as.character(docs[[30]]))
#Stem document
docs <- tm_map(docs,stemDocument)
#fix up 1) differences between us and aussie english 2) general errors
docs <- tm_map(docs, content_transformer(gsub),
               pattern = "organiz", replacement = "organ")
docs <- tm_map(docs, content_transformer(gsub),
               pattern = "organis", replacement = "organ")
docs <- tm_map(docs, content_transformer(gsub),
               pattern = "andgovern", replacement = "govern")
docs <- tm_map(docs, content_transformer(gsub),
               pattern = "inenterpris", replacement = "enterpris")
docs <- tm_map(docs, content_transformer(gsub),
               pattern = "team-", replacement = "team")
#define and eliminate all custom stopwords
myStopwords <- c("can", "say","one","way","use",
                 "also","howev","tell","will",
                 "much","need","take","tend","even",
                 "like","particular","rather","said",
                 "get","well","make","ask","come","end",
                 "first","two","help","often","may",
                 "might","see","someth","thing","point",
                 "post","look","right","now","think","'ve ",
                 "'re ","anoth","put","set","new","good",
                 "want","sure","kind","larg","yes,","day","etc",
                 "quit","sinc","attempt","lack","seen","awar",
                 "littl","ever","moreov","though","found","abl",
                 "enough","far","earli","away","achiev","draw",
                 "last","never","brief","bit","entir","brief",
                 "great","lot")
docs <- tm_map(docs, removeWords, myStopwords)
#inspect a document as a check
writeLines(as.character(docs[[30]]))
#Create document-term matrix
dtm <- DocumentTermMatrix(docs)

The  rows of a DTM are document vectors akin to the vector representations of documents A and B discussed earlier. The DTM therefore contains all the information we need to calculate the cosine similarity between every pair of documents in the corpus (via equation 1). The R code below implements this, after taking care of a few preliminaries.

#convert dtm to matrix
m <- as.matrix(dtm)
#write as csv file
write.csv(m,file="dtmEight2Late.csv")
#Map filenames to matrix row numbers
#these numbers will be used to reference
#files in the network graph
filekey <- cbind(rownames(m),filenames)
write.csv(filekey,"filekey.csv")
#compute cosine similarity between document vectors
#converting to distance matrix sets diagonal elements to 0
cosineSim <- function(x){
  as.dist(x%*%t(x)/(sqrt(rowSums(x^2) %*% t(rowSums(x^2)))))
}
cs <- cosineSim(m)
write.csv(as.matrix(cs),file="csEight2Late.csv")
#adjacency matrix: set entries below a certain threshold to 0.
#We choose half the magnitude of the largest element of the matrix
#as the cutoff. This is an arbitrary choice
cs[cs < max(cs)/2] <- 0
cs <- round(cs,3)
write.csv(as.matrix(cs),file="AdjacencyMatrix.csv")

A few lines need a brief explanation:

First up, although the DTM is a matrix, it is internally stored in a special form suitable for sparse matrices. We therefore have to explicitly convert it into a proper matrix before using it to calculate similarity.

Second, the names I have given the documents are way too long to use as labels in the network diagram. I have therefore mapped the document names to the row numbers which we’ll use in our network graph later. The mapping back to the original document names is stored in filekey.csv. For future reference, the mapping is shown in Table 1 below.

File number Name
1 BeyondEntitiesAndRelationships.txt
2 bigdata.txt
3 ConditionsOverCauses.txt
4 EmergentDesignInEnterpriseIT.txt
5 FromInformationToKnowledge.txt
6 FromTheCoalface.txt
7 HeraclitusAndParmenides.txt
8 IroniesOfEnterpriseIT.txt
9 MakingSenseOfOrganizationalChange.txt
10 MakingSenseOfSensemaking.txt
11 ObjectivityAndTheEthicalDimensionOfDecisionMaking.txt
12 OnTheInherentAmbiguitiesOfManagingProjects.txt
13 OrganisationalSurprise.txt
14 ProfessionalsOrPoliticians.txt
15 RitualsInInformationSystemDesign.txt
16 RoutinesAndReality.txt
17 ScapegoatsAndSystems.txt
18 SherlockHolmesFailedProjects.txt
19 sherlockHolmesMgmtFetis.txt
20 SixHeresiesForBI.txt
21 SixHeresiesForEnterpriseArchitecture.txt
22 TheArchitectAndTheApparition.txt
23 TheCloudAndTheGrass.txt
24 TheConsultantsDilemma.txt
25 TheDangerWithin.txt
26 TheDilemmasOfEnterpriseIT.txt
27 TheEssenceOfEntrepreneurship.txt
28 ThreeTypesOfUncertainty.txt
29 TOGAFOrNotTOGAF.txt
30 UnderstandingFlexibility.txt

Table 1: File mappings

Finally, the distance function (as.dist) in the cosine similarity function sets the diagonal elements to zero  because the distance between a document and itself is zero…which is just a complicated way of saying that a document is identical to itself 🙂
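If you'd like to convince yourself of this, here's a tiny check on a hand-built symmetric similarity matrix:

#a small symmetric similarity matrix with 1s on the diagonal
m_check <- matrix(c(1, 0.5, 0.2,
                    0.5, 1, 0.7,
                    0.2, 0.7, 1), nrow=3)
#as.dist keeps only the lower triangle; converting back gives a zero diagonal
as.matrix(as.dist(m_check))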

The last three lines of the main code listing simply implement the cutoff that I mentioned in the previous section. The comments explain the details so I need say no more about it.

…which finally brings us to Gephi.

Visualizing document similarity using Gephi

Gephi is an open source, Java based network analysis and visualisation tool. Before going any further, you may want to download and install it. While you’re at it you may also want to download this excellent quick start tutorial.

Go on, I’ll wait for you…

To begin with, there's a little formatting quirk that we need to deal with. Gephi expects separators in csv files to be semicolons (;). So, your first step is to open up the adjacency matrix that you created in the previous section (AdjacencyMatrix.csv) in a text editor and replace commas with semicolons.
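As an aside, you could avoid the hand edit by writing a semicolon-separated copy directly from R. A minimal sketch (the filename is mine; run it after the adjacency matrix code from the previous section):

#write the adjacency matrix with semicolon separators for Gephi
#col.names=NA leaves a blank corner cell so the header lines up with the row names
write.table(as.matrix(cs), file="AdjacencyMatrixGephi.csv", sep=";", col.names=NA)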

Once you’ve done that, fire up Gephi, go to File > Open,  navigate to where your Adjacency matrix is stored and load the file. If it loads successfully, you should see a feedback panel as shown in Figure 3.  By default Gephi creates a directed graph (i.e one in which the edges have arrows pointing from one node to another). Change this to undirected and click OK.

Figure 3: Gephi import feedback

Once that is done, click on overview (top left of the screen). You should end up with something like Figure 4.

Figure 4: Initial overview after loading adjacency matrix

Gephi has sketched out an initial network diagram which depicts the relationships between documents…but it needs a bit of work to make it look nicer and more informative. The quickstart tutorial mentioned earlier describes various features that can be used to manipulate and prettify the graph. In the remainder of this section, I list some that I found useful. Gephi offers many more. Do explore, there’s much more than  I can cover in an introductory post.

First some basics. You can:

  • Zoom and pan using mouse wheel and right button.
  • Adjust edge thicknesses using the slider next to text formatting options on bottom left of main panel.
  • Re-center graph via the magnifying glass icon on left of display panel (just above size adjuster).
  • Toggle node labels on/off by clicking on grey T symbol on bottom left panel.

Figure 5 shows the state of the diagram after labels have been added and edge thickness adjusted (note that your graph may vary in appearance).

Figure 5: Graph with node labels and adjusted edge thicknesses

The default layout of the graph is ugly and hard to interpret. Let's work on fixing it up. To do this, go over to the layout panel on the left. Experiment with different layouts to see what they do. After some messing around, I found the Fruchterman-Reingold and Force Atlas options to be good for this graph. In the end I used Force Atlas with a Repulsion Strength of 2000 (up from the default of 200) and an Attraction Strength of 1 (down from the default of 10). I also adjusted the figure size and node label font size from the graph panel in the center. The result is shown in Figure 6.

Figure 6: Graph after using Force Atlas layout

This is much better. For example, it is now evident that document 9 is the most connected one (which Table 1 tells us is a transcript of a conversation with Neil Preston on organisational change).

It would be nice if we could colour code edges/nodes and size nodes by their degree of connectivity. This can be done via the ranking panel above the layout area where you’ve just been working.

In the Nodes tab select Degree as  the rank parameter (this is the degree of connectivity of the node) and hit apply. Select your preferred colours via the small icon just above the colour slider. Use the colour slider to adjust the degree of connectivity at which colour transitions occur.

Do the same for edges, selecting weight as the rank parameter (this is the degree of similarity between the two documents connected by the edge). With a bit of playing around, I got the graph shown in the screenshot below (Figure 7).

Figure 7: Connectivity-based colouring of edges and nodes.

If you want to see numerical values for the rankings, hit the results list icon on the bottom left of the ranking panel. You can see numerical ranking values for both nodes and edges as shown in Figures 8 and 9.

Figure 8: Node ranking (see left of figure)

Figure 9: Edge ranking

It is easy to see from the figure that documents 21 and 29 are the most similar in terms of cosine ranking. This makes sense, they are pieces in which I have ranted about the current state of enterprise architecture – the first article is about EA in general and the other about the TOGAF framework. If you have a quick skim through, you’ll see that they have a fair bit in common.

Finally, it would be nice if we could adjust node size to reflect the connectedness of the associated document. You can do this via the "gem" symbol on the top right of the ranking panel. Select appropriate min and max sizes (I chose the defaults) and hit apply. The node size now reflects the connectivity of the node, i.e. the number of other documents to which it is cosine-similar to varying degrees. The thickness of the edges reflects the degree of similarity. See Figure 10.

Figure 10: Node sizes reflecting connectedness

Now that looks good enough to export. To do this, hit the preview tab on the main panel and make the following adjustments to the default settings:

Under Node Labels:
1. Check Show Labels
2. Uncheck proportional size
3. Adjust font to required size

Under Edges:
1. Change thickness to 10
2. Check rescale weight

Hit refresh after making the above adjustments. You should get something like Fig 11.

Figure 11: Export preview

All that remains now is to do the deed: hit export SVG/PDF/PNG to export the diagram. My output is displayed in Figure 12. It clearly shows the relationships between the different documents (nodes) in the corpus. The nodes with the highest connectivity are indicated via node size and colour  (purple for high, green for low) and strength of similarity is indicated by edge thickness.

Figure 12: Gephi network graph of document corpus

…which brings us to the end of this journey.

Wrapping up

The techniques of text analysis enable us to quantify relationships between documents. Document similarity is one such relationship. Numerical measures are good, but the comprehensibility of these can be further enhanced through meaningful visualisations.  Indeed, although my stated objective in this article was to provide an introduction to creating network graphs using Gephi and R (which I hope I’ve succeeded in doing), a secondary aim was to show how document similarity can be quantified and visualised. I sincerely hope you’ve found the discussion interesting and useful.

Many thanks for reading! As always, your feedback would be greatly appreciated.

Written by K

December 2, 2015 at 7:20 am
