Eight to Late

Sensemaking and Analytics for Organizations

Archive for November 2015

The “value add” tax – a riff on corporate communication


A mainstay of team building workshops is the old “what can we do better” exercise.  Over the years I’ve noticed that “improving communication” is an item that comes up again and again in these events.  This is frustrating for managers. For example, during a team-building debrief some years ago, an exasperated executive remarked, “Oh don’t pay any attention to that [better communication], it keeps coming up no matter what we do.”

The executive had a point.  The organisation had invested much effort in establishing new channels of communication – social media, online, face-to-face forums etc.  The uptake, however, was disappointing:  turnout at the face-to-face meetings was consistently low as was use of other channels.

As far as management was concerned, they had done their job by establishing communication channels and making them available to all. What more could they  be expected to do? The matter was dismissed with a collective shrug of suit-clad shoulders…until the next team building event, when the issue was highlighted by employees yet again.

After much hand-wringing, the organisation embarked on another “better communication cycle.”  Much effort was expended…again, with the same disappointing results.

Anecdotal evidence via conversations with friends and collaborators suggests that variants of this story play out in many organisations. This makes the issue well worth exploring. I won’t be so presumptuous as to offer answers; I’m well aware that folks much better qualified than I have spent years attempting to do so. Instead I raise a point which, though often overlooked, might well have something to do with the lack of genuine communication in organisations.

Communication experts have long agreed that face-to-face dialogue is the most effective mode of communication. Backing for this comes from the interactional or pragmatic view, which is based on the premise that communication is more about building relationships than conveying information. Among other things, face-to-face communication enables the communicating parties to observe and interpret non-verbal signals such as facial expression and gestures and, as we all know, these often “say” much more than what’s being said.

A few months ago I started paying closer attention to non-verbal cues. This can be hard to do because people are good at disguising their feelings. Involuntary expressions indicative of people’s real thoughts can be fleeting. A flicker of worry, fear or anger is quickly covered by a mask of indifference.

In meetings, difficult topics tend to be couched in platitudinous language. Platitudes are empty words that sound great but can be interpreted in many different ways. Reconciling those differences often leads to pointless arguments that are emotionally draining. Perhaps this is why people prefer to take refuge in indifference.

A while ago I was sitting in a meeting where the phrase “value add activity” (sic) cropped up once, then again…and then many times over. Soon it was apparent that everyone in the room had a very different conception of what constituted a “value add activity.” Some argued that project management is a value add activity, others disagreed vehemently arguing that project management is a bureaucratic exercise and that real value lies in creating something. Round and round the arguments went but there was no agreement on what constituted a “value add activity.” The discussion generated a lot of heat but shed no light whatsoever on the term.

A problem with communication in the corporate world is that it is loaded with such platitudes. To make sense of these, people have to pay what I call a “value add” tax – the effort in reaching a consensus on what the platitudinous terms mean. This can be emotionally extortionate because platitudes often touch upon issues that affect people’s sense of well-being.

Indifference is easier because we can then pretend to understand and agree with each other when we would rather not understand, let alone agree, at all.

Written by K

November 19, 2015 at 8:02 am

A gentle introduction to Naïve Bayes classification using R


Preamble

One of the key problems of predictive analytics is to classify entities or events based on a knowledge of their attributes. An example: one might want to classify customers into two categories, say, ‘High Value’ or ‘Low Value,’ based on a knowledge of their buying patterns. Another example: to figure out the party allegiances of representatives based on their voting records. And yet another: to predict the species of a particular plant or animal specimen based on a list of its characteristics. Incidentally, if you haven’t been there already, it is worth having a look at Kaggle to get an idea of some of the real world classification problems that people tackle using techniques of predictive analytics.

Given the importance of classification-related problems, it is no surprise that analytics tools offer a range of options. My favourite (free!) tool, R, is no exception: it has a plethora of state-of-the-art packages designed to handle a wide range of problems. A downside of this diversity of choice is that it is often confusing for beginners to figure out which package to use in a particular situation. Over the next several months, I intend to write up tutorial articles covering many of the common algorithms, with a particular focus on their strengths and weaknesses, explaining where they work well and where they don’t. I’ll kick off this undertaking with a simple yet surprisingly effective algorithm – the Naïve Bayes classifier.

Just enough theory

I’m going to assume you have R and RStudio installed on your computer. If you need help with this, please follow the instructions here.

To introduce the Naive Bayes algorithm, I will use the HouseVotes84 dataset, which contains US congressional voting records for 1984. The data set is in the mlbench package which is not part of the base R installation. You will therefore need to install it if you don’t have it already.  Package installation is a breeze in RStudio – just go to Tools > Install Packages and follow the prompts.
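If you prefer the console, package installation is a one-liner. (A minimal sketch; I’ve included e1071 here too because we’ll need it later in the post.)

#install the packages used in this post, if not already present
install.packages(c("mlbench", "e1071"))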

The HouseVotes84 dataset describes how 435 representatives voted – yes (y), no (n) or unknown (NA) – on 16 key issues presented to Congress.  The dataset also provides the party affiliation of each representative – democrat or republican.

Let’s begin by exploring the dataset. To do this, we load mlbench, fetch the dataset and get some summary stats on it. (Note: a complete listing of the code in this article can be found here)

#load mlbench library
library(mlbench)
#set working directory if needed (modify path as needed)
setwd("C:/Users/Kailash/Documents/NaiveBayes")
#load HouseVotes84 dataset
data("HouseVotes84")
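While we’re at it, here’s a quick way to get those summary stats. (A sketch using base R functions; among other things, the summary reveals the NA counts that we’ll deal with later.)

#structure and summary statistics of the dataset
str(HouseVotes84)
summary(HouseVotes84)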

It is good to begin by exploring the data visually.  To this end, let’s do some bar plots using the basic graphic capabilities of R:

#barplot of votes cast for issue 1 (2nd column of the dataset)
plot(as.factor(HouseVotes84[,2]))
title(main="Votes cast for issue 1", xlab="vote", ylab="# reps")
#...and the same, broken down by party
plot(as.factor(HouseVotes84[HouseVotes84$Class=='republican',2]))
title(main="Republican votes cast for issue 1", xlab="vote", ylab="# reps")
plot(as.factor(HouseVotes84[HouseVotes84$Class=='democrat',2]))
title(main="Democrat votes cast for issue 1", xlab="vote", ylab="# reps")

The plots are shown in Figures 1 through 3.

Fig 1: y and n votes for issue 1.

Fig 2: Republican votes for issue 1.

Fig 3: Democrat votes for issue 1.

Among other things, such plots give us a feel for the probabilities associated with how representatives from the two parties tend to vote on specific issues.

The classification problem at hand is to figure out the party affiliation from a knowledge of voting patterns. For simplicity, let us assume that only 3 issues are voted on instead of the 16 in the actual dataset. In concrete terms, we wish to answer the question: “what is the probability that a representative is, say, a democrat (D) given that he or she has voted, say, (v1=y, v2=n, v3=y) on the three issues?” To keep things simple I’m assuming there are no NA values.

In the notation of conditional probability this can be written as,

P(D|v1=y, v2=n, v3=y)

(Note:  If you need a refresher on conditional probability, check out this post for a simple explanation.)

By Bayes theorem, which I’ve explained at length in this post, this can be recast as,

P(D|v1=y, v2=n, v3=y) = \displaystyle \frac{p(D)\, p(v1=y, v2=n, v3=y|D)}{p(v1=y, v2=n, v3=y)} \quad \ldots (1)

We’re interested only in the relative probabilities of the representative being a democrat or republican, because the predicted party affiliation depends only on which of the two probabilities is larger (the actual values are not important). This being the case, we can ignore any factor that is the same for both parties. As it happens, the denominator of the above equation – the probability of a particular voting pattern – is just such a factor: it is computed over all representatives (from both parties), so it does not depend on the party we condition on.

Now, using the chain rule of conditional probability, we can rewrite the numerator as:

p(D) p(v1=y, v2=n, v3=y|D)

= p(D) p(v1=y|D) p(v2=n, v3=y|D, v1=y)

Here the second factor on the left hand side, p(v1=y, v2=n, v3=y|D), is the probability of the voting pattern (y, n, y) given that the rep is a Democrat (D). The definition of conditional probability allows us to rewrite it as the probability of a y vote on issue v1 given that the rep is a Democrat, multiplied by the probability of an n vote on issue v2 and a y vote on issue v3 given that the rep is a Democrat who has voted y on issue v1.

Another application of the chain rule gives:

p(D) p(v1=y, v2=n, v3=y|D)

= p(D) p(v1=y|D) p(v2=n|D, v1=y) p(v3=y|D, v1=y, v2=n)

where we have now also factored out the n vote on the second issue.

The key assumption of Naïve Bayes is that the conditional probability of each feature given the class is independent of all other features. In mathematical terms this means that,

p(v2=n|D,v1=y) = p(v2=n|D)

and

p(v3=y|D,v1=y,v2=n) = p(v3=y|D)

The quantity of interest, the numerator of equation (1), can then be written as:

p(D) p(v1=y, v2=n, v3=y|D)

= p(D) p(v1=y|D) p(v2=n|D) p(v3=y|D) \quad \ldots (2)

The assumption of independent conditional probabilities is a drastic one. It says that, once we know a representative’s party, how he or she votes on one issue tells us nothing about how he or she will vote on any other. This is clearly not the case in the situation above: how representatives vote is coloured by their beliefs and values. For example, votes on socially progressive issues are most unlikely to be independent of one another. However, as we shall see in the next section, the Naïve Bayes assumption works well for this problem, as it does in many other situations where we know upfront that it is grossly incorrect.
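To make equation (2) concrete, here is a minimal sketch that computes the (unnormalised) numerator directly from the data for the pattern (v1=y, v2=n, v3=y). It assumes HouseVotes84 is loaded as before and that the issue columns are named V1 through V16, as they are in mlbench’s version of the dataset; the na.rm=TRUE arguments simply skip the NA votes.

#sketch: numerator of equation (1), under the Naive Bayes assumption,
#for a democrat with voting pattern (v1=y, v2=n, v3=y)
dems <- HouseVotes84[HouseVotes84$Class=='democrat',]
p_D <- nrow(dems)/nrow(HouseVotes84)    #class prior p(D)
p_v1 <- mean(dems$V1=='y', na.rm=TRUE)  #p(v1=y|D)
p_v2 <- mean(dems$V2=='n', na.rm=TRUE)  #p(v2=n|D)
p_v3 <- mean(dems$V3=='y', na.rm=TRUE)  #p(v3=y|D)
p_D * p_v1 * p_v2 * p_v3

Repeating the calculation for republicans and comparing the two products gives the predicted affiliation.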

Another good example of the unreasonable efficacy of Naive Bayes is in spam filtering.  In the case of spam, the features are individual words in an email.  It is clear that certain word combinations tend to show up consistently in spam – for example, “online”, “meds”, “Viagra” and “pharmacy.” In other words, we know upfront that their occurrences are definitely not independent of each other. Nevertheless, Naïve Bayes based spam detectors which assume mutual independence of features do remarkably well in distinguishing spam from ham.

Why is this so?

To explain why, I return to a point I mentioned earlier: to figure out the affiliation associated with a particular voting pattern (say, v1=y, v2=n, v3=y) one only needs to know which of the two probabilities p(R|v1=y, v2=n, v3=y) and p(D|v1=y, v2=n, v3=y) is the greater. That is, the actual values of these probabilities are not important in determining the party affiliation.

This hints at why the independence assumption might not be quite so idiotic. Since the prediction depends only on which class has the maximum probability, the algorithm will get it right even when there are dependencies between features, provided those dependencies do not change which class comes out on top (once again, only the maximal class is important here, not the value of the maximum).

Yet another reason for the surprising success of Naïve Bayes is that dependencies often cancel out across a large set of features. But, of course, there is no guarantee that this will always happen.

In general, Naïve Bayes algorithms work better for problems in which the dependent (predicted) variable is discrete, even when there are dependencies between features (spam detection is a good example). They work less well for regression problems – i.e. those in which the predicted variable is continuous.

I hope the above has given you an intuitive feel for how Naïve Bayes algorithms work. I don’t know about you, but my head’s definitely spinning after writing out all that mathematical notation.

It’s time to clear our heads by doing some computation.

Naïve Bayes in action

There are a couple of well-known implementations of Naïve Bayes in R: the naiveBayes method in the e1071 package and the NaiveBayes method in the klaR package. I’ll use the former for no other reason than it seems to be more popular. That said, I have used the latter too and can confirm that it works just as well.

We’ve already loaded and explored the HouseVotes84 dataset. One of the things you may have noticed when summarising the data is that there are a fair number of NA values. Naïve Bayes implementations typically handle NA values either by ignoring records that contain any NA values or by ignoring just the NA values themselves. These choices are controlled by the na.action argument of the naiveBayes method, which can be set to na.omit (to drop the record) or na.pass (to ignore just the NA values).
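In code, the two standard options look like this (a sketch; naiveBayes is from the e1071 package, which we’ll load properly in a moment):

#standard NA handling: drop any record containing an NA...
library(e1071)
nb_omit <- naiveBayes(Class~., data=HouseVotes84, na.action=na.omit)
#...or keep the records and skip just the NA values
nb_pass <- naiveBayes(Class~., data=HouseVotes84, na.action=na.pass)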

Just for fun, we’ll take a different approach. We’ll impute NA values for a given issue and party by looking at how other representatives from the same party voted on the issue. This is very much in keeping with the Bayesian spirit: we infer unknowns based on a justifiable belief – that is, belief based on the evidence.

To do this I write two functions: one to  compute the number of NA values for a given issue (vote) and class (party affiliation), and the other to calculate the fraction of yes votes for a given issue (column) and class (party affiliation).

#Functions needed for imputation
#function to return number of NAs by vote (column) and class (democrat or republican)
na_by_col_class <- function(col, cls){
return(sum(is.na(HouseVotes84[,col]) & HouseVotes84$Class==cls))
}
#function to compute the conditional probability that a member of a party will cast
#a 'yes' vote on a particular issue. The probability is based on all members of the
#party who actually cast a vote on the issue (ignores NAs).
p_y_col_class <- function(col, cls){
sum_y <- sum(HouseVotes84[,col]=='y' & HouseVotes84$Class==cls, na.rm=TRUE)
sum_n <- sum(HouseVotes84[,col]=='n' & HouseVotes84$Class==cls, na.rm=TRUE)
return(sum_y/(sum_y+sum_n))
}
#Check that the functions work!
> p_y_col_class(2,'democrat')
[1] 0.6046512
> p_y_col_class(2,'republican')
[1] 0.1878788
> na_by_col_class(2,'democrat')
[1] 9
> na_by_col_class(2,'republican')
[1] 3

Before proceeding,  you might want to go back to the data and convince yourself that these values are sensible.

We can now impute the NA values based on the above. We do this by randomly assigning values (y or n) to NAs, based on the proportion of members of a party who have voted y or n. In practice, we do this by drawing from the uniform distribution and setting an NA value to y if the random number returned is less than the probability of a yes vote, and to n otherwise. This is not as complicated as it sounds; you should be able to figure the logic out from the code below.

#impute missing values
for (i in 2:ncol(HouseVotes84)) {
if(sum(is.na(HouseVotes84[,i])) > 0) {
c1 <- which(is.na(HouseVotes84[,i]) & HouseVotes84$Class=='democrat', arr.ind=TRUE)
c2 <- which(is.na(HouseVotes84[,i]) & HouseVotes84$Class=='republican', arr.ind=TRUE)
HouseVotes84[c1,i] <- ifelse(runif(na_by_col_class(i,'democrat')) < p_y_col_class(i,'democrat'), 'y', 'n')
HouseVotes84[c2,i] <- ifelse(runif(na_by_col_class(i,'republican')) < p_y_col_class(i,'republican'), 'y', 'n')
}
}

Note that the which function returns the indices of elements that satisfy the specified criteria, and ifelse is a vectorised conditional that enables us to apply a logical test to every element of a vector at once.

At this point it is a good idea to check that the NAs in each column have been set according to the voting patterns of non-NAs for a given party. You can use the p_y_col_class() function to check that the new probabilities are close to the old ones. You might want to do this before you proceed any further.
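Here’s what such a check might look like (a sketch; the fractions should be close to the pre-imputation values computed earlier, and no NAs should remain):

#post-imputation sanity checks
p_y_col_class(2,'democrat')    #should be close to 0.6046512
p_y_col_class(2,'republican')  #should be close to 0.1878788
sum(is.na(HouseVotes84))       #should be 0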

The next step is to divide the available data into training and test datasets. The former will be used to train the algorithm and produce a predictive model; the effectiveness of the model will then be tested using the test dataset. There is a great deal of science and art behind the creation of training and testing datasets. An important consideration is that both sets must contain records that are representative of the entire dataset. This can be difficult to do, especially when data is scarce and there are predictors that do not vary much…or vary wildly, for that matter. On the other hand, problems can also arise when there are redundant predictors. Indeed, much of the art of successful prediction lies in figuring out which predictors are likely to lead to better predictions, an area known as feature selection. However, that’s a topic for another time. Our current dataset does not suffer from any of these complications, so we’ll simply divide it in an 80/20 proportion, assigning the larger number of records to the training set.

#divide into test and training sets
#create new col "train" and assign 1 or 0 in 80/20 proportion via random uniform dist
HouseVotes84[,"train"] <- ifelse(runif(nrow(HouseVotes84)) < 0.80, 1, 0)
#get col number of train/test indicator column (needed later)
trainColNum <- grep("train", names(HouseVotes84))
#separate training and test sets and remove training column before modeling
trainHouseVotes84 <- HouseVotes84[HouseVotes84$train==1, -trainColNum]
testHouseVotes84 <- HouseVotes84[HouseVotes84$train==0, -trainColNum]
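Before training, it’s worth a quick check that the split is sensible (a sketch):

#check sizes and class balance of the training and test sets
nrow(trainHouseVotes84)
nrow(testHouseVotes84)
table(trainHouseVotes84$Class)
table(testHouseVotes84$Class)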

Now we’re finally good to build our Naive Bayes model (machine learning folks call this model training rather than model building – and I have to admit, it does sound a lot cooler).

The code to train the model is anticlimactically simple:

#load e1071 library and invoke naiveBayes method
library(e1071)
nb_model <- naiveBayes(Class~.,data = trainHouseVotes84)

Here we’ve invoked the naiveBayes method from the e1071 package. The first argument uses R’s formula notation. In this notation, the dependent variable (to be predicted) appears on the left hand side of the ~ and the independent variables (predictors or features) are on the right hand side. The dot (.) is simply shorthand for “all variables other than the dependent one.” The second argument is the dataframe that contains the training data. Check out the documentation for the other arguments of naiveBayes; covering them here would take me too far afield. Incidentally, you can take a look at the model using the summary() or str() functions, or even by just entering the model name in the R console:

nb_model
summary(nb_model)
str(nb_model)

Note that I’ve suppressed the output above.
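If you do take a look, the parts of the model object worth examining are the class distribution and the per-issue conditional probability tables – these correspond directly to the quantities in equation (2). A sketch, again assuming the issue columns are named V1 through V16:

#class distribution used for the priors p(D) and p(R)
nb_model$apriori
#conditional probability table for issue 1, i.e. p(v1|Class)
nb_model$tables$V1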

Now that we have a model, we can do some predicting. We do this by feeding our test data into our model and comparing the predicted party affiliations with the known ones. The latter is done via the wonderfully named confusion matrix – a table in which true and predicted values for each of the predicted classes are displayed in a matrix format. This again is just a couple of lines of code:

#...and the moment of reckoning
nb_test_predict <- predict(nb_model, testHouseVotes84[,-1])
#confusion matrix
table(pred=nb_test_predict, true=testHouseVotes84$Class)
            true
pred         democrat republican
  democrat         38          3
  republican        5         22

The numbers you get will be different because your training/test sets are almost certainly different from mine.

In the confusion matrix (as defined above), the true values are in columns and the predicted values in rows. So, the algorithm has correctly classified 38 out of 43 (i.e. 38+5) Democrats and 22 out of 25 (i.e. 22+3) Republicans. That’s pretty decent. However, we need to keep in mind that this could well be a quirk of the choice of dataset. To address this, we should get a numerical measure of the efficacy of the algorithm for different training and testing datasets. A simple measure of efficacy is the fraction of predictions that the algorithm gets right. For the training/testing sets above, this is simply 60/68 (see the confusion matrix above). The simplest way to calculate this in R is:

#fraction of correct predictions
mean(nb_test_predict==testHouseVotes84$Class)
[1] 0.8823529

A natural question to ask at this point is: how good is this prediction? The question cannot be answered with only a single run of the model; we need to do many runs and look at the spread of the results. To do this, we’ll create a function that takes the training fraction and the number of runs as inputs and spits out a vector containing the proportion of correct predictions for each run. Here’s the function:

#function to create, run and record model results
nb_multiple_runs <- function(train_fraction, n){
fraction_correct <- rep(NA, n)
for (i in 1:n){
HouseVotes84[,"train"] <- ifelse(runif(nrow(HouseVotes84)) < train_fraction, 1, 0)
trainColNum <- grep("train", names(HouseVotes84))
trainHouseVotes84 <- HouseVotes84[HouseVotes84$train==1, -trainColNum]
testHouseVotes84 <- HouseVotes84[HouseVotes84$train==0, -trainColNum]
nb_model <- naiveBayes(Class~., data=trainHouseVotes84)
nb_test_predict <- predict(nb_model, testHouseVotes84[,-1])
fraction_correct[i] <- mean(nb_test_predict==testHouseVotes84$Class)
}
return(fraction_correct)
}

I’ve not commented the above code as it is essentially a repeat of the steps described earlier. Also, note that I have not made any effort to make the code generic or efficient.

Let’s do 20 runs with the same training fraction (0.8) as before:

#20 runs, 80% of data randomly selected for training set in each run
fraction_correct_predictions <- nb_multiple_runs(0.8,20)
fraction_correct_predictions
[1] 0.9417476 0.9036145 0.9294118 0.9302326 0.9213483 0.9404762 0.8777778 0.9102564
[9] 0.9102564 0.9080460 0.9139785 0.9200000 0.9090909 0.9239130 0.9605263 0.9333333
[17] 0.9052632 0.8977273 0.9642857 0.8518519
#summary of results
summary(fraction_correct_predictions)
Min. 1st Qu. Median Mean 3rd Qu. Max.
0.8519 0.9074 0.9170 0.9177 0.9310 0.9643
#standard deviation
sd(fraction_correct_predictions)
[1] 0.02582419

We see that the outcomes of the runs are quite close together, in the 0.85 to 0.96 range with a standard deviation of about 0.026. This tells us that Naive Bayes does a pretty decent job with this data.
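To see the spread rather than just summarise it, a histogram does the job (a sketch, using base graphics as before):

#histogram of prediction accuracy over the 20 runs
hist(fraction_correct_predictions, main="Fraction of correct predictions over 20 runs", xlab="fraction correct")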

Wrapping up

I originally intended to cover a few more case studies in this post, a couple of which highlight the shortcomings of the Naive Bayes algorithm. However, I realize that doing so would make this post unreasonably long, so I’ll stop here with a few closing remarks, and a promise to write up the rest of the story in a subsequent post.

To sum up: I have illustrated the use of a popular Naive Bayes implementation in R and attempted to convey an intuition for how the algorithm works.  As we have seen, the algorithm works quite well in the example case, despite the violation of the assumption of independent conditional probabilities.

The reason for the unreasonable effectiveness of the algorithm is two-fold. Firstly, the algorithm picks the predicted class based on the largest predicted probability, so ordering is more important than the actual value of the probability. Secondly, in many cases, a bias one way for a particular vote may well be counteracted by a bias the other way for another vote. That is, biases tend to cancel out, particularly if there are a large number of features.

That said, there are many cases in which the algorithm fails miserably – and we’ll look at some of these in a future post.  However, despite its well known shortcomings, Naive Bayes is often the first port of call in prediction problems simply because it is easy to set up and is fast compared to many of the iterative algorithms we will explore later in this series of articles.

Endnote

Thanks for reading! If you liked this piece, you might enjoy the other articles in my “Gentle introduction to analytics using R” series. Here are the links:

A gentle introduction to text mining using R

A gentle introduction to cluster analysis using R

A gentle introduction to topic modeling using R

Written by K

November 6, 2015 at 7:33 am
