Eight to Late

Sensemaking and Analytics for Organizations

A gentle introduction to network graphs using R and Gephi



Graph theory is an area of mathematics that analyses relationships between pairs of objects. Typically graphs consist of nodes (points representing objects) and edges (lines depicting relationships between objects). As one might imagine, graphs are extremely useful in visualizing relationships between objects. In this post, I provide a detailed introduction to network graphs using R, the premier open source statistics package, for the calculations, and the excellent Gephi software for the visualization.

The article is organised as follows: I begin by defining the problem and then spend some time developing the concepts and tools needed to build the graph. I then use R to prepare the data before doing the visualisation itself in Gephi.

The problem

In an introductory article on cluster analysis, I provided an in-depth introduction to a couple of algorithms that can be used to categorise documents automatically.  Although these techniques are useful, they do not provide a feel for the relationships between different documents in the collection of interest.  In the present piece I show how network graphs can be used to visualise similarity-based relationships within a corpus.

Document similarity

There are many ways to quantify similarity between documents. A popular method is to use the notion of distance between documents. The basic idea is simple: documents that have many words in common are “closer” to each other than those that share fewer words. The problem with distance, however, is that it can be skewed by word count: documents that have an unusually high word  count will show up as outliers even though they may be similar (in terms of words used) to other documents in the corpus. For this reason, we will use another related measure of similarity that does not suffer from this problem – more about this in a minute.

Representing documents mathematically

As I explained in my article on cluster analysis, a document can be represented as a point in a conceptual space that has dimensionality equal to the number of distinct words in the collection of documents. I revisit and build on that explanation below.

Say one has a simple document consisting of the words “five plus six”. One can represent it mathematically in a 3-dimensional space in which the individual words are represented by the three axes (see Figure 1). Here each word is a coordinate axis (or dimension).  Now, if one connects the point representing the document (point A in the figure) to the origin of the word-space, one has a vector, which in this case is a directed line connecting the point in question to the origin.  Specifically, the point A can be represented by the coordinates (1, 1, 1) in this space. This is a nice quantitative representation of the fact that the words five, plus and six each appear in the document exactly once. Note, however, that we’ve assumed the order of words does not matter. This is a reasonable assumption in some cases, but not always so.

Figure 1

Figure 1

As another example, consider document B, which consists of only two words: “five plus” (see Fig 2). Clearly this document shares some similarity with document A but is not identical to it.  Indeed, this becomes evident when we note that document (or point) B is simply the point (1, 1, 0) in this space, which tells us that it has two coordinates (words/frequencies) in common with document (or point) A.

Figure 2

Figure 2

To be sure, in a realistic collection of documents we would have a large number of distinct words, so we’d have to work in a very high dimensional space. Nevertheless, the same principle holds: every document in the corpus can be represented as a vector consisting of a directed line from the origin to the point to which the document corresponds.
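As a toy illustration, here are documents A and B from Figures 1 and 2 written out as word-count vectors in base R (nothing here depends on any package):

#word space spanned by the three distinct words in the toy corpus
#document A: “five plus six”; document B: “five plus”
docA <- c(five = 1, plus = 1, six = 1)
docB <- c(five = 1, plus = 1, six = 0)
#each document is a point in 3-dimensional word space, i.e. a vector from the origin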

Cosine similarity

Now it is easy to see that two documents are identical if they correspond to the same point. In other words, if their vectors coincide. On the other hand, if they are completely dissimilar (no words in common), their vectors will be at right angles to each other.  What we need, therefore, is a quantity that varies from 0 to 1 depending on whether two documents (vectors) are dissimilar (at right angles to each other) or similar (coincide, or are parallel to each other).

Now here’s the ultra-cool thing: from your high school maths class, you know there is a trigonometric ratio which has exactly this property – the cosine!

What’s even cooler is that the cosine of the angle between two vectors is simply the dot product of the two vectors (the sum of the products of their corresponding elements) divided by the product of the lengths of the two vectors. In three dimensions this can be expressed mathematically as:

\cos(\theta)= \displaystyle \frac{x_1 x_2+y_1 y_2+z_1 z_2}{\sqrt{x_1^2+y_1^2+z_1^2}\sqrt{x_2^2+y_2^2+z_2^2}}...(1)

where the two vectors are (x_{1},y_{1},z_{1}) and (x_{2},y_{2},z_{2}), and \theta is the angle between the two vectors (see Fig 2).

The upshot of the above is that the cosine of the angle between the vector representation of two documents is a reasonable measure of similarity between them. This quantity, sometimes referred to as cosine similarity, is what we’ll take as our similarity measure in the rest of this article.
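Here is a minimal base R sketch that computes the cosine similarity of the toy vectors docA and docB defined earlier. It also shows that scaling a document’s word counts, which does change its distance from other documents, leaves the cosine unchanged:

#cosine similarity: dot product divided by the product of the vector lengths
cosine <- function(a, b) sum(a * b) / (sqrt(sum(a^2)) * sqrt(sum(b^2)))
cosine(docA, docB)      #0.816... similar, but not identical
cosine(docA, docA)      #1: a document is identical to itself
cosine(docA, 2 * docA)  #still 1: doubling every word count does not change the cosine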

The adjacency matrix

If we have a collection of N documents, we can calculate the similarity between every pair of documents as we did for A and B in the previous section. This would give us a set of N^2 numbers between 0 and 1, which can be conveniently represented as a matrix.  This is sometimes called the adjacency matrix. Beware, though, this term has many different meanings in the math literature. I use it in the sense specified above.

Since every document is identical to itself, the diagonal elements of the matrix will all be 1. These similarities are trivial (we know that every document is identical to itself!)  so we’ll set the diagonal elements to zero.

Another important practical point is that visualizing every relationship is going to make a very messy graph. There would be N(N-1)/2 edges in such a graph (one for every pair of documents), which would make it impossible to make sense of if we have more than a handful of documents. For this reason, it is normal practice to choose a cutoff value of similarity below which the similarity is set to zero.
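Here’s a toy sketch of both steps (zeroing the diagonal and applying a cutoff) on a made-up 3-document similarity matrix; the 0.5 cutoff is arbitrary:

#toy similarity matrix for three documents
sim <- matrix(c(1.0, 0.8, 0.1,
0.8, 1.0, 0.3,
0.1, 0.3, 1.0), nrow = 3, byrow = TRUE)
diag(sim) <- 0        #drop the trivial self-similarities
sim[sim < 0.5] <- 0   #zero out weak relationships (arbitrary cutoff of 0.5)
sim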

Building the adjacency matrix using R

We now have enough background to get down to the main point of this article – visualizing relationships between documents.

The first step is to build the adjacency matrix.  In order to do this, we have to build the document term matrix (DTM) for the collection of documents,  a process which I have dealt with at length in my  introductory pieces on text mining and topic modeling. In fact, the steps are actually identical to those detailed in the second piece. I will therefore avoid lengthy explanations here. However,  I’ve listed all the code below with brief comments (for those who are interested in trying this out, the document corpus can be downloaded here and a pdf listing of the R code can be obtained here.)

OK, so here’s the code listing:


#load text mining library
library(tm)

#set working directory (modify path as needed)

#load files into corpus
#get listing of .txt files in directory
filenames <- list.files(getwd(),pattern="*.txt")

#read files into a character vector
files <- lapply(filenames,readLines)

#create corpus from vector
docs <- Corpus(VectorSource(files))

#inspect a particular document in corpus
writeLines(as.character(docs[[1]]))

#start preprocessing
#Transform to lower case
docs <- tm_map(docs,content_transformer(tolower))

#remove potentially problematic symbols
toSpace <- content_transformer(function(x, pattern) { return (gsub(pattern, " ", x))})
docs <- tm_map(docs, toSpace, "-")
docs <- tm_map(docs, toSpace, "’")
docs <- tm_map(docs, toSpace, "‘")
docs <- tm_map(docs, toSpace, "•")
docs <- tm_map(docs, toSpace, "”")
docs <- tm_map(docs, toSpace, "“")

#remove punctuation
docs <- tm_map(docs, removePunctuation)
#Strip digits
docs <- tm_map(docs, removeNumbers)
#remove stopwords
docs <- tm_map(docs, removeWords, stopwords("english"))
#remove whitespace
docs <- tm_map(docs, stripWhitespace)
#Good practice to check every now and then
writeLines(as.character(docs[[1]]))
#Stem document
docs <- tm_map(docs,stemDocument)

#fix up 1) differences between us and aussie english 2) general errors
docs <- tm_map(docs, content_transformer(gsub),
pattern = "organiz", replacement = "organ")
docs <- tm_map(docs, content_transformer(gsub),
pattern = "organis", replacement = "organ")
docs <- tm_map(docs, content_transformer(gsub),
pattern = "andgovern", replacement = "govern")
docs <- tm_map(docs, content_transformer(gsub),
pattern = "inenterpris", replacement = "enterpris")
docs <- tm_map(docs, content_transformer(gsub),
pattern = "team-", replacement = "team")
#define and eliminate all custom stopwords
myStopwords <- c("can", "say","one","way","use",
"post","look","right","now","think","'ve ",
"'re ","anoth","put","set","new","good")
docs <- tm_map(docs, removeWords, myStopwords)
#inspect a document as a check
writeLines(as.character(docs[[1]]))
#Create document-term matrix
dtm <- DocumentTermMatrix(docs)

The  rows of a DTM are document vectors akin to the vector representations of documents A and B discussed earlier. The DTM therefore contains all the information we need to calculate the cosine similarity between every pair of documents in the corpus (via equation 1). The R code below implements this, after taking care of a few preliminaries.

#convert dtm to matrix (the DTM is stored in sparse form, so an explicit conversion is needed)
m <- as.matrix(dtm)
#write as csv file (file name is arbitrary)
write.csv(m, file="dtm.csv")
#Map filenames to matrix row numbers
#these numbers will be used to reference
#files in the network graph
filekey <- cbind(rownames(m),filenames)
write.csv(filekey, file="filekey.csv")
#compute cosine similarity between document vectors
#converting to distance matrix sets diagonal elements to 0
cosineSim <- function(x){
as.dist(x%*%t(x)/(sqrt(rowSums(x^2) %*% t(rowSums(x^2)))))
}
cs <- cosineSim(m)
#adjacency matrix: set entries below a certain threshold to 0.
#We choose half the magnitude of the largest element of the matrix
#as the cutoff. This is an arbitrary choice
cs[cs < max(cs)/2] <- 0
cs <- round(cs,3)
#write the adjacency matrix to file for use in Gephi
write.csv(as.matrix(cs), file="AdjacencyMatrix.csv")

A few lines need a brief explanation:

First up, although the DTM is a matrix, it is internally stored in a special form suitable for sparse matrices. We therefore have to explicitly convert it into a proper matrix before using it to calculate similarity.

Second, the names I have given the documents are way too long to use as labels in the network diagram. I have therefore mapped the document names to the row numbers which we’ll use in our network graph later. The mapping back to the original document names is stored in filekey.csv. For future reference, the mapping is shown in Table 1 below.

File number Name
1 BeyondEntitiesAndRelationships.txt
2 bigdata.txt
3 ConditionsOverCauses.txt
4 EmergentDesignInEnterpriseIT.txt
5 FromInformationToKnowledge.txt
6 FromTheCoalface.txt
7 HeraclitusAndParmenides.txt
8 IroniesOfEnterpriseIT.txt
9 MakingSenseOfOrganizationalChange.txt
10 MakingSenseOfSensemaking.txt
11 ObjectivityAndTheEthicalDimensionOfDecisionMaking.txt
12 OnTheInherentAmbiguitiesOfManagingProjects.txt
13 OrganisationalSurprise.txt
14 ProfessionalsOrPoliticians.txt
15 RitualsInInformationSystemDesign.txt
16 RoutinesAndReality.txt
17 ScapegoatsAndSystems.txt
18 SherlockHolmesFailedProjects.txt
19 sherlockHolmesMgmtFetis.txt
20 SixHeresiesForBI.txt
21 SixHeresiesForEnterpriseArchitecture.txt
22 TheArchitectAndTheApparition.txt
23 TheCloudAndTheGrass.txt
24 TheConsultantsDilemma.txt
25 TheDangerWithin.txt
26 TheDilemmasOfEnterpriseIT.txt
27 TheEssenceOfEntrepreneurship.txt
28 ThreeTypesOfUncertainty.txt
30 UnderstandingFlexibility.txt

Table 1: File mappings

Finally, the distance function (as.dist) in the cosine similarity function sets the diagonal elements to zero  because the distance between a document and itself is zero…which is just a complicated way of saying that a document is identical to itself :-)

The remaining lines of code above implement the cutoff that I mentioned in the previous section and write the resulting adjacency matrix to a csv file. The comments explain the details so I need say no more about them.

…which finally brings us to Gephi.

Visualizing document similarity using Gephi

Gephi is an open source, Java based network analysis and visualisation tool. Before going any further, you may want to download and install it. While you’re at it you may also want to download this excellent quick start tutorial.

Go on, I’ll wait for you…

To begin with, there’s a little formatting quirk that we need to deal with. Gephi expects separators in csv files to be semicolons (;) . So, your first step is to open up the adjacency matrix that you created in the previous section (AdjacencyMatrix.csv) in a text editor and replace commas with semicolons.
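Alternatively, if you’d rather not edit the file by hand, you can write a semicolon-separated copy directly from R. This is a sketch that assumes the thresholded similarity matrix cs from the previous section; the file name is arbitrary:

#write a Gephi-friendly, semicolon-separated version of the adjacency matrix
write.table(as.matrix(cs), file = "AdjacencyMatrixSemicolon.csv", sep = ";",
col.names = NA, quote = FALSE)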

Once you’ve done that, fire up Gephi, go to File > Open, navigate to where your adjacency matrix is stored and load the file. If it loads successfully, you should see a feedback panel as shown in Figure 3.  By default Gephi creates a directed graph (i.e. one in which the edges have arrows pointing from one node to another). Change this to undirected and click OK.

Figure 3: Gephi import feedback

Figure 3: Gephi import feedback

Once that is done, click on overview (top left of the screen). You should end up with something like Figure 4.

Figure 4: Initial overview after loading adjacency matrix

Figure 4: Initial overview after loading adjacency matrix

Gephi has sketched out an initial network diagram which depicts the relationships between documents…but it needs a bit of work to make it look nicer and more informative. The quickstart tutorial mentioned earlier describes various features that can be used to manipulate and prettify the graph. In the remainder of this section, I list some that I found useful. Gephi offers many more. Do explore, there’s much more than  I can cover in an introductory post.

First some basics. You can:

  • Zoom and pan using mouse wheel and right button.
  • Adjust edge thicknesses using the slider next to text formatting options on bottom left of main panel.
  • Re-center graph via the magnifying glass icon on left of display panel (just above size adjuster).
  • Toggle node labels on/off by clicking on grey T symbol on bottom left panel.

Figure 5 shows the state of the diagram after labels have been added and edge thickness adjusted (note that your graph may vary in appearance).

Figure 5: graph with node labels and adjusted edge thicknesses

Figure 5: graph with node labels and adjusted edge thicknesses

The default layout of the graph is ugly and hard to interpret. Let’s work on fixing it up. To do this, go over to the layout panel on the left. Experiment with different layouts to see what they do. After some messing around, I found the Fruchterman-Reingold and Force Atlas options to be good for this graph. In the end I used Force Atlas with a Repulsion Strength of 2000 (up from the default of 200) and an Attraction Strength of 1 (down from the default of 10). I also adjusted the figure size and node label font size from the graph panel in the center. The result is shown in Figure 6.

Figure 6: Graph after using Force Atlas layout

Figure 6: Graph after using Force Atlas layout

This is much better. For example, it is now evident that document 9 is the most connected one (which Table 1 tells us is a transcript of a conversation with Neil Preston on organisational change).

It would be nice if we could colour code edges/nodes and size nodes by their degree of connectivity. This can be done via the ranking panel above the layout area where you’ve just been working.

In the Nodes tab select Degree as  the rank parameter (this is the degree of connectivity of the node) and hit apply. Select your preferred colours via the small icon just above the colour slider. Use the colour slider to adjust the degree of connectivity at which colour transitions occur.

Do the same for edges, selecting weight as the rank parameter (this is the degree of similarity between the two documents connected by the edge). With a bit of playing around, I got the graph shown in the screenshot below (Figure 7).

Figure 7: Connectivity-based colouring of edges and nodes.

Figure 7: Connectivity-based colouring of edges and nodes.

If you want to see numerical values for the rankings, hit the results list icon on the bottom left of the ranking panel. You can see numerical ranking values for both nodes and edges as shown in Figures 8 and 9.

Figure 8: Node ranking

Figure 8: Node ranking (see left of figure)

Figure 9: Edge ranking

Figure 9: Edge ranking

It is easy to see from the figure that documents 21 and 29 are the most similar in terms of cosine similarity. This makes sense: they are both pieces in which I have ranted about the current state of enterprise architecture – the first article is about EA in general and the other about the TOGAF framework. If you have a quick skim through, you’ll see that they have a fair bit in common.

Finally, it would be nice if we could adjust node size to reflect the connectedness of the associated document. You can do this via the “gem” symbol on the top right of the ranking panel. Select appropriate min and max sizes (I chose the defaults) and hit apply. The node size now reflects the connectivity of the node – i.e. the number of other documents to which it is cosine-similar to varying degrees – and the thickness of an edge reflects the degree of similarity. See Figure 10.


Figure 10: Node sizes reflecting connectedness

Figure 10: Node sizes reflecting connectedness

Now that looks good enough to export. To do this, hit the preview tab on the main panel and make the following adjustments to the default settings:

Under Node Labels:
1. Check Show Labels
2. Uncheck proportional size
3. Adjust font to required size

Under Edges:
1. Change thickness to 10
2. Check rescale weight

Hit refresh after making the above adjustments. You should get something like Fig 11.

Figure 11: Export preview

Figure 11: Export preview

All that remains now is to do the deed: hit export SVG/PDF/PNG to export the diagram. My output is displayed in Figure 12. It clearly shows the relationships between the different documents (nodes) in the corpus. The nodes with the highest connectivity are indicated via node size and colour  (purple for high, green for low) and strength of similarity is indicated by edge thickness.

Figure 12: Gephi network graph

Figure 12: Gephi network graph of document corpus

…which brings us to the end of this journey.

Wrapping up

The techniques of text analysis enable us to quantify relationships between documents. Document similarity is one such relationship. Numerical measures are good, but the comprehensibility of these can be further enhanced through meaningful visualisations.  Indeed, although my stated objective in this article was to provide an introduction to creating network graphs using Gephi and R (which I hope I’ve succeeded in doing), a secondary aim was to show how document similarity can be quantified and visualised. I sincerely hope you’ve found the discussion interesting and useful.

Many thanks for reading! As always, your feedback would be greatly appreciated.

Written by K

December 2, 2015 at 7:20 am

The “value add” tax – a riff on corporate communication


A mainstay of team building workshops is the old “what can we do better” exercise.  Over the years I’ve noticed that “improving communication” is an item that comes up again and again in these events.  This is frustrating for managers. For example, during a team-building debrief some years ago, an exasperated executive remarked, “Oh don’t pay any attention to that [better communication], it keeps coming up no matter what we do.”

The executive had a point.  The organisation had invested much effort in establishing new channels of communication – social media, online, face-to-face forums etc.  The uptake, however, was disappointing:  turnout at the face-to-face meetings was consistently low as was use of other channels.

As far as management was concerned, they had done their job by establishing communication channels and making them available to all. What more could they  be expected to do? The matter was dismissed with a collective shrug of suit-clad shoulders…until the next team building event, when the issue was highlighted by employees yet again.

After much hand-wringing, the organisation embarked on another “better communication cycle.”  Much effort was expended…again, with the same disappointing results.

Anecdotal evidence via conversations with friends and collaborators suggests that variants of this story play out in many organisations. This makes the issue well worth exploring. I won’t be so presumptuous as to offer answers; I’m well aware that folks much better qualified than I have spent years attempting to do so. Instead I raise a point which, though often overlooked, might well have something to do with the lack of genuine communication in organisations.

Communication experts have long agreed that face-to-face dialogue is the most effective mode of communication. Backing for this comes from the interactional or pragmatic view, which is based on the premise that communication is more about building relationships than conveying information. Among other things, face-to-face communication enables the communicating parties to observe and interpret non-verbal signals such as facial expression and gestures and, as we all know, these often “say” much more than what’s being said.

A few months ago I started paying closer attention to non-verbal cues. This can be hard to do because people are good at disguising their feelings. Involuntary expressions indicative of people’s real thoughts can be fleeting. A flicker of worry, fear or anger is quickly covered by a mask of indifference.

In meetings, difficult topics tend to be couched in platitudinous language. Platitudes are empty words that sound great but can be interpreted in many different ways. Reconciling those differences often leads to pointless arguments that are emotionally draining. Perhaps this is why people prefer to take refuge in indifference.

A while ago I was sitting in a meeting where the phrase “value add activity” (sic) cropped up once, then again…and then many times over. Soon it was apparent that everyone in the room had a very different conception of what constituted a “value add activity.” Some argued that project management is a value add activity, others disagreed vehemently arguing that project management is a bureaucratic exercise and that real value lies in creating something. Round and round the arguments went but there was no agreement on what constituted a “value add activity.” The discussion generated a lot of heat but shed no light whatsoever on the term.

A problem with communication in the corporate world is that it is loaded with such platitudes. To make sense of these, people have to pay what I call a “value add” tax – the effort in reaching a consensus on what the platitudinous terms mean. This can be emotionally extortionate because platitudes often touch upon issues that affect people’s sense of well-being.

Indifference is easier because we can then pretend to understand and agree with each other when we would rather not understand, let alone agree, at all.

Written by K

November 19, 2015 at 8:02 am

A gentle introduction to Naïve Bayes classification using R



One of the key problems of predictive analytics is to classify entities or events based on a knowledge of their attributes.  An example: one might want to classify customers into two categories, say, ‘High Value’ or ‘Low Value,’ based on a knowledge of their buying patterns.  Another example: to figure out the party allegiances of representatives based on their voting records.  And yet another: to predict the species of a particular plant or animal specimen based on a list of its characteristics. Incidentally, if you haven’t been there already, it is worth having a look at Kaggle to get an idea of some of the real world classification problems that people tackle using techniques of predictive analytics.

Given the importance of classification-related problems, it is no surprise that analytics tools offer a range of options. My favourite (free!) tool, R, is no exception: it has a plethora of state of the art packages designed to handle a wide range of problems. One of the problems with this diversity of choice is that it is often confusing for beginners to figure out which one to use in a particular situation. Over the next several months, I intend to write up tutorial articles covering many of the common algorithms, with a particular focus on their strengths and weaknesses; explaining where they work well and where they don’t. I’ll kick-off this undertaking with a simple yet surprisingly effective algorithm – the Naïve Bayes classifier.

Just enough theory

I’m going to assume you have R and RStudio installed on your computer. If you need help with this, please follow the instructions here.

To introduce the Naive Bayes algorithm, I will use the HouseVotes84 dataset, which contains US congressional voting records for 1984. The data set is in the mlbench package which is not part of the base R installation. You will therefore need to install it if you don’t have it already.  Package installation is a breeze in RStudio – just go to Tools > Install Packages and follow the prompts.

The HouseVotes84 dataset describes how 435 representatives voted – yes (y), no (n) or unknown (NA) – on 16 key issues presented to Congress.  The dataset also provides the party affiliation of each representative – democrat or republican.

Let’s begin by exploring the dataset. To do this, we load mlbench, fetch the dataset and get some summary stats on it. (Note: a complete listing of the code in this article can be found here)

#load mlbench library
library(mlbench)
#set working directory if needed (modify path as needed)
#load HouseVotes84 dataset
data(HouseVotes84)
#get summary stats for the dataset
summary(HouseVotes84)

It is good to begin by exploring the data visually.  To this end, let’s do some bar plots using the basic graphic capabilities of R:

#barplots for specific issue
plot(HouseVotes84$V1)
title(main="Votes cast for issue 1", xlab="vote", ylab="# reps")
#by party
plot(HouseVotes84[HouseVotes84$Class=="republican","V1"])
title(main="Republican votes cast for issue 1", xlab="vote", ylab="# reps")
plot(HouseVotes84[HouseVotes84$Class=="democrat","V1"])
title(main="Democrat votes cast for issue 1", xlab="vote", ylab="# reps")

The plots are shown in Figures 1 through 3.

Fig 1: y and n votes for issue 1

Fig 1: y and n votes for issue 1

Fig 2: Republican votes for issue 1.

Fig 2: Republican votes for issue 1.

Fig 3: Democrat votes for issue 1.

Fig 3: Democrat votes for issue 1.

Among other things, such plots give us a feel for the probabilities associated with how representatives from parties tend to vote on specific issues.

The classification problem at hand is to figure out the party affiliation from a knowledge of voting patterns. For simplicity let us assume that there are only 3 issues voted on instead of the 16 in the actual dataset. In concrete terms we wish to answer the question, “what is the probability that a representative is, say, a democrat (D) given that he or she has voted, say,  (v1 = y, v2=n,v3 = y) on the three issues?” To keep things simple I’m assuming there are no NA values.

In the notation of conditional probability this can be written as,

P(D|v1=y, v2=n,v3=y)

(Note:  If you need a refresher on conditional probability, check out this post for a simple explanation.)

By Bayes theorem, which I’ve explained at length in this post, this can be recast as,

P(D|v1=y, v2=n,v3=y) = \displaystyle \frac{p(D) p(v1=y, v2=n,v3=y|D)}{p(v1=y, v2=n,v3=y)}......(1)

We’re interested only in the relative probabilities of the representative being a democrat or republican because the predicted party affiliation depends only on which of the two probabilities is larger (the actual value of the probability is not important). This being the case, we can factor out any terms that are constant across the two classes.  As it happens, the denominator of the above equation – the probability of a particular voting pattern – is such a constant: it depends only on the total number of representatives (from both parties) who voted a particular way, not on the party.

Now, using the chain rule of conditional probability, we can rewrite the numerator as:

p(D) p(v1=y, v2=n,v3=y|D)

= p(D)p(v1=y|D) p(v2=n,v3=y|D,v1=y)

Basically, the second factor on the left hand side, p(v1=y, v2=n,v3=y|D), is the probability of getting the particular voting pattern (y,n,y) assuming the rep is a Democrat (D). The definition of conditional probability allows us to rewrite this as the probability of a y vote on issue v1 given that the rep is a Democrat, multiplied by the probability of an n vote on issue v2 and a y vote on issue v3 given that the rep is a Democrat who has voted y on issue v1. Again, this is simply a consequence of the definition of conditional probability.

Another application of the chain rule gives:

p(D) p(v1=y, v2=n,v3=y|D)

= p(D)p(v1=y|D) p(v2=n|D,v1=y) p(v3=y|D,v1=y,v2=n)

where we have now factored out the n vote on the second issue.

The key assumption of Naïve Bayes is that the conditional probability of each feature given the class is independent of all other features. In mathematical terms this means that,

p(v2=n|D,v1=y) = p(v2=n|D)

and
p(v3=y|D,v1=y,v2=n) = p(v3=y|D)

The quantity of interest, the numerator of equation (1), can then be written as:

p(D) p(v1=y, v2=n,v3=y|D)

= p(D)p(v1=y|D)p(v2=n|D)p(v3=y|D).......(2)
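To make equation (2) concrete, here’s a minimal sketch that computes the two class scores for the hypothetical voting pattern (v1=y, v2=n, v3=y) directly from the HouseVotes84 data. The helper function p_vote below is mine (it is not part of the article’s code); it simply estimates each conditional probability from the observed, non-NA votes:

library(mlbench)
data(HouseVotes84)

#p(vote = v on a given issue | party = cls), estimated from the observed (non-NA) votes
p_vote <- function(issue, v, cls) {
obs <- HouseVotes84[HouseVotes84$Class == cls, issue]
sum(obs == v, na.rm = TRUE) / sum(!is.na(obs))
}

#class priors p(D) and p(R)
pD <- mean(HouseVotes84$Class == "democrat")
pR <- mean(HouseVotes84$Class == "republican")

#Naive Bayes numerators (equation 2) for the pattern v1=y, v2=n, v3=y
scoreD <- pD * p_vote("V1","y","democrat") * p_vote("V2","n","democrat") * p_vote("V3","y","democrat")
scoreR <- pR * p_vote("V1","y","republican") * p_vote("V2","n","republican") * p_vote("V3","y","republican")

#the predicted affiliation is whichever class has the larger score
c(democrat = scoreD, republican = scoreR)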

The assumption of independent conditional probabilities is a drastic one.  What it is saying is that the features are completely independent of each other (given the class). This is clearly not the case in the situation above: how representatives vote on a particular issue is coloured by their beliefs and values. For example, the conditional probabilities of votes on socially progressive issues are definitely not independent of each other. However, as we shall see in the next section, the Naïve Bayes assumption works well for this problem, as it does in many other situations where we know upfront that it is grossly incorrect.

Another good example of the unreasonable efficacy of Naive Bayes is in spam filtering.  In the case of spam, the features are individual words in an email.  It is clear that certain word combinations tend to show up consistently in spam – for example, “online”, “meds”, “Viagra” and “pharmacy.” In other words, we know upfront that their occurrences are definitely not independent of each other. Nevertheless, Naïve Bayes based spam detectors which assume mutual independence of features do remarkably well in distinguishing spam from ham.

Why is this so?

To explain why, I return to a point I mentioned earlier: to figure out the affiliation associated with a particular voting pattern (say, v1=y, v2=n, v3=y) one only needs to know which of the two probabilities p(R|v1=y, v2=n, v3=y) and p(D|v1=y, v2=n, v3=y) is greater than the other.  That is, the values of these probabilities are not important in determining the party affiliations.

This hints at why the independence assumption might not be quite so idiotic. Since the prediction depends only on the maximum, the algorithm will get it right even if there are dependencies between features, provided the dependencies do not change which class has the maximum probability (once again, note that only the maximal class is important here, not the value of the maximum).

Yet another reason for the surprising success of Naïve Bayes is that dependencies often cancel out across a large set of features. But, of course, there is no guarantee that this will always happen.

In general, Naïve Bayes algorithms work better for problems in which the dependent (predicted) variable is discrete, even when there are dependencies between features (spam detection is a good example).  They work less well for regression problems – i.e. those in which predicted variables are continuous.

I hope the above has given you an intuitive feel for how Naïve Bayes algorithms work. I don’t know about you, but my head’s definitely spinning after writing out all that mathematical notation.

It’s time to clear our heads by doing some computation.

Naïve Bayes in action

There are a couple of well-known implementations of Naïve Bayes in R. One of them is the naiveBayes method in the e1071 package and the other is the NaiveBayes method in the klaR package.  I’ll use the former for no other reason than it seems to be more popular. That said, I have used the latter too and can confirm that it works just as well.

We’ve already loaded and explored the HouseVotes84 dataset. One of the things you may have noticed when summarising the data is  that there are a fair number of NA values. Naïve Bayes algorithms typically handle NA values either by ignoring records that contain any NA values or by ignoring just the NA values. These choices are indicated by the value of the variable na.action in the naiveBayes algorithm, which is set to na.omit (to ignore the record) or na.pass (to ignore the value).

Just for fun, we’ll take a different approach. We’ll impute NA values for a given issue and party by looking at how other representatives from the same party voted on the issue. This is very much in keeping with the Bayesian spirit: we infer unknowns based on a justifiable belief – that is, belief based on the evidence.

To do this I write two functions: one to  compute the number of NA values for a given issue (vote) and class (party affiliation), and the other to calculate the fraction of yes votes for a given issue (column) and class (party affiliation).

#Functions needed for imputation
#function to return number of NAs by vote and class (democrat or republican)
na_by_col_class <- function (col,cls){return(sum(is.na(HouseVotes84[,col]) & HouseVotes84$Class==cls))}
#function to compute the conditional probability that a member of a party will cast a 'yes' vote for
#a particular issue. The probability is based on all members of the party who
#actually cast a vote on the issue (ignores NAs).
p_y_col_class <- function(col,cls){
sum_y <- sum(HouseVotes84[,col]=='y' & HouseVotes84$Class==cls,na.rm = TRUE)
sum_n <- sum(HouseVotes84[,col]=='n' & HouseVotes84$Class==cls,na.rm = TRUE)
return(sum_y/(sum_y+sum_n))
}
#Check that the functions work!
> p_y_col_class(2,'democrat')
[1] 0.6046512
> p_y_col_class(2,'republican')
[1] 0.1878788
> na_by_col_class(2,'democrat')
[1] 9
> na_by_col_class(2,'republican')
[1] 3

Before proceeding,  you might want to go back to the data and convince yourself that these values are sensible.

We can now impute the NA values based on the above. We do this by randomly assigning values ( y or n) to NAs, based on the proportion of members of a party who have voted y or n. In practice, we do this by invoking the uniform distribution and setting an NA value to y if the random number returned is less than the probability of a yes vote and to n otherwise. This is not as complicated as it sounds; you should be able to figure the logic out from the code below.

#impute missing values
for (i in 2:ncol(HouseVotes84)) {
if(sum(is.na(HouseVotes84[,i]))>0) {
c1 <- which(is.na(HouseVotes84[,i]) & HouseVotes84$Class=='democrat',arr.ind = TRUE)
c2 <- which(is.na(HouseVotes84[,i]) & HouseVotes84$Class=='republican',arr.ind = TRUE)
#set an NA to 'y' if a uniform random number is less than the party's probability of a 'y' vote, 'n' otherwise
HouseVotes84[c1,i] <- ifelse(runif(na_by_col_class(i,'democrat')) < p_y_col_class(i,'democrat'),'y','n')
HouseVotes84[c2,i] <- ifelse(runif(na_by_col_class(i,'republican')) < p_y_col_class(i,'republican'),'y','n')
}
}

Note that the which function returns the indices of the elements that satisfy the criteria specified in its argument, and ifelse is a vectorised conditional function which enables us to apply logical criteria to multiple elements of a vector.
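If which() and ifelse() are new to you, this toy snippet (unrelated to the voting data) may help:

x <- c(2, NA, 5, NA, 7)
which(is.na(x))                #indices of the NA elements: 2 4
ifelse(x > 4, "big", "small")  #vectorised if-else, evaluated element by element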

At this point it is a good idea to check that the NAs in each column have been set according to the voting patterns of non-NAs for a given party. You can use the p_y_col_class() function to check that the new probabilities are close to the old ones. You might want to do this before you proceed any further.

The next step is to divide the available data into training and test datasets. The former will be used to train the algorithm and produce a predictive model. The effectiveness of the model will then be tested using the test dataset. There is a great deal of science and art behind the creation of training and testing datasets. An important consideration is that both sets must contain records that are representative of the entire dataset. This can be difficult to do, especially when data is scarce and there are predictors that do not vary too much…or vary wildly for that matter. On the other hand, problems can also arise when there are redundant predictors. Indeed, much of the art of successful prediction lies in figuring out which predictors are likely to lead to better predictions, an area known as feature selection. However, that’s a topic for another time. Our current dataset does not suffer from any of these complications, so we’ll simply divide it in an 80/20 proportion, assigning the larger number of records to the training set.

#divide into test and training sets
#create new col "train" and assign 1 or 0 in 80/20 proportion via random uniform dist
HouseVotes84[,"train"] <- ifelse(runif(nrow(HouseVotes84))<0.80,1,0)
#get col number of train / test indicator column (needed later)
trainColNum <- grep("train",names(HouseVotes84))
#separate training and test sets and remove training column before modeling
trainHouseVotes84 <- HouseVotes84[HouseVotes84$train==1,-trainColNum]
testHouseVotes84 <- HouseVotes84[HouseVotes84$train==0,-trainColNum]

Now we’re finally good to build our Naive Bayes model (machine learning folks call this model training rather than model building – and I have to admit, it does sound a lot cooler).

The code to train the model is anticlimactically simple:

#load e1071 library and invoke naiveBayes method
library(e1071)
nb_model <- naiveBayes(Class~.,data = trainHouseVotes84)

Here we’ve invoked the naiveBayes method from the e1071 package. The first argument uses R’s formula notation. In this notation, the dependent variable (to be predicted) appears on the left hand side of the ~ and the independent variables (predictors or features) are on the right hand side. The dot (.) is simply shorthand for “all variables other than the dependent one.” The second argument is the dataframe that contains the training data. Check out the documentation for the other arguments of naiveBayes; it will take me too far afield to cover them here. Incidentally, you can take a look at the model using the summary() or str() functions, or even just entering the model name in the R console:

#examine the model
nb_model
summary(nb_model)
Note that I’ve suppressed the output above.

Now that we have a model, we can do some predicting. We do this by feeding our test data into our model and comparing the predicted party affiliations with the known ones. The latter is done via the wonderfully named confusion matrix – a table in which true and predicted values for each of the predicted classes are displayed in a matrix format. This again is just a couple of lines of code:

#…and the moment of reckoning
nb_test_predict <- predict(nb_model,testHouseVotes84[,-1])
#confusion matrix
table(pred=nb_test_predict,true=testHouseVotes84$Class)

            true
pred         democrat republican
  democrat         38          3
  republican        5         22

The numbers you get will be different because your training/test sets are almost certainly different from mine.

In the confusion matrix (as defined above), the true values are in columns and the predicted values in rows. So, the algorithm has correctly classified 38 out of 43 (i.e. 38+5) Democrats and 22 out of 25 Republicans (i.e. 22+3). That’s pretty decent. However, we need to keep in mind that this could well be a quirk of the particular training/test split. To address this, we should get a numerical measure of the efficacy of the algorithm for different training and testing datasets. A simple measure of efficacy would be the fraction of predictions that the algorithm gets right. For the training/testing set above, this is simply 60/68 (see the confusion matrix above). The simplest way to calculate this in R is:

#fraction of correct predictions
mean(nb_test_predict==testHouseVotes84$Class)
[1] 0.8823529

A natural question to ask at this point is: how good is this prediction? This question cannot be answered with only a single run of the model; we need to do many runs and look at the spread of the results. To do this, we’ll create a function which takes the number of times the model should be run and the training fraction as inputs and spits out a vector containing the proportion of correct predictions for each run. Here’s the function:

#function to create, run and record model results
nb_multiple_runs <- function(train_fraction,n){
fraction_correct <- rep(NA,n)
for (i in 1:n){
HouseVotes84[,"train"] <- ifelse(runif(nrow(HouseVotes84))<train_fraction,1,0)
trainColNum <- grep("train",names(HouseVotes84))
trainHouseVotes84 <- HouseVotes84[HouseVotes84$train==1,-trainColNum]
testHouseVotes84 <- HouseVotes84[HouseVotes84$train==0,-trainColNum]
nb_model <- naiveBayes(Class~.,data = trainHouseVotes84)
nb_test_predict <- predict(nb_model,testHouseVotes84[,-1])
fraction_correct[i] <- mean(nb_test_predict==testHouseVotes84$Class)
}
return(fraction_correct)
}
I’ve not commented the above code as it is essentially a repeat of the steps described earlier. Also, note that I have not made any effort to make the code generic or efficient.

Let’s do 20 runs with the same training fraction (0.8) as before:

#20 runs, 80% of data randomly selected for training set in each run
fraction_correct_predictions <- nb_multiple_runs(0.8,20)
fraction_correct_predictions
[1] 0.9417476 0.9036145 0.9294118 0.9302326 0.9213483 0.9404762 0.8777778 0.9102564
[9] 0.9102564 0.9080460 0.9139785 0.9200000 0.9090909 0.9239130 0.9605263 0.9333333
[17] 0.9052632 0.8977273 0.9642857 0.8518519
#summary of results
summary(fraction_correct_predictions)
Min. 1st Qu. Median Mean 3rd Qu. Max.
0.8519 0.9074 0.9170 0.9177 0.9310 0.9643
#standard deviation
sd(fraction_correct_predictions)
[1] 0.02582419

We see that the outcomes of the runs are quite close together, in the 0.85 to 0.96 range with a standard deviation of about 0.026. This tells us that Naive Bayes does a pretty decent job with this data.
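For a quick visual check of the spread, you can also plot the vector returned by nb_multiple_runs; a minimal sketch, assuming the fraction_correct_predictions vector from the run above:

#visualise the spread of results over the 20 runs
boxplot(fraction_correct_predictions, ylab = "fraction of correct predictions")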

Wrapping up

I originally intended to cover a few more case studies in this post, a couple of which highlight the shortcomings of the Naive Bayes algorithm. However, I realize that doing so would make this post unreasonably long, so I’ll stop here with a few closing remarks, and a promise to write up the rest of the story in a subsequent post.

To sum up: I have illustrated the use of a popular Naive Bayes implementation in R and attempted to convey an intuition for how the algorithm works.  As we have seen, the algorithm works quite well in the example case, despite the violation of the assumption of independent conditional probabilities.

The reason for the unreasonable effectiveness of the algorithm is two-fold. Firstly, the algorithm picks the predicted class based on the largest predicted probability, so ordering is more important than the actual value of the probability. Secondly, in many cases, a bias one way for a particular vote may well be counteracted by a bias the other way for another vote. That is, biases tend to cancel out, particularly if there are a large number of features.

That said, there are many cases in which the algorithm fails miserably – and we’ll look at some of these in a future post.  However, despite its well known shortcomings, Naive Bayes is often the first port of call in prediction problems simply because it is easy to set up and is fast compared to many of the iterative algorithms we will explore later in this series of articles.


Thanks for reading! If you liked this piece, you might enjoy the other articles in my “Gentle introduction to analytics using R” series. Here are the links:

A gentle introduction to text mining using R

A gentle introduction to cluster analysis using R

A gentle introduction to topic modeling using R

Written by K

November 6, 2015 at 7:33 am

The proof of concept – a business fable


The Head of Data Management was a troubled man.  The phrase “Big Data” had cropped up more than a few times in a lunch conversation with Jack, a senior manager from Marketing.

Jack had pointedly asked what the data management team was doing about “Big Data”…or whether they were still stuck in a data time warp. This comment had put the Head of Data Management on the defensive. He’d mumbled something about a proof of concept with vendors specializing in Big Data solutions being on the drawing board.

He spent the afternoon dialing various vendors in his contact list, setting up appointments to talk about what they could offer.  His words were music to the vendors’ ears; he had no trouble getting them to bite.

The meetings went well. He was soon able to get a couple of vendors to agree to doing proofs of concept, which amounted to setting up trial versions of their software on the organisation’s servers, thus giving IT staff an opportunity to test-drive and compare the vendors’ offerings.

The software was duly installed and the concept duly proven.

…but the Head of Data Management was still a troubled man.

He had sought an appointment with Jack to inform him about the successful proof of concept.  Jack had listened to his spiel, but instead of being impressed had asked a simple question that had the Head of Data Management stumped.

“What has your proof of concept proved?” Jack asked.

“What do…do you mean?” stammered the Head of Data Management.

“I don’t think I can put it any clearer than I already have. What have you proved by doing this so-called proof of concept?”

“Umm… we have proved that the technology works,” came the uncertain reply.

“Surely we know that the technology works,” said Jack, a tad exasperated.

“Ah, but we don’t know that it works for us,” shot back the Head of Data Management.

“Look, I’m just a marketing guy, I know very little about IT,” said Jack, “but I do know a thing or two about product development and marketing. I can say with some confidence that the technology– whatever it is – does what it is supposed to do. You don’t need to run a proof of concept to prove that it works. I’m sure the vendors would have done that before they put their product on the market. So, my question remains: what has your proof of concept proved?”

“Well we’ve proved that it does work for us…”

“Proved it works for what??” asked Jack, exasperation mounting. “Let me put it as clearly as I can – what business problem have you solved that you could not address earlier.”

“Well we’ve taken some of the data from our largest databases – the sales database, from your area of work, and loaded it into the Big Data infrastructure.”

“…and then what?”

“Well that’s about it…for now.”

“So, all you have is another database. You haven’t actually done anything that cannot be done on our existing databases. You haven’t tackled a business problem using your new toy,” said Jack.

“Yes, but this is just the start. We’ll now start doing analytics and all that other stuff we were talking about.”

“I’m sorry to say, this is a classic example of putting the cart before the horse,” said Jack, shaking his head in disbelief.

“How so?” challenged the Head of Data Management.

“Should be obvious but may be it isn’t, so let me spell it out. You’ve jumped to a solution – the technology you’ve installed – without taking the time to define the business problems that the technology should address.”

“…but…but you talked about Big Data when we had lunch the other day.”

“Indeed I did. And what I expected was the start of a dialogue between your people and mine about the kinds of problem we would like to address. We know our problems well, you guys know the technology; but the problems should always drive the solution. What you’ve done now is akin to a solution in search of a problem, a cart before a horse.”

“I see,” said the Head of Data Management slowly.

And Jack could only hope that he really did see.


This is a work of fiction. It could never happen in real life ;-)

Written by K

October 21, 2015 at 7:37 am

A gentle introduction to topic modeling using R



The standard way to search for documents on the internet is via keywords or keyphrases. This is pretty much what Google and other search engines do routinely…and they do it well.  However, as useful as this is, it has its limitations. Consider, for example, a situation in which you are confronted with a large collection of documents but have no idea what they are about. One of the first things you might want to do is to classify these documents into topics or themes. Among other things, this would help you figure out if there’s anything of interest in the collection while also directing you to the relevant subset(s) of the corpus. For small collections, one could do this by simply going through each document, but this is clearly infeasible for corpuses containing thousands of documents.

Topic modeling – the theme of this post – deals with the problem of automatically classifying sets of documents into themes.

The article is organised as follows: I first provide some background on topic modelling. The algorithm that I use, Latent Dirichlet Allocation (LDA), involves some pretty heavy maths which I’ll avoid altogether. However, I will provide an intuitive explanation of how LDA works before moving on to a practical example which uses the topicmodels library in R. As in my previous articles in this series (see this post and this one), I will discuss the steps in detail along with explanations and provide accessible references for concepts that cannot be covered in the space of a blog post.

(Aside: Beware, LDA is also an abbreviation for Linear Discriminant Analysis, a classification technique that I hope to cover later in my ongoing series on text and data analytics.)

Latent Dirichlet Allocation – a math-free introduction

In essence, LDA is a technique that facilitates the automatic discovery of themes in a collection of documents.

The basic assumption behind LDA is that each of the documents in a collection consists of a mixture of collection-wide topics. However, in reality we observe only documents and words, not topics – the latter are part of the hidden (or latent) structure of documents. The aim is to infer the latent topic structure given the observed words and documents.  LDA does this by recreating the documents in the corpus, iteratively adjusting the relative importance of topics in documents and words in topics.

Here’s a brief explanation of how the algorithm works, quoted directly from this answer by Edwin Chen on Quora:

  • Go through each document, and randomly assign each word in the document to one of the K topics. (Note: One of the shortcomings of LDA is that one has to specify the number of topics, denoted by K, upfront. More about this later.)
  • This assignment already gives you both topic representations of all the documents and word distributions of all the topics (albeit not very good ones).
  • So to improve on them, for each document d…
  • ….Go through each word w in d…
  • ……..And for each topic t, compute two things: 1) p(topic t | document d) = the proportion of words in document d that are currently assigned to topic t, and 2) p(word w | topic t) = the proportion of assignments to topic t over all documents that come from this word w. Reassign w a new topic, where you choose topic t with probability p(topic t | document d) * p(word w | topic t) (according to our generative model, this is essentially the probability that topic t generated word w, so it makes sense that we resample the current word’s topic with this probability).  (Note: p(a|b) is the conditional probability of a given that b has already occurred – see this post for more on conditional probabilities)
  • ……..In other words, in this step, we’re assuming that all topic assignments except for the current word in question are correct, and then updating the assignment of the current word using our model of how documents are generated.
  • After repeating the previous step a large number of times, you’ll eventually reach a roughly steady state where your assignments are pretty good. So use these assignments to estimate the topic mixtures of each document (by counting the proportion of words assigned to each topic within that document) and the words associated to each topic (by counting the proportion of words assigned to each topic overall).

For another simple explanation of how LDA works, check out this article by Matthew Jockers. For a more technical exposition, take a look at this video by David Blei, one of the inventors of the algorithm.

The iterative process described in the last point above is implemented using a technique called Gibbs sampling.  I’ll say a bit more about Gibbs sampling later, but you may want to have a look at this paper by Philip Resnik and Eric Hardisty that explains the nitty-gritty of the algorithm (Warning: it involves a fair bit of math, but has some good intuitive explanations as well).

As a general point, I should also emphasise that you do not need to understand the ins and outs of an algorithm to use it but it does help to understand, at least at a high level, what the algorithm is doing. One needs to develop a feel for algorithms even if one doesn’t understand the details. Indeed, most people working in analytics do not know the details of the algorithms they use, but that doesn’t stop them from using algorithms intelligently. Purists may disagree. I think they are wrong.

Finally – because you’re no doubt wondering :-) – the term “Dirichlet” in LDA refers to the fact that the topic mixtures of documents and the word distributions of topics are assumed to follow Dirichlet distributions. There is no “good” reason for this apart from convenience – Dirichlet distributions provide good approximations to such distributions and, perhaps more important, are computationally convenient.
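If you’d like a feel for what a draw from a Dirichlet distribution looks like, here is a tiny base R sketch. It uses the standard construction of a Dirichlet sample as normalised independent gamma draws; the number of topics and the value of alpha are arbitrary:

#one draw from a symmetric Dirichlet over 5 topics (small alpha gives "peaky" mixtures)
alpha <- rep(0.1, 5)
g <- rgamma(length(alpha), shape = alpha)
topic_proportions <- g / sum(g)
topic_proportions   #five non-negative numbers that sum to 1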


As in my previous articles on text mining, I will use a collection of 30 posts from this blog as an example corpus. The corpus can be downloaded here. I will assume that you have R and RStudio installed. Follow this link if you need help with that.

The preprocessing steps are much the same as described in my previous articles.  Nevertheless, I’ll risk boring you with a detailed listing so that you can reproduce my results yourself:


#load text mining library
library(tm)

#set working directory (modify path as needed)

#load files into corpus
#get listing of .txt files in directory
filenames <- list.files(getwd(),pattern="*.txt")

#read files into a character vector
files <- lapply(filenames,readLines)

#create corpus from vector
docs <- Corpus(VectorSource(files))

#inspect a particular document in corpus
writeLines(as.character(docs[[1]]))

#start preprocessing
#Transform to lower case
docs <- tm_map(docs,content_transformer(tolower))

#remove potentially problematic symbols
toSpace <- content_transformer(function(x, pattern) { return (gsub(pattern, " ", x))})
docs <- tm_map(docs, toSpace, "-")
docs <- tm_map(docs, toSpace, "’")
docs <- tm_map(docs, toSpace, "‘")
docs <- tm_map(docs, toSpace, "•")
docs <- tm_map(docs, toSpace, "”")
docs <- tm_map(docs, toSpace, "“")

#remove punctuation
docs <- tm_map(docs, removePunctuation)
#Strip digits
docs <- tm_map(docs, removeNumbers)
#remove stopwords
docs <- tm_map(docs, removeWords, stopwords("english"))
#remove whitespace
docs <- tm_map(docs, stripWhitespace)
#Good practice to check every now and then
writeLines(as.character(docs[[1]]))
#Stem document
docs <- tm_map(docs,stemDocument)

#fix up 1) differences between us and aussie english 2) general errors
docs <- tm_map(docs, content_transformer(gsub),
pattern = "organiz", replacement = "organ")
docs <- tm_map(docs, content_transformer(gsub),
pattern = "organis", replacement = "organ")
docs <- tm_map(docs, content_transformer(gsub),
pattern = "andgovern", replacement = "govern")
docs <- tm_map(docs, content_transformer(gsub),
pattern = "inenterpris", replacement = "enterpris")
docs <- tm_map(docs, content_transformer(gsub),
pattern = "team-", replacement = "team")
#define and eliminate all custom stopwords
myStopwords <- c("can", "say","one","way","use",
"post","look","right","now","think","'ve ",
"'re ","anoth","put","set","new","good")
docs <- tm_map(docs, removeWords, myStopwords)
#inspect a document as a check
writeLines(as.character(docs[[1]]))

#Create document-term matrix
dtm <- DocumentTermMatrix(docs)
#convert rownames to filenames
rownames(dtm) <- filenames
#collapse matrix by summing over columns
freq <- colSums(as.matrix(dtm))
#length should be total number of terms
length(freq)
#create sort order (descending)
ord <- order(freq,decreasing=TRUE)
#List all terms in decreasing order of freq and write to disk (file name is arbitrary)
freq[ord]
write.csv(freq[ord],"word_freq.csv")

Check out the  preprocessing section in either this article or this one for detailed explanations of the code. The document term matrix (DTM) produced by the above code will be the main input into the LDA algorithm of the next section.

Topic modelling using LDA

We are now ready to do some topic modelling. We’ll use the topicmodels package written by Bettina Gruen and Kurt Hornik. Specifically, we’ll use the LDA function with the Gibbs sampling option mentioned earlier, and I’ll say  more about it in a second. The LDA function has a fairly large number of parameters. I’ll describe these briefly below. For more, please check out this vignette by Gruen and Hornik.

For the most part, we’ll use the default parameter values supplied by the LDA function, setting only those parameters that are required by the Gibbs sampling algorithm.

Gibbs sampling works by performing a random walk in a way that reflects the characteristics of the desired distribution. Because the starting point of the walk is chosen at random, it is necessary to discard the first few steps of the walk (as these do not correctly reflect the properties of the distribution). This is referred to as the burn-in period. We set the burn-in parameter to 4000. Following the burn-in period, we perform 2000 iterations, taking every 500th iteration for further use. The reason we do this is to avoid correlations between samples. We use 5 different starting points (nstart=5) – that is, five independent runs. Each starting point requires a seed integer (this also ensures reproducibility), so I have provided 5 random integers in my seed list. Finally, I’ve set best to TRUE (actually the default setting), which instructs the algorithm to return the results of the run with the highest posterior probability.

Some words of caution are in order here. It should be emphasised that the settings above do not guarantee convergence of the algorithm to a globally optimal solution. Indeed, Gibbs sampling will, at best, find only a locally optimal solution, and even this is hard to prove mathematically for specific practical problems such as the one we are dealing with here. The upshot is that it is best to do lots of runs with different parameter settings to check the stability of your results. The bottom line is that our interest is purely practical, so it is good enough if the results make sense. We’ll leave issues of mathematical rigour to those better qualified to deal with them :-)

As mentioned earlier, there is an important parameter that must be specified upfront: k, the number of topics that the algorithm should use to classify documents. There are mathematical approaches to choosing k, but they often do not yield semantically meaningful results (see this post on stackoverflow for an example). From a practical point of view, one can simply run the algorithm for different values of k and make a choice by inspecting the results. This is what we’ll do.

OK, so the first step is to set these parameters in R… and while we’re at it, let’s also load the topicmodels library (Note: you might need to install this package as it is not a part of the base R installation).

#load topic models library
library(topicmodels)


#Set parameters for Gibbs sampling
burnin <- 4000
iter <- 2000
thin <- 500
seed <-list(2003,5,63,100001,765)
nstart <- 5
best <- TRUE


#Number of topics
k <- 5

That done, we can now do the actual work – run the topic modelling algorithm on our corpus. Here is the code:

#Run LDA using Gibbs sampling
ldaOut <- LDA(dtm, k, method="Gibbs", control=list(nstart=nstart, seed=seed, best=best, burnin=burnin, iter=iter, thin=thin))


#write out results
#docs to topics
ldaOut.topics <- as.matrix(topics(ldaOut))
write.csv(ldaOut.topics, file=paste0("LDAGibbs_",k,"_DocsToTopics.csv"))   #output filenames are illustrative


#top 6 terms in each topic
ldaOut.terms <- as.matrix(terms(ldaOut,6))
write.csv(ldaOut.terms, file=paste0("LDAGibbs_",k,"_TopicsToTerms.csv"))


#probabilities associated with each topic assignment
topicProbabilities <- as.data.frame(ldaOut@gamma)
write.csv(topicProbabilities, file=paste0("LDAGibbs_",k,"_TopicProbabilities.csv"))


#Find relative importance of top 2 topics
#(sort each document's topic probabilities and take the ratio of the largest to the second largest)
topic1ToTopic2 <- lapply(1:nrow(dtm), function(x)
  sort(unlist(topicProbabilities[x,]))[k]/sort(unlist(topicProbabilities[x,]))[k-1])


#Find relative importance of second and third most important topics
topic2ToTopic3 <- lapply(1:nrow(dtm), function(x)
  sort(unlist(topicProbabilities[x,]))[k-1]/sort(unlist(topicProbabilities[x,]))[k-2])


#write to file
write.csv(unlist(topic1ToTopic2), file=paste0("LDAGibbs_",k,"_Topic1ToTopic2.csv"))
write.csv(unlist(topic2ToTopic3), file=paste0("LDAGibbs_",k,"_Topic2ToTopic3.csv"))

The LDA algorithm returns an object that contains a lot of information. Of particular interest to us are the document to topic assignments, the top terms in each topic and the probabilities associated with each of those terms. These are printed out in the first three calls to write.csv above. There are a few important points to note here:

  1. Each document is considered to be a mixture of all topics (5 in this case). The assignments in the first file list the top topic – that is, the one with the highest probability (more about this in point 3 below).
  2. Each topic contains all terms (words) in the corpus, albeit with different probabilities. We list only the top  6 terms in the second file.
  3. The last file lists the probabilities with which each topic is assigned to a document. This is therefore a 30 x 5 matrix – 30 docs and 5 topics. As one might expect, the highest probability in each row corresponds to the topic assigned to that document. The “goodness” of the primary assignment (as discussed in point 1) can be assessed by taking the ratio of the highest to second-highest probability, the second-highest to the third-highest probability and so on. This is what I’ve done in the last few lines of the code above.

Take some time to examine the output and confirm for yourself that the primary topic assignments are best when the ratios of probabilities discussed in point 3 are highest. You should also experiment with different values of k to see if you can find better topic distributions. In the interests of space I will restrict myself to k = 5.
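If you’d like to semi-automate the experimentation with k, one rough approach is to fit models for a range of candidate values and compare them, for example via the log-likelihood that topicmodels reports for each fit. This is only a guide – as noted above, numerical criteria don’t always line up with semantically meaningful topics, so you should still inspect the top terms for each candidate. A sketch, reusing the dtm and the Gibbs control parameters defined earlier (the candidate values below are arbitrary):

#fit LDA for a range of candidate k values and compare log-likelihoods
candidate_k <- c(3, 5, 8, 10)
fits <- lapply(candidate_k, function(kk)
  LDA(dtm, kk, method="Gibbs",
      control=list(nstart=nstart, seed=seed, best=best,
                   burnin=burnin, iter=iter, thin=thin)))
data.frame(k=candidate_k, logLik=sapply(fits, function(f) as.numeric(logLik(f))))
#inspect the top terms for each candidate before settling on a value of k
lapply(fits, terms, 6)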

The table below lists the top 6 terms in topics 1 through 5.

Rank  Topic 1   Topic 2    Topic 3   Topic 4   Topic 5
1     work      question   chang     system    project
2     practic   map        organ     data      manag
3     mani      time       consult   model     approach
4     flexibl   ibi        manag     design    organ
5     differ    issu       work      process   decis
6     best      plan       problem   busi      problem

The table below lists the document to (primary) topic assignments:


Document Topic
BeyondEntitiesAndRelationships.txt 4
bigdata.txt 4
ConditionsOverCauses.txt 5
EmergentDesignInEnterpriseIT.txt 4
FromInformationToKnowledge.txt 2
FromTheCoalface.txt 1
HeraclitusAndParmenides.txt 3
IroniesOfEnterpriseIT.txt 3
MakingSenseOfOrganizationalChange.txt 5
MakingSenseOfSensemaking.txt 2
ObjectivityAndTheEthicalDimensionOfDecisionMaking.txt 5
OnTheInherentAmbiguitiesOfManagingProjects.txt 5
OrganisationalSurprise.txt 5
ProfessionalsOrPoliticians.txt 3
RitualsInInformationSystemDesign.txt 4
RoutinesAndReality.txt 4
ScapegoatsAndSystems.txt 5
SherlockHolmesFailedProjects.txt 3
sherlockHolmesMgmtFetis.txt 3
SixHeresiesForBI.txt 4
SixHeresiesForEnterpriseArchitecture.txt 3
TheArchitectAndTheApparition.txt 3
TheCloudAndTheGrass.txt 2
TheConsultantsDilemma.txt 3
TheDangerWithin.txt 5
TheDilemmasOfEnterpriseIT.txt 3
TheEssenceOfEntrepreneurship.txt 1
ThreeTypesOfUncertainty.txt 5
UnderstandingFlexibility.txt 1

From a quick perusal of the two tables it appears that the algorithm has done a pretty decent job. For example, topic 4 is about data and system design, and the documents assigned to it are on topic. However, it is far from perfect – for example, the interview I did with Neil Preston on organisational change (MakingSenseOfOrganizationalChange.txt) has been assigned to topic 5, which seems to be about project management. It ought to be associated with topic 3, which is about change. Let’s see if we can resolve this by looking at the probabilities associated with each topic.

The table below lists the topic probabilities by document:

Document Topic 1 Topic 2 Topic 3 Topic 4 Topic 5
BeyondEn 0.071 0.064 0.024 0.741 0.1
bigdata. 0.182 0.221 0.182 0.26 0.156
Conditio 0.144 0.109 0.048 0.205 0.494
Emergent 0.121 0.226 0.204 0.236 0.213
FromInfo 0.096 0.643 0.026 0.169 0.066
FromTheC 0.636 0.082 0.058 0.086 0.138
Heraclit 0.137 0.091 0.503 0.162 0.107
IroniesO 0.101 0.088 0.388 0.26 0.162
MakingSe 0.13 0.206 0.262 0.089 0.313
MakingSe 0.09 0.715 0.055 0.067 0.074
Objectiv 0.216 0.078 0.086 0.242 0.378
OnTheInh 0.18 0.234 0.102 0.12 0.364
Organisa 0.089 0.095 0.07 0.092 0.655
Professi 0.155 0.064 0.509 0.128 0.144
RitualsI 0.103 0.064 0.044 0.676 0.112
Routines 0.108 0.042 0.033 0.69 0.127
Scapegoa 0.135 0.088 0.043 0.185 0.549
Sherlock 0.093 0.082 0.398 0.195 0.232
sherlock 0.108 0.136 0.453 0.123 0.18
SixHeres 0.159 0.11 0.078 0.516 0.138
SixHeres 0.104 0.111 0.366 0.212 0.207
TheArchi 0.111 0.221 0.522 0.088 0.058
TheCloud 0.185 0.333 0.198 0.136 0.148
TheConsu 0.105 0.184 0.518 0.096 0.096
TheDange 0.114 0.079 0.037 0.079 0.69
TheDilem 0.125 0.128 0.389 0.261 0.098
TheEssen 0.713 0.059 0.031 0.113 0.084
ThreeTyp 0.09 0.076 0.042 0.083 0.708
TOGAFOrN 0.158 0.232 0.352 0.151 0.107
Understa 0.658 0.065 0.072 0.101 0.105

The highest probability in each row corresponds to the primary topic assignment; in a number of rows, though, the second and third highest probabilities are not far behind. It is clear that Neil’s interview (the 9th document in the above table) has three topics with comparable probabilities – topic 5 (project management), topic 3 (change) and topic 2 (issue mapping / ibis), in decreasing order of probability. In general, if a document has multiple topics with comparable probabilities, it simply means that the document speaks to all those topics in proportions indicated by the probabilities. A reading of Neil’s interview will convince you that our conversation did indeed range over all those topics.
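If you’d rather pull out a single document’s topic mix directly in R than scan the table, a small sketch along the following lines works (the row and column labels are assigned here for convenience; the rows of topicProbabilities are in the same order as filenames because the DTM rows were named from them):

#label the topic probability table and look at one document's topic mix
rownames(topicProbabilities) <- filenames
colnames(topicProbabilities) <- paste0("Topic", 1:k)
probs <- unlist(topicProbabilities["MakingSenseOfOrganizationalChange.txt", ])
round(sort(probs, decreasing=TRUE), 3)   #topics in decreasing order of probability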

That said, the algorithm is far from perfect. You might have already noticed a few poor assignments. Here is one – my post on Sherlock Holmes and the case of the failed project has been assigned to topic 3; I reckon it belongs in topic 5. There are a number of others, but I won’t belabor the point, except to reiterate that this is precisely why you definitely want to experiment with different settings of the iteration parameters (to check for stability) and, more importantly, try a range of different values of k to find the optimal number of topics.
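Along those lines, a crude stability check is to re-run the model with a different set of seeds (or different burn-in and iteration settings) and cross-tabulate the two sets of document-to-topic assignments. The seeds below are arbitrary values of my own choosing; note also that topic numbers are arbitrary labels, so it is the pattern of the cross-tabulation that matters, not the labels themselves.

#re-run LDA with different seeds and compare document-to-topic assignments
seed2 <- list(101, 2019, 7, 42, 88)   #arbitrary alternative seeds
ldaOut2 <- LDA(dtm, k, method="Gibbs",
               control=list(nstart=nstart, seed=seed2, best=best,
                            burnin=burnin, iter=iter, thin=thin))
table(run1=topics(ldaOut), run2=topics(ldaOut2))
#a stable model concentrates most documents in one cell per row and column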

To conclude

Topic modelling provides a quick and convenient way to perform unsupervised classification of a corpus of documents.  As always, though, one needs to examine the results carefully to check that they make sense.

I’d like to end with a general observation. Classifying documents is an age-old concern that cuts across disciplines. So it is no surprise that topic modelling has got a look-in from diverse communities. Indeed, when I was reading up and learning about LDA, I found that some of the best introductory articles in the area have been written by academics working in English departments! This is one of the things I love about working in text analysis: there is a wealth of material on the web written from diverse perspectives. The term cross-disciplinary often tends to be a platitude, but in this case it is simply a statement of fact.

I hope that I have been able to convince you to explore this rapidly evolving field. Exciting times ahead, come join the fun.

Written by K

September 29, 2015 at 7:18 pm

Setting up an internal data analytics practice – some thoughts from a wayfarer

leave a comment »


This year has been hugely exciting so far: I’ve been exploring and playing with various techniques that fall under the general categories of data mining and text analytics. What’s been particularly satisfying is that I’ve been fortunate to find meaningful applications for these techniques within my organization.

Although I have a fair way to travel yet, I’ve learnt that common wisdom about data analytics – especially the stuff that comes from software vendors and high-end consultancies – can be misleading, even plain wrong. Hence this post in which I dispatch some myths and share a few pointers on establishing data analytics capabilities within an organization.

Busting a few myths

Let’s get right to it by taking a critical look at a few myths about setting up an internal data analytics practice.

  1. Requires high-end technology and a big budget: this myth is easy to bust because I can speak from recent experience. No, you do not need cutting-edge technology or an oversized budget. You can get started with an outlay of $0 – yes, that’s right, for free! All you need is the open-source statistical package R (check out this section of my article on text mining for more on installing and using R) and the willingness to roll up your sleeves and learn (more about this later). No worries if you prefer to stick with familiar tools – you can even begin with Excel.
  2. Needs specialist skills: another myth floating around is that you need PhD-level knowledge in statistics or applied mathematics to do practical work in analytics. Sorry, but that’s plain wrong. You do need a PhD to do research in analytics and develop your own algorithms, but not if you want to apply algorithms written by others. Yes, you will need to develop an understanding of the algorithms you plan to use, a feel for how they work and the ability to tell whether the results make sense. There are many good resources that can help you develop these skills – see, for example, the outstanding books by James, Witten, Hastie and Tibshirani and Kuhn and Johnson.
  3. Must have sponsorship from the top: this one is possibly a little more controversial than the previous two. It could be argued that it is impossible to gain buy-in for a new capability without sponsorship from top management. However, in my experience, it is OK to start small by finding potential internal “customers” for analytics services through informal conversations with folks in different functions. I started by having informal conversations with managers in two different areas: IT infrastructure and sales / marketing. I picked these two areas because I knew that they had several gigabytes of under-exploited data – a good bit of it unstructured – and a lot of open questions that could potentially be answered (at least partially) via methods of data and text analytics. It turned out I was right. I’m currently doing a number of proofs of concept and small projects in both these areas. So you don’t need sponsorship from the top as long as you can get buy-in from people who have problems they believe you can solve. If you deliver, they may even advocate your cause to their managers.

A caveat is in order at this point: my organization is not the same as yours, so you may well need to follow a different path from mine. Nevertheless, I do believe that it is always possible to find a way to start without needing permission or incurring official wrath. In that spirit, I now offer some suggestions to help kick-start your efforts.

Getting started

As the truism goes, the hardest part of any new effort is getting started.  The first thing to keep in mind is to start small. This is true even if you have official sponsorship and a king-sized budget. It is very tempting to spend a lot of time garnering management support for investing in high-end technology.  Don’t do it!  Do the following instead:

  1. Develop an understanding of the problems faced by people you plan to approach: The best way to do this is to talk to analysts or frontline managers. In my case, I was fortunate to have access to some very savvy analysts in IT service management and marketing who gave me a slew of interesting ideas to pursue. A word of advice: it is best not to talk to senior managers until you have a few concrete results that you can quantify in terms of dollar values.
  2. Invest time and effort in understanding analytics algorithms and gaining practical experience with them: As mentioned earlier, I started with R – and I believe it is the best choice. Not just because it is free but also because there are a host of packages available to tackle just about any analytics problem you might encounter. There are some excellent free resources available to get you started with R (check out this listing on the r-statistics blog, for example). It is important that you start cutting code as you learn. This will help you build a repertoire of techniques and approaches as you progress. If you get stuck when coding, chances are you will find a solution on the wonderful stackoverflow site.
  3. Evangelise, evangelise, evangelise: You are, in effect, trying to sell an idea to people within your organization. You therefore have to identify people who might be able to help you and then convince them that your idea has merit. The best way to do the latter is to have concrete examples of problems that you have tackled. This is a chicken-and-egg situation in that you can’t have any examples until you gain support.  I got support by approaching people I know well. I found that most – no, all – of them were happy to provide me with interesting ideas and access to their data.
  4. Begin with small (but real) problems: It is important to start with the “low-hanging fruit” – the problems that would take the least effort to solve. However, it is equally important to address real problems, i.e. those that matter to someone.
  5. Leverage your organisation’s existing data infrastructure: From what I’ve written thus far, I may have given you the impression that the tools of data analytics stand separate from your existing data infrastructure. Nothing could be further from the truth. In reality, I often do the initial work (basic preprocessing and exploratory analysis) using my organisation’s relational database infrastructure. Relational databases have sophisticated analytical extensions to SQL as well as efficient bulk data cleansing and transport facilities. Using these makes good sense, particularly if your R installation is on a desktop or laptop computer as it is in my case (see the short sketch after this list for one way to pull database data into R). Moreover, many enterprise database vendors now offer add-on options that integrate R with their products. This gives you the best of both worlds – relational and analytical capabilities on an enterprise-class platform.
  6. Build relationships with the data management team: Remember the work you are doing falls under the ambit of the group that is officially responsible for managing data in your organization. It is therefore important that you keep them informed of what you’re doing.  Sooner or later your paths will cross, and you want to be sure that there are no nasty surprises (for either side!) at that point. Moreover, if you build connections with them early, you may even find that the data management team supports your efforts.
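As promised in point 5, here is a minimal sketch of pulling an aggregated slice of data from a relational database into R. It uses DBI with SQLite purely for illustration – the database file, table and column names are invented, so substitute the driver and query for your own environment:

#pull an aggregated slice of data from a relational database into R
library(DBI)
con <- dbConnect(RSQLite::SQLite(), "service_desk.sqlite")   #hypothetical database file

#let the database do the heavy lifting: aggregate before pulling data into R
tickets <- dbGetQuery(con, "
  SELECT category, strftime('%Y-%m', opened_date) AS month, COUNT(*) AS n
  FROM incidents
  GROUP BY category, month")   #table and columns are invented for this example

dbDisconnect(con)
head(tickets)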

Having waxed verbose, I should mention that my effort is work in progress and I do not know where it will lead. Nevertheless, I offer these suggestions as a wayfarer who is considerably further down the road from where he started.

Parting thoughts

You may have noticed that I’ve refrained from using the overused and over-hyped term “Big Data” in this piece. This is deliberate. Indeed, the techniques I have been using have nothing to do with the size of the datasets. To be honest, I’ve applied them to datasets ranging from a few thousand to a few hundred thousand records, both of which qualify as Very Small Data in today’s world.

Your vendor will be only too happy to sell you Big Data infrastructure that will set you back a good many dollars. However, the chances are good that you do not need it right now.  You’ll be much better off going back to them after you hit the limits of your current data processing infrastructure. Moreover, you’ll also be better informed about your needs then.

You may also be wondering why I haven’t said much about the composition of the analytics team (barring the point about not needing PhD statisticians) and how it should be organized.  The reason I haven’t done so is that I believe the right composition and organizational structure will emerge from the initial projects done and feedback received from internal customers. The resulting structure will be better suited to the organization than one that is imposed upfront.  Long time readers of this blog might recognize this as a tenet of emergent design.

Finally, I should reiterate that my efforts are still very much in progress and I know not where they will lead. However, even if they go nowhere, I would have learnt something about my organization and picked up a useful, practical skill. And that is good enough for me.

Written by K

September 3, 2015 at 8:28 pm

From inactivism to interactivism – managerial attitudes to planning

with 2 comments


Managers display a range of attitudes towards planning for the future.  In an essay entitled Systems, Messes and Interactive Planning, the management guru/philosopher Russell Ackoff classified attitudes to organizational planning into four distinct types which I describe in detail below. I suspect you may recognise examples of each of these in your organisation…indeed, you might even see shades of yourself :-)


Inactivism

This attitude, as its name suggests, is characterized by a lack of meaningful action. Inactivism is often displayed by managers in organisations that favour the status quo. These organisations are happy with the way things are, and therefore see no need to change. However, lack of meaningful action does not mean lack of action. On the contrary, it often takes a great deal of effort to fend off change and keep things the way they are. As Ackoff states:

Inactive organizations require a great deal of activity to keep changes from being made. They accomplish nothing in a variety of ways. First, they require that all important decisions be made “at the top.” The route to the top is deliberately designed like an obstacle course. This keeps most recommendations for change from ever getting there. Those that do are likely to have been delayed enough to make them irrelevant when they reach their destination. Those proposals that reach the top are likely to be farther delayed, often by being sent back down or out for modification or evaluation. The organization thus behaves like a sponge and is about as active…

The inactive manager spends a lot of time and effort in ensuring that things remain the way they are. Hence they act only when a situation forces them to. Ackoff puts it in his inimitable way by stating that, “Inactivist managers tend to want what they get rather than get what they want.”


Reactivism

Reactivist managers are a step worse than inactivists because they believe that disaster is already upon them. This is the type of manager who hankers after the “golden days of yore when things were much better than they are today.” As a result of their deep unease about where they are now, they may try to undo the status quo. As Ackoff points out, unlike inactivists, reactivists do not ride the tide but try to swim against it.

Typically, reactivist managers are wary of technology and new concepts. Moreover, they tend to give more weight to seniority and experience than to proven competence. They also tend to be fans of simplistic solutions to complex problems…like “solving” the problem of a behind-schedule software project by throwing more people at it.


Preactivism

Preactivists are the opposite of reactivists in that they believe the future is going to be better than the past. Consequently, their efforts are geared towards understanding what the future will look like and how they can prepare for it. Typically, preactive managers are concerned with facts, figures and forecasts; they are firm believers in the scientific planning methods that they have learnt in management schools. As such, one might say that this is the most common species of manager in present-day organisations. Those who are not natural preactivists will fly the preactivist flag when they’re asked for their opinions by their managers, because it is the expected answer.

A key characteristic of preactivist managers is that they tend to revel in creating plans rather than implementing them. As Ackoff puts it, “Preactivists see planning as a sequence of discrete steps which terminate with acceptance or rejection of their plans. What happens to their plans is the responsibility of others.”


Interactivism

Interactivist planners are not satisfied with the present but, unlike reactivists or preactivists, they do not hanker for the past, nor do they believe the future is automatically going to be better. They do want to make things better than they were or currently are, but they are continually adjusting their plans for the future by learning from and responding to events. In short, they believe they can shape the future by their actions.

Experimentation is the hallmark of interactivists.  They are willing to try different approaches and learn from them. Although they believe in learning by experience, they do not want to wait for experiences to happen; they would rather induce them by (often small-scale) experimentation.

Ackoff labels interactivists as idealisers – people who pursue ideals they know cannot be achieved, but can be approximated or even reformulated in the light of new knowledge. As he puts it:

They treat ideals as relative absolutes: ultimate objectives whose formulation depends on our current knowledge and understanding of ourselves and our environment. Therefore, they require continuous reformulation in light of what we learn from approaching them.

To use a now fashionable term, interactivists are intrapreneurs.


Although Ackoff shows a clear bias towards  interactivists in his article, he does mention that specific situations may call for other types of planners. As he puts it:

Despite my obvious bias in my characterization of these four postures, there are circumstances in which each is most appropriate. Put simply, if the internal and external dynamics of a system (the tide) are taking one where one wants to go and are doing so quickly enough, inactivism is appropriate. If the direction of change is right but the movement is too slow, preactivism is appropriate. If the change is taking one where one does not want to go and one prefers to stay where one is or was, reactivism is appropriate. However, if one is not willing to settle for the past, the present or the future that appears likely now, interactivism is appropriate.

The key point he makes is that inactivists and preactivists treat planning as a ritual because they see the future as something they cannot change. They can only plan for it (and hope for the best). Interactivists, on the other hand, look for opportunities to influence events and thus potentially change the future. Although both preactivists and interactivists are forward-looking, interactivists tend to be long-term thinkers as compared to preactivists who are more concerned about the short to medium term future.


Ackoff’s classification of planners in organisations is interesting because it highlights the kind of future-focused attitude that managers ought to take.  The sad fact, though, is that a significant number of managers are myopic preactivists, focused on this year’s performance targets rather than what their organisations might look like five or even ten years down the line. This is not the fault of individuals, though. The blame for the undue prevalence of myopic preactivism can be laid squarely on the deep-seated management dogma that rewards short-termism.

Written by K

August 20, 2015 at 9:30 pm

