A gentle introduction to topic modeling using R
Introduction
The standard way to search for documents on the internet is via keywords or keyphrases. This is pretty much what Google and other search engines do routinely…and they do it well. However, as useful as this is, it has its limitations. Consider, for example, a situation in which you are confronted with a large collection of documents but have no idea what they are about. One of the first things you might want to do is to classify these documents into topics or themes. Among other things, this would help you figure out whether there’s anything of interest in the collection while also directing you to the relevant subset(s) of the corpus. For small collections, one could do this by simply going through each document, but this is clearly infeasible for corpora containing thousands of documents.
Topic modeling – the theme of this post – deals with the problem of automatically classifying sets of documents into themes.
The article is organised as follows: I first provide some background on topic modelling. The algorithm that I use, Latent Dirichlet Allocation (LDA), involves some pretty heavy maths which I’ll avoid altogether. However, I will provide an intuitive explanation of how LDA works before moving on to a practical example which uses the topicmodels library in R. As in my previous articles in this series (see this post and this one), I will discuss the steps in detail along with explanations and provide accessible references for concepts that cannot be covered in the space of a blog post.
(Aside: Beware, LDA is also an abbreviation for Linear Discriminant Analysis, a classification technique that I hope to cover later in my ongoing series on text and data analytics.)
Latent Dirichlet Allocation – a math-free introduction
In essence, LDA is a technique that facilitates the automatic discovery of themes in a collection of documents.
The basic assumption behind LDA is that each document in a collection consists of a mixture of collection-wide topics. However, in reality we observe only documents and words, not topics – the latter are part of the hidden (or latent) structure of documents. The aim is to infer the latent topic structure given the words and documents. LDA does this by attempting to recreate the documents in the corpus, iteratively adjusting the relative importance of topics in documents and of words in topics.
Here’s a brief explanation of how the algorithm works, quoted directly from this answer by Edwin Chen on Quora (a small R sketch of these steps follows the quote):
- Go through each document, and randomly assign each word in the document to one of the K topics. (Note: One of the shortcomings of LDA is that one has to specify the number of topics, denoted by K, upfront. More about this later.)
- This assignment already gives you both topic representations of all the documents and word distributions of all the topics (albeit not very good ones).
- So to improve on them, for each document d…
- ….Go through each word w in d…
- ……..And for each topic t, compute two things: 1) p(topic t | document d) = the proportion of words in document d that are currently assigned to topic t, and 2) p(word w | topic t) = the proportion of assignments to topic t over all documents that come from this word w. Reassign w a new topic, where you choose topic t with probability p(topic t | document d) * p(word w | topic t) (according to our generative model, this is essentially the probability that topic t generated word w, so it makes sense that we resample the current word’s topic with this probability). (Note: p(a|b) is the conditional probability of a given that b has already occurred – see this post for more on conditional probabilities)
- ……..In other words, in this step, we’re assuming that all topic assignments except for the current word in question are correct, and then updating the assignment of the current word using our model of how documents are generated.
- After repeating the previous step a large number of times, you’ll eventually reach a roughly steady state where your assignments are pretty good. So use these assignments to estimate the topic mixtures of each document (by counting the proportion of words assigned to each topic within that document) and the words associated to each topic (by counting the proportion of words assigned to each topic overall).
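If the quoted steps still feel abstract, the toy function below may help make them concrete. It is a bare-bones sketch of the resampling loop in R, written for a corpus represented as a list of integer word-id vectors. The function name, the smoothing parameters alpha and beta, and the data representation are all illustrative choices of mine, and this is emphatically not the implementation used by the topicmodels package we’ll meet later.
#A toy collapsed Gibbs sampler for LDA, for intuition only.
#'docs' is a list of integer vectors (word ids in 1..vocab_size);
#alpha and beta are smoothing parameters (illustrative values).
gibbs_lda_sketch <- function(docs, vocab_size, K, n_iter = 100,
                             alpha = 0.1, beta = 0.01) {
  #step 1: randomly assign each word occurrence to one of the K topics
  z <- lapply(docs, function(d) sample(K, length(d), replace = TRUE))
  #count matrices implied by the random assignment
  dt <- matrix(0, length(docs), K)   #words in document d assigned to topic t
  tw <- matrix(0, K, vocab_size)     #assignments of word w to topic t
  for (d in seq_along(docs)) {
    for (i in seq_along(docs[[d]])) {
      dt[d, z[[d]][i]] <- dt[d, z[[d]][i]] + 1
      tw[z[[d]][i], docs[[d]][i]] <- tw[z[[d]][i], docs[[d]][i]] + 1
    }
  }
  #step 2: repeatedly resample each word's topic
  for (iter in seq_len(n_iter)) {
    for (d in seq_along(docs)) {
      for (i in seq_along(docs[[d]])) {
        w <- docs[[d]][i]
        t <- z[[d]][i]
        #remove the current assignment from the counts
        dt[d, t] <- dt[d, t] - 1
        tw[t, w] <- tw[t, w] - 1
        #p(topic t | document d) * p(word w | topic t), smoothed by alpha and beta
        p <- (dt[d, ] + alpha) * (tw[, w] + beta) / (rowSums(tw) + vocab_size * beta)
        t_new <- sample(K, 1, prob = p)
        #record the new assignment and update the counts
        z[[d]][i] <- t_new
        dt[d, t_new] <- dt[d, t_new] + 1
        tw[t_new, w] <- tw[t_new, w] + 1
      }
    }
  }
  #dt and tw can be normalised to give topic mixtures and word distributions
  list(doc_topic = dt, topic_word = tw)
}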
For another simple explanation of how LDA works, check out this article by Matthew Jockers. For a more technical exposition, take a look at this video by David Blei, one of the inventors of the algorithm.
The iterative process described in the last point above is implemented using a technique called Gibbs sampling. I’ll say a bit more about Gibbs sampling later, but you may want to have a look at this paper by Philip Resnik and Eric Hardisty that explains the nitty-gritty of the algorithm (Warning: it involves a fair bit of math, but has some good intuitive explanations as well).
As a general point, I should also emphasise that you do not need to understand the ins and outs of an algorithm to use it, but it does help to have a feel, at least at a high level, for what the algorithm is doing. Indeed, most people working in analytics do not know the details of the algorithms they use, but that doesn’t stop them from using algorithms intelligently. Purists may disagree. I think they are wrong.
Finally – because you’re no doubt wondering 🙂 – the term “Dirichlet” in LDA refers to the fact that topics and words are assumed to follow Dirichlet distributions. There is no “good” reason for this apart from convenience – Dirichlet distributions provide good approximations to word distributions in documents and, perhaps more important, are computationally convenient.
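If you’d like to get a feel for what a Dirichlet distribution looks like, the snippet below draws a few samples and prints them; each draw is a probability vector that you can think of as the topic proportions of one document. It assumes the gtools package, which is not used elsewhere in this post (MCMCpack provides an equivalent rdirichlet function).
#each row of the output is one draw from a Dirichlet distribution: a
#probability vector over 5 topics; small alpha values give "peaky" mixtures
#dominated by one or two topics (gtools is an assumption on my part)
library(gtools)
round(rdirichlet(5, alpha = rep(0.1, 5)), 3)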
Preprocessing
As in my previous articles on text mining, I will use a collection of 30 posts from this blog as an example corpus. The corpus can be downloaded here. I will assume that you have R and RStudio installed. Follow this link if you need help with that.
The preprocessing steps are much the same as those described in my previous articles. Nevertheless, I’ll risk boring you with a detailed listing so that you can reproduce my results yourself (you will need to adjust the corpus path in the first few lines to wherever you have saved the downloaded files):
#load text mining library
library(tm)

#load the corpus from a folder of .txt files
#(adjust the path to wherever you saved the downloaded corpus)
docs <- Corpus(DirSource("docs"))

#convert to lower case
docs <- tm_map(docs, content_transformer(tolower))

#replace potentially problematic characters (hyphens, curly quotes, bullets) with spaces
toSpace <- content_transformer(function(x, pattern) gsub(pattern, " ", x))
docs <- tm_map(docs, toSpace, "-")
docs <- tm_map(docs, toSpace, "’")
docs <- tm_map(docs, toSpace, "‘")
docs <- tm_map(docs, toSpace, "•")
docs <- tm_map(docs, toSpace, "”")
docs <- tm_map(docs, toSpace, "“")

#remove punctuation, digits, standard English stopwords and extra whitespace, then stem
docs <- tm_map(docs, removePunctuation)
docs <- tm_map(docs, removeNumbers)
docs <- tm_map(docs, removeWords, stopwords("english"))
docs <- tm_map(docs, stripWhitespace)
docs <- tm_map(docs, stemDocument)

#fix up spelling variants and stemming artefacts
docs <- tm_map(docs, content_transformer(gsub),
               pattern = "organiz", replacement = "organ")
docs <- tm_map(docs, content_transformer(gsub),
               pattern = "organis", replacement = "organ")
docs <- tm_map(docs, content_transformer(gsub),
               pattern = "andgovern", replacement = "govern")
docs <- tm_map(docs, content_transformer(gsub),
               pattern = "inenterpris", replacement = "enterpris")
docs <- tm_map(docs, content_transformer(gsub),
               pattern = "team-", replacement = "team")

#define and remove custom stopwords
myStopwords <- c("also","howev","tell","will",
                 "much","need","take","tend","even",
                 "like","particular","rather","said",
                 "get","well","make","ask","come","end",
                 "first","two","help","often","may",
                 "might","see","someth","thing","point",
                 "post","look","right","now","think","’ve ",
                 "’re ","anoth","put","set","new","good",
                 "want","sure","kind","larg","yes,","day","etc",
                 "quit","sinc","attempt","lack","seen","awar",
                 "littl","ever","moreov","though","found","abl",
                 "enough","far","earli","away","achiev","draw",
                 "last","never","brief","bit","entir",
                 "great","lot")
docs <- tm_map(docs, removeWords, myStopwords)

#create the document-term matrix and write term frequencies (in decreasing order) to file
dtm <- DocumentTermMatrix(docs)
freq <- colSums(as.matrix(dtm))
ord <- order(freq, decreasing = TRUE)
write.csv(freq[ord], "word_freq.csv")
Check out the preprocessing section in either this article or this one for detailed explanations of the code. The document term matrix (DTM) produced by the above code will be the main input into the LDA algorithm of the next section.
Topic modelling using LDA
We are now ready to do some topic modelling. We’ll use the topicmodels package written by Bettina Gruen and Kurt Hornik. Specifically, we’ll use the LDA function with the Gibbs sampling option mentioned earlier; I’ll say more about Gibbs sampling in a moment. The LDA function has a fairly large number of parameters, which I describe briefly below. For more detail, please check out this vignette by Gruen and Hornik.
For the most part, we’ll use the default parameter values supplied by the LDA function, setting only those parameters that are required by the Gibbs sampling algorithm.
Gibbs sampling works by performing a random walk in a way that reflects the characteristics of the desired distribution. Because the starting point of the walk is chosen at random, it is necessary to discard the first few steps of the walk (as these do not correctly reflect the properties of the distribution). This is referred to as the burn-in period. We set the burn-in parameter to 4000. Following the burn-in period, we perform 2000 iterations, keeping every 500th iteration for further use. The reason for this thinning is to avoid correlations between samples. We use 5 different starting points (nstart=5) – that is, five independent runs. Each starting point requires a seed integer (this also ensures reproducibility), so I have provided 5 random integers in my seed list. Finally, I’ve set best to TRUE (actually the default setting), which instructs the algorithm to return the results of the run with the highest posterior probability.
Some words of caution are in order here. It should be emphasised that the settings above do not guarantee the convergence of the algorithm to a globally optimal solution. Indeed, Gibbs sampling will, at best, find only a locally optimal solution, and even this is hard to prove mathematically in specific practical problems such as the one we are dealing with here. The upshot of this is that it is best to do lots of runs with different settings of parameters to check the stability of your results. The bottom line is that our interest is purely practical so it is good enough if the results make sense. We’ll leave issues of mathematical rigour to those better qualified to deal with them 🙂
As mentioned earlier, there is an important parameter that must be specified upfront: k, the number of topics that the algorithm should use to classify documents. There are mathematical approaches to choosing k, but they often do not yield semantically meaningful results (see this post on stackoverflow for an example). From a practical point of view, one can simply run the algorithm for different values of k and make a choice by inspecting the results. This is what we’ll do.
OK, so the first step is to set these parameters in R… and while we’re at it, let’s also load the topicmodels library (Note: you might need to install this package as it is not a part of the base R installation).
#load topic models library
library(topicmodels)
#parameters for Gibbs sampling (as discussed above)
burnin <- 4000
iter <- 2000
thin <- 500
seed <- list(2003,5,63,100001,765)
nstart <- 5
best <- TRUE
#number of topics
k <- 5
That done, we can now do the actual work – run the topic modelling algorithm on our corpus. Here is the code:
#run LDA using Gibbs sampling
ldaOut <- LDA(dtm, k, method="Gibbs",
              control=list(nstart=nstart, seed=seed, best=best,
                           burnin=burnin, iter=iter, thin=thin))
#document to topic assignments
ldaOut.topics <- as.matrix(topics(ldaOut))
write.csv(ldaOut.topics,file=paste("LDAGibbs",k,"DocsToTopics.csv"))
#top 6 terms in each topic
ldaOut.terms <- as.matrix(terms(ldaOut,6))
write.csv(ldaOut.terms,file=paste("LDAGibbs",k,"TopicsToTerms.csv"))
#probabilities associated with each topic assignment
topicProbabilities <- as.data.frame(ldaOut@gamma)
write.csv(topicProbabilities,file=paste("LDAGibbs",k,"TopicProbabilities.csv"))
#ratio of the highest to the second-highest topic probability for each document
topic1ToTopic2 <- lapply(1:nrow(dtm), function(x)
  sort(topicProbabilities[x,])[k]/sort(topicProbabilities[x,])[k-1])
#ratio of the second-highest to the third-highest topic probability
topic2ToTopic3 <- lapply(1:nrow(dtm), function(x)
  sort(topicProbabilities[x,])[k-1]/sort(topicProbabilities[x,])[k-2])
#write the ratios to file
write.csv(topic1ToTopic2,file=paste("LDAGibbs",k,"Topic1ToTopic2.csv"))
write.csv(topic2ToTopic3,file=paste("LDAGibbs",k,"Topic2ToTopic3.csv"))
The LDA algorithm returns an object that contains a lot of information. Of particular interest to us are the document to topic assignments, the top terms in each topic and the probabilities associated with each of those terms. These are printed out in the first three calls to write.csv above. There are a few important points to note here:
- Each document is considered to be a mixture of all topics (5 in this case). The assignments in the first file list the top topic – that is, the one with the highest probability (more about this in point 3 below).
- Each topic contains all terms (words) in the corpus, albeit with different probabilities. We list only the top 6 terms in the second file.
- The last file lists the probabilities with which each topic is assigned to a document. This is therefore a 30 x 5 matrix – 30 docs and 5 topics. As one might expect, the highest probability in each row corresponds to the topic assigned to that document. The “goodness” of the primary assignment (as discussed in point 1) can be assessed by taking the ratio of the highest to second-highest probability and the second-highest to the third-highest probability and so on. This is what I’ve done in the last nine lines of the code above.
Take some time to examine the output and confirm for yourself that the primary topic assignments are best when the ratios of probabilities discussed in point 3 are highest. You should also experiment with different values of k to see if you can find better topic distributions (see the sketch below for one way to do this). In the interests of space I will restrict myself to k = 5.
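If you’d like to experiment, a loop along the following lines will fit the model for a few candidate values of k and write out the top terms for each, so that you can compare them side by side. This is just a sketch that reuses the parameters defined earlier; the candidate values are arbitrary.
#fit the model for a few candidate numbers of topics and write out top terms
candidate_k <- c(3, 5, 8, 10)
for (kk in candidate_k) {
  fit <- LDA(dtm, kk, method="Gibbs",
             control=list(nstart=nstart, seed=seed, best=best,
                          burnin=burnin, iter=iter, thin=thin))
  write.csv(as.matrix(terms(fit, 6)),
            file=paste("LDAGibbs", kk, "TopicsToTerms.csv"))
}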
The table below lists the top 6 terms in topics 1 through 5.
Rank | Topic 1 | Topic 2 | Topic 3 | Topic 4 | Topic 5 |
1 | work | question | chang | system | project |
2 | practic | map | organ | data | manag |
3 | mani | time | consult | model | approach |
4 | flexibl | ibi | manag | design | organ |
5 | differ | issu | work | process | decis |
6 | best | plan | problem | busi | problem |
The table below lists the document to (primary) topic assignments:
Document | Topic |
BeyondEntitiesAndRelationships.txt | 4 |
bigdata.txt | 4 |
ConditionsOverCauses.txt | 5 |
EmergentDesignInEnterpriseIT.txt | 4 |
FromInformationToKnowledge.txt | 2 |
FromTheCoalface.txt | 1 |
HeraclitusAndParmenides.txt | 3 |
IroniesOfEnterpriseIT.txt | 3 |
MakingSenseOfOrganizationalChange.txt | 5 |
MakingSenseOfSensemaking.txt | 2 |
ObjectivityAndTheEthicalDimensionOfDecisionMaking.txt | 5 |
OnTheInherentAmbiguitiesOfManagingProjects.txt | 5 |
OrganisationalSurprise.txt | 5 |
ProfessionalsOrPoliticians.txt | 3 |
RitualsInInformationSystemDesign.txt | 4 |
RoutinesAndReality.txt | 4 |
ScapegoatsAndSystems.txt | 5 |
SherlockHolmesFailedProjects.txt | 3 |
sherlockHolmesMgmtFetis.txt | 3 |
SixHeresiesForBI.txt | 4 |
SixHeresiesForEnterpriseArchitecture.txt | 3 |
TheArchitectAndTheApparition.txt | 3 |
TheCloudAndTheGrass.txt | 2 |
TheConsultantsDilemma.txt | 3 |
TheDangerWithin.txt | 5 |
TheDilemmasOfEnterpriseIT.txt | 3 |
TheEssenceOfEntrepreneurship.txt | 1 |
ThreeTypesOfUncertainty.txt | 5 |
TOGAFOrNotTOGAF.txt | 3 |
UnderstandingFlexibility.txt | 1 |
From a quick perusal of the two tables it appears that the algorithm has done a pretty decent job. For example, topic 4 is about data and system design, and the documents assigned to it are on topic. However, it is far from perfect – for example, the interview I did with Neil Preston on organisational change (MakingSenseOfOrganizationalChange.txt) has been assigned to topic 5, which seems to be about project management. It ought to be associated with topic 3, which is about change. Let’s see if we can resolve this by looking at the probabilities associated with topics.
The table below lists the topic probabilities by document:
Document | Topic 1 | Topic 2 | Topic 3 | Topic 4 | Topic 5 |
BeyondEntitiesAndRelationships.txt | 0.071 | 0.064 | 0.024 | 0.741 | 0.1 |
bigdata.txt | 0.182 | 0.221 | 0.182 | 0.26 | 0.156 |
ConditionsOverCauses.txt | 0.144 | 0.109 | 0.048 | 0.205 | 0.494 |
EmergentDesignInEnterpriseIT.txt | 0.121 | 0.226 | 0.204 | 0.236 | 0.213 |
FromInformationToKnowledge.txt | 0.096 | 0.643 | 0.026 | 0.169 | 0.066 |
FromTheCoalface.txt | 0.636 | 0.082 | 0.058 | 0.086 | 0.138 |
HeraclitusAndParmenides.txt | 0.137 | 0.091 | 0.503 | 0.162 | 0.107 |
IroniesOfEnterpriseIT.txt | 0.101 | 0.088 | 0.388 | 0.26 | 0.162 |
MakingSenseOfOrganizationalChange.txt | 0.13 | 0.206 | 0.262 | 0.089 | 0.313 |
MakingSenseOfSensemaking.txt | 0.09 | 0.715 | 0.055 | 0.067 | 0.074 |
ObjectivityAndTheEthicalDimensionOfDecisionMaking.txt | 0.216 | 0.078 | 0.086 | 0.242 | 0.378 |
OnTheInherentAmbiguitiesOfManagingProjects.txt | 0.18 | 0.234 | 0.102 | 0.12 | 0.364 |
OrganisationalSurprise.txt | 0.089 | 0.095 | 0.07 | 0.092 | 0.655 |
ProfessionalsOrPoliticians.txt | 0.155 | 0.064 | 0.509 | 0.128 | 0.144 |
RitualsInInformationSystemDesign.txt | 0.103 | 0.064 | 0.044 | 0.676 | 0.112 |
RoutinesAndReality.txt | 0.108 | 0.042 | 0.033 | 0.69 | 0.127 |
ScapegoatsAndSystems.txt | 0.135 | 0.088 | 0.043 | 0.185 | 0.549 |
SherlockHolmesFailedProjects.txt | 0.093 | 0.082 | 0.398 | 0.195 | 0.232 |
sherlockHolmesMgmtFetis.txt | 0.108 | 0.136 | 0.453 | 0.123 | 0.18 |
SixHeresiesForBI.txt | 0.159 | 0.11 | 0.078 | 0.516 | 0.138 |
SixHeresiesForEnterpriseArchitecture.txt | 0.104 | 0.111 | 0.366 | 0.212 | 0.207 |
TheArchitectAndTheApparition.txt | 0.111 | 0.221 | 0.522 | 0.088 | 0.058 |
TheCloudAndTheGrass.txt | 0.185 | 0.333 | 0.198 | 0.136 | 0.148 |
TheConsultantsDilemma.txt | 0.105 | 0.184 | 0.518 | 0.096 | 0.096 |
TheDangerWithin.txt | 0.114 | 0.079 | 0.037 | 0.079 | 0.69 |
TheDilemmasOfEnterpriseIT.txt | 0.125 | 0.128 | 0.389 | 0.261 | 0.098 |
TheEssenceOfEntrepreneurship.txt | 0.713 | 0.059 | 0.031 | 0.113 | 0.084 |
ThreeTypesOfUncertainty.txt | 0.09 | 0.076 | 0.042 | 0.083 | 0.708 |
TOGAFOrNotTOGAF.txt | 0.158 | 0.232 | 0.352 | 0.151 | 0.107 |
UnderstandingFlexibility.txt | 0.658 | 0.065 | 0.072 | 0.101 | 0.105 |
In each row, the highest probability marks the primary topic assignment; in several rows, the second and third highest probabilities come close to the maximum. It is clear that Neil’s interview (the 9th document in the table) has three topics with comparable probabilities – topic 5 (project management), topic 3 (change) and topic 2 (issue mapping / ibis), in decreasing order of probability. In general, if a document has multiple topics with comparable probabilities, it simply means that the document speaks to all those topics in the proportions indicated by the probabilities. A reading of Neil’s interview will convince you that our conversation did indeed range over all those topics.
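Incidentally, if you want to check the topic mix of a single document directly in R rather than in the exported spreadsheet, you can print the corresponding row of the topic probability matrix computed earlier. For Neil’s interview, which is the 9th row in my corpus, that would be:
#topic proportions for the 9th document (row order follows the table above)
round(topicProbabilities[9, ], 3)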
That said, the algorithm is far from perfect. You might have already noticed a few poor assignments. Here is one – my post on Sherlock Holmes and the case of the failed project has been assigned to topic 3; I reckon it belongs in topic 5. There are a number of others, but I won’t belabour the point, except to reiterate that this is precisely why you want to experiment with different settings of the iteration parameters (to check for stability) and, more important, try a range of different values of k to find the optimal number of topics.
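As a rough-and-ready stability check, one can rerun the model with a different seed list and cross-tabulate the primary topic assignments from the two runs. The sketch below does just that; the alternative seeds are arbitrary, and bear in mind that topic labels can be permuted between runs, so what you are looking for is a near one-to-one mapping rather than identical labels.
#rerun with different (arbitrary) seeds and compare primary assignments
seed2 <- list(1234, 42, 7, 2015, 98765)
ldaOut2 <- LDA(dtm, k, method="Gibbs",
               control=list(nstart=nstart, seed=seed2, best=best,
                            burnin=burnin, iter=iter, thin=thin))
#rows: topics from the first run; columns: topics from the second run
table(topics(ldaOut), topics(ldaOut2))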
To conclude
Topic modelling provides a quick and convenient way to perform unsupervised classification of a corpus of documents. As always, though, one needs to examine the results carefully to check that they make sense.
I’d like to end with a general observation. Classifying documents is an age-old concern that cuts across disciplines. So it is no surprise that topic modelling has got a look-in from diverse communities. Indeed, when I was reading up and learning about LDA, I found that some of the best introductory articles in the area have been written by academics working in English departments! This is one of the things I love about working in text analysis: there is a wealth of material on the web written from diverse perspectives. The term cross-disciplinary often tends to be a platitude, but in this case it is simply a statement of fact.
I hope that I have been able to convince you to explore this rapidly evolving field. Exciting times ahead, come join the fun.
Setting up an internal data analytics practice – some thoughts from a wayfarer
Introduction
This year has been hugely exciting so far: I’ve been exploring and playing with various techniques that fall under the general categories of data mining and text analytics. What’s been particularly satisfying is that I’ve been fortunate to find meaningful applications for these techniques within my organization.
Although I have a fair way to travel yet, I’ve learnt that common wisdom about data analytics – especially the stuff that comes from software vendors and high-end consultancies – can be misleading, even plain wrong. Hence this post in which I dispatch some myths and share a few pointers on establishing data analytics capabilities within an organization.
Busting a few myths
Let’s get right to it by taking a critical look at a few myths about setting up an internal data analytics practice.
- Requires high-end technology and a big budget: this myth is easy to bust because I can speak from recent experience. No, you do not need cutting-edge technology or an oversized budget. You can get started with an outlay of $0 – yes, that’s right, for free! All you need is the open-source statistical package R (check out this section of my article on text mining for more on installing and using R) and the willingness to roll up your sleeves and learn (more about this later). No worries if you prefer to stick with familiar tools – you can even begin with Excel.
- Needs specialist skills: another myth floating around is that you need PhD-level knowledge in statistics or applied mathematics to do practical work in analytics. Sorry, but that’s plain wrong. You do need a PhD to do research in analytics and develop your own algorithms, but not to apply algorithms written by others. Yes, you will need to develop an understanding of the algorithms you plan to use, a feel for how they work and the ability to tell whether the results make sense. There are many good resources that can help you develop these skills – see, for example, the outstanding books by James, Witten, Hastie and Tibshirani, and by Kuhn and Johnson.
- Must have sponsorship from the top: this one is possibly a little more controversial than the previous two. It could be argued that it is impossible to gain buy-in for a new capability without sponsorship from top management. However, in my experience, it is OK to start small by finding potential internal “customers” for analytics services through informal conversations with folks in different functions. I started by having informal conversations with managers in two different areas: IT infrastructure and sales / marketing. I picked these two areas because I knew that they had several gigabytes of under-exploited data – a good bit of it unstructured – and a lot of open questions that could potentially be answered (at least partially) via methods of data and text analytics. It turned out I was right. I’m currently doing a number of proofs of concept and small projects in both these areas. So you don’t need sponsorship from the top as long as you can get buy-in from people who have problems they believe you can solve. If you deliver, they may even advocate your cause to their managers.
A caveat is in order at this point: my organization is not the same as yours, so you may well need to follow a different path from mine. Nevertheless, I do believe that it is always possible to find a way to start without needing permission or incurring official wrath. In that spirit, I now offer some suggestions to help kick-start your efforts.
Getting started
As the truism goes, the hardest part of any new effort is getting started. The first thing to keep in mind is to start small. This is true even if you have official sponsorship and a king-sized budget. It is very tempting to spend a lot of time garnering management support for investing in high-end technology. Don’t do it! Do the following instead:
- Develop an understanding of the problems faced by people you plan to approach: The best way to do this is to talk to analysts or frontline managers. In my case, I was fortunate to have access to some very savvy analysts in IT service management and marketing who gave me a slew of interesting ideas to pursue. A word of advice: it is best not to talk to senior managers until you have a few concrete results that you can quantify in terms of dollar values.
- Invest time and effort in understanding analytics algorithms and gaining practical experience with them: As mentioned earlier, I started with R – and I believe it is the best choice. Not just because it is free, but also because there are a host of packages available to tackle just about any analytics problem you might encounter. There are some excellent free resources available to get you started with R (check out this listing on the r-statistics blog, for example). It is important that you start cutting code as you learn. This will help you build a repertoire of techniques and approaches as you progress. If you get stuck when coding, chances are you will find a solution on the wonderful stackoverflow site.
- Evangelise, evangelise, evangelise: You are, in effect, trying to sell an idea to people within your organization. You therefore have to identify people who might be able to help you and then convince them that your idea has merit. The best way to do the latter is to have concrete examples of problems that you have tackled. This is a chicken-and-egg situation in that you can’t have any examples until you gain support. I got support by approaching people I know well. I found that most – no, all – of them were happy to provide me with interesting ideas and access to their data.
- Begin with small (but real) problems: It is important to start with the “low-hanging fruit” – the problems that would take the least effort to solve. However, it is equally important to address real problems, i.e. those that matter to someone.
- Leverage your organisation’s existing data infrastructure: From what I’ve written thus far, I may have given you the impression that the tools of data analytics stand separate from your existing data infrastructure. Nothing could be further from the truth. In reality, I often do the initial work (basic preprocessing and exploratory analysis) using my organisation’s relational database infrastructure. Relational databases have sophisticated analytical extensions to SQL as well as efficient bulk data cleansing and transport facilities. Using these makes good sense, particularly if your R installation is on a desktop or laptop computer, as it is in my case (see the short sketch after this list for one way to do this). Moreover, many enterprise database vendors now offer add-on options that integrate R with their products. This gives you the best of both worlds – relational and analytical capabilities on an enterprise-class platform.
- Build relationships with the data management team: Remember the work you are doing falls under the ambit of the group that is officially responsible for managing data in your organization. It is therefore important that you keep them informed of what you’re doing. Sooner or later your paths will cross, and you want to be sure that there are no nasty surprises (for either side!) at that point. Moreover, if you build connections with them early, you may even find that the data management team supports your efforts.
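As promised a couple of points back, here is a minimal sketch of what letting the database do the heavy lifting might look like in R, using the DBI package. The package choice, the connection (RSQLite, so the example is self-contained) and the table and column names are all illustrative assumptions rather than anything I actually use at work.
library(DBI)

#connect to a database; the driver and file name are illustrative
con <- dbConnect(RSQLite::SQLite(), "analytics_sandbox.sqlite")

#aggregate in the database before pulling data into R
#(the service_tickets table and its columns are hypothetical)
monthly_counts <- dbGetQuery(con, "
  SELECT strftime('%Y-%m', created_date) AS month,
         category,
         COUNT(*) AS n_tickets
  FROM service_tickets
  GROUP BY month, category")

dbDisconnect(con)

#monthly_counts is now an ordinary data frame, ready for exploratory analysis
head(monthly_counts)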
Having waxed verbose, I should mention that my effort is a work in progress and I do not know where it will lead. Nevertheless, I offer these suggestions as a wayfarer who is considerably further down the road from where he started.
Parting thoughts
You may have noticed that I’ve refrained from using the overused and over-hyped term “Big Data” in this piece. This is deliberate. Indeed, the techniques I have been using have nothing to do with the size of the datasets. To be honest, I’ve applied them to datasets ranging from a few thousand to a few hundred thousand records, both of which qualify as Very Small Data in today’s world.
Your vendor will be only too happy to sell you Big Data infrastructure that will set you back a good many dollars. However, the chances are good that you do not need it right now. You’ll be much better off going back to them after you hit the limits of your current data processing infrastructure. Moreover, you’ll also be better informed about your needs then.
You may also be wondering why I haven’t said much about the composition of the analytics team (barring the point about not needing PhD statisticians) and how it should be organized. The reason I haven’t done so is that I believe the right composition and organizational structure will emerge from the initial projects done and feedback received from internal customers. The resulting structure will be better suited to the organization than one that is imposed upfront. Long time readers of this blog might recognize this as a tenet of emergent design.
Finally, I should reiterate that my efforts are still very much in progress and I know not where they will lead. However, even if they go nowhere, I would have learnt something about my organization and picked up a useful, practical skill. And that is good enough for me.