Eight to Late

A gentle introduction to topic modeling using R



The standard way to search for documents on the internet is via keywords or keyphrases. This is pretty much what Google and other search engines do routinely…and they do it well. However, as useful as this is, it has its limitations. Consider, for example, a situation in which you are confronted with a large collection of documents but have no idea what they are about. One of the first things you might want to do is to classify these documents into topics or themes. Among other things, this would help you figure out if there’s anything interesting in the collection while also directing you to the relevant subset(s) of the corpus. For small collections, one could do this by simply going through each document, but this is clearly infeasible for corpora containing thousands of documents.

Topic modeling – the theme of this post – deals with the problem of automatically classifying sets of documents into themes.

The article is organised as follows: I first provide some background on topic modelling. The algorithm that I use, Latent Dirichlet Allocation (LDA), involves some pretty heavy maths which I’ll avoid altogether. However, I will provide an intuitive explanation of how LDA works before moving on to a practical example which uses the topicmodels library in R. As in my previous articles in this series (see this post and this one), I will discuss the steps in detail along with explanations and provide accessible references for concepts that cannot be covered in the space of a blog post.

(Aside: Beware, LDA is also an abbreviation for Linear Discriminant Analysis, a classification technique that I hope to cover later in my ongoing series on text and data analytics).

Latent Dirichlet Allocation – a math-free introduction

In essence, LDA is a technique that facilitates the automatic discovery of themes in a collection of documents.

The basic assumption behind LDA is that each of the documents in a collection consists of a mixture of collection-wide topics. However, in reality we observe only documents and words, not topics – the latter are part of the hidden (or latent) structure of documents. The aim is to infer the latent topic structure given the words and documents. LDA does this by recreating the documents in the corpus, iteratively adjusting the relative importance of topics in documents and of words in topics.

Here’s a brief explanation of how the algorithm works, quoted directly from this answer by Edwin Chen on Quora:

  • Go through each document, and randomly assign each word in the document to one of the K topics. (Note: One of the shortcomings of LDA is that one has to specify the number of topics, denoted by K, upfront. More about this later.)
  • This assignment already gives you both topic representations of all the documents and word distributions of all the topics (albeit not very good ones).
  • So to improve on them, for each document d…
  • ….Go through each word w in d…
  • ……..And for each topic t, compute two things: 1) p(topic t | document d) = the proportion of words in document d that are currently assigned to topic t, and 2) p(word w | topic t) = the proportion of assignments to topic t over all documents that come from this word w. Reassign w a new topic, where you choose topic t with probability p(topic t | document d) * p(word w | topic t) (according to our generative model, this is essentially the probability that topic t generated word w, so it makes sense that we resample the current word’s topic with this probability).  (Note: p(a|b) is the conditional probability of a given that b has already occurred – see this post for more on conditional probabilities)
  • ……..In other words, in this step, we’re assuming that all topic assignments except for the current word in question are correct, and then updating the assignment of the current word using our model of how documents are generated.
  • After repeating the previous step a large number of times, you’ll eventually reach a roughly steady state where your assignments are pretty good. So use these assignments to estimate the topic mixtures of each document (by counting the proportion of words assigned to each topic within that document) and the words associated to each topic (by counting the proportion of words assigned to each topic overall).

For another simple explanation of how LDA works, check out this article by Matthew Jockers. For a more technical exposition, take a look at this video by David Blei, one of the inventors of the algorithm.

The iterative process described in the last point above is implemented using a technique called Gibbs sampling. I’ll say a bit more about Gibbs sampling later, but you may want to have a look at this paper by Philip Resnik and Eric Hardisty that explains the nitty-gritty of the algorithm (Warning: it involves a fair bit of math, but has some good intuitive explanations as well).
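
To make the procedure a little more concrete, here is a minimal collapsed Gibbs sampler for LDA written from scratch in R. This is purely illustrative – it is not the implementation used by the topicmodels package we’ll use later – and all the names (gibbsLDA, wordIDs, V, K, alpha, beta, n.iter) are invented for this sketch. Each document is assumed to be a vector of integer word IDs, V is the vocabulary size, and alpha and beta are Dirichlet smoothing parameters.

#illustrative collapsed Gibbs sampler for LDA (not the topicmodels implementation)
#wordIDs: list of integer vectors, one per document; V: vocabulary size; K: number of topics
gibbsLDA <- function(wordIDs, V, K, alpha=0.1, beta=0.01, n.iter=100) {
  ndk <- matrix(0, length(wordIDs), K) #word counts per (document, topic)
  nkw <- matrix(0, K, V)               #word counts per (topic, term)
  nk  <- rep(0, K)                     #total words assigned to each topic
  #randomly assign each word in each document to one of the K topics
  z <- lapply(wordIDs, function(d) sample(1:K, length(d), replace=TRUE))
  for (d in seq_along(wordIDs)) for (i in seq_along(wordIDs[[d]])) {
    k <- z[[d]][i]; w <- wordIDs[[d]][i]
    ndk[d,k] <- ndk[d,k]+1; nkw[k,w] <- nkw[k,w]+1; nk[k] <- nk[k]+1
  }
  #repeatedly resample the topic of every word in every document
  for (iter in 1:n.iter) {
    for (d in seq_along(wordIDs)) for (i in seq_along(wordIDs[[d]])) {
      k <- z[[d]][i]; w <- wordIDs[[d]][i]
      #take the current word out of the counts
      ndk[d,k] <- ndk[d,k]-1; nkw[k,w] <- nkw[k,w]-1; nk[k] <- nk[k]-1
      #p(topic t | document d) * p(word w | topic t), smoothed by alpha and beta
      p <- (ndk[d,] + alpha) * (nkw[,w] + beta) / (nk + V*beta)
      k <- sample(1:K, 1, prob=p) #resample the word's topic
      z[[d]][i] <- k
      ndk[d,k] <- ndk[d,k]+1; nkw[k,w] <- nkw[k,w]+1; nk[k] <- nk[k]+1
    }
  }
  #the counts can now be normalised to give topic mixtures and word distributions
  list(docTopicCounts=ndk, topicTermCounts=nkw)
}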

As a general point, I should also emphasise that you do not need to understand the ins and outs of an algorithm in order to use it, but it does help to understand, at least at a high level, what the algorithm is doing. One needs to develop a feel for algorithms even if one doesn’t understand the details. Indeed, most people working in analytics do not know the details of the algorithms they use, but that doesn’t stop them from using algorithms intelligently. Purists may disagree. I think they are wrong.

Finally – because you’re no doubt wondering  :-) – the term “Dirichlet” in LDA refers to the fact that topics and words are assumed to follow Dirichlet distributions. There is no “good” reason for this apart from convenience – Dirichlet distributions provide good approximations to word distributions in documents and, perhaps more important, are computationally convenient.


As in my previous articles on text mining, I will use a collection of 30 posts from this blog as an example corpus. The corpus can be downloaded here. I will assume that you have R and RStudio installed. Follow this link if you need help with that.

The preprocessing steps are much the same as described in my previous articles.  Nevertheless, I’ll risk boring you with a detailed listing so that you can reproduce my results yourself:


#load text mining library
library(tm)

#set working directory (modify path as needed)
setwd("C:/Users/Kailash/Documents/TextMining")

#load files into corpus
#get listing of .txt files in directory
filenames <- list.files(getwd(),pattern="*.txt")


#read files into a character vector
files <- lapply(filenames,readLines)


#create corpus from vector
docs <- Corpus(VectorSource(files))


#inspect a particular document in corpus
writeLines(as.character(docs[[30]]))

#start preprocessing
#Transform to lower case
docs <-tm_map(docs,content_transformer(tolower))


#remove potentially problematic symbols
toSpace <- content_transformer(function(x, pattern) { return (gsub(pattern, " ", x))})
docs <- tm_map(docs, toSpace, "-")
docs <- tm_map(docs, toSpace, "’")
docs <- tm_map(docs, toSpace, "‘")
docs <- tm_map(docs, toSpace, "•")
docs <- tm_map(docs, toSpace, "”")
docs <- tm_map(docs, toSpace, "“")


#remove punctuation
docs <- tm_map(docs, removePunctuation)
#Strip digits
docs <- tm_map(docs, removeNumbers)
#remove stopwords
docs <- tm_map(docs, removeWords, stopwords("english"))
#remove whitespace
docs <- tm_map(docs, stripWhitespace)
#Good practice to check every now and then
#Stem document
docs <- tm_map(docs,stemDocument)


#fix up 1) differences between us and aussie english 2) general errors
docs <- tm_map(docs, content_transformer(gsub),
pattern = "organiz", replacement = "organ")
docs <- tm_map(docs, content_transformer(gsub),
pattern = "organis", replacement = "organ")
docs <- tm_map(docs, content_transformer(gsub),
pattern = "andgovern", replacement = "govern")
docs <- tm_map(docs, content_transformer(gsub),
pattern = "inenterpris", replacement = "enterpris")
docs <- tm_map(docs, content_transformer(gsub),
pattern = "team-", replacement = "team")
#define and eliminate all custom stopwords
myStopwords <- c("can", "say","one","way","use",
"post","look","right","now","think","’ve ",
"’re ","anoth","put","set","new","good")
docs <- tm_map(docs, removeWords, myStopwords)
#inspect a document as a check
writeLines(as.character(docs[[30]]))


#Create document-term matrix
dtm <- DocumentTermMatrix(docs)
#convert rownames to filenames
rownames(dtm) <- filenames
#collapse matrix by summing over columns
freq <- colSums(as.matrix(dtm))
#length should be total number of terms
length(freq)
#create sort order (descending)
ord <- order(freq,decreasing=TRUE)
#List all terms in decreasing order of freq and write to disk
write.csv(freq[ord],"word_freq.csv") #file name is arbitrary

Check out the  preprocessing section in either this article or this one for detailed explanations of the code. The document term matrix (DTM) produced by the above code will be the main input into the LDA algorithm of the next section.

Topic modelling using LDA

We are now ready to do some topic modelling. We’ll use the topicmodels package written by Bettina Gruen and Kurt Hornik. Specifically, we’ll use the LDA function with the Gibbs sampling option mentioned earlier, and I’ll say  more about it in a second. The LDA function has a fairly large number of parameters. I’ll describe these briefly below. For more, please check out this vignette by Gruen and Hornik.

For the most part, we’ll use the default parameter values supplied by the LDA function, customising only the parameters that are required by the Gibbs sampling algorithm.

Gibbs sampling works by performing a random walk in such a way that it reflects the characteristics of the desired distribution. Because the starting point of the walk is chosen at random, it is necessary to discard the first few steps of the walk (as these do not correctly reflect the properties of the distribution). This is referred to as the burn-in period. We set the burn-in parameter to 4000. Following the burn-in period, we perform 2000 iterations, keeping every 500th iteration for further use. The reason we do this is to avoid correlations between samples. We use 5 different starting points (nstart=5) – that is, five independent runs. Each starting point requires a seed integer (this also ensures reproducibility), so I have provided 5 random integers in my seed list. Finally, I’ve set best to TRUE (actually a default setting), which instructs the algorithm to return the results of the run with the highest posterior probability.

Some words of caution are in order here. It should be emphasised that the settings above do not guarantee  the convergence of the algorithm to a globally optimal solution. Indeed, Gibbs sampling will, at best, find only a locally optimal solution, and even this is hard to prove mathematically in specific practical problems such as the one we are dealing with here. The upshot of this is that it is best to do lots of runs with different settings of parameters to check the stability of your results. The bottom line is that our interest is purely practical so it is good enough if the results make sense. We’ll leave issues  of mathematical rigour to those better qualified to deal with them :-)

As mentioned earlier, there is an important parameter that must be specified upfront: k, the number of topics that the algorithm should use to classify documents. There are mathematical approaches to choosing k, but they often do not yield semantically meaningful values (see this post on stackoverflow for an example). From a practical point of view, one can simply run the algorithm for different values of k and make a choice by inspecting the results. This is what we’ll do.

OK, so the first step is to set these parameters in R… and while we’re at it, let’s also load the topicmodels library (Note: you might need to install this package as it is not a part of the base R installation).

#load topic models library
library(topicmodels)


#Set parameters for Gibbs sampling
burnin <- 4000
iter <- 2000
thin <- 500
seed <-list(2003,5,63,100001,765)
nstart <- 5
best <- TRUE


#Number of topics
k <- 5

That done, we can now do the actual work – run the topic modelling algorithm on our corpus. Here is the code:

#Run LDA using Gibbs sampling
ldaOut <-LDA(dtm,k, method="Gibbs", control=list(nstart=nstart, seed = seed, best=best, burnin = burnin, iter = iter, thin=thin))


#write out results (the .csv file names below are arbitrary - change them as you wish)
#docs to topics
ldaOut.topics <- as.matrix(topics(ldaOut))
write.csv(ldaOut.topics,file=paste("LDAGibbs",k,"DocsToTopics.csv"))


#top 6 terms in each topic
ldaOut.terms <- as.matrix(terms(ldaOut,6))
write.csv(ldaOut.terms,file=paste("LDAGibbs",k,"TopicsToTerms.csv"))


#probabilities associated with each topic assignment
topicProbabilities <- as.data.frame(ldaOut@gamma)
write.csv(topicProbabilities,file=paste("LDAGibbs",k,"TopicProbabilities.csv"))


#Find relative importance of top 2 topics
topic1ToTopic2 <- lapply(1:nrow(dtm),function(x)
sort(unlist(topicProbabilities[x,]))[k]/sort(unlist(topicProbabilities[x,]))[k-1])


#Find relative importance of second and third most important topics
topic2ToTopic3 <- lapply(1:nrow(dtm),function(x)
sort(unlist(topicProbabilities[x,]))[k-1]/sort(unlist(topicProbabilities[x,]))[k-2])


#write to file
write.csv(topic1ToTopic2,file=paste("LDAGibbs",k,"Topic1ToTopic2.csv"))
write.csv(topic2ToTopic3,file=paste("LDAGibbs",k,"Topic2ToTopic3.csv"))

The LDA algorithm returns an object that contains a lot of information. Of particular interest to us are the document to topic assignments, the top terms in each topic and the probabilities associated with each of those terms. These are printed out in the first three calls to write.csv above. There are a few important points to note here:

  1. Each document is considered to be a mixture of all topics (5 in this case). The assignments in the first file list the top topic – that is, the one with the highest probability (more about this in point 3 below).
  2. Each topic contains all terms (words) in the corpus, albeit with different probabilities. We list only the top  6 terms in the second file.
  3. The last file lists the probabilities with which each topic is assigned to a document. This is therefore a 30 x 5 matrix – 30 docs and 5 topics. As one might expect, the highest probability in each row corresponds to the topic assigned to that document.  The “goodness” of the primary assignment (as discussed in point 1) can be assessed by taking the ratio of the highest to second-highest probability, the second-highest to the third-highest probability and so on. This is what I’ve done in the last few lines of the code above (the topic1ToTopic2 and topic2ToTopic3 computations).

Take some time to examine the output and confirm for yourself that that the primary topic assignments are best when the ratios of probabilities discussed in point 3 are highest. You should also experiment with different values of k to see if you can find better topic distributions. In the interests of space I will restrict myself to k = 5.
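
If you want to experiment with k a little more systematically, one rough approach is to fit a model for each candidate value and compare the results. The sketch below is one way of doing this using the objects defined earlier (dtm and the Gibbs control parameters); the candidate values of k and the use of the log-likelihood as a yardstick are my own choices, so treat this as a starting point rather than a recipe.

#fit a model for each of several candidate values of k (the values are arbitrary)
candidateK <- c(3, 5, 8, 10)
fits <- lapply(candidateK, function(k)
LDA(dtm,k, method="Gibbs", control=list(nstart=nstart, seed=seed, best=best,
burnin=burnin, iter=iter, thin=thin)))
#log-likelihood of each fitted model (higher is better, but beware of overfitting)
sapply(fits, logLik)
#top 6 terms for each fit - check whether the topics make semantic sense
lapply(fits, terms, 6)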

The table below lists the top 6 terms in topics 1 through 5.

Topic 1 Topic 2 Topic 3 Topic 4 Topic 5
1 work question chang system project
2 practic map organ data manag
3 mani time consult model approach
4 flexibl ibi manag design organ
5 differ issu work process decis
6 best plan problem busi problem

The table below lists the document to (primary) topic assignments:


Document Topic
BeyondEntitiesAndRelationships.txt 4
bigdata.txt 4
ConditionsOverCauses.txt 5
EmergentDesignInEnterpriseIT.txt 4
FromInformationToKnowledge.txt 2
FromTheCoalface.txt 1
HeraclitusAndParmenides.txt 3
IroniesOfEnterpriseIT.txt 3
MakingSenseOfOrganizationalChange.txt 5
MakingSenseOfSensemaking.txt 2
ObjectivityAndTheEthicalDimensionOfDecisionMaking.txt 5
OnTheInherentAmbiguitiesOfManagingProjects.txt 5
OrganisationalSurprise.txt 5
ProfessionalsOrPoliticians.txt 3
RitualsInInformationSystemDesign.txt 4
RoutinesAndReality.txt 4
ScapegoatsAndSystems.txt 5
SherlockHolmesFailedProjects.txt 3
sherlockHolmesMgmtFetis.txt 3
SixHeresiesForBI.txt 4
SixHeresiesForEnterpriseArchitecture.txt 3
TheArchitectAndTheApparition.txt 3
TheCloudAndTheGrass.txt 2
TheConsultantsDilemma.txt 3
TheDangerWithin.txt 5
TheDilemmasOfEnterpriseIT.txt 3
TheEssenceOfEntrepreneurship.txt 1
ThreeTypesOfUncertainty.txt 5
UnderstandingFlexibility.txt 1

From a quick perusal of the two tables it appears that the algorithm has done a pretty decent job. For example, topic 4 is about data and system design, and the documents assigned to it are on topic. However, it is far from perfect – for example, the interview I did with Neil Preston on organisational change (MakingSenseOfOrganizationalChange.txt) has been assigned to topic 5, which seems to be about project management. It ought to be associated with Topic 3, which is about change. Let’s see if we can resolve this by looking at the probabilities associated with topics.

The table below lists the topic probabilities by document:

Topic 1 Topic 2 Topic 3 Topic 4 Topic 5
BeyondEn 0.071 0.064 0.024 0.741 0.1
bigdata. 0.182 0.221 0.182 0.26 0.156
Conditio 0.144 0.109 0.048 0.205 0.494
Emergent 0.121 0.226 0.204 0.236 0.213
FromInfo 0.096 0.643 0.026 0.169 0.066
FromTheC 0.636 0.082 0.058 0.086 0.138
Heraclit 0.137 0.091 0.503 0.162 0.107
IroniesO 0.101 0.088 0.388 0.26 0.162
MakingSe 0.13 0.206 0.262 0.089 0.313
MakingSe 0.09 0.715 0.055 0.067 0.074
Objectiv 0.216 0.078 0.086 0.242 0.378
OnTheInh 0.18 0.234 0.102 0.12 0.364
Organisa 0.089 0.095 0.07 0.092 0.655
Professi 0.155 0.064 0.509 0.128 0.144
RitualsI 0.103 0.064 0.044 0.676 0.112
Routines 0.108 0.042 0.033 0.69 0.127
Scapegoa 0.135 0.088 0.043 0.185 0.549
Sherlock 0.093 0.082 0.398 0.195 0.232
sherlock 0.108 0.136 0.453 0.123 0.18
SixHeres 0.159 0.11 0.078 0.516 0.138
SixHeres 0.104 0.111 0.366 0.212 0.207
TheArchi 0.111 0.221 0.522 0.088 0.058
TheCloud 0.185 0.333 0.198 0.136 0.148
TheConsu 0.105 0.184 0.518 0.096 0.096
TheDange 0.114 0.079 0.037 0.079 0.69
TheDilem 0.125 0.128 0.389 0.261 0.098
TheEssen 0.713 0.059 0.031 0.113 0.084
ThreeTyp 0.09 0.076 0.042 0.083 0.708
TOGAFOrN 0.158 0.232 0.352 0.151 0.107
Understa 0.658 0.065 0.072 0.101 0.105

In the table, the highest probability in each row corresponds to the primary topic assigned to that document. In a number of rows, however, the maximum and the second/third largest probabilities are quite close. It is clear that Neil’s interview (9th document in the above table) has 3 topics with comparable probabilities – topic 5 (project management), topic 3 (change) and topic 2 (issue mapping / ibis), in decreasing order of probability. In general, if a document has multiple topics with comparable probabilities, it simply means that the document speaks to all those topics in the proportions indicated by the probabilities. A reading of Neil’s interview will convince you that our conversation did indeed range over all those topics.
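
If you want to look at a particular document’s topic mixture directly rather than scanning the table, you can pull out and sort the corresponding row of the topic probability matrix. The row index below (9, which corresponds to Neil’s interview in this corpus) is just for illustration.

#topic probabilities for a single document, most probable topic first
#(row 9 is MakingSenseOfOrganizationalChange.txt in this corpus)
sort(unlist(topicProbabilities[9,]), decreasing=TRUE)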

That said, the algorithm is far from perfect. You might have already noticed a few poor assignments. Here is one – my post on Sherlock Holmes and the case of the failed project has been assigned to topic 3; I reckon it belongs in topic 5. There are a number of others, but I won’t belabor the point, except to reiterate that this is precisely why you definitely want to experiment with different settings of the iteration parameters (to check for stability) and, more important, try a range of different values of k to find the optimal number of topics.

To conclude

Topic modelling provides a quick and convenient way to perform unsupervised classification of a corpus of documents.  As always, though, one needs to examine the results carefully to check that they make sense.

I’d like to end with a general observation. Classifying documents is an age-old concern that cuts across disciplines. So it is no surprise that topic modelling has got a look-in from diverse communities. Indeed, when I was reading up and learning about LDA, I found that some of the best introductory articles in the area have been written by academics working in English departments! This is one of the things I love about working in text analysis: there is a wealth of material on the web written from diverse perspectives. The term cross-disciplinary often tends to be a platitude, but in this case it is simply a statement of fact.

I hope that I have been able to convince you to explore this rapidly evolving field. Exciting times ahead, come join the fun.

Written by K

September 29, 2015 at 7:18 pm

Setting up an internal data analytics practice – some thoughts from a wayfarer



This year has been hugely exciting so far: I’ve been exploring and playing with various techniques that fall under the general categories of data mining and text analytics. What’s been particularly satisfying is that I’ve been fortunate to find meaningful applications for these techniques within my organization.

Although I have a fair way to travel yet, I’ve learnt that common wisdom about data analytics – especially the stuff that comes from software vendors and high-end consultancies – can be misleading, even plain wrong. Hence this post in which I dispatch some myths and share a few pointers on establishing data analytics capabilities within an organization.

Busting a few myths

Let’s get right to it by taking a critical look at a few myths about setting up an internal data analytics practice.

  1. Requires high-end technology and a big budget: this myth is easy to bust because I can speak from recent experience. No, you do not need cutting-edge technology or an oversized budget. You can get started with an outlay of $0 – yes, that’s right, for free! All you need is the open-source statistical package R (check out this section of my article on text mining for more on installing and using R) and the willingness to roll up your sleeves and learn (more about this later). No worries if you prefer to stick with familiar tools – you can even begin with Excel.
  2. Needs specialist skills: another myth floating around is that you need PhD-level knowledge in statistics or applied mathematics to do practical work in analytics. Sorry, but that’s plain wrong. You do need a PhD to do research in analytics and develop your own algorithms, but not if you want to apply algorithms written by others. Yes, you will need to develop an understanding of the algorithms you plan to use, a feel for how they work and the ability to tell whether the results make sense. There are many good resources that can help you develop these skills – see, for example, the outstanding books by James, Witten, Hastie and Tibshirani, and by Kuhn and Johnson.
  3. Must have sponsorship from the top: this one is possibly a little more controversial than the previous two. It could be argued that it is impossible to gain buy-in for a new capability without sponsorship from top management. However, in my experience, it is OK to start small by finding potential internal “customers” for analytics services through informal conversations with folks in different functions. I started by having informal conversations with managers in two different areas: IT infrastructure and sales / marketing. I picked these two areas because I knew that they had several gigabytes of under-exploited data – a good bit of it unstructured – and a lot of open questions that could potentially be answered (at least partially) via methods of data and text analytics. It turned out I was right. I’m currently doing a number of proofs of concept and small projects in both these areas. So you don’t need sponsorship from the top as long as you can get buy-in from people who have problems they believe you can solve. If you deliver, they may even advocate your cause to their managers.

A caveat is in order at this point: my organization is not the same as yours, so you may well need to follow a different path from mine. Nevertheless, I do believe that it is always possible to find a way to start without needing permission or incurring official wrath. In that spirit, I now offer some suggestions to help kick-start your efforts.

Getting started

As the truism goes, the hardest part of any new effort is getting started.  The first thing to keep in mind is to start small. This is true even if you have official sponsorship and a king-sized budget. It is very tempting to spend a lot of time garnering management support for investing in high-end technology.  Don’t do it!  Do the following instead:

  1. Develop an understanding of the problems faced by people you plan to approach: The best way to do this is to talk to analysts or frontline managers. In my case, I was fortunate to have access to some very savvy analysts in IT service management and marketing who gave me a slew of interesting ideas to pursue. A word of advice: it is best not to talk to senior managers until you have a few concrete results that you can quantify in terms of dollar values.
  2. Invest time and effort in understanding analytics algorithms and gaining practical experience with them: As mentioned earlier, I started with R – and I believe it is the best choice. Not just because it is free but also because there are a host of packages available to tackle just about any analytics problem you might encounter. There are some excellent free resources available to get you started with R (check out this listing on the r-statistics blog, for example). It is important that you start cutting code as you learn. This will help you build a repertoire of techniques and approaches as you progress. If you get stuck when coding, chances are you will find a solution on the wonderful stackoverflow site.
  3. Evangelise, evangelise, evangelise: You are, in effect, trying to sell an idea to people within your organization. You therefore have to identify people who might be able to help you and then convince them that your idea has merit. The best way to do the latter is to have concrete examples of problems that you have tackled. This is a chicken-and-egg situation in that you can’t have any examples until you gain support.  I got support by approaching people I know well. I found that most – no, all – of them were happy to provide me with interesting ideas and access to their data.
  4. Begin with small (but real) problems: It is important to start with the “low-hanging fruit” – the problems that would take the least effort to solve. However, it is equally important to address real problems, i.e. those that matter to someone.
  5. Leverage your organisation’s existing data infrastructure: From what I’ve written thus far, I may have given you the impression that the tools of data analytics stand separate from your existing data infrastructure. Nothing could be further from the truth. In reality, I often do the initial work (basic preprocessing and exploratory analysis) using my organisation’s relational database infrastructure. Relational databases have sophisticated analytical extensions to SQL as well as efficient bulk data cleansing and transport facilities. Using these makes good sense, particularly if your R installation is on a desktop or laptop computer as it is in my case. Moreover, many enterprise database vendors now offer add-on options that integrate R with their products. This gives you the best of both worlds – relational and analytical capabilities on an enterprise-class platform.
  6. Build relationships with the data management team: Remember the work you are doing falls under the ambit of the group that is officially responsible for managing data in your organization. It is therefore important that you keep them informed of what you’re doing.  Sooner or later your paths will cross, and you want to be sure that there are no nasty surprises (for either side!) at that point. Moreover, if you build connections with them early, you may even find that the data management team supports your efforts.

Having waxed verbose, I should mention that my effort is work in progress and I do not know where it will lead. Nevertheless, I offer these suggestions as a wayfarer who is considerably further down the road from where he started.

Parting thoughts

You may have noticed that I’ve refrained from using the overused and over-hyped term “Big Data” in this piece. This is deliberate. Indeed, the techniques I have been using have nothing to do with the size of the datasets. To be honest, I’ve applied them to datasets ranging from a few thousand to a few hundred thousand records, both of which qualify as Very Small Data in today’s world.

Your vendor will be only too happy to sell you Big Data infrastructure that will set you back a good many dollars. However, the chances are good that you do not need it right now.  You’ll be much better off going back to them after you hit the limits of your current data processing infrastructure. Moreover, you’ll also be better informed about your needs then.

You may also be wondering why I haven’t said much about the composition of the analytics team (barring the point about not needing PhD statisticians) and how it should be organized.  The reason I haven’t done so is that I believe the right composition and organizational structure will emerge from the initial projects done and feedback received from internal customers. The resulting structure will be better suited to the organization than one that is imposed upfront.  Long time readers of this blog might recognize this as a tenet of emergent design.

Finally, I should reiterate that my efforts are still very much in progress and I know not where they will lead. However, even if they go nowhere, I would have learnt something about my organization and picked up a useful, practical skill. And that is good enough for me.

Written by K

September 3, 2015 at 8:28 pm

From inactivism to interactivism – managerial attitudes to planning



Managers display a range of attitudes towards planning for the future.  In an essay entitled Systems, Messes and Interactive Planning, the management guru/philosopher Russell Ackoff classified attitudes to organizational planning into four distinct types which I describe in detail below. I suspect you may recognise examples of each of these in your organisation…indeed, you might even see shades of yourself :-)

Inactivism
This attitude, as its name suggests, is characterized by a lack of meaningful action. Inactivism is often displayed by managers in organisations that favour the status quo.  These organisations are happy with the way things are, and therefore see no need to change. However, lack of meaningful action does not mean lack of action. On the contrary, it often takes a great deal of effort to fend off change and keep things the way they are. As Ackoff states:

Inactive organizations require a great deal of activity to keep changes from being made. They accomplish nothing in a variety of ways. First, they require that all important decisions be made “at the top.” The route to the top is deliberately designed like an obstacle course. This keeps most recommendations for change from ever getting there. Those that do are likely to have been delayed enough to make them irrelevant when they reach their destination. Those proposals that reach the top are likely to be farther delayed, often by being sent back down or out for modification or evaluation. The organization thus behaves like a sponge and is about as active…

The inactive manager spends a lot of time and effort in ensuring that things remain the way they are. Hence they act only when a situation forces them to. Ackoff puts it in his inimitable way by stating that, “Inactivist managers tend to want what they get rather than get what they want.”

Reactivism
Reactivist managers are a step worse than inactivists because they believe that disaster is already upon them. This is the type of manager who hankers after the “golden days of yore when things were much better than they are today.” As a result of their deep unease about where they are now, they may try to undo the status quo. As Ackoff points out, unlike inactivists, reactivists do not ride the tide but try to swim against it.

Typically reactivist managers are wary of technology and new concepts. Moreover, they tend to give more importance to seniority and experience rather than proven competence. They also tend to be fans of simplistic solutions to complex problems…like “solving” the problem of a behind-schedule software project by throwing more people at it.

Preactivism
Preactivists are the opposite of reactivists in that they believe the future is going to be better than the past. Consequently, their efforts are geared towards understanding what the future will look like and how they can prepare for it.  Typically, preactive managers are concerned with facts, figures and forecasts; they are firm believers in scientific planning methods that they have learnt in management schools. As such, one might say that this is the most common species of manager in present  day organisations. Those who are not natural preactivists will fly the preactivist flag when they’re asked for their opinions by their managers because it’s the expected answer.

A key characteristic of preactivist managers is that they tend to revel in creating plans rather than implementing them. As Ackoff puts it, “Preactivists see planning as a sequence of discrete steps which terminate with acceptance or rejection of their plans. What happens to their plans is the responsibility of others.”

Interactivism
Interactivist planners are not satisfied with the present, but unlike reactivists or preactivists, they do not hanker for the past, nor do they believe the future is automatically going to be better. They do want to make things better than they were or currently are, but they are continually adjusting their plans for the future by learning from and responding to events. In short, they believe they can shape the future by their actions.

Experimentation is the hallmark of interactivists.  They are willing to try different approaches and learn from them. Although they believe in learning by experience, they do not want to wait for experiences to happen; they would rather induce them by (often small-scale) experimentation.

Ackoff labels interactivists as idealisers – people who pursue ideals they know cannot be achieved, but can be approximated or even reformulated in the light of new knowledge. As he puts it:

They treat ideals as relative absolutes: ultimate objectives whose formulation depends on our current knowledge and understanding of ourselves and our environment. Therefore, they require continuous reformulation in light of what we learn from approaching them.

To use a now fashionable term, interactivists are intrapreneurs.


Although Ackoff shows a clear bias towards  interactivists in his article, he does mention that specific situations may call for other types of planners. As he puts it:

Despite my obvious bias in my characterization of these four postures, there are circumstances in which each is most appropriate. Put simply, if the internal and external dynamics of a system (the tide) are taking one where one wants to go and are doing so quickly enough, inactivism is appropriate. If the direction of change is right but the movement is too slow, preactivism is appropriate. If the change is taking one where one does not want to go and one prefers to stay where one is or was, reactivism is appropriate. However, if one is not willing to settle for the past, the present or the future that appears likely now, interactivism is appropriate.

The key point he makes is that inactivists and preactivists treat planning as a ritual because they see the future as something they cannot change. They can only plan for it (and hope for the best). Interactivists, on the other hand, look for opportunities to influence events and thus potentially change the future. Although both preactivists and interactivists are forward-looking, interactivists tend to be long-term thinkers as compared to preactivists who are more concerned about the short to medium term future.


Ackoff’s classification of planners in organisations is interesting because it highlights the kind of future-focused attitude that managers ought to take.  The sad fact, though, is that a significant number of managers are myopic preactivists, focused on this year’s performance targets rather than what their organisations might look like five or even ten years down the line. This is not the fault of individuals, though. The blame for the undue prevalence of myopic preactivism can be laid squarely on the deep-seated management dogma that rewards short-termism.

Written by K

August 20, 2015 at 9:30 pm

A gentle introduction to cluster analysis using R



Welcome to the second part of my introductory series on text analysis using R (the first article can be accessed here).  My aim in the present piece is to provide a  practical introduction to cluster analysis. I’ll begin with some background before moving on to the nuts and bolts of clustering. We have a fair bit to cover, so let’s get right to it.

A common problem when analysing large collections of documents is to categorize them in some meaningful way. This is easy enough if one has a predefined classification scheme that is known to fit the collection (and if the collection is small enough to be browsed manually). One can then simply scan the documents, looking for keywords appropriate to each category and classify the documents based on the results. More often than not, however, such a classification scheme is not available and the collection is too large. One then needs to use algorithms that can classify documents automatically based on their structure and content.

The present post is a practical introduction to a couple of automatic text categorization techniques, often referred to as clustering algorithms.  As the Wikipedia article on clustering tells us:

Cluster analysis or clustering is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some sense or another) to each other than to those in other groups (clusters).

As one might guess from the above, the results of clustering depend rather critically on the method one uses to group objects. Again, quoting from the Wikipedia piece:

Cluster analysis itself is not one specific algorithm, but the general task to be solved. It can be achieved by various algorithms that differ significantly in their notion of what constitutes a cluster and how to efficiently find them. Popular notions of clusters include groups with small distances [Note: we’ll use distance-based methods] among the cluster members, dense areas of the data space, intervals or particular statistical distributions [i.e. distributions of words within documents and the entire collection].

…and a bit later:

…the notion of a “cluster” cannot be precisely defined, which is one of the reasons why there are so many clustering algorithms. There is a common denominator: a group of data objects. However, different researchers employ different cluster models, and for each of these cluster models again different algorithms can be given. The notion of a cluster, as found by different algorithms, varies significantly in its properties. Understanding these “cluster models” is key to understanding the differences between the various algorithms.

An upshot of the above is that it is not always straightforward to interpret the output of clustering algorithms. Indeed, we will see this in the example discussed below.

With that said for an introduction, let’s move on to the nuts and bolts of clustering.

Preprocessing the corpus

In this section I cover the steps required to create the R objects necessary in order to do clustering. It goes over territory that I’ve covered in detail in the first article in this series – albeit with a few tweaks, so you may want to skim through even if you’ve read my previous piece.

To begin with I’ll assume you have R and RStudio (a free development environment for R) installed on your computer and are familiar with the basic functionality in the text mining (tm) package. If you need help with this, please look at the instructions in my previous article on text mining.

As in the first part of this series,  I will use 30 posts from my blog as the example collection (or corpus, in text mining-speak). The corpus can be downloaded here. For completeness, I will run through the entire sequence of steps – right from loading the corpus into R, to running the two clustering algorithms.

Ready? Let’s go…

The first step is to fire up RStudio and navigate to the directory in which you have unpacked the example corpus. Once this is done, load the text mining package, tm.  Here’s the relevant code (Note: a complete listing of the code in this article can be accessed here):

getwd()
[1] "C:/Users/Kailash/Documents"

#set working directory – fix path as needed!
setwd("C:/Users/Kailash/Documents/TextMining")
#load tm library
library(tm)

Loading required package: NLP

Note: R commands and output are interleaved in the listings that follow; lines that start with # are comments.

If you get an error here, you probably need to download and install the tm package. You can do this in RStudio by going to Tools > Install Packages and entering “tm”. When installing a new package, R automatically checks for and installs any dependent packages.

The next step is to load the collection of documents into an object that can be manipulated by functions in the tm package.

#Create Corpus
docs <- Corpus(DirSource("C:/Users/Kailash/Documents/TextMining"))
#inspect a particular document
writeLines(as.character(docs[[30]]))

…<output not shown>

The next step is to clean up the corpus. This includes things such as transforming to a consistent case, removing non-standard symbols & punctuation, and removing numbers (assuming that numbers do not contain useful information, which is the case here):

#Transform to lower case
docs <- tm_map(docs,content_transformer(tolower))
#remove potentially problematic symbols
toSpace <- content_transformer(function(x, pattern) { return (gsub(pattern, " ", x))})
docs <- tm_map(docs, toSpace, "-")
docs <- tm_map(docs, toSpace, ":")
docs <- tm_map(docs, toSpace, "‘")
docs <- tm_map(docs, toSpace, "•")
docs <- tm_map(docs, toSpace, "•    ")
docs <- tm_map(docs, toSpace, " -")
docs <- tm_map(docs, toSpace, "“")
docs <- tm_map(docs, toSpace, "”")
#remove punctuation
docs <- tm_map(docs, removePunctuation)
#Strip digits
docs <- tm_map(docs, removeNumbers)

Note: please see my previous article for more on content_transformer and the toSpace function defined above.

Next we remove stopwords – common words (like “a” “and” “the”, for example) and eliminate extraneous whitespaces.

#remove stopwords
docs <- tm_map(docs, removeWords, stopwords("english"))
#remove whitespace
docs <- tm_map(docs, stripWhitespace)

flexibility eye beholder action increase organisational flexibility say redeploying employees likely seen affected move constrains individual flexibility dual meaning characteristic many organizational platitudes excellence synergy andgovernance interesting exercise analyse platitudes expose difference espoused actual meanings sign wishing many hours platitude deconstructing fun

At this point it is critical to inspect the corpus because  stopword removal in tm can be flaky. Yes, this is annoying but not a showstopper because one can remove problematic words manually once one has identified them – more about this in a minute.

Next, we stem the document – i.e. truncate words to their base form. For example, “education”, “educate” and “educative” are stemmed to “educat.”:

docs <- tm_map(docs,stemDocument)

Stemming works well enough, but there are some fixes that need to be done due to my inconsistent use of British/Aussie and US English. Also, we’ll take this opportunity to fix up some concatenations like “andgovernance” (see paragraph printed out above). Here’s the code:


docs <- tm_map(docs, content_transformer(gsub),pattern = "organiz", replacement = "organ")
docs <- tm_map(docs, content_transformer(gsub), pattern = "organis", replacement = "organ")
docs <- tm_map(docs, content_transformer(gsub), pattern = "andgovern", replacement = "govern")
docs <- tm_map(docs, content_transformer(gsub), pattern = "inenterpris", replacement = "enterpris")
docs <- tm_map(docs, content_transformer(gsub), pattern = "team-", replacement = "team")

The next step is to remove the stopwords that were missed by R. The best way to do this  for a small corpus is to go through it and compile a list of words to be eliminated. One can then create a custom vector containing words to be removed and use the removeWords transformation to do the needful. Here is the code (Note:  + indicates a continuation of a statement from the previous line):

myStopwords <- c("can", "say","one","way","use",
+                  "also","howev","tell","will",
+                  "much","need","take","tend","even",
+                  "like","particular","rather","said",
+                  "get","well","make","ask","come","end",
+                  "first","two","help","often","may",
+                  "might","see","someth","thing","point",
+                  "post","look","right","now","think","’ve ",
+                  "’re ")
#remove custom stopwords
docs <- tm_map(docs, removeWords, myStopwords)

Again, it is a good idea to check that the offending words have really been eliminated.

The final preprocessing step is to create a document-term matrix (DTM) – a matrix that lists all occurrences of words in the corpus. In a DTM, documents are represented by rows and the terms (or words) by columns. If a word occurs in a particular document n times, then the matrix entry corresponding to that row and column is n; if it doesn’t occur at all, the entry is 0.

Creating a DTM is straightforward – one simply uses the built-in DocumentTermMatrix function provided by the tm package like so:

dtm <- DocumentTermMatrix(docs)
#print a summary
dtm

<<DocumentTermMatrix (documents: 30, terms: 4131)>>
Non-/sparse entries: 13312/110618
Sparsity           : 89%
Maximal term length: 48
Weighting          : term frequency (tf)
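
To see what the matrix actually contains, you can inspect a small slice of it – say, a handful of documents and terms. The indices below are arbitrary; pick any rows and columns you like.

#peek at a small corner of the DTM - 5 documents, terms 1001 to 1005
inspect(dtm[1:5, 1001:1005])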

This brings us to the end of the preprocessing phase. Next, I’ll briefly explain how distance-based algorithms work before going on to the actual work of clustering.

An intuitive introduction to the algorithms

As mentioned in the introduction, the basic idea behind document or text clustering is to categorise documents into groups based on likeness. Let’s take a brief look at how the algorithms work their magic.

Consider the structure of the DTM. Very briefly, it is a matrix in which the documents are represented as rows and words as columns. In our case, the corpus has 30 documents and 4131 words, so the DTM is a 30 x 4131 matrix.  Mathematically, one can think of this matrix as describing a 4131 dimensional space in which each of the words represents a coordinate axis and each document is represented as a point in this space. This is hard to visualise of course, so it may help to illustrate this via a two-document corpus with only three words in total.

Consider the following corpus:

Document A: “five plus five”

Document B: “five plus six”

These two  documents can be represented as points in a 3 dimensional space that has the words “five” “plus” and “six” as the three coordinate axes (see figure 1).

Figure 1: Documents A and B as points in a 3-word space

Now, if each of the documents can be thought of as a point in a space, it is easy enough to take the next logical step, which is to define the notion of a distance between two points (i.e. two documents). In figure 1 the distance between A and B (which I denote as D(A,B)) is the length of the line connecting the two points, which is simply the square root of the sum of the squares of the differences between the coordinates of the two points representing the documents:

D(A,B) = \sqrt{(2-1)^2 + (1-1)^2+(0-1)^2} = \sqrt 2

Generalising the above to the 4131 dimensional space at hand: if two documents (let’s call them X and Y) have coordinates (word frequencies)  (x_1,x_2,...x_{4131}) and (y_1,y_2,...y_{4131}), then one can define the straight line distance (also called Euclidean distance)  D(X,Y) between them as:

D(X,Y) = \sqrt{(x_1 - y_1)^2+(x_2 - y_2)^2+...+(x_{4131} - y_{4131})^2}

It should be noted that the Euclidean distance that I have described above is not the only possible way to define distance mathematically. There are many others, but it would take me too far afield to discuss them here – see this article for more (and don’t be put off by the term metric; a metric in this context is merely a distance).
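
As a quick check of the toy calculation above, here is how one might compute these distances in R, representing the two documents as word-count vectors over the terms five, plus and six (the variable name toy is made up for this sketch):

#documents A ("five plus five") and B ("five plus six") as word count vectors
toy <- rbind(A=c(five=2, plus=1, six=0), B=c(five=1, plus=1, six=1))
dist(toy) #Euclidean distance by default - gives sqrt(2), about 1.414
dist(toy, method="manhattan") #one of the alternative distance measures mentioned above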

What’s important here is the idea that one can define a numerical distance between documents. Once this is grasped, it is easy to understand the basic idea behind how (some) clustering algorithms work – they group documents based on distance-related criteria.  To be sure, this explanation is simplistic and glosses over some of the complicated details in the algorithms. Nevertheless it is a reasonable, approximate explanation for what goes on under the hood. I hope purists reading this will agree!

Finally, for completeness I should mention that there are many clustering algorithms out there, and not all of them are distance-based.

Hierarchical clustering

The first algorithm we’ll look at is hierarchical clustering. As the Wikipedia article on the topic tells us, strategies for hierarchical clustering fall into two types:

Agglomerative: where we start out with each document in its own cluster. The algorithm  iteratively merges documents or clusters that are closest to each other until the entire corpus forms a single cluster. Each merge happens at a different (increasing) distance.

Divisive:  where we start out with the entire set of documents in a single cluster. At each step  the algorithm splits the cluster recursively until each document is in its own cluster. This is basically the inverse of an agglomerative strategy.

The algorithm we’ll use is hclust which does agglomerative hierarchical clustering. Here’s a simplified description of how it works:

  1. Assign each document to its own (single member) cluster
  2. Find the pair of clusters that are closest to each other and merge them. So you now have one cluster less than before.
  3. Compute distances between the new cluster and each of the old clusters.
  4. Repeat steps 2 and 3 until you have a single cluster containing all documents.

We’ll need to do a few things before running the algorithm. Firstly, we need to convert the DTM into a standard matrix which can be used by dist, the distance computation function in R (the DTM is not stored as a standard matrix). We’ll also shorten the document names so that they display nicely in the graph that we will use to display results of hclust (the names I have given the documents are just way too long). Here’s the relevant code:

#convert dtm to matrix
m <- as.matrix(dtm)
#write as csv file (optional - file name is arbitrary)
write.csv(m,file="dtmAsMatrix.csv")
#shorten rownames for display purposes
rownames(m) <- paste(substring(rownames(m),1,3),rep("..",nrow(m)),
+                      substring(rownames(m), nchar(rownames(m))-12,nchar(rownames(m))-4))
#compute distances between document vectors
d <- dist(m)


Next we run hclust. The algorithm offers several options – check out the documentation for details. I use a popular option called Ward’s method – there are others, and I suggest you experiment with them as each of them gives slightly different results, making interpretation somewhat tricky (did I mention that clustering is as much an art as a science??). Finally, we visualise the results in a dendrogram (see Figure 2 below).

#run hierarchical clustering using Ward’s method
groups <- hclust(d,method="ward.D")
#plot dendrogram, use hang to ensure that labels fall below tree
plot(groups, hang=-1)


Figure 2: Dendrogram from hierarchical clustering of corpus

A few words on interpreting dendrograms for hierarchical clusters: as you work your way down the tree in figure 2, each branch point you encounter is the distance at which a cluster merge occurred. Clearly, the most well-defined clusters are those that have the largest separation; many closely spaced branch points indicate a lack of dissimilarity (i.e. distance, in this case) between clusters. Based on this, the figure reveals that there are 2 well-defined clusters – the first one consisting of the three documents at the right end of the dendrogram and the second containing all other documents. We can display the clusters on the graph using the rect.hclust function like so:

#cut into 2 subtrees – try 3 and 5
rect.hclust(groups,2)

The result is shown in the figure below.

Figure 3: 2 cluster grouping

Figures 4 and 5 below show the groupings for 3 and 5 clusters.

Figure 4: 3 cluster grouping



Figure 5: 5 cluster grouping

I’ll make just one point here: the 2 cluster grouping seems the most robust one as it happens at a large distance, and is cleanly separated (distance-wise) from the 3 and 5 cluster groupings. That said, I’ll leave you to explore the ins and outs of hclust on your own and move on to our next algorithm.

K means clustering

In hierarchical clustering we did not specify the number of clusters upfront. These were determined by looking at the dendrogram after the algorithm had done its work. In contrast, our next algorithm – K means – requires us to define the number of clusters upfront (this number being the “k” in the name). The algorithm then generates k document clusters in a way that ensures the within-cluster distances from each cluster member to the centroid (or geometric mean) of the cluster are minimised.

Here’s a simplified description of the algorithm (a toy from-scratch sketch in R follows the list):

  1. Assign the documents randomly to k bins
  2. Compute the location of the centroid of each bin.
  3. Compute the distance between each document and each centroid
  4. Assign each document to the bin corresponding to the centroid closest to it.
  5. Stop if no document is moved to a new bin, else go to step 2.
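
To make the steps above concrete, here is a toy, from-scratch version of the procedure. It is purely illustrative – the kmeans function used below is far more efficient and robust – and the names (toyKmeans, x, max.iter) are made up for this sketch. Note that it does not handle the corner case of a bin becoming empty.

#toy k means: x is a numeric matrix of points (one row per point), k the number of clusters
toyKmeans <- function(x, k, max.iter=100) {
  #step 1: assign the points randomly to k bins
  assignment <- sample(1:k, nrow(x), replace=TRUE)
  for (iter in 1:max.iter) {
    #step 2: compute the centroid of each bin
    centroids <- t(sapply(1:k, function(j) colMeans(x[assignment==j, , drop=FALSE])))
    #step 3: compute the distance between each point and each centroid
    distToCentroids <- as.matrix(dist(rbind(centroids, x)))[-(1:k), 1:k]
    #step 4: move each point to the bin of the centroid closest to it
    newAssignment <- apply(distToCentroids, 1, which.min)
    #step 5: stop if no point has moved to a new bin
    if (all(newAssignment == assignment)) break
    assignment <- newAssignment
  }
  list(cluster=assignment, centers=centroids)
}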

An important limitation of the k means method is that the solution found by the algorithm corresponds to a local rather than global minimum (this figure from Wikipedia explains the difference between the two in a nice succinct way). As a consequence it is important to run the algorithm a number of times (each time with a different starting configuration) and then select the result that gives the overall lowest sum of within-cluster distances for all documents.  A simple check that a solution is robust is to run the algorithm for an increasing number of initial configurations until the result does not change significantly. That said, this procedure does not guarantee a globally optimal solution.

I reckon that’s enough said about the algorithm; let’s get on with using it. The relevant function, as you might well have guessed, is kmeans. As always, I urge you to check the documentation to understand the available options. We’ll use the default options for all parameters except nstart, which we set to 100. We also plot the result using the clusplot function from the cluster library (which you may need to install – reminder: you can install packages via the Tools > Install Packages menu in RStudio).

#k means algorithm, 2 clusters, 100 starting configurations
kfit <- kmeans(d, 2, nstart=100)
#plot – need library cluster
library(cluster)
clusplot(m, kfit$cluster, color=T, shade=T, labels=2, lines=0)

The plot is shown in Figure 6.

Figure 6: principal component plot (k=2)

The cluster plot shown in the figure above needs a bit of explanation. As mentioned earlier, the clustering algorithms work in a mathematical space whose dimensionality equals the number of words in the corpus (4131 in our case). Clearly, this is impossible to visualize. To handle this, mathematicians have invented a dimensionality reduction technique called Principal Component Analysis, which reduces the number of dimensions to 2 (in this case) in such a way that the reduced dimensions capture as much of the variability between the clusters as possible (hence the comment, “these two components explain 69.42% of the point variability” at the bottom of the plot in figure 6).
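
If you are curious where a figure like the one quoted above comes from, you can run a principal component analysis on the document-term matrix yourself. The sketch below uses prcomp; the exact percentage may differ a little from the one clusplot reports, as clusplot does its own dimensionality reduction internally.

#proportion of point variability captured by the first two principal components
pca <- prcomp(m)
summary(pca)$importance[3, 1:2] #row 3 of the importance table is the cumulative proportion of variance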

(Aside: Yes, I realize the figures are hard to read because of the overly long names; I leave it to you to fix that. No excuses, you know how…:-))

Running the algorithm and plotting the results for k=3 and 5 yields the figures below.


Figure 7: Principal component plot (k=3)



Figure 8: Principal component plot (k=5)

Choosing k

Recall that the k means algorithm requires us to specify k upfront. A natural question then is: what is the best choice of k? In truth there is no one-size-fits-all answer to this question, but there are some heuristics that might sometimes help guide the choice. For completeness I’ll describe one below even though it is not much help in our clustering problem.

In my simplified description of the k means algorithm I mentioned that the technique attempts to minimise the sum of the distances between the points in a cluster and the cluster’s centroid. Actually, the quantity that is minimised is the total of the within-cluster sum of squares (WSS) between each point and the mean. Intuitively one might expect this quantity to be maximum when k=1 and then decrease as k increases, sharply at first and then less sharply as k reaches its optimal value.

The problem with this reasoning is that, in practice, the summed WSS often decreases smoothly, showing no distinct slowdown at any particular value of k. Unfortunately, this is exactly what happens in the case at hand.

I reckon a picture might help make the above clearer. Below is the R code to draw a plot of summed WSS as a function of k, for k=2 all the way to 29 (one less than the total number of documents):

#kmeans – determine the optimum number of clusters (elbow method)
#look for an "elbow" in the plot of summed intra-cluster distances (withinss) as a function of k
wss <- numeric(29)
for (i in 2:29) wss[i] <- sum(kmeans(d, centers=i, nstart=25)$withinss)
plot(2:29, wss[2:29], type="b", xlab="Number of Clusters", ylab="Within groups sum of squares")

…and the figure below shows the resulting plot.

Figure 10: WSS as a function of k (“elbow plot”)

The plot clearly shows that there is no k for which the summed WSS flattens out (no distinct “elbow”). As a result, this method does not help here. Fortunately, in this case one can get a sensible answer using common sense rather than computation: a choice of 2 clusters seems optimal because both algorithms yield exactly the same clusters and show the clearest cluster separation at this point (review the dendrogram and cluster plots for k=2).

The meaning of it all

Now I must acknowledge an elephant in the room that I have steadfastly ignored thus far. The odds are good that you’ve seen it already….

It is this: what topics or themes do the (two) clusters correspond to?

Unfortunately this question does not have a straightforward answer. Although the algorithms suggest a 2-cluster grouping, they are silent on the topics or themes these clusters correspond to. Moreover, as you will see if you experiment, the results of clustering depend on:

  • The criteria used to construct the DTM (see the documentation for DocumentTermMatrix for the available options; a brief illustrative sketch follows this list).
  • The clustering algorithm itself.
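
To illustrate the first point, here is a purely hypothetical variation on the DTM construction. The corpus variable name (docs) and the specific thresholds are assumptions for illustration only – the point is simply that each of these choices changes the matrix the clustering algorithms see, and hence, potentially, the clusters they find.

#an alternative DTM (sketch): restrict word lengths, drop very rare/very common terms, weight by tf-idf
library(tm)
dtm_alt <- DocumentTermMatrix(docs,
             control=list(wordLengths=c(4,20),
                          bounds=list(global=c(3,27)),
                          weighting=weightTfIdf))
dtm_alt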

Indeed, insofar as clustering is concerned, knowledge of the subject matter and of the corpus is the best way to figure out cluster themes. This serves to reinforce (yet again!) that clustering is as much an art as it is a science.

In the case at hand, article length seems to be an important differentiator between the 2 clusters found by both algorithms. The three articles in the smaller cluster are among the top 4 longest pieces in the corpus. Additionally, the three pieces are related to sensemaking and dialogue mapping. There are probably other factors as well, but none that stand out as being significant. I should mention, however, that the fact that article length seems to play a significant role here suggests that it may be worth checking out the effect of scaling distances by word counts or using other measures such as cosine similarity – but that’s a topic for another post!
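
For the impatient, here’s a minimal sketch of what such an experiment might look like (assuming, as before, that m is the matrix form of the DTM). Cosine similarity compares the directions of the document vectors rather than their magnitudes, so it largely normalises away differences in document length.

#sketch: cosine-based distance between documents
norms <- sqrt(rowSums(m^2))
cos_sim <- (m %*% t(m))/(norms %*% t(norms))
d_cos <- as.dist(1 - cos_sim)
#re-run the clustering with the new distance measure (same rough-and-ready approach as before)
groups_cos <- hclust(d_cos, method="ward.D")
kfit_cos <- kmeans(d_cos, 2, nstart=100)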

The take home lesson is that the results of clustering are often hard to interpret. This should not be surprising – the algorithms cannot interpret meaning, they simply chug through a mathematical optimisation problem. The onus is on the analyst to figure out what the results mean…or whether they mean anything at all.


This brings us to the end of a long ramble through clustering.  We’ve explored the two most common methods:  hierarchical and k means clustering (there are many others available in R, and I urge you to explore them). Apart from providing the detailed steps to do clustering, I have attempted to provide an intuitive explanation of how the algorithms work.  I hope I have succeeded in doing so. As always your feedback would be very welcome.

Finally, I’d like to reiterate an important point:  the results of our clustering exercise do not have a straightforward interpretation, and this is often the case in cluster analysis. Fortunately I can close on an optimistic note. There are other text mining techniques that do a better job in grouping documents based on topics and themes rather than word frequencies alone.   I’ll discuss this in the next article in this series.  Until then, I wish you many enjoyable hours exploring the ins and outs of clustering.

Note added on September 29th 2015:

If you liked this article, you might want to check out its sequel – an introduction to topic modeling.

Written by K

July 22, 2015 at 8:53 pm

The façade of expertise

with 2 comments


Since the 1980s, intangible assets, such as knowledge, have come to represent an ever-increasing proportion of an organisation’s net worth.  One of the problems associated with treating knowledge as an asset is that it is difficult to codify in its entirety. This is largely because knowledge is context and skill dependent, and these are hard to convey by any means other than experience. This is the well-known tacit versus explicit knowledge problem that I have written about at length elsewhere (see this post and this one, for example).  Although a recent development in knowledge management technology goes some way towards addressing the problem of context, it still looms large and is likely to for a while.

Although the problem mentioned above is well-known, it hasn’t stopped legions of consultants and professional organisations from attempting to codify and sell expertise: management consultancies and enterprise IT vendors being prime examples. This has given rise to the notion of a knowledge-intensive firm, an organization in which most work is said to be of an intellectual nature and where well-educated, qualified employees form the major part of the work force.   However, the slipperiness of knowledge mentioned in the previous paragraph suggests that the notion of a knowledge intensive firm (and, by implication, expertise) is problematic. Basically, if it is true that knowledge itself is elusive, and hard-to-codify, it raises the question as to what exactly such firms (and their employees) sell.

In this post, I shed some light on this question by drawing on an interesting paper by Mats Alvesson entitled, Knowledge Work: Ambiguity, Image and Identity (abstract only), as well as my experiences in dealing with IT services and consulting firms.

Background: the notion of a knowledge-intensive firm

The first point to note is that the notion of a knowledge-intensive firm is not particularly precise. Based on the definition offered above, it is clear that a wide variety of organisations may be classified as knowledge intensive firms. For example, management consultancies and enterprise software companies would fall into this category, as would law, accounting and research & development firms.  The same is true of the term knowledge work(er).

One of the implications of the vagueness of the term is that any claim to being a knowledge-intensive firm or knowledge worker can be contested. As Alvesson states:

It is difficult to substantiate knowledge-intensive companies and knowledge workers as distinct, uniform categories. The distinction between these and non- (or less) knowledge-intensive organization/non-knowledge   workers is not self-evident, as all organizations and work  involve “knowledge” and any evaluation of “intensiveness” is likely to be contestable. Nevertheless,  there are, in many crucial respects, differences  between many professional service and high-tech companies on the one hand, and more routinized service and industry companies on the other, e.g. in terms of broadly socially shared ideas about the significance of a long theoretical education and intellectual capacities for the work. It makes sense to refer to knowledge-intensive companies as a vague but meaningful category, with sufficient heuristic value to be useful. The category does not lend itself to precise definition or delimitation and it includes organizations which are neither unitary nor unique. Perhaps the claim to knowledge-intensiveness is one of the most distinguishing features…

The last line in the excerpt is particularly interesting to me because it resonates with my experience: having been through countless IT vendor and management consulting briefings on assorted products and services, it is clear that a large part of their pitch is aimed at establishing their credibility as experts in the field, even though they may not actually be so.

The ambiguity of knowledge work

Expertise in skill-based professions is generally unambiguous – an incompetent pilot will be exposed soon enough. In knowledge work, however, genuine expertise is often not so easily discernible. Alvesson highlights a number of factors that make this so.

Firstly, much of the day-to-day work of knowledge workers such as management consultants and IT experts involves routine matters – meetings, documentation etc. – that do not make great demands on their skills. Moreover, even when involved in one-off tasks such as projects, these workers are generally assigned tasks that they are familiar with. In general, therefore, the nature of their work requires them to follow already instituted processes and procedures.  A somewhat unexpected consequence of this is that incompetence can remain hidden for a long time.

A second issue is that the quality of so-called knowledge work is often hard to evaluate – indeed evaluations may require the engagement of independent experts! This is true even of relatively mundane expertise-based work. As Alvesson states:

Comparisons of the decisions of expert and novice auditors indicate no relationship  between the degree of expertise  (as indicated by experience)  and consensus; in high-risk and less standard situations, the experts’ consensus level was lower than that of novices. [An expert remarked that] “judging the quality of an audit is an extremely problematic exercise” and says that consumers of the audit service “have only a very limited insight into the quality of work undertaken by an audit firm”.

This is true of many different kinds of knowledge work.  As Alvesson tells us:

How can anyone tell whether a headhunting firm has found and recruited the best possible candidates or not…or if an audit has been carried out in a high-quality way?  Or  if  the  proposal by  strategic management consultants is optimal or even helpful, or not. Of course, sometimes one may observe whether something works or not (e.g. after the intervention of a plumber), but normally the issues concerned are not that simple in the context in which the concept of knowledge-intensiveness is frequently used. Here we are mainly dealing with complex and intangible phenomena.  Even if something seems to work, it might have worked even better or the cost of the intervention been much lower if another professional or organization had carried out the task.

In view of the above, it is unlikely that market mechanisms would be effective in sorting out the competent from the incompetent. Indeed, my experience of dealing with major consulting firms (in IT) leads me to believe that market mechanisms tend to make them clones of each other, at least in terms of their offerings and approach. This may be part of the reason why client firms tend to base their contracting decisions on cost or existing relationships – it makes sense to stick with the known, particularly when the alternatives offer choices akin to Pepsi vs Coke.

But that is not the whole story: experts are often hired for ulterior motives. On the one hand, they might be hired because they confer legitimacy – “no one ever got fired for hiring McKinsey” is a line I’ve heard more than a few times in many workplaces. On the other hand, they also make convenient scapegoats when the proverbial stuff hits the fan.

Image cultivation

One of the consequences of the ambiguity of knowledge-intensive work is that employees in such firms are forced to cultivate and maintain the image of being experts, and hence the stereotype of the suited, impeccably-groomed Big 4 consultant. As Alvesson points out, though, image cultivation goes beyond the individual employee:

This image must be  managed on different levels: professional-industrial, corporate and individual. Image may be targeted in specific acts and arrangements,  in visible symbols for public consumption but also in everyday behavior, within the organization and in interaction  with others. Thus image is not just of importance in marketing  and for attracting personnel but also in and after production.  Size and a big name  are  therefore important for  many knowledge-intensive companies – and here we perhaps have a major explanation  for all the mergers and acquisitions  in accounting, management consultancy and  other  professional service companies. A large size is reassuring. A well-known brand name substitutes for difficulties in establishing quality.

Another aspect of image cultivation is the use of rhetoric. Here are some examples taken from the websites of Big 4 consulting firms:

No matter the challenge, we focus on delivering practical and enduring results, and equipping our clients to grow and lead.” —McKinsey

We continue to redefine ourselves and set the bar higher to continually deliver quality for clients, our people, and the society in which we operate.” – Deloitte

Cutting through complexity” – KPMG

Creating value for our clients, people and communities in a changing world” – PWC

Some clients are savvy enough not to be taken in by the platitudinous statements listed above.  However, the fact that knowledge-intensive firms continue to use second-rate rhetoric to attract custom suggests that there are many customers who are easily taken in by marketing slogans.  These slogans are sometimes given an aura of plausibility via case-studies intended to back the claims made. However, more often than not the case studies are based on a selective presentation of facts that depict the firm in the best possible light.

A related point is that such firms often flaunt their current client list in order to attract new clientele. Lines like, “our client list includes 8 of the top ten auto manufacturers in the world,” are not uncommon, the unstated implication being that if you are an auto manufacturer, you cannot afford not to engage us. The image cultivation process continues well after the consulting engagement is underway. Indeed, much of a consultant’s effort is directed at ensuring that the engagement will be extended.

Finally, it is important to point out the need to maintain an aura of specialness. Consultants and knowledge workers are valued for what they know. It is therefore in their interest to maintain a certain degree of exclusivity of knowledge. Guilds (such as the Project Management Institute) act as gatekeepers by endorsing the capabilities of knowledge workers through membership criteria based on experience and / or professional certification programs.

Maintaining the façade

Because knowledge workers deal with intangibles, they have to work harder to maintain their identities than those who have more practical skills. They are therefore more susceptible to the vagaries and arbitrariness of organisational life.  As Alvesson notes,

Given the high level of ambiguity and the fluidity of organizational  life and interactions with external actors, involving a strong dependence on somewhat arbitrary evaluations  and opinions of others, many knowledge-intensive workers must struggle more for the accomplishment,  maintenance and gradual change of self-identity, compared to workers whose competence and results are more materially grounded…Compared with people who invest less self- esteem in their work and who have lower expectations,  people in knowledge-intensive  companies are thus vulnerable to frustrations  contingent upon ambiguity of performance  and confirmation.

Knowledge workers are also more dependent on managerial confirmation of their competence and value. Indeed, unlike the case of the machinist or designer, a knowledge worker’s product rarely speaks for itself. It has to be “sold”, first  to management and then (possibly) to the client and the wider world.

The previous paragraphs of this section dealt with individual identity. However, this is not the whole story because organisations also play a key role in regulating the identities of their employees. Indeed, this is how they develop their brand. Alvesson notes four ways in which organisations do this:

  1. Corporate identity – large consulting firms are good examples of this. They regulate the identities of their employees through comprehensive training and acculturation programs. As a board member remarked to me recently, “I like working with McKinsey people, because I was once one myself and I know their approach and thinking processes.”
  2. Cultural programs – these are the near-mandatory organisational culture initiatives in large organisations. Such programs are usually based on a set of “guiding principles” which are intended to inform employees on how they should conduct themselves as employees and representatives of the organisation. As Alvesson notes, these are often more effective than formal structures.
  3. Normalisation – these are the disciplinary mechanisms that are triggered when an employee violates an organisational norm. Examples of this include formal performance management or official reprimands. Typically, though, the underlying issue is rarely addressed. For example, a failed project might result in a reprimand or poor performance review for the project manager, but the underlying systemic causes of failure are unlikely to be addressed…or even acknowledged.
  4. Subjectification – This is where employees mould themselves to fit their roles or job descriptions. A good example of this is when job applicants project themselves as having certain skills and qualities in their resumes and in interviews. If selected, they may spend the first few months in learning and internalizing what is acceptable and what is not. In time, the new behaviours are internalized and become a part of their personalities.

It is clear from the above that maintaining the façade of expertise in knowledge work involves considerable effort and manipulation, and has little to do with genuine knowledge. Indeed, it is perhaps because genuine expertise is so hard to identify that people and organisations strive to maintain appearances.


The ambiguous nature of knowledge requires (and enables!) consultants and technology vendors to maintain a façade of expertise. This is done through careful cultivation of image via the rhetoric of marketing, branding and impression management. The onus is therefore on buyers to figure out if there’s anything of substance behind words and appearances. The volume of business enjoyed by big consulting firms suggests that this does not happen as often as it should, leading us to the inescapable conclusion that decision-makers in organisations are all too easily deceived by the façade of expertise.

Written by K

July 8, 2015 at 8:47 pm

Catch-22 and the paradoxes of organisational life

with one comment

“You mean there’s a catch?”

“Sure there’s a catch”, Doc Daneeka replied. “Catch-22. Anyone who wants to get out of combat duty isn’t really crazy.”

There was only one catch and that was Catch-22, which specified that a concern for one’s own safety in the face of dangers that were real and immediate was the process of a rational mind. Orr was crazy and could be grounded. All he had to do was ask; and as soon as he did, he would no longer be crazy and would have to fly more missions…”   Joseph Heller, Catch-22


The term Catch-22 was coined by Joseph Heller in the eponymous satirical novel written in 1961. As the quote above illustrates,  the term refers to a paradoxical situation caused by the application of  contradictory rules.  Catch-22 situations are common in large organisations of all kinds, not just the military (which was the setting of the novel). So much so that it is a theme that has attracted some scholarly attention over the half century since the novel was first published  – see this paper or this one for example.

Although Heller uses Catch-22 situations to highlight the absurdities of bureaucracies in a humorous way, in real life such situations can be deeply troubling for people who are caught up in them. In a paper published in 1956, the polymath Gregory Bateson and his colleagues suggested that these situations can cause people to behave in ways that are symptomatic of schizophrenia. The paper introduces the notion of a double-bind, which is a dilemma arising from an individual receiving two or more messages that contradict each other. In simple terms, then, a double-bind is a Catch-22.

In this post, I draw on Bateson’s  double bind theory to get some insights into Catch-22 situations in organisations.

Double bind theory

The basic elements of a double bind situation are as follows:

  1. Two or more individuals, one of whom is a victim – i.e. the individual who experiences the dilemma described below.
  2. A primary rule which keeps the victim fearful of the consequences of doing (or not doing) something. This rule typically takes the form, “If you do x then you will be punished” or “If you do not do x then you will be punished.”
  3. A secondary rule that is in conflict with the primary rule, but at a more abstract level. This rule, which is usually implicit, typically takes the form, “Do not question the rationale behind x.”
  4. A tertiary rule that prevents the victim from escaping from the situation.
  5. Repeated experiences of the primary and secondary rules.

A simple example (quoted from this article) serves to illustrate the above in a real-life situation:

One example of double bind communication is a mother giving her child the message: “Be spontaneous” If the child acts spontaneously, he is not acting spontaneously because he is following his mother’s direction. It’s a no-win situation for the child. If a child is subjected to this kind of communication over a long period of time, it’s easy to see how he could become confused.

Here the injunction to “Be spontaneous” is contradicted by the more implicit rule that “one cannot be spontaneous on demand.”  It is important to note that the primary and secondary (implicit) rules are at different logical levels  –  the first is about an action, whereas the second is about the nature of all such actions. This is typical of a double bind situation.

The paradoxical aspects of double binds can sometimes be useful as they can lead to creative solutions arising from the victim “stepping outside the situation”. The following example from Bateson’s paper illustrates the point:

The Zen Master attempts to bring about enlightenment in his pupil in various ways. One of the things he does is to hold a stick over the pupil’s head and say fiercely, “If you say this stick is real, I will strike you with it. If you say this stick is not real, I will strike you with it. If you don’t say anything, I will strike you with it.”… The Zen pupil might reach up and take the stick away from the Master–who might accept this response.

This is an important point which we’ll return to towards the end of  this piece.

Double binds in organisations

Double bind situations are ubiquitous in organisations.   I’ll illustrate this by drawing on a couple of examples I have written about earlier on this blog.

The paradox of learning organisations

This section draws on a post I wrote a while ago. In the introduction to that post I stated that:

The term learning organisation refers to an organisation that continually modifies its processes  based on observation and experience, thus adapting to changes in its internal and external environment.   Ever since Peter Senge coined the term in his book, The Fifth Discipline, assorted consultants and academics have been telling us that although a  learning  organisation is an utopian ideal, it is one worth striving for.  The reality, however,  is that most organisations that undertake the journey actually end up in a place far removed  from this ideal. Among other things, the journey may expose managerial hypocrisies that contradict the very notion of a learning organisation.

Starkly put, the problem arises from the fact that in a true learning organisation, employees will  inevitably start to question things that management would rather they didn’t.  Consider the following story, drawn from this paper on which the post is based:

…a multinational company intending to develop itself as a learning organization ran programmes to encourage managers to challenge received wisdom and to take an inquiring approach. Later, one participant attended an awayday, where the managing director of his division circulated among staff over dinner. The participant raised a question about the approach the MD had taken on a particular project; with hindsight, had that been the best strategy? `That was the way I did it’, said the MD. `But do you think there was a better way?’, asked the participant. `I don’t think you heard me’, replied the MD. `That was the way I did it’. `That I heard’, continued the participant, `but might there have been a better way?’. The MD fixed his gaze on the participants’ lapel badge, then looked him in the eye, saying coldly, `I will remember your name’, before walking away.

Of course,  a certain kind of learning  occurred here:  the employee learnt that certain questions were taboo, in stark contrast to the openness that was being preached from the organisational pulpit.  The double bind here is evident:  feel free to question and challenge everything…except what management deems to be out of bounds.  The takeaway for employees is that, despite all the rhetoric of organisational learning, certain things should not  be challenged. I think it is safe to say that this was probably not the kind of learning that was intended by those who initiated the program.

The paradoxes of change

In a post on the  paradoxes of organizational change, I wrote that:

An underappreciated facet of organizational change is that it is inherently paradoxical. For example, although it is well known that such changes inevitably have unintended consequences that are harmful, most organisations continue to implement change initiatives in a manner that assumes  complete controllability with the certainty of achieving solely beneficial outcomes.

As pointed out in this paper, there are three types of paradoxes that can arise when an organisation is restructured. The first is that during the transition, people are caught between the demands of their old and new roles. This is exacerbated by the fact that transition periods are often much longer than expected. This paradox of performing in turn leads to a paradox of belonging – people become uncertain about where their loyalties (ought to) lie.

Finally, there is a paradox of organising, which refers to the gap between the rhetoric and reality of change. The paper mentioned above has a couple of nice examples. One study described how,

friendly banter in meetings and formal documentation [promoted] front-stage harmony, while more intimate conversations and unit meetings [intensified] backstage conflict.”  Another spoke of a situation in which, “…change efforts aimed at increasing employee participation [can highlight] conflicting practices of empowerment and control. In particular, the rhetoric of participation may contradict engrained organizational practices such as limited access to information and hierarchical authority for decision making…

Indeed, the gap between the intent and actuality of change initiatives makes double binds inevitable.


I suspect the situations described above will be familiar to people working in a corporate environment. The question is what can one do if one is on the receiving end of such a Catch 22?

The main thing is to realise that a double-bind arises because one perceives the situation to be so. That is, the person experiencing the situation has chosen to interpret it  as a double bind. To be sure, there are usually factors that influence the choice – things such as job security, for example – but the fact is that it is a choice that can be changed if one sees things in a different light. Escaping the double bind is then a “simple” matter of reframing the situation.

Here is where the notion of mindfulness is particularly relevant. In brief, mindfulness is “the intentional, accepting and non-judgemental focus of one’s attention on the emotions, thoughts and sensations occurring in the present moment.”  As the Zen pupil who takes the stick away from the Master, a calm non-judgemental appraisal of a double-bind situation might reveal possible courses of action that had been obscured because of one’s fears. Indeed, the realization that one has more choices than one thinks is in itself a liberating discovery.

It is important to emphasise that the actual course of action one selects in the end matters less than the realisation that one’s reactions to such situations are largely under one’s own control.

In closing – reframe it!

Organisational life is rife with Catch 22s. Most of us cannot avoid being caught up in them, but we can choose how we react to them. This is largely a matter of reframing them in ways that open up new avenues for action, a point that brings to mind this paragraph from Catch-22 (the book):

“Why don’t you use some sense and try to be more like me? You might live to be a hundred and seven, too.”

“Because it’s better to die on one’s feet than live on one’s knees,” Nately retorted with triumphant and lofty conviction. “I guess you’ve heard that saying before.”

“Yes, I certainly have,” mused the treacherous old man, smiling again. “But I’m afraid you have it backward. It is better to live on one’s feet than die on one’s knees. That is the way the saying goes.”

“Are you sure?” Nately asked with sober confusion. “It seems to make more sense my way.”

“No, it makes more sense my way. Ask your friends.”

And that, I reckon, is as brilliant an example of reframing as I have ever come across.

Written by K

June 22, 2015 at 9:54 pm

The Risk – a dialogue mapping vignette

with one comment


Last week, my friend Paul Culmsee conducted an internal workshop in my organisation on the theme of collaborative problem solving. Dialogue mapping is one of the tools he introduced during the workshop. This piece, primarily intended as a follow-up for attendees, is an introduction to dialogue mapping via a vignette that illustrates its practice (see this post for another one). I’m publishing it here as I thought it might be useful for those who wish to understand what the technique is about.

Dialogue mapping uses a notation called Issue Based Information System (IBIS), which I have discussed at length in this post. For completeness, I’ll begin with a short introduction to the notation and then move on to the vignette.

A crash course in IBIS

The IBIS notation consists of the following three elements:

  1. Issues (or questions): these are issues that are being debated. Typically, issues are framed as questions on the lines of “What should we do about X?” where X is the issue that is of interest to a group. For example, in the case of a group of executives, X might be a rapidly changing market condition, whereas in the case of a group of IT people, X could be an ageing system that is hard to replace.
  2. Ideas (or positions): these are responses to questions. For example, one of the ideas offered by the IT group above might be to replace the said system with a newer one. Typically, the whole set of ideas that respond to an issue in a discussion represents the spectrum of participant perspectives on the issue.
  3. Arguments: these can be Pros (arguments for) or Cons (arguments against) an idea. The complete set of arguments that respond to an idea represents the multiplicity of viewpoints on it.

Compendium is a freeware tool that can be used to create IBIS maps– it can be downloaded here.

In Compendium, IBIS elements are represented as nodes as shown in Figure 1: issues are represented by blue-green question marks, positions by yellow light bulbs, pros by green “+” signs and cons by red “–” signs. Compendium supports a few other node types, but these are not part of the core IBIS notation. Nodes can be linked only in ways specified by the IBIS grammar, as I discuss next.

Figure 1: IBIS node types

The IBIS grammar can be summarized in three simple rules:

  1. Issues can be raised anew or can arise from other issues, positions or arguments. In other words, any IBIS element can be questioned. In Compendium notation: a question node can connect to any other IBIS node.
  2. Ideas can only respond to questions – i.e. in Compendium, “light bulb” nodes can only link to question nodes. The arrow pointing from the idea to the question depicts the “responds to” relationship.
  3. Arguments can only be associated with ideas – i.e. in Compendium, “+” and “–” nodes can only link to “light bulb” nodes (with arrows pointing to the latter).

The legal links are summarized in Figure 2 below.

Figure 2: Legal links in IBIS


…and that’s pretty much all there is to it.
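
(For the programmatically inclined, the grammar is compact enough to capture in a few lines of R. The sketch below is purely illustrative – the function name and node labels are mine and have nothing to do with Compendium itself.)

#sketch: the IBIS grammar as a link-checking function (node types: "issue", "idea", "pro", "con")
is_legal_link <- function(from, to) {
  switch(from,
         issue = TRUE,           #an issue can question any element
         idea  = to == "issue",  #ideas can only respond to issues
         pro   = to == "idea",   #arguments can only attach to ideas
         con   = to == "idea",
         FALSE)
}
is_legal_link("idea", "issue")   #TRUE
is_legal_link("pro", "issue")    #FALSE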

The interesting (and powerful) aspect of IBIS is that the essence of any debate or discussion can be captured using these three elements. Let me try to convince you of this claim via a vignette from a discussion on risk.

 The Risk – a Dialogue Mapping vignette

“Morning all,” said Rick, “I know you’re all busy people so I’d like to thank you for taking the time to attend this risk identification session for Project X.  The objective is to list the risks that we might encounter on the project and see if we can identify possible mitigation strategies.”

He then asked if there were any questions. The head waggles around the room indicated there were none.

“Good. So here’s what we’ll do,” he continued. “I’d like you all to work in pairs and spend 10 minutes thinking of all possible risks and then another 5 minutes prioritising. Work with the person on your left. You can use the flipcharts in the breakout area at the back if you wish to.”

Twenty minutes later, most people were done and back in their seats.

“OK, it looks as though most people are done…Ah, Joe, Mike have you guys finished?” The two were still working on their flip-chart at the back.

“Yeah, be there in a sec,” replied Mike, as he tore off the flip-chart page.

“Alright,” continued Rick, after everyone had settled in. “What I’m going to do now is ask you all to list your top three risks. I’d also like you to tell me why they are significant and what your mitigation strategies for them are.” He paused for a second and asked, “Everyone OK with that?”

Everyone nodded, except Helen who asked, “isn’t it important that we document the discussion?”

“I’m glad you brought that up. I’ll make notes as we go along, and I’ll do it in a way that everyone can see what I’m writing. I’d like you all to correct me if you feel I haven’t understood what you’re saying. It is important that  my notes capture your issues, ideas and arguments accurately.”

Rick turned on the data projector, fired up Compendium and started a new map. “Our aim today is to identify the most significant risks on the project – this is our root question,” he said, as he created a question node. “OK, so who would like to start?”



Figure 3: The root question


“Sure, we’ll start,” said Joe easily. “Our top risk is that the schedule is too tight. We’ll hit the deadline only if everything goes well, and everyone knows that they never do.”

“OK,” said Rick, as he entered Joe and Mike’s risk as an idea connecting to the root question. “You’ve also mentioned a point that supports your contention that this is a significant risk – there is absolutely no buffer.” Rick typed this in as a pro connecting to the risk. He then looked up at Joe and asked, “have I understood you correctly?”

“Yes,” confirmed Joe.


Figure 4: Map in progress


“That’s pretty cool,” said Helen from the other end of the table, “I like the notation, it makes reasoning explicit. Oh, and I have another point in support of Joe and Mike’s risk – the deadline was imposed by management before the project was planned.”

Rick began to enter the point…

“Oooh, I’m not sure we should put that down,” interjected Rob from compliance. “I mean, there’s not much we can do about that can we?”

…Rick finished the point as Rob was speaking.


Figure 5: Two pros for the idea


“I hear you Rob, but I think  it is important we capture everything that is said,” said Helen.

“I disagree,” said Rob. “It will only annoy management.”

“Slow down guys,” said Rick, “I’m going to capture Rob’s objection as “this is a management-imposed constraint rather than a risk”. Are you OK with that, Rob?”

Rob nodded his assent.


Figure 6: A con enters the picture

“I think it is important we articulate what we really think, even if we can’t do anything about it,” continued Rick. “There’s no point going through this exercise if we don’t say what we really think. I want to stress this point, so I’m going to add honesty and openness as ground rules for the discussion. Since ground rules apply to the entire discussion, they connect directly to the primary issue being discussed.”

Figure 7: A “criterion” that applies to the analysis of all risks


“OK, so any other points that anyone would like to add to the ones made so far?” queried Rick as he finished typing.

He looked up. Most of the people seated round the table shook their heads indicating that there weren’t.

“We haven’t spoken about mitigation strategies. Any ideas?” asked Rick, as he created a question node marked “Mitigation?” connecting to the risk.


Figure 8: Mitigating the risk

“Yeah well, we came up with one,” said Mike. “We think the only way to reduce the time pressure is to cut scope.”

“OK,” said Rick, entering the point as an idea connecting to the “Mitigation?” question. “Did you think about how you are going to do this?” He entered the question “How?” connecting to Mike’s point.

Figure 9: Mitigating the risk


“That’s the problem,” said Joe, “I don’t know how we can convince management to cut scope.”

“Hmmm…I have an idea,” said Helen slowly…

“We’re all ears,” said Rick.

“…Well…you see a large chunk of time has been allocated for building real-time interfaces to assorted systems – HR, ERP etc. I don’t think these need to be real-time – they could be done monthly…and if that’s the case, we could schedule a simple job or even do them manually for the first few months. We can push those interfaces to phase 2 of the project, well into next year.”

There was a silence in the room as everyone pondered this point.

“You know, I think that might actually work, and would give us an extra month…maybe even six weeks for the more important upstream stuff,” said Mike. “Great idea, Helen!”

“Can I summarise this point as – identify interfaces that can be delayed to phase 2?” asked Rick, as he began to type it in as a mitigation strategy. “…and if you and Mike are OK with it, I’m going to combine it with the ‘Cut Scope’ idea to save space.”

“Yep, that’s fine,” said Helen. Mike nodded OK.

Rick deleted the “How?” node connecting to the “Cut scope” idea, and edited the latter to capture Helen’s point.

Figure 10: Mitigating the risk

“That’s great in theory, but who is going to talk to the affected departments? They will not be happy,” asserted Rob. One could always count on compliance to throw in a reality check.

“Good point,” said Rick as he typed that in as a con, “and I’ll take the responsibility of speaking to the department heads about this,” he continued, entering the idea into the map and marking it as an action point for himself. “Is there anything else that Joe, Mike…or anyone else would like to add here?” he asked, as he finished.

Figure 11: Completed discussion of first risk (click to view larger image)

“Nope,” said Mike, “I’m good with that.”

“Yeah me too,” said Helen.

“I don’t have anything else to say about this point,” said Rob, “ but it would be great if you could give us a tutorial on this technique. I think it could be useful to summarise the rationale behind our compliance regulations. Folks have been complaining that they don’t understand the reasoning behind some of our rules and regulations. ”

“I’d be interested in that too,” said Helen, “I could use it to clarify user requirements.”

“I’d be happy to do a session on the IBIS notation and dialogue mapping next week. I’ll check your availability and send an invite out… but for now, let’s focus on the task at hand.”

The discussion continued…but the fly on the wall was no longer there to record it.


I hope this little vignette illustrates how IBIS and dialogue mapping can aid collaborative decision-making / problem solving by making diverse viewpoints explicit. That said, this is a story, and the problem with stories is that things  go the way the author wants them to.  In real life, conversations can go off on unexpected tangents, making them really hard to map. So, although it is important to gain expertise in using the software, it is far more important to practice mapping live conversations. The latter is an art that requires considerable practice. I recommend reading Paul Culmsee’s series on the practice of dialogue mapping or <advertisement> Chapter 14 of The Heretic’s Guide to Best Practices</advertisement> for more on this point.

That said, there are many other ways in which IBIS can be used that do not require as much skill. Some of these include mapping the central points in written arguments (what’s sometimes called issue mapping) and even working through decisions on personal matters.

To sum up: IBIS is a powerful means to clarify options and lay them out in an easy-to-follow visual format. Often this is all that is required to catalyse a group decision.

