Eight to Late

Catch-22 and the paradoxes of organisational life


“You mean there’s a catch?”

“Sure there’s a catch”, Doc Daneeka replied. “Catch-22. Anyone who wants to get out of combat duty isn’t really crazy.”

There was only one catch and that was Catch-22, which specified that a concern for one’s own safety in the face of dangers that were real and immediate was the process of a rational mind. Orr was crazy and could be grounded. All he had to do was ask; and as soon as he did, he would no longer be crazy and would have to fly more missions…”   Joseph Heller, Catch-22

Introduction

The term Catch-22 was coined by Joseph Heller in his eponymous satirical novel, published in 1961. As the quote above illustrates, the term refers to a paradoxical situation caused by the application of contradictory rules. Catch-22 situations are common in large organisations of all kinds, not just the military (the setting of the novel) – so much so that they have attracted some scholarly attention in the half century since the novel was first published (see this paper or this one, for example).

Although Heller uses Catch-22 situations to highlight the absurdities of bureaucracies in a humorous way, in real life such situations can be deeply troubling for the people caught up in them. In a paper published in 1956, the polymath Gregory Bateson and his colleagues suggested that such situations can cause people to behave in ways that are symptomatic of schizophrenia. The paper introduces the notion of a double bind: a dilemma arising from an individual receiving two or more messages that contradict each other. In simple terms, then, a double bind is a Catch-22.

In this post, I draw on Bateson’s  double bind theory to get some insights into Catch-22 situations in organisations.

Double bind theory

The basic elements of a double bind situation are as follows:

  1. Two or more individuals, one of whom is a victim – i.e. the individual who experiences the dilemma described below.
  2. A primary rule that keeps the victim fearful of the consequences of doing (or not doing) something. This rule typically takes the form, “If you do x then you will be punished” or “If you do not do x then you will be punished.”
  3. A secondary rule that conflicts with the primary rule, but at a more abstract level. This rule, which is usually implicit, typically takes the form, “Do not question the rationale behind x.”
  4. A tertiary rule that prevents the victim from escaping the situation.
  5. Repeated experiences of the primary and secondary rules (2 and 3 above).

A simple example (quoted from this article) serves to illustrate the above in a real-life situation:

One example of double bind communication is a mother giving her child the message: “Be spontaneous.” If the child acts spontaneously, he is not acting spontaneously because he is following his mother’s direction. It’s a no-win situation for the child. If a child is subjected to this kind of communication over a long period of time, it’s easy to see how he could become confused.

Here the injunction to “Be spontaneous” is contradicted by the more implicit rule that “one cannot be spontaneous on demand.”  It is important to note that the primary and secondary (implicit) rules are at different logical levels  –  the first is about an action, whereas the second is about the nature of all such actions. This is typical of a double bind situation.

The paradoxical aspects of double binds can sometimes be useful as they can lead to creative solutions arising from the victim “stepping outside the situation”. The following example from Bateson’s paper illustrates the point:

The Zen Master attempts to bring about enlightenment in his pupil in various ways. One of the things he does is to hold a stick over the pupil’s head and say fiercely, “If you say this stick is real, I will strike you with it. If you say this stick is not real, I will strike you with it. If you don’t say anything, I will strike you with it.”… The Zen pupil might reach up and take the stick away from the Master–who might accept this response.

This is an important point which we’ll return to towards the end of  this piece.

Double binds in organisations

Double bind situations are ubiquitous in organisations.   I’ll illustrate this by drawing on a couple of examples I have written about earlier on this blog.

The paradox of learning organisations

This section draws on a post I wrote a while ago. In the introduction to that post I stated that:

The term learning organisation refers to an organisation that continually modifies its processes based on observation and experience, thus adapting to changes in its internal and external environment. Ever since Peter Senge coined the term in his book, The Fifth Discipline, assorted consultants and academics have been telling us that although a learning organisation is a utopian ideal, it is one worth striving for. The reality, however, is that most organisations that undertake the journey actually end up in a place far removed from this ideal. Among other things, the journey may expose managerial hypocrisies that contradict the very notion of a learning organisation.

Starkly put, the problem arises from the fact that in a true learning organisation, employees will  inevitably start to question things that management would rather they didn’t.  Consider the following story, drawn from this paper on which the post is based:

…a multinational company intending to develop itself as a learning organization ran programmes to encourage managers to challenge received wisdom and to take an inquiring approach. Later, one participant attended an awayday, where the managing director of his division circulated among staff over dinner. The participant raised a question about the approach the MD had taken on a particular project; with hindsight, had that been the best strategy? `That was the way I did it’, said the MD. `But do you think there was a better way?’, asked the participant. `I don’t think you heard me’, replied the MD. `That was the way I did it’. `That I heard’, continued the participant, `but might there have been a better way?’. The MD fixed his gaze on the participants’ lapel badge, then looked him in the eye, saying coldly, `I will remember your name’, before walking away.

Of course,  a certain kind of learning  occurred here:  the employee learnt that certain questions were taboo, in stark contrast to the openness that was being preached from the organisational pulpit.  The double bind here is evident:  feel free to question and challenge everything…except what management deems to be out of bounds.  The takeaway for employees is that, despite all the rhetoric of organisational learning, certain things should not  be challenged. I think it is safe to say that this was probably not the kind of learning that was intended by those who initiated the program.

The paradoxes of change

In a post on the  paradoxes of organizational change, I wrote that:

An underappreciated facet of organizational change is that it is inherently paradoxical. For example, although it is well known that such changes inevitably have unintended consequences that are harmful, most organisations continue to implement change initiatives in a manner that assumes  complete controllability with the certainty of achieving solely beneficial outcomes.

As pointed out in this paper, there are three types of paradoxes that can arise when an organisation is restructured. The first is that during the transition, people are caught between the demands of their old and new roles. This is exacerbated by the fact that transition periods are often much longer than expected. This paradox of performing in turn leads to a paradox of belonging – people become uncertain about where their loyalties (ought to) lie.

Finally, there is a paradox of organising, which refers to the gap between the rhetoric and reality of change. The paper mentioned above has a couple of nice examples. One study described how,

“…friendly banter in meetings and formal documentation [promoted] front-stage harmony, while more intimate conversations and unit meetings [intensified] backstage conflict.” Another spoke of a situation in which, “…change efforts aimed at increasing employee participation [can highlight] conflicting practices of empowerment and control. In particular, the rhetoric of participation may contradict engrained organizational practices such as limited access to information and hierarchical authority for decision making…”

Indeed, the gap between the intent and actuality of change initiatives makes double binds inevitable.

Discussion

I suspect the situations described above will be familiar to people working in a corporate environment. The question is: what can one do if one is on the receiving end of such a Catch-22?

The main thing is to realise that a double-bind arises because one perceives the situation to be so. That is, the person experiencing the situation has chosen to interpret it  as a double bind. To be sure, there are usually factors that influence the choice – things such as job security, for example – but the fact is that it is a choice that can be changed if one sees things in a different light. Escaping the double bind is then a “simple” matter of reframing the situation.

Here is where the notion of mindfulness is particularly relevant. In brief, mindfulness is “the intentional, accepting and non-judgemental focus of one’s attention on the emotions, thoughts and sensations occurring in the present moment.” Like the Zen pupil who takes the stick away from the Master, a person who makes a calm, non-judgemental appraisal of a double-bind situation might discover courses of action that had been obscured by their fears. Indeed, the realisation that one has more choices than one thinks is in itself a liberating discovery.

It is important to emphasise that the actual course of action one selects in the end matters less than the realisation that one’s reactions to such situations are largely under one’s own control.

In closing – reframe it!

Organisational life is rife with Catch-22s. Most of us cannot avoid being caught up in them, but we can choose how we react to them. This is largely a matter of reframing them in ways that open up new avenues for action, a point that brings to mind this passage from Catch-22 (the book):

“Why don’t you use some sense and try to be more like me? You might live to be a hundred and seven, too.”

“Because it’s better to die on one’s feet than live on one’s knees,” Nately retorted with triumphant and lofty conviction. “I guess you’ve heard that saying before.”

“Yes, I certainly have,” mused the treacherous old man, smiling again. “But I’m afraid you have it backward. It is better to live on one’s feet than die on one’s knees. That is the way the saying goes.”

“Are you sure?” Nately asked with sober confusion. “It seems to make more sense my way.”

“No, it makes more sense my way. Ask your friends.”

And that, I reckon, is as brilliant an example of reframing as I have ever come across.

Written by K

June 22, 2015 at 9:54 pm

The Risk – a dialogue mapping vignette


Foreword

Last week, my friend Paul Culmsee conducted an internal workshop in my organisation on the theme of collaborative problem solving. Dialogue mapping is one of the tools he introduced during the workshop. This piece, primarily intended as a follow-up for attendees, is an introduction to dialogue mapping via a vignette that illustrates its practice (see this post for another one). I’m publishing it here as I thought it might be useful for those who wish to understand what the technique is about.

Dialogue mapping uses a notation called Issue Based Information System (IBIS), which I have discussed at length in this post. For completeness, I’ll begin with a short introduction to the notation and then move on to the vignette.

A crash course in IBIS

The IBIS notation consists of the following three elements:

  1. Issues (or questions): these are issues that are being debated. Typically, issues are framed as questions along the lines of “What should we do about X?” where X is the issue of interest to a group. For example, in the case of a group of executives, X might be a rapidly changing market condition, whereas in the case of a group of IT people, X could be an ageing system that is hard to replace.
  2. Ideas (or positions): these are responses to questions. For example, one of the ideas offered by the IT group above might be to replace the said system with a newer one. Typically, the whole set of ideas that respond to an issue in a discussion represents the spectrum of participant perspectives on the issue.
  3. Arguments: these can be Pros (arguments for) or Cons (arguments against) an idea. The complete set of arguments that respond to an idea represents the multiplicity of viewpoints on it.

Compendium is a freeware tool that can be used to create IBIS maps– it can be downloaded here.

In Compendium, IBIS elements are represented as nodes as shown in Figure 1: issues are represented by blue-green question marks, positions by yellow light bulbs, pros by green + signs and cons by red – signs. Compendium supports a few other node types, but these are not part of the core IBIS notation. Nodes can be linked only in ways specified by the IBIS grammar, as I discuss next.

Figure 1: IBIS node types

The IBIS grammar can be summarized in three simple rules:

  1. Issues can be raised anew or can arise from other issues, positions or arguments. In other words, any IBIS element can be questioned. In Compendium notation: a question node can connect to any other IBIS node.
  2. Ideas can only respond to questions – i.e. in Compendium, “light bulb” nodes can only link to question nodes. The arrow pointing from the idea to the question depicts the “responds to” relationship.
  3. Arguments can only be associated with ideas – i.e. in Compendium, “+” and “–” nodes can only link to “light bulb” nodes (with arrows pointing to the latter).

The legal links are summarized in Figure 2 below.

Figure 2: Legal links in IBIS

 

…and that’s pretty much all there is to it.

The interesting (and powerful) aspect of IBIS is that the essence of any debate or discussion can be captured using these three elements. Let me try to convince you of this claim via a vignette from a discussion on risk.

 The Risk – a Dialogue Mapping vignette

“Morning all,” said Rick, “I know you’re all busy people so I’d like to thank you for taking the time to attend this risk identification session for Project X.  The objective is to list the risks that we might encounter on the project and see if we can identify possible mitigation strategies.”

He then asked if there were any questions. The head waggles around the room indicated there were none.

“Good. So here’s what we’ll do,” he continued. “I’d like you all to work in pairs and spend 10 minutes thinking of all possible risks and then another 5 minutes prioritising them. Work with the person on your left. You can use the flipcharts in the breakout area at the back if you wish to.”

Twenty minutes later, most people were done and back in their seats.

“OK, it looks as though most people are done…Ah, Joe, Mike have you guys finished?” The two were still working on their flip-chart at the back.

“Yeah, be there in a sec,” replied Mike, as he tore off the flip-chart page.

“Alright,” continued Rick, after everyone had settled in. “What I’m going to do now is ask you all to list your top three risks. I’d also like you tell me why they are significant and your mitigation strategies for them.” He paused for a second and asked, “Everyone OK with that?”

Everyone nodded, except Helen, who asked, “Isn’t it important that we document the discussion?”

“I’m glad you brought that up. I’ll make notes as we go along, and I’ll do it in a way that everyone can see what I’m writing. I’d like you all to correct me if you feel I haven’t understood what you’re saying. It is important that  my notes capture your issues, ideas and arguments accurately.”

Rick turned on the data projector, fired up Compendium and started a new map.  “Our aim today is to identify the most significant risks on the project – this is our root question”  he said, as he created a question node. “OK, so who would like to start?”

 

 

Figure 3: The root question

 

“Sure, we’ll start,” said Joe easily. “Our top risk is that the schedule is too tight. We’ll hit the deadline only if everything goes well, and everyone knows that they never do.”

“OK,” said Rick, as he entered Joe and Mike’s risk as an idea connecting to the root question. “You’ve also mentioned a point that supports your contention that this is a significant risk – there is absolutely no buffer.” Rick typed this in as a pro connecting to the risk. He then looked up at Joe and asked, “Have I understood you correctly?”

“Yes,” confirmed Joe.

 

Figure 4: Map in progress

 

“That’s pretty cool,” said Helen from the other end of the table, “I like the notation, it makes reasoning explicit. Oh, and I have another point in support of Joe and Mike’s risk – the deadline was imposed by management before the project was planned.”

Rick began to enter the point…

“Oooh, I’m not sure we should put that down,” interjected Rob from compliance. “I mean, there’s not much we can do about that, can we?”

…Rick finished the point as Rob was speaking.

 

Figure 5: Two pros for the idea

 

“I hear you Rob, but I think  it is important we capture everything that is said,” said Helen.

“I disagree,” said Rob. “It will only annoy management.”

“Slow down, guys,” said Rick. “I’m going to capture Rob’s objection as ‘this is a management-imposed constraint rather than a risk’. Are you OK with that, Rob?”

Rob nodded his assent.

 

Figure 6: A con enters the picture

“I think it is important we articulate what we really think, even if we can’t do anything about it,” continued Rick. “There’s no point going through this exercise if we don’t say what we really think. I want to stress this point, so I’m going to add honesty and openness as ground rules for the discussion. Since ground rules apply to the entire discussion, they connect directly to the primary issue being discussed.”

Figure 7: A “criterion” that applies to the analysis of all risks

 

“OK, so any other points that anyone would like to add to the ones made so far?” queried Rick as he finished typing.

He looked up. Most of the people seated round the table shook their heads indicating that there weren’t.

“We haven’t spoken about mitigation strategies. Any ideas?” asked Rick, as he created a question node marked “Mitigation?” connecting to the risk.

 

Figure 8: Mitigating the risk

“Yeah well, we came up with one,” said Mike. “We think the only way to reduce the time pressure is to cut scope.”

“OK,” said Rick, entering the point as an idea connecting to the “Mitigation?” question. “Did you think about how you are going to do this?” He entered the question “How?” connecting to Mike’s point.

Figure 9: Mitigating the risk

 

“That’s the problem,” said Joe, “I don’t know how we can convince management to cut scope.”

“Hmmm…I have an idea,” said Helen slowly…

“We’re all ears,” said Rick.

“…Well…you see a large chunk of time has been allocated for building real-time interfaces to assorted systems – HR, ERP etc. I don’t think these need to be real-time – they could be done monthly…and if that’s the case, we could schedule a simple job or even do them manually for the first few months. We can push those interfaces to phase 2 of the project, well into next year.”

There was a silence in the room as everyone pondered this point.

“You know, I think that might actually work, and would give us an extra month…maybe even six weeks for the more important upstream stuff,” said Mike. “Great idea, Helen!”

“Can I summarise this point as – identify interfaces that can be delayed to phase 2?” asked Rick, as he began to type it in as a mitigation strategy. “…and if you and Mike are OK with it, I’m going to combine it with the ‘Cut Scope’ idea to save space.”

“Yep, that’s fine,” said Helen. Mike nodded OK.

Rick deleted the “How?” node connecting to the “Cut scope” idea, and edited the latter to capture Helen’s point.

Figure 10: Mitigating the risk

“That’s great in theory, but who is going to talk to the affected departments? They will not be happy,” asserted Rob. One could always count on compliance to throw in a reality check.

“Good point,” said Rick as he typed that in as a con, “and I’ll take the responsibility of speaking to the department heads about this,” he continued, entering the idea into the map and marking it as an action point for himself. “Is there anything else that Joe, Mike…or anyone else would like to add here?” he added, as he finished.

Figure 11: Completed discussion of first risk (click to view larger image)

“Nope,” said Mike, “I’m good with that.”

“Yeah me too,” said Helen.

“I don’t have anything else to say about this point,” said Rob, “ but it would be great if you could give us a tutorial on this technique. I think it could be useful to summarise the rationale behind our compliance regulations. Folks have been complaining that they don’t understand the reasoning behind some of our rules and regulations. ”

“I’d be interested in that too,” said Helen, “I could use it to clarify user requirements.”

“I’d be happy to do a session on the IBIS notation and dialogue mapping next week. I’ll check your availability and send an invite out… but for now, let’s focus on the task at hand.”

The discussion continued…but the fly on the wall was no longer there to record it.

Afterword

I hope this little vignette illustrates how IBIS and dialogue mapping can aid collaborative decision-making / problem solving by making diverse viewpoints explicit. That said, this is a story, and the problem with stories is that things  go the way the author wants them to.  In real life, conversations can go off on unexpected tangents, making them really hard to map. So, although it is important to gain expertise in using the software, it is far more important to practice mapping live conversations. The latter is an art that requires considerable practice. I recommend reading Paul Culmsee’s series on the practice of dialogue mapping or <advertisement> Chapter 14 of The Heretic’s Guide to Best Practices</advertisement> for more on this point.

That said, there are many other ways in which IBIS can be used that do not require as much skill. Some of these include mapping the central points in written arguments (what’s sometimes called issue mapping) and even mapping out decisions on personal matters.

To sum up: IBIS is a powerful means to clarify options and lay them out in an easy-to-follow visual format. Often this is all that is required to catalyse a group decision.

A gentle introduction to text mining using R


Preamble

This article is based on my exploration of the basic text mining capabilities of  R, the open source statistical software. It is intended primarily as a tutorial  for novices in text mining as well as R. However, unlike conventional tutorials,  I spend a good bit of time setting the context by describing the problem that led me to text mining and thence  to R. I also talk about the limitations of  the techniques I describe,  and point out directions for further exploration for those who are interested. Indeed, I’ll likely explore some of these myself in future articles.

If you have no time and/or wish to cut to the chase, please go straight to the section entitled, Preliminaries – installing R and RStudio. If you have already installed R and have worked with it, you may want to stop reading as I doubt there’s anything I can tell you that you don’t already know :-)

A couple of warnings are in order before we proceed. R and the text mining options we explore below are open source software. Version differences in open source can be significant and are not always documented in a way that corporate IT types are used to. Indeed, I was tripped up by such differences in an earlier version of this article (now revised). So, just for the record, the examples below were run on version 3.2.0 of R and version 0.6-1 of the tm (text mining) package for R. A second point follows from this: as is evident from its version number, the tm package is still in the early stages of its evolution. As a result – and we will see this below – things do not always work as advertised. So assume nothing, and inspect the results in detail at every step. Be warned that I do not always do this below, as my aim is introduction rather than accuracy.

Background and motivation

Traditional data analysis is based on the relational model in which data is stored in tables. Within tables, data is stored in rows – each row representing a  single record of an entity of interest (such as a customer or an account). The columns represent attributes of the entity. For example, the customer table might consist of columns such as name, street address, city, postcode, telephone number .  Typically these  are defined upfront, when the data model is created. It is possible to add columns after the fact, but this tends to be messy because one also has to update existing rows with information pertaining to the added attribute.

As long as one asks for information that is based only on existing attributes – an example being, “give me a list of customers based in Sydney” – a database analyst can use Structured Query Language (the de facto language of relational databases) to get an answer. A problem arises, however, if one asks for information that is based on attributes that are not included in the database. An example in the above case would be: “give me a list of customers who have made a complaint in the last twelve months.”

As a result of the above, many data modelers will include  a “catch-all”  free text column that can be used to capture additional information in an ad-hoc way. As one might imagine, this column will often end up containing several lines, or even paragraphs of text that are near impossible to analyse with the tools available in relational databases.

(Note: for completeness I should add that most database vendors have incorporated text mining capabilities into their products. Indeed, many  of them now include R…which is another good reason to learn it.)

My story

Over the last few months, when time permits, I’ve been doing an in-depth exploration of  the data captured by my organisation’s IT service management tool.  Such tools capture all support tickets that are logged, and track their progress until they are closed.  As it turns out,  there are a number of cases where calls are logged against categories that are too broad to be useful – the infamous catch-all category called “Unknown.”  In such cases, much of the important information is captured in a free text column, which is difficult to analyse unless one knows what one is looking for. The problem I was grappling with was to identify patterns and hence define sub-categories that would enable support staff to categorise these calls meaningfully.

One way to do this  is  to guess what the sub-categories might be…and one can sometimes make pretty good guesses if one knows the data well enough.  In general, however, guessing is a terrible strategy because one does not know what one does not know. The only sensible way to extract subcategories is to  analyse  the content of the free text column systematically. This is a classic text mining problem.

Now, I knew a bit about the theory of text mining, but had little practical experience with it. So the logical place for me to  start was to look for a suitable text mining tool. Our vendor (who shall remain unnamed) has a “Rolls-Royce” statistical tool that has a good text mining add-on. We don’t have licenses for the tool, but the vendor was willing to give us a trial license for a few months…with the understanding that this was on an intent-to-purchase basis.

Not being keen on committing to a purchase upfront, I started looking at open source options. While doing so, I stumbled on an interesting paper by Ingo Feinerer that describes a text mining framework for the R environment. Now, I knew about R, and was vaguely aware that it offered text mining capabilities, but I’d not looked into the details. Anyway, I started reading the paper…and kept going until I finished.

As I read, I realised that this could be the answer to my problems. Even better, it would not require me to trade in assorted limbs for a license.

I decided to give it a go.

Preliminaries – installing R and RStudio

R can be downloaded from the R Project website. There is a Windows version available, which installed painlessly on my laptop. Commonly encountered installation issues are answered in the (very helpful)  R for Windows FAQ.

RStudio is an integrated development environment (IDE) for R. There is a commercial version of the product, but there is also a free open source version. In what follows, I’ve used the free version. Like R, RStudio installs painlessly and also detects your R installation.

RStudio has  the following panels:

  • A script editor in which you can create R scripts (top left). You can also open a new script editor window by going to File > New File > RScript.
  • The console where you can execute R commands/scripts (bottom left)
  • Environment and history (top right)
  • Files in the current working directory, installed R packages, plots and a help display screen (bottom right).

Check out this short video for a quick introduction to RStudio.

You can access help anytime (within both R and RStudio) by typing a question mark before a command. Exercise: try this by typing ?getwd() and ?setwd() in the console.

I should reiterate that the installation process for both products was seriously simple…and seriously impressive.  “Rolls-Royce” business intelligence vendors could take a lesson from  that…in addition to taking a long hard look at the ridiculous prices they charge.

There is another small step before we move on to the fun stuff. Text mining and certain plotting packages are not installed by default, so one has to install them manually. The relevant packages are:

  1. tm – the text mining package (see documentation). Also check out this excellent introductory article on tm.
  2. SnowballC – required for stemming (explained below).
  3. ggplot2 – plotting capabilities (see documentation)
  4. wordcloud – which is self-explanatory (see documentation) .

(Warning for Windows users: R is case-sensitive so Wordcloud != wordcloud)

The simplest way to install packages is to use RStudio’s built in capabilities (go to Tools > Install Packages in the menu). If you’re working on Windows 7 or 8, you might run into a permissions issue when installing packages. If you do, you might find this advice from the R for Windows FAQ helpful.
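Alternatively, you can install the same packages from the R console using the install.packages() function. The one-liner below is an illustrative equivalent (it assumes you have an internet connection and a CRAN mirror configured):

#install the packages used in this article from CRAN
install.packages(c("tm", "SnowballC", "ggplot2", "wordcloud"))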

Preliminaries – The example dataset

The data I had from our service management tool isn’t  the best dataset to learn with as it is quite messy. But then, I  have a reasonable data source in my virtual backyard:  this blog. To this end, I converted all posts I’ve written since Dec 2013 into plain text form (30 posts in all). You can download the zip file of these here (Note: In case you’re wondering about the URL (orafusion): WordPress does not allow uploads of zip files so I had to host it on one of my other sites).

I suggest you create a new folder – called, say, TextMining – and unzip the files in that folder.

That done, we’re good to start…

Preliminaries – Basic Navigation

A few things to note before we proceed:

  • In what follows, I enter the commands directly in the console. However,  here’s a little RStudio tip that you may want to consider: you can enter an R command or code fragment in the script editor and then hit Ctrl-Enter  (i.e. hit the Enter key while holding down the Control key) to copy the line to the console.  This will enable you to save the script as you go along.
  • In the code snippets below, the functions / commands to be typed in the R console are in blue font.   The output is in black. I will also denote references to  functions / commands in the body of the article by italicising them as in “setwd()”.  Be aware that I’ve omitted the command prompt “>” in the code snippets below!
  • It is best not to cut-n-paste commands directly from the article as quotes are sometimes not rendered correctly. A text file of all the code in this article is available here.

The > prompt in the RStudio console indicates that R is ready to process commands.

To see the current working directory type in getwd() and hit return. You’ll see something like:

getwd()
[1] “C:/Users/Documents”

 

The exact output will of course depend on your working directory. Note the forward slashes in the path. This is because of R’s Unix heritage (backslash is an escape character in R). So, here’s how you would change the working directory to C:\Users:

setwd("C:/Users")

You can now use getwd() to check that setwd() has done what it should.

getwd()
[1]”C:/Users”

 

I won’t say much more here about R  as I want to get on with the main business of the article.  Check out this very short introduction to R for a quick crash course.

Loading data into R

Start RStudio and open the TextMining project you created earlier.

The next step is to load the tm package as this is not loaded by default.  This is done using the library() function like so:

library(tm)
Loading required package: NLP

 

Dependent packages are loaded automatically – in this case the dependency is on the NLP (natural language processing) package.

Next, we need to create a collection of documents (technically referred to as a Corpus) in the R environment. This basically involves loading the files created in the TextMining folder into a Corpus object. The tm package provides the Corpus() function to do this. There are several ways to  create a Corpus (check out the online help using ? as explained earlier). In a nutshell, the  Corpus() function can read from various sources including a directory. That’s the option we’ll use:

#Create Corpus
docs <- Corpus(DirSource("C:/Users/Kailash/Documents/TextMining"))

 

At the risk of stating the obvious, you will need to tailor this path as appropriate.

A couple of things to note in the above. Any line that starts with a # is a comment, and the “<-“ tells R to assign the result of the command on the right hand side to the variable on the left hand side. In this case the Corpus object created is stored in a variable called docs.  One can also use the equals sign (=)  for assignment if one wants to.
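To make the assignment point concrete, here is a trivial illustrative snippet (not part of the text mining workflow itself):

#the two lines below do the same thing: assign the value 10 to x
x <- 10
x = 10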

Type in docs to see some information about the newly created corpus:

docs
<<VCorpus>>
Metadata: corpus specific: 0, document level (indexed): 0
Content: documents: 30

 

The summary() function gives more details, including a complete listing of files…but it isn’t particularly enlightening.  Instead, we’ll examine a particular document in the corpus.

#inspect a particular document
writeLines(as.character(docs[[30]]))
…output not shown…

This prints the entire content of the 30th document in the corpus to the console.

Pre-processing

Data cleansing, though tedious, is perhaps the most important step in text analysis.   As we will see, dirty data can play havoc with the results.  Furthermore, as we will also see, data cleaning is invariably an iterative process as there are always problems that are overlooked the first time around.

The tm package offers a number of transformations that ease the tedium of cleaning data. To see the available transformations  type getTransformations() at the R prompt:

> getTransformations()
[1] “removeNumbers” “removePunctuation” “removeWords” “stemDocument” “stripWhitespace”

 

Most of these are self-explanatory. I’ll explain those that aren’t as we go along.

There are a few preliminary clean-up steps we need to do before we use these powerful transformations. If you inspect some documents in the corpus (and you know how to do that now), you will notice that I have some quirks in my writing. For example, I often use colons and hyphens without spaces between the words separated by them. Using the removePunctuation transform  without fixing this will cause the two words on either side of the symbols  to be combined. Clearly, we need to fix this prior to using the transformations.

To fix the above, one has to create a custom transformation. The tm package provides the ability to do this via the content_transformer function. This function takes a function as input; the input function should specify what transformation needs to be done. In this case, the input function would be one that replaces all instances of a character by spaces. As it turns out, the gsub() function does just that.

Here is the R code to build the content transformer, which  we will call toSpace:

#create the toSpace content transformer
toSpace <- content_transformer(function(x, pattern) {return (gsub(pattern, " ", x))})

Now we can use this content transformer to eliminate colons and hyphens like so:

docs <- tm_map(docs, toSpace, "-")
docs <- tm_map(docs, toSpace, ":")

 

Inspect random sections of the corpus to check that the result is what you intend (use writeLines as shown earlier; a quick way to do this is sketched below). To reiterate something I mentioned in the preamble, it is good practice to inspect a subset of the corpus after each transformation.
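For example, here is one way (an illustrative snippet rather than something from the original workflow) to pull up a randomly chosen document for a visual check:

#inspect a randomly chosen document; sample() picks a random index between 1 and the corpus size
writeLines(as.character(docs[[sample(length(docs), 1)]]))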

If it all looks good, we can now apply the removePunctuation transformation. This is done as follows:

#Remove punctuation – replace punctuation marks with " "
docs <- tm_map(docs, removePunctuation)

 

Inspecting the corpus reveals that several  “non-standard” punctuation marks have not been removed. These include the special quote marks and a space-hyphen combination. These can be removed using our custom content transformer, toSpace. Note that you might want to copy-n-paste these symbols directly from the relevant text file to ensure that they are accurately represented in toSpace.

docs <- tm_map(docs, toSpace, “‘”)
docs <- tm_map(docs, toSpace, “‘”)
docs <- tm_map(docs, toSpace, ” -“)

 

Inspect the corpus again to ensure that the offenders have been eliminated. This is also a good time to check for any other special symbols that may need to be removed manually.

If all is well, you can move  to the next step which is  to:

  • Convert the corpus to lower case
  • Remove all numbers.

Since R is case sensitive, “Text” is not equal to “text” – and hence the rationale for converting to a standard case.  However, although there is a tolower transformation, it is not a part of the standard tm transformations (see the output of getTransformations() in the previous section). For this reason, we have to convert tolower into a transformation that can handle a corpus object properly. This is done with the help of our new friend, content_transformer.

Here’s the relevant code:

#Transform to lower case (need to wrap in content_transformer)
docs <- tm_map(docs,content_transformer(tolower))

Text analysts are typically not interested in numbers since these do not usually contribute to the meaning of the text. However, this may not always be so. For example, it is definitely not the case if one is interested in getting a count of the number of times a particular year appears in a corpus. Number removal does not need to be wrapped in content_transformer as removeNumbers is a standard transformation in tm.

#Strip digits (std transformation, so no need for content_transformer)
docs <- tm_map(docs, removeNumbers)

Once again, be sure to inspect the corpus before proceeding.

The next step is to remove common words from the text. These include words such as articles (a, an, the), conjunctions (and, or, but etc.), common verbs (is) and qualifiers (yet, however etc.). The tm package includes a standard list of such words – stop words, as they are referred to. We remove stop words using the standard removeWords transformation like so:

#remove stopwords using the standard list in tm
docs <- tm_map(docs, removeWords, stopwords("english"))

 

Finally, we remove all extraneous whitespaces using the stripWhitespace transformation:

#Strip whitespace (cosmetic?)
docs <- tm_map(docs, stripWhitespace)

 

Stemming

Typically a large corpus will contain  many words that have a common root – for example: offer, offered and offering.  Stemming is the process of reducing such related words to their common root, which in this case would be the word offer.

Simple stemming algorithms (such as the one in tm) are relatively crude: they work by chopping off the ends of words. This can cause problems: for example, the words mate and mating might be reduced to mat instead of mate.  That said, the overall benefit gained from stemming more than makes up for the downside of such special cases.

To see what stemming does, let’s take a look at the  last few lines  of the corpus before and after stemming.  Here’s what the last bit looks  like prior to stemming (note that this may differ for you, depending on the ordering of the corpus source files in your directory):

writeLines(as.character(docs[[30]]))
flexibility eye beholder action increase organisational flexibility say redeploying employees likely seen affected move constrains individual flexibility dual meaning characteristic many organizational platitudes excellence synergy andgovernance interesting exercise analyse platitudes expose difference espoused actual meanings sign wishing many hours platitude deconstructing fun 

 

Now let’s stem the corpus and reinspect it.

#load library
library(SnowballC)
#Stem document
docs <- tm_map(docs,stemDocument)
writeLines(as.character(docs[[30]]))
flexibl eye behold action increas organis flexibl say redeploy employe like seen affect move constrain individu flexibl dual mean characterist mani organiz platitud excel synergi andgovern interest exercis analys platitud expos differ espous actual mean sign wish mani hour platitud deconstruct fun

 

A careful comparison of the two paragraphs reveals the benefits and tradeoff of this relatively crude process.

There is a more sophisticated procedure called lemmatization that takes grammatical context into account. Among other things, determining the lemma of a word requires a knowledge of its part of speech (POS) – i.e. whether it is a noun, adjective etc. There are POS taggers that automate the process of tagging terms with their parts of speech. Although POS taggers are available for R (see this one, for example), I will not go into this topic here as it would make a long post even longer.

On another important note, the output of the corpus also shows up a problem or two. First, organiz and organis are actually variants of the same stem organ. Clearly, they should be merged. Second, the word andgovern should be separated out into and and govern (this is an error in the original text).  These (and other errors of their ilk) can and should be fixed up before proceeding.  This is easily done using gsub() wrapped in content_transformer. Here is the code to  clean up these and a few other issues  that I found:

docs <- tm_map(docs, content_transformer(gsub), pattern = "organiz", replacement = "organ")
docs <- tm_map(docs, content_transformer(gsub), pattern = "organis", replacement = "organ")
docs <- tm_map(docs, content_transformer(gsub), pattern = "andgovern", replacement = "govern")
docs <- tm_map(docs, content_transformer(gsub), pattern = "inenterpris", replacement = "enterpris")
docs <- tm_map(docs, content_transformer(gsub), pattern = "team-", replacement = "team")

 

Note that I have removed the stop words "and" and "in" in the 3rd and 4th transforms above.

There are definitely other errors that need to be cleaned up, but I’ll leave these for you to detect and remove.

The document term matrix

The next step in the process is the creation of the document term matrix (DTM) – a matrix that lists all occurrences of words in the corpus, by document. In the DTM, the documents are represented by rows and the terms (or words) by columns. If a word occurs in a particular document, then the matrix entry corresponding to that row and column is 1, else it is 0 (multiple occurrences within a document are recorded – that is, if a word occurs twice in a document, it is recorded as “2” in the relevant matrix entry).

A simple example might serve to explain the structure of the DTM more clearly. Assume we have a simple corpus consisting of two documents, Doc1 and Doc2, with the following content:

Doc1: bananas are good

Doc2: bananas are yellow

The DTM for this corpus would look like:

        bananas   are   yellow   good
Doc1       1       1       1       0
Doc2       1       1       0       1

 

Clearly there is nothing special about rows and columns – we could just as easily transpose them. If we did so, we’d get a term document matrix (TDM) in which the terms are rows and documents columns. One can work with either a DTM or TDM. I’ll use the DTM in what follows.

There are a couple of general points worth making before we proceed. Firstly, DTMs (or TDMs) can be huge – the dimension of the matrix would be the number of documents x the number of distinct words in the corpus. Secondly, it is clear that the large majority of words will appear only in a few documents. As a result, a DTM is invariably sparse – that is, a large number of its entries are 0.

The business of creating a DTM (or TDM) in R is as simple as:

dtm <- DocumentTermMatrix(docs)

 

This creates a document term matrix from the corpus and stores the result in the variable dtm. One can get summary information on the matrix by typing the variable name in the console and hitting return:

dtm
<<DocumentTermMatrix (documents: 30, terms: 4209)>>
Non-/sparse entries: 14252/112018
Sparsity : 89%
Maximal term length: 48
Weighting : term frequency (tf)

 

This is a 30 x 4209 dimension matrix in which 89% of the entries are zero.

One can inspect the DTM, and you might want to do so for fun. However, it isn’t particularly illuminating because of the sheer volume of information that will flash up on the console. To limit the information displayed, one can inspect a small section of it like so:

inspect(dtm[1:2,1000:1005])
<<DocumentTermMatrix (documents: 2, terms: 6)>>
Non-/sparse entries: 0/12
Sparsity : 100%
Maximal term length: 8
Weighting : term frequency (tf)
Docs                               creation creativ credibl credit crimin crinkl
BeyondEntitiesAndRelationships.txt        0        0      0      0      0      0
bigdata.txt                               0        0      0      0      0      0

This command displays terms 1000 through 1005 in the first two rows of the DTM. Note that your results may differ.

Mining the corpus

Notice that in constructing the DTM, we have converted a corpus of text into a mathematical object that can be analysed using the quantitative techniques of matrix algebra. It should be no surprise, therefore, that the DTM (or TDM) is the starting point for quantitative text analysis.

For example, to get the frequency of occurrence of each word in the corpus, we simply sum over all rows to give column sums:

freq <- colSums(as.matrix(dtm))

 

Here we have first converted the DTM into a mathematical matrix using the as.matrix() function. We have then summed over all rows to give us the totals for each column (term). The result is stored in the variable freq.

Check that the dimension of freq equals the number of terms:

#length should be total number of terms
length(freq)
[1] 4209

 

Next, we sort freq in descending order of term count:

#create sort order (descending)
ord <- order(freq,decreasing=TRUE)

 

Then list the most and least frequently occurring terms:

#inspect most frequently occurring terms
freq[head(ord)]
one organ can manag work system
314 268    244  222   202   193
#inspect least frequently occurring terms
freq[tail(ord)]   
yield yorkshir   youtub     zeno     zero    zulli
1        1        1        1        1        1

 

The  least frequent terms can be more interesting than one might think. This is  because terms that occur rarely are likely to be more descriptive of specific documents. Indeed, I can recall the posts in which I have referred to Yorkshire, Zeno’s Paradox and  Mr. Lou Zulli without having to go back to the corpus, but I’d have a hard time enumerating the posts in which I’ve used the word system.

There are at least a couple of simple ways to strike a balance between frequency and specificity. One way is to use so-called inverse document frequencies (a sketch of this option is shown below). A simpler approach is to eliminate words that occur in a large fraction of corpus documents; the latter also addresses another issue that is evident in the above, and it is the approach we take here.
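For completeness, here is a minimal sketch of the inverse document frequency option, using the tm package's weightTfIdf weighting. The variable name dtm_tfidf is purely illustrative and is not used in the rest of this article:

#a DTM weighted by term frequency-inverse document frequency (tf-idf)
dtm_tfidf <- DocumentTermMatrix(docs, control = list(weighting = weightTfIdf))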

Words like “can” and “one”  give us no information about the subject matter of the documents in which they occur. They can therefore be eliminated without loss. Indeed, they ought to have been eliminated by the stopword removal we did earlier. However, since such words occur very frequently – virtually in all documents – we can remove them by enforcing bounds when creating the DTM, like so:

dtmr <-DocumentTermMatrix(docs, control=list(wordLengths=c(4, 20),
bounds = list(global = c(3,27))))

 

Here we have told R to include only those words that occur in 3 to 27 documents. We have also enforced lower and upper limits on the length of the words included (between 4 and 20 characters).

Inspecting the new DTM:

dtmr
<<DocumentTermMatrix (documents: 30, terms: 1290)>>
Non-/sparse entries: 10002/28698
Sparsity : 74%
Maximal term length: 15
Weighting : term frequency (tf)

The dimension is reduced to 30 x 1290.

Let’s calculate the cumulative frequencies of words across documents and sort as before:

freqr <- colSums(as.matrix(dtmr))
#length should be total number of terms
length(freqr)
[1] 1290
#create sort order (descending)
ordr <- order(freqr,decreasing=TRUE)
#inspect most frequently occurring terms
freqr[head(ordr)]
organ manag work system project problem
268     222  202    193     184     171
#inspect least frequently occurring terms
freqr[tail(ordr)]
wait warehous welcom whiteboard wider widespread
3           3      3          3     3          3

 

The results make sense: the top 6 keywords are pretty good descriptors of what my blog is about – projects, management and systems. However, not all high frequency words need be significant. What they do is give you an idea of potential classification terms.

That done, let’s get a list of terms that occur at least 80 times in the entire corpus. This is easily done using the findFreqTerms() function as follows:

findFreqTerms(dtmr,lowfreq=80)
[1] “action” “approach” “base” “busi” “chang” “consult” “data” “decis” “design”
[10] “develop” “differ” “discuss” “enterpris” “exampl” “group” “howev” “import” “issu”
[19] “like” “make” “manag” “mani” “model” “often” “organ” “peopl” “point”
[28] “practic” “problem” “process” “project” “question” “said” “system” “thing” “think”
[37] “time” “understand” “view” “well” “will” “work”

 

Here I have asked findFreqTerms() to return all terms that occur at least 80 times in the entire corpus. Note, however, that the result is ordered alphabetically, not by frequency.

Now that we have the most frequently occurring terms in hand, we can check for correlations between some of these and other terms that occur in the corpus.  In this context, correlation is a quantitative measure of the co-occurrence of words in multiple documents.

The tm package provides the findAssocs() function to do this.  One needs to specify the DTM, the term of interest and the correlation limit. The latter is a number between 0 and 1 that serves as a lower bound for  the strength of correlation between the  search and result terms. For example, if the correlation limit is 1, findAssocs() will return only  those words that always co-occur with the search term. A correlation limit of 0.5 will return terms that have a search term co-occurrence of at least  50% and so on.

Here are the results of running findAssocs() on some of the frequently occurring terms (project, enterpris, system) at a correlation of 60%.

 

findAssocs(dtmr,"project",0.6)
      project
inher 0.82
handl 0.68
manag 0.68
occurr 0.68
manager’ 0.66
findAssocs(dtmr,"enterpris",0.6)
enterpris
agil       0.80
realist    0.78
increment  0.77
upfront    0.77
technolog  0.70
neither    0.69
solv       0.69
adapt      0.67
architectur 0.67
happi      0.67
movement   0.67
architect  0.66
chanc      0.65
fine       0.64
featur     0.63
findAssocs(dtmr,"system",0.6)
      system
design  0.78
subset  0.78
adopt   0.77
user    0.77
involv  0.71
specifi 0.71
function 0.70
intend  0.67
softwar 0.67
step    0.67
compos  0.66
intent  0.66
specif  0.66
depart  0.65
phone  0.63
frequent 0.62
today  0.62
pattern 0.61
cognit 0.60
wherea 0.60

 

An important point to note is that the presence of a term in these lists is not indicative of its frequency. Rather, it is a measure of the frequency with which the two terms (search and result) co-occur (or show up together) in documents across the corpus. Note also that it is not an indicator of nearness or contiguity. Indeed, it cannot be, because the document term matrix does not store any information on the proximity of terms; it is simply a “bag of words.”

That said, one can already see that the correlations throw up interesting combinations – for example, project and manag, or enterpris and agil or architect/architecture, or system and design or adopt. These give one further insights into potential classifications.

As it turned out, the very basic techniques listed above were enough for me to get a handle on the original problem that led me to text mining – the analysis of free text problem descriptions in my organisation’s service management tool. What I did was to work my way through the top 50 terms and find their associations. These revealed a number of sets of keywords that occurred in multiple problem descriptions, which was good enough for me to define some useful sub-categories. These are currently being reviewed by the service management team. While they’re busy with that, I’m looking into refining these further using techniques such as cluster analysis and tokenization. A simple case of the latter would be to look at two-word combinations in the text (technically referred to as bigrams); a sketch of how one might build a bigram matrix is shown below. As one might imagine, the dimensionality of the DTM will quickly get out of hand as one considers larger multi-word combinations.
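For the curious, a common way to build a bigram DTM is to pass a custom tokenizer to DocumentTermMatrix(). The sketch below assumes the RWeka package is installed (it is not used elsewhere in this article), and the variable names are purely illustrative:

#build a bigram document term matrix using RWeka's NGramTokenizer
library(RWeka)
BigramTokenizer <- function(x) NGramTokenizer(x, Weka_control(min = 2, max = 2))
dtm_bigram <- DocumentTermMatrix(docs, control = list(tokenize = BigramTokenizer))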

Anyway, all that and more will have to wait for future articles as this piece is much too long already. That said, there is one thing I absolutely must touch upon before signing off. Do stay, I think you’ll find it interesting.

Basic graphics

One of the really cool things about R is its graphing capability. I’ll do just a couple of simple examples to give you a flavour of its power and cool factor. There are lots of nice examples on the Web that you can try out for yourself.

Let’s first do a simple frequency histogram. I’ll use the ggplot2 package, written by Hadley Wickham to do this. Here’s the code:

wf=data.frame(term=names(freqr),occurrences=freqr)
library(ggplot2)
p <- ggplot(subset(wf, freqr>100), aes(term, occurrences))
p <- p + geom_bar(stat="identity")
p <- p + theme(axis.text.x=element_text(angle=45, hjust=1))
p

 

Figure 1 shows the result.

Fig 1: Term-occurrence histogram (freq>100)

The first line creates a data frame – a list of columns of equal length. A data frame also contains the names of the columns – in this case these are term and occurrences respectively. We then invoke ggplot(), telling it to plot only those terms that occur more than 100 times. The aes option in ggplot describes the plot aesthetics – in this case, we use it to specify the x and y axes. The stat="identity" option in geom_bar() ensures that the height of each bar is proportional to the data value that is mapped to the y-axis (i.e. occurrences). The theme() line specifies that the x-axis labels should be at a 45 degree angle and should be horizontally justified (see what happens if you leave this out). Check out the voluminous ggplot documentation for more or, better yet, this quick introduction to ggplot2 by Edwin Chen.

Finally, let’s create a wordcloud for no other reason than everyone who can seems to be doing it.  The code for this is:

#wordcloud
library(wordcloud)
#setting the same seed each time ensures consistent look across clouds
set.seed(42)
#limit words by specifying min frequency
wordcloud(names(freqr),freqr, min.freq=70)

 

The result is shown in Figure 2.

 

Fig 2: Wordcloud (freq>70)

Here we first load the wordcloud package, which is not loaded by default. Setting a seed ensures that you get the same layout each time the code is run (try running it without setting a seed). The arguments of the wordcloud() function are straightforward enough. Note that instead of a minimum frequency (as I have done above), one can specify the maximum number of words to be included – see the sketch below. See the wordcloud documentation for more.
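
For example, to cap the cloud at the 50 most frequent terms (the figure of 50 is just an illustrative choice):

#limit the cloud to the 50 most frequent terms instead of using a frequency cut-off
set.seed(42)
wordcloud(names(freqr), freqr, max.words=50)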

This word cloud also makes it clear that stop word removal has not done its job well: there are a number of words it has missed (also and however, for example). These can be removed by augmenting the built-in stop word list with a custom one. This is left as an exercise for the reader :-) – though a hint follows below.
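
As a hint, here is a minimal sketch of the idea. It assumes the corpus is called docs and that the extra removal is done before the DTM is built; the particular words in the custom list are examples only.

#augment the built-in stop word list with a custom one
#(docs is assumed to be the corpus; the words below are examples only)
myStopwords <- c("also", "however", "can", "will")
docs <- tm_map(docs, removeWords, myStopwords)
#rebuild the document term matrix (and term frequencies) after the extra removal
dtm <- DocumentTermMatrix(docs)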

Finally, one can make the wordcloud more visually appealing by adding colour as follows:

#…add color
wordcloud(names(freqr),freqr,min.freq=70,colors=brewer.pal(6,"Dark2"))

 

The result is shown in Figure 3.

 

 

Fig 3: Wordcloud (freq > 70)

You may need to load the RColorBrewer package to get this to work. Check out the RColorBrewer documentation to experiment with more colouring options – one such variation is sketched below.
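
For instance, the sketch below uses a different palette and places the most frequent words at the centre of the cloud (the palette choice and the random.order setting are purely illustrative):

#explicitly load RColorBrewer and try a different palette
#random.order=FALSE places the most frequent words at the centre of the cloud
library(RColorBrewer)
set.seed(42)
wordcloud(names(freqr), freqr, min.freq=70, random.order=FALSE,
          colors=brewer.pal(8,"Set2"))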

Wrapping up

This brings me to the end of this rather long (but I hope, comprehensible) introduction to text mining in R. It should be clear that, despite the length of the article, I’ve covered only the most rudimentary basics. Nevertheless, I hope I’ve succeeded in conveying a sense of the possibilities offered by the vast and rapidly-expanding discipline of text analytics.

Written by K

May 27, 2015 at 8:08 pm

Big Data metaphors we live by

leave a comment »

“When Big Data metaphors erase human sensemaking, and the ways in which values are baked into categories, algorithms and visualizations, we have indeed lost the plot, not found it…” 

Quoted from my essay on metaphors for Big Data, co-written with Simon Buckingham Shum:

[Banner image: BigDataMetaphors_Banner]

Written by K

May 15, 2015 at 12:12 pm

Posted in Uncategorized

Sherlock Holmes and the case of the management fetish

with 2 comments

As narrated by Dr. John Watson, M.D.

As my readers are undoubtedly aware,  my friend Sherlock Holmes is widely feted for his powers of logic and deduction.  With all due modesty, I can claim to have played a small part in publicizing his considerable talents, for I have a sense for what will catch the reading public’s fancy and, perhaps more important, what will not. Indeed, it could be argued  that his fame is in no small part due to the dramatic nature of the exploits which I have chosen to publicise.

Management consulting, though far more lucrative than criminal investigation, is not nearly as exciting.  Consequently my work has become that much harder since Holmes reinvented himself as a management expert.  Nevertheless, I am firmly of the opinion that the long-standing myths  exposed by  his  recent work more than make up for any lack of suspense or drama.

A little known fact is that many of Holmes’ insights into flawed management practices have come after the fact, by discerning common themes that emerged from different cases. Of course this makes perfect sense:  only after seeing the same (or similar) mistake occur in a variety of situations can one begin to perceive an underlying pattern.

The conversation I had with him last night  is an excellent illustration of this point.

We were having dinner at Holmes’ Baker Street abode  when, apropos of nothing, he remarked, “It’s a strange thing, Watson, that our lives are governed by routine. For instance, it is seven in the evening, and here we are having dinner, much like we would on any other day.”

“Yes, it is,” I said, intrigued by his remark.  Dabbing my mouth with a napkin, I put down my fork and waited for him to say more.

He smiled. “…and do you think that is a good thing?”

I thought about it for a minute before responding. “Well, we follow routine because we like…or need… regularity and predictability,” I said. “Indeed, as a medical man, I know well that our bodies have built in clocks that drive us to do things – such as eat and sleep – at regular intervals.  That apart, routines give us a sense of comfort and security in an unpredictable world. Even those who are adventurous have routines of their own. I don’t think we have a choice in the matter, it’s the way humans are wired.” I wondered where the conversation was going.

Holmes cocked an eyebrow. “Excellent, Watson!” he said. “Our propensity for routine is quite possibly a consequence of our need for security and comfort ….but what about the usefulness of routines – apart from the sense of security we get from them?”

“Hmmm…that’s an interesting question. I suppose a routine must have a benefit, or at least a perceived benefit…else it would not have been made into a routine.”

“Possibly,” said Holmes, “ but let me ask you another question.  You remember the case of the failed projects do you not?”

“Yes, I do,” I replied. Holmes’ abrupt conversational U-turns no longer disconcert me, I’ve become used to them over the years. I remembered the details of the case like it had happened yesterday…indeed I should, as it was I who wrote the narrative!

“Did anything about the case strike you as strange?” he inquired.

I mulled over the case, which (in hindsight) was straightforward enough. Here are the essential facts:

The organization suffered from a high rate of project failure (about 70% as I recall). The standard prescription – project post-mortems followed by changes in processes aimed at addressing the top issues revealed – had failed to resolve the issue. Holmes’ insightful diagnosis was that the postmortems identified symptoms, not causes.  Therefore the measures taken to fix the problems didn’t work because they did not address the underlying cause. Indeed, the measures were akin to using brain surgery to fix a headache.  In the end, Holmes concluded that the failures were a consequence of flawed organizational structures and norms.

Of course flawed structures and norms are beyond the purview of a mere project or  program manager. So Holmes’ diagnosis, though entirely correct, did not help Bryant (the manager who had consulted us).

Nothing struck me as unduly strange as I went over the facts mentally. “No,” I replied, “but what on earth does that have to do with routine?”

He smiled. “I will explain presently, but I have yet another question for you before I do so.  Do you remember one of our earliest management consulting cases – the affair of the terminated PMO?”

I replied in the affirmative.

“Well then,  you see the common thread running through the two cases, don’t you?” Seeing my puzzled look, he added, “think about it for a minute, Watson, while I go and fetch dessert.”

He went into the kitchen, leaving me to ponder his question.

The only commonality I could see was the obvious one – both cases were related to the failure of PMOs. (Editor’s note: PMO = Project Management Office)

He returned with dessert a few minutes later. “So, Watson,” he said as he sat down, “have you come up with anything?”

I told him what I thought.

“Capital, Watson! Then you will, no doubt, have asked yourself the obvious next question. ”

I saw what he was getting at. “Yes! The question is: can this observation be generalised? Do the majority of PMOs fail?”

“Brilliant, Watson. You are getting better at this by the day.” I know Holmes does not intend to sound condescending, but the sad fact is that he often does. “Let me tell you,” he continued, “research suggests that 50% of PMOs fail within three years of being set up. My hypothesis is that the failure rate would be considerably higher if the timeframe were increased to five or seven years. What’s even more interesting is that there is a single overriding complaint about PMOs: the majority of stakeholders surveyed felt that their PMOs are overly bureaucratic and generally hinder project work.”

“But isn’t that contrary to the aim of a  PMO – which, as I understand, is to facilitate project work?” I queried.

“Excellent, my dear Watson. You are getting close to the heart of the matter.

“I am?”  To be honest, I was a little lost.

“Ah Watson, don’t tell me you do not see it,” said Holmes exasperatedly.

“I’m afraid you’ll have to explain,” I replied curtly. Really, he could be insufferable at times.

“I shall do my best. You see, there is a fundamental contradiction between the stated mission and actual operation of a typical PMO.  In theory, they are supposed to facilitate projects, but as far as executive management is concerned this is synonymous with overseeing and controlling projects. What this means is that in practice, PMOs inevitably end up policing project work rather than facilitating it.”

I wasn’t entirely convinced. “Maybe the reason that PMOs fail is that organisations do not implement them correctly,” I said.

“Ah, the famous escape clause used by purveyors of best practices – if our best practice doesn’t work, it means you aren’t implementing it correctly. Pardon me while I choke on my ale, because that is utter nonsense.”

“Why?”

“Well, one would expect after so many years, these so-called implementation errors would have been sorted out. Yet we see the same poor outcomes over and over again,” said Holmes.

“OK, but then why are PMOs still so popular with management?”

“Now we come to the crux of the matter, Watson,” he said, a tad portentously. “They are popular for reasons we spoke of at the start of this conversation – comfort and security.”

“Comfort and security? I have no idea what you’re talking about.”

“Let me try explaining this in another way,” he said. “When you were a small child, you must have had some object that you carried around everywhere…a toy, perhaps…did you not?”

“I’m not sure I should tell you this, Holmes, but yes, I had a blanket.”

“A security blanket – I would never have guessed, Watson,” smiled Holmes. “…but as it happens, that’s a perfect example, because PMOs and the methodologies they enforce are security blankets. They give executives and frontline managers a sense that they are doing something concrete and constructive to manage uncertainty…even though they actually aren’t. PMOs are popular not because they work (and indeed, we’ve seen they don’t) but because they help managers contain their anxiety about whether things will turn out right. I would not be exaggerating if I said that PMOs and the methodologies they evangelise are akin to lucky charms or fetishes.”

“That’s a strong statement to make on rather slim grounds,” I said dubiously.

“Is it? Think about it, Watson,” he shot back, with a flash of irritation. “Many (though I should admit, not all) PMOs and methodologies prescribe excruciatingly detailed procedures to follow and templates to fill when managing projects. For many (though again, not all) project managers, managing a project is synonymous with following these rituals. Such managers attempt to force-fit  reality into standardised procedures and documents. But tell me, Watson – how can such project management by ritual work  when no two projects are the same?”

“Hmm….”

“That is not all, Watson,” he continued, before I could respond, “PMOs and methodologies enable people to live in a fantasy world where everything seems to be under control. Methodology fetishists will not see the gap between their fantasy world and reality, and will therefore miss opportunities to learn. They follow rituals that give them security and an illusion of efficiency, but at the price of a genuine engagement with people and projects.”

“ I’ll have to think about it,” I said.

“You do that,” he replied , as he pushed back his chair and started to clear the table. Unlike him, I had a lot more than dinner to digest. Nevertheless, I rose to help him as I do every day.

Evening conversations at 221B Baker Street are seldom boring. Last night was no exception.

Acknowledgement:

This tale was inspired by David Wastell’s brilliant paper, The fetish of technique: methodology as social defence (abstract only).

Written by K

April 29, 2015 at 8:37 pm

From the coalface: an essay on the early history of sociotechnical systems

with 2 comments

The story of sociotechnical systems began a little over half a century ago, in a somewhat unlikely setting: the coalfields of Yorkshire.

The British coal industry had just been nationalised and new mechanised mining methods were being introduced in the mines. It was thought that nationalisation would sort out the chronic labour-management issues and mechanisation would address the issue of falling productivity.

But things weren’t going as planned. In the words of Eric Trist, one of the founders of the Tavistock Institute:

…the newly nationalized industry was not doing well. Productivity failed to increase in step with increases in mechanization. Men were leaving the mines in large numbers for more attractive opportunities in the factory world. Among those who remained, absenteeism averaged 20%. Labour disputes were frequent despite improved conditions of employment.   – excerpted from, The evolution of Socio-technical systems – a conceptual framework and an action research program, E. Trist (1980)

Trist and his colleagues were asked by the National Coal Board to come in and help. To this end, they did a comparative study of two mines that were similar except that one had high productivity and morale whereas the other suffered from low performance and had major labour issues.

Their job was far from easy: they were not welcome at the coalface because workers associated them with management and the Board.

Trist recounts that around the time the study started, there were a number of postgraduate fellows at the Tavistock Institute. One of them, Ken Bamforth, knew the coal industry well as he had been a miner himself. Postgraduate fellows who had worked in the mines were encouraged to visit their old workplaces after a year and write up their impressions, focusing on things that had changed since they had worked there. After one such visit, Bamforth reported back with news of a workplace innovation that had occurred at a newly opened seam at Haighmoor. Among other things, morale and productivity at this particular seam were high compared to other similar ones. The team’s way of working was entirely novel, a world away from the hierarchically organised set-up that was standard in most mechanised mines at the time. In Trist’s words:

The work organization of the new seam was, to us, a novel phenomenon consisting of a set of relatively autonomous groups interchanging roles and shifts and regulating their affairs with a minimum of supervision. Cooperation between task groups was everywhere in evidence; personal commitment was obvious, absenteeism low, accidents infrequent, productivity high. The contrast was large between the atmosphere and arrangements on these faces and those in the conventional areas of the pit, where the negative features characteristic of the industry were glaringly apparent. Excerpted from the paper referenced above.

To appreciate the radical nature of practices at this seam, one needs to understand the backdrop against which they occurred. To this end, it is helpful to compare the  mechanised work practices introduced in the post-war years with the older ones from the pre-mechanised era of mining.

In the days before mines were mechanised, miners would typically organise themselves into workgroups of six miners, who would cover three work shifts in teams of two. Each miner was able to do pretty much any job at the seam and so could pick up where his work-mates from the previous shift had left off. This was necessary in order to ensure continuity of work between shifts. The group negotiated the price of their mined coal directly with management and the amount received was shared equally amongst all members of the group.

This mode of working required strong cooperation and trust within the group, of course.  However, as workgroups were reorganised from time to time due to attrition or other reasons, individual miners understood the importance of maintaining their individual reputations as reliable and trustworthy workmates. It was important to get into a good workgroup because such groups were more likely to get more productive seams to work on. Seams were assigned by bargaining, which was typically the job of the senior miner on the group. There was considerable competition for the best seams, but this was generally kept within bounds of civility via informal rules and rituals.

This traditional way of working could not survive mechanisation. For one, mechanised mines encouraged specialisation because they were organised like assembly lines, with clearly defined job roles each with different responsibilities and pay scales. Moreover, workers in a shift would perform only part of the extraction process leaving those from subsequent shifts to continue where work was left off.

As miners were paid by the job they did rather than the amount of coal they produced, no single group had end-to-end responsibility for the product.   Delays due to unexpected events tended to get compounded as no one felt the need to make up time. As a result, it would often happen that work that was planned for a shift would not be completed. This meant that the next shift (which could well be composed of a group with completely different skills) could not or would not start their work because they did not see it as their job to finish the work of the earlier shift. Unsurprisingly, blame shifting and scapegoating was rife.

From a supervisor’s point of view, it was difficult to maintain the same level of oversight and control in underground mining work as was possible in an assembly line. The environment underground is simply not conducive to close supervision and is also more uncertain in that it is prone to unexpected events.  Bureaucratic organisational structures are completely unsuited to dealing with these because decision-makers are too far removed from the coalface (literally!).  This is perhaps the most important insight to come out of the Tavistock coal mining studies.

As Claudio Ciborra  puts it in his classic book on teams:

Since the production process at any seam was much more prone to disorganisation, due to the uncertainty and complexity of underground conditions, any ‘bureaucratic’ allocation of jobs could be easily disrupted. Coping with emergencies and coping with coping became part of workers’ and supervisors’ everyday activities. These activities would lead to stress, conflict and low productivity because they continually clashed with the technological arrangements and the way they were planned and subdivided around them.

Thus we see that the new assembly-line bureaucracy inspired work organisation was totally unsuited to the work environment because there was no end-to-end responsibility, and decision making was far removed from the action. In contrast, the traditional workgroup of six was able to deal with uncertainties and complexities of underground work because team members had a strong sense of responsibility for the performance of the team as a whole. Moreover, teams were uniquely placed to deal with unexpected events because they were actually living them as they occurred and could therefore decide on the best way to deal with them.

What Bamforth found at the Haighmoor seam was that it was possible to recapture the spirit of the old ways of working by adapting these to the larger specialised groups that were necessary in the mechanised mines. As Ciborra describes it in his book:

The new form of work organisation features forty one men who allocate themselves to tasks and shifts. Although tasks and shifts [remain] those of the conventional mechanised system, management and supervisors do not monitor, enforce and reward single task executions. The composite group takes over some of the managerial tasks, as it had in the pre-mechanised marrow group, such as the selection of group members and the informal monitoring of work…Cycle completion, not task execution, becomes a common goal that allows for mutual learning and support…There is [a] basic wage and a bonus linked to the overall productivity of the group throughout the whole cycle rather than a shift. The competition between shifts that plagued the conventional mechanised method is effectively eliminated…

Bamforth and Trist’s studies of Haighmoor convinced them that there were viable (and better!) alternatives to the forms of work organisation typical of mid-to-late 20th century workplaces. Their work led them to the insight that the best work arrangements come out of seeking a match between the technical and social elements of the modern workplace, and thus was born the notion of sociotechnical systems.

Ever since the assembly-line management philosophies of Taylor and Ford, there has been an increasing trend towards division of labour, bureaucratisation and mechanisation / automation of work processes.  Despite the early work of the Tavistock school and others who followed, this trend continues to dominate management practice, arguably even more so in recent years. The Haighmoor innovation described above was one of the earliest demonstrations that there is a better way.   This message has since been echoed by many academics and thinkers,  but remains largely under-appreciated or ignored by professional managers who have little idea – or have completely forgotten – what it is like to work at the coalface.

Written by K

April 7, 2015 at 10:30 pm

TOGAF or not TOGAF… but is that the question?

with 4 comments

The ‘Holy Grail’ of effective collaboration is creating shared understanding, which is a precursor to shared commitment.” – Jeff Conklin.

Without context, words and actions have no meaning at all.” – Gregory Bateson.

I spent much of last week attending a class on the TOGAF Enterprise Architecture (EA) framework.  Prior experience with  IT frameworks such as PMBOK and ITIL had taught me that much depends on the instructor – a good one can make the material come alive whereas a not-so-good one can make it an experience akin to watching grass grow. I needn’t have worried: the instructor was superb, and my classmates, all of whom are experienced IT professionals / architects, livened up the proceedings through comments and discussions both in class and outside it. All in all, it was a thoroughly enjoyable and educative experience, something I cannot say for many of the professional courses I have attended.

One of the things that struck me about TOGAF is the way in which the components of the framework hang together to make a coherent whole (see the introductory chapter of the framework for an overview). To be sure, there is a lot of detail within those components, but there is a certain abstract elegance – dare I say, beauty – to the framework.

That said, TOGAF is (almost) entirely silent on the following question, which I addressed in a post late last year:

Why is Enterprise Architecture so hard to get right?

Many answers have been offered. Here are some, extracted from articles published by IT vendors and consultancies:

  • Lack of sponsorship
  • Not engaging the business
  • Inadequate communication
  • Insensitivity to culture / policing mentality
  • Clinging to a particular tool or framework
  • Building an ivory tower
  • Wrong choice of architect

(Note: the above points are taken from this article and this one)

It is interesting that the first four issues listed are related to the fact that different stakeholders in an organization have vastly different perspectives on what an enterprise architecture initiative should achieve.  This lack of shared understanding is what makes enterprise architecture a socially complex problem rather than a technically difficult one. As Jeff Conklin points out in this article, problems that are technically complex will usually have a solution that will be acceptable to all stakeholders, whereas socially complex problems will not.  Sending a spacecraft to Mars is an example of the former whereas an organization-wide ERP  (or EA!) project or (on a global scale) climate change are instances of the latter.

Interestingly, even the fifth and sixth points in the list above – framework dogma and retreating to an ivory tower – are usually consequences of the inability to manage social complexity. Indeed, that is precisely the point made in the final item in the list: enterprise architects are usually selected for their technical skills rather than their ability to deal with ambiguities that are characteristic of social complexity.

TOGAF offers enterprise architects a wealth of tools to manage technical complexity. These need to be complemented by a suite of techniques to reconcile worldviews of different stakeholder groups.  Some examples of such techniques are Soft Systems Methodology, Polarity Management, and Dialogue Mapping. I won’t go into details of these here, but if you’re interested, please have a look at my posts entitled, The Approach – a dialogue mapping story and The dilemmas of enterprise IT for brief introductions to the latter two techniques via IT-based examples.

<Advertisement > Better yet, you could check out Chapter 9 of my book for a crash course on Soft Systems Methodology and Polarity Management and Dialogue Mapping, and the chapters thereafter for a deep dive into Dialogue Mapping </Advertisement>.

Apart from social complexity, there is the problem of context – the circumstances that shape the unique culture and features of an organization.  As I mentioned in my introductory remarks, the framework is abstract – it applies to an ideal organization in which things can be done by the book. But such an organization does not exist!  Aside from unique people-related and political issues, all organisations have their own quirks and unique features that distinguish them from other organisations, even within the same domain. Despite superficial resemblances, no two pharmaceutical companies are alike. Indeed, the differences are the whole point because they are what make a particular organization what it is. To paraphrase the words of the anthropologist, Gregory Bateson, the differences are what make a difference.

Some may argue that the framework acknowledges this and encourages, even exhorts, people to tailor it to their needs. Sure, the word “tailor” and its variants appear almost 700 times in version 9.1 of the standard but, once again, there is no advice offered on how this tailoring should be done. And one can well understand why: it is impossible to offer any sensible advice if one doesn’t know the specifics of the organization, which include its context.

On a related note, the TOGAF framework acknowledges that there is a hierarchy of architectures ranging from the general (foundation) to the specific (organization). However, despite this acknowledgement of diversity, in practice TOGAF tends to focus on similarities between organisations: most of the prescribed building blocks and processes are based on assumed commonalities between the structures and processes of different organisations. My point is that, although similarities are important, architects need to focus on differences. These could be differences between the organization they are working in and the TOGAF ideal, or even between their current organization and others that they have worked with in the past (and this is where experience comes in really handy). Cataloguing and understanding these unique features – the differences that make a difference – draws attention to precisely those issues that can cause heartburn and sleepless nights later.

I have often heard arguments along the lines of “80% of what we do follows a standard process, so it should be easy for us to standardize on a framework.” These are famous last words, because some of the 20% that is different is what makes your organization unique, and is therefore worthy of attention. You might as well accept this upfront so that you get a realistic picture of the challenges early in the game.

To sum up, frameworks like TOGAF are abstractions based on an ideal organization; they gloss over social complexity and the unique context of individual organisations.  So, questions such as the one posed in the title of this post are akin to the pseudo-choice between Coke and Pepsi, for the real issue is something else altogether. As Tom Graves tells us in his wonderful blog and book, the enterprise is a story rather than a structure, and its architecture an ongoing sociotechnical drama.

Written by K

March 17, 2015 at 8:09 pm
