Eight to Late


From information to knowledge: the what and whence of issue based information systems


Issue Based Information System (IBIS) is a notation invented by Horst Rittel and Werner Kunz in the early 1970s.  IBIS is best known for its use in dialogue mapping, a collaborative approach to tackling wicked problems (i.e. contentious issues) in organisations. It has a range of other applications as well – capturing knowledge is a good example, and I’ll have much more to say about that later in this post.

Over the last five years or so, I have written a fair bit about IBIS on this blog and in a book that I co-authored with the dialogue mapping expert, Paul Culmsee. The present post reprises an article I wrote five years ago on the “what” and “whence” of the notation: its practical aspects – notation, grammar and so on – as well as its origins, advantages and limitations. My motivations for revisiting the piece are to revise and update the original discussion and, more importantly, to cover some recent developments in IBIS technology that open up interesting possibilities in the area of knowledge management.

To appreciate the power of IBIS, it is best to begin by understanding the context in which the notation was invented. I’ll therefore start with a discussion of the origins of the notation, followed by an introduction to it. Finally, I’ll cover its development over the last 40-odd years, focusing on the recent developments that I mentioned above.

Wicked origins

A good place to start is where it all started. IBIS was first described in a paper entitled Issues as Elements of Information Systems, written by Horst Rittel (the man who coined the term wicked problem) and Werner Kunz in July 1970. They state the intent behind IBIS in the very first line of the abstract of their paper:

 Issue-Based Information Systems (IBIS) are meant to support coordination and planning of political decision processes. IBIS guides the identification, structuring, and settling of issues raised by problem-solving groups, and provides information pertinent to the discourse.

Rittel’s preoccupation was the area of public policy and planning – which is also the context in which he originally defined the term wicked problem. Given this background, it is no surprise that Rittel and Kunz envisaged IBIS as the:

…type of information system meant to support the work of cooperatives like governmental or administrative agencies or committees, planning groups, etc., that are confronted with a problem complex in order to arrive at a plan for decision…

The problems tackled by such cooperatives are paradigm-defining examples of wicked problems. From the start, then, IBIS was intended as a tool to facilitate a collaborative approach to solving…or better, managing a wicked problem by helping develop a shared perspective on it.

A brief introduction to IBIS

The IBIS notation consists of the following three elements:

  1. Issues (or questions): these are the issues being debated. Typically, issues are framed as questions along the lines of “What should we do about X?” where X is the issue of interest to a group. For example, in the case of a group of executives, X might be a rapidly changing market condition, whereas in the case of a group of IT people, X could be an ageing system that is hard to replace.
  2. Ideas (or positions): these are responses to questions. For example, one of the ideas offered by the IT group above might be to replace the said system with a newer one. Typically, the whole set of ideas that respond to an issue in a discussion represents the spectrum of participant perspectives on the issue.
  3. Arguments: these can be Pros (arguments for) or Cons (arguments against) an idea. The complete set of arguments that respond to an idea represents the multiplicity of viewpoints on it.

Compendium is a freeware tool that can be used to create IBIS maps – it can be downloaded here.

In Compendium, the IBIS elements described above are represented as nodes as shown in Figure 1: issues are represented by blue-green question marks; positions by yellow light bulbs; pros by green + signs and cons by red – signs.  Compendium supports a few other node types, but these are not part of the core IBIS notation. Nodes can be linked only in ways specified by the IBIS grammar as I discuss next.

 

Figure 1: IBIS elements

The IBIS grammar can be summarized in three simple rules:

  1. Issues can be raised anew or can arise from other issues, positions or arguments. In other words, any IBIS element can be questioned. In Compendium notation: a question node can connect to any other IBIS node.
  2. Ideas can only respond to questions – i.e. in Compendium, “light bulb” nodes can only link to question nodes. The arrow pointing from the idea to the question depicts the “responds to” relationship.
  3. Arguments can only be associated with ideas – i.e. in Compendium, “+” and “–” nodes can only link to “light bulb” nodes (with arrows pointing to the latter).

The legal links are summarized in Figure 2 below.

Figure 2: Legal links in IBIS

Yes, it’s as simple as that.
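
For readers who like to see rules expressed as code, here is a minimal sketch (in Python, purely illustrative – the names and the little link-checking function are my own, not part of Compendium or any other IBIS tool) of the three element types and the grammar described above:

```python
from dataclasses import dataclass, field

# The three core IBIS element types (arguments split into pros and cons)
ISSUE, IDEA, PRO, CON = "issue", "idea", "pro", "con"

# Legal links per the IBIS grammar:
#   - an issue (question) can respond to any other element
#   - an idea can respond only to an issue
#   - a pro or con argument can respond only to an idea
LEGAL_TARGETS = {
    ISSUE: {ISSUE, IDEA, PRO, CON},
    IDEA:  {ISSUE},
    PRO:   {IDEA},
    CON:   {IDEA},
}

@dataclass
class Node:
    kind: str                          # one of ISSUE, IDEA, PRO, CON
    text: str
    responds_to: list = field(default_factory=list)

def link(source: Node, target: Node) -> None:
    """Connect source -> target, enforcing the IBIS grammar."""
    if target.kind not in LEGAL_TARGETS[source.kind]:
        raise ValueError(f"A {source.kind} cannot respond to a {target.kind}")
    source.responds_to.append(target)

# A tiny example map
root = Node(ISSUE, "What should we do about the ageing system?")
idea = Node(IDEA, "Replace it with a newer one")
pro  = Node(PRO, "Vendor support for the old system ends next year")
link(idea, root)   # ideas respond to issues
link(pro, idea)    # arguments respond to ideas
```

Trying to link an argument directly to a question – `link(pro, root)` – would raise an error, which is exactly the constraint that Figure 2 depicts.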

The rules are best illustrated by example – follow the links below to see some illustrations of IBIS in action:

  1. See this post for a simple example of dialogue mapping.
  2. See this post or this one for examples of argument visualisation. (Note: using IBIS to map out the structure of written arguments is called issue mapping.)
  3. See this one for an example Paul did with his children; it also made an appearance in our book.

Now that we know how IBIS works and have seen a few examples of it in action, it’s time to trace the history of the notation from its early days to the present.

Operation of early systems

When Rittel and Kunz wrote their paper, there were three IBIS-type systems in operation: two in government agencies (in the US, one presumes) and one in a university environment (quite possibly Berkeley, where Rittel worked). Although it seems quaint and old-fashioned now, it is no surprise that these were manual, paper-based systems; the effort and expense involved in computerizing such systems in the early 70s would have been prohibitive and the pay-off questionable.

The Rittel-Kunz paper introduced earlier also offers a short description of how these early IBIS systems operated:

An initially unstructured problem area or topic denotes the task named by a “trigger phrase” (“Urban Renewal in Baltimore,” “The War,” “Tax Reform”). About this topic and its subtopics a discourse develops. Issues are brought up and disputed because different positions (Rittel’s word for ideas or responses) are assumed. Arguments are constructed in defense of or against the different positions until the issue is settled by convincing the opponents or decided by a formal decision procedure. Frequently questions of fact are directed to experts or fed into a documentation system. Answers obtained can be questioned and turned into issues. Through this counterplay of questioning and arguing, the participants form and exert their judgments incessantly, developing more structured pictures of the problem and its solutions. It is not possible to separate “understanding the problem” as a phase from “information” or “solution” since every formulation of the problem is also a statement about a potential solution.

 Even today, forty years later, this is an excellent description of how IBIS is used to facilitate a common understanding of complex problems.  Moreover, the process of reaching a shared understanding (whether using IBIS or not) is one of the key ways in which knowledge is created within organizations. To foreshadow a point I will elaborate on later, using IBIS to capture the key issues, ideas and arguments, and the connections between them, results in a navigable map of the knowledge that is generated in a discussion.

Fast forward a couple of decades (and more!)

In a paper published in 1988 entitled, gIBIS: A hypertext tool for exploratory policy discussion, Conklin and Begeman describe a prototype of a graphical, hypertext-based  IBIS-type system (called gIBIS) and its use in capturing design rationale (yes, despite the title of the paper, it is more about capturing design rationale than policy discussions). The development of gIBIS represents a key step between the original Rittel-Kunz version of IBIS and its more recent version as implemented in Compendium.  Amongst other things, IBIS was finally off paper and on to disk, opening up a world of new possibilities.

gIBIS aimed to offer users:

  1. The ability to capture design rationale – the options discussed (including the ones rejected) and the discussion around the pros and cons of each.
  2. A platform for promoting computer-mediated collaborative design work – ideally in situations where participants were located at sites remote from each other.
  3. The ability to store a large amount of information and to be able to navigate through it in an intuitive way.

The gIBIS prototype proved successful enough to catalyse the development of Questmap, a commercially available software tool that supported IBIS. In a recent conversation, Jeff Conklin mentioned to me that Questmap was one of the earliest Windows-based groupware tools available on the market…and it won a best-of-show award in that category. It is interesting to note that in contrast to Questmap (which no longer exists), Compendium is a single-user desktop tool.

The primary application of Questmap was in the area of sensemaking which is all about helping groups reach a collective understanding of complex situations that might otherwise lead them into tense or adversarial conditions. Indeed, that is precisely how Rittel and Kunz intended IBIS to be used.  The key advantage offered by computerized IBIS systems was that one could map dialogues in real-time, with the map representing the points raised in the conversation along with their logical connections. This proved to be a big step forward in the use of IBIS to help groups achieve a shared understanding of complex issues.

That said, although there were some notable early successes in the real-time use of IBIS in industry environments (see this paper, for example), these were not accompanied by widespread adoption of the technique. It is worth exploring the reasons for this briefly.

 The tacitness of IBIS mastery

The reasons for the lack of traction of IBIS-type techniques for real-time knowledge capture are discussed in a paper by Shum et al. entitled Hypermedia Support for Argumentation-Based Rationale: 15 Years on from gIBIS and QOC. The reasons they give are:

  1. For acceptance, any system must offer immediate value to the person who is using it. Quoting from the paper, “No designer can be expected to altruistically enter quality design rationale solely for the possible benefit of a possibly unknown person at an unknown point in the future for an unknown task. There must be immediate value.” Such immediate value is not obvious to novice users of IBIS-type systems.
  2. There is some effort involved in gaining fluency in the use of IBIS-based software tools. It is only after this that users can gain an appreciation of the value of such tools in overcoming the limitations of mapping design arguments on paper, whiteboards etc.

While the rules of IBIS are simple to grasp, the intellectual effort – or cognitive overhead – involved in using IBIS in real time includes:

  1. Teasing out issues, ideas and arguments from the dialogue.
  2. Classifying points raised into issues, ideas and arguments.
  3. Naming (or describing) the point succinctly.
  4. Relating (or linking) the point to the existing map (or anticipating how it will fit in later).
  5. Developing a sense for conversational patterns.

Expertise in these skills can only be developed through sustained practice, so it is no surprise that beginners find it hard to use IBIS to map dialogues.   Indeed, the use of IBIS for real-time conversation mapping is a tacit skill, much like riding a bike or swimming – it can only be mastered by doing.

Making sense through IBIS 

Despite the difficulties of mastering IBIS, it is easy to see that it offers considerable advantages over conventional methods of documenting discussions. Indeed, Rittel and Kunz were well aware of this. Their paper contains a nice summary of the advantages, which I paraphrase below:

  1. IBIS can bridge the gap between discussions and records of discussions (minutes, audio/video transcriptions etc.). IBIS sits between the two, acting as a short-term memory. The paper thus foreshadows the use of issue-based systems as an aid to organizational or project memory.
  2. Many elements (issues, ideas or arguments) that come up in a discussion have contextual meanings that differ from any pre-existing definitions. That is, the interpretation of points made or questions raised depends on the circumstances surrounding the discussion. More importantly, this contextual meaning often matters more than the formal meaning. IBIS captures the former in a very clear way – for example, a response to the question “What do we mean by X?” elicits the meaning of X in the context of the discussion, which is then captured as an idea (position). I’ll have much more to say about this towards the end of this article.
  3. The reasoning used in discussions is made transparent, as is the supporting (or opposing) evidence.
  4. The state of the argument (discussion) at any time can be inferred at a glance (unlike the case in written records). See this post for more on the advantages of visual documentation over prose.
  5. Often, the commonality of an issue with other, similar issues may be more important than its precise meaning. To quote from the paper, “…the description of the subject matter in terms of librarians or documentalists (sic) may be less significant than the similarity of an issue with issues dealt with previously and the information used in their treatment…” This is less of an issue now because of search technologies. However, search technologies are still largely based on keywords rather than context. A properly structured, context-searchable IBIS-based archive would be more useful than a conventional document-based system. As I’ll discuss in the next section, the technology for this is now available.

To sum up, then: although IBIS offers a means to map out arguments, what is lacking is the ability to make these maps available and searchable across an organization.

IBIS in the enterprise

It is interesting to note that Compendium, unlike its predecessor Questmap, is a single-user, desktop tool – it does not, by itself, enable the sharing of maps across the enterprise. To be sure, it is possible to work around this limitation, but the workarounds are somewhat clunky. A recent advance in IBIS technology addresses this issue rather elegantly: Seven Sigma, an Australian consultancy founded by Paul Culmsee, Chris Tomich and Peter Chow, has developed Glyma (pronounced “glimmer”): a product that makes IBIS available on collaboration platforms like Microsoft SharePoint. This is a game-changer because it enables sharing and searching of IBIS maps across the enterprise. Moreover, as we shall see below, the implications go beyond sharing and search.

Full Disclosure: As regular readers of this blog know, Paul is a good friend and we have jointly written a book and a few papers. However, at the time of writing, I have no commercial association with Seven Sigma. My comments below are based on playing with a beta version of the product that Paul was kind enough to give me access to, as well as some discussions I have had with him.

Figure 3: IBIS in Glyma

The look and feel of Glyma is much the same as Compendium (see Fig 3 above) – and the keystrokes and shortcuts are quite similar. I have trialled Glyma for a few weeks and feel that the overall experience is actually a bit better than in Compendium. For example, one can navigate through a series of maps and sub-maps using a breadcrumb trail. Another example: documents and videos are embedded within the map – so one does not need to leave the map in order to view tagged media (unless of course one wants to see it at a higher resolution).

I won’t go into any detail about product features etc. since that kind of information is best accessed at source – i.e. the product website and documentation. Instead, I will now focus on how Glyma addresses a longstanding gap in knowledge management systems.

Revisiting the problem of context

In most organisations, knowledge artefacts (such as documents and audio-visual media) are stored in hierarchical or relational structures (for example, a folder within a directory structure or a relational database). To be sure, stored artefacts are often tagged with metadata indicating when, where and by whom they were created, but experience tells me that such metadata is not as useful as it should be.  The problem is that the context in which an artefact was created is typically not captured. Anyone who has read a document and wondered, “What on earth were the authors thinking when they wrote this?” has encountered the problem of missing context.

Context, though hard to capture, is critically important in understanding the content of a knowledge artefact. Any artefact, when accessed without an appreciation of the context in which it was created, is liable to be misinterpreted or only partially understood.

 Capturing context in the enterprise

Glyma addresses the issue of context rather elegantly at the level of the enterprise. I’ll illustrate this point using an inspiring case study on the innovative use of SharePoint in education that Paul wrote about some time ago.

The case study

Here is the backstory in Paul’s words:

Earlier this year, I met Louis Zulli Jnr – a teacher out of Florida who is part of a program called the Centre of Advanced Technologies. We were co-keynoting at a conference and he came on after I had droned on about common SharePoint governance mistakes…The majority of Lou’s presentation showcased a whole bunch of SharePoint powered solutions that his students had written. The solutions were very impressive…We were treated to examples like:

  • IOS, Android and Windows Phone apps that leveraged SharePoint to display teacher’s assignments, school events and class times;
  • Silverlight based application providing a virtual tour of the campus;
  • Integration of SharePoint with Moodle (an open source learning platform)
  • An Academic Planner web application allowing students to plan their classes, submit a schedule, have them reviewed, track of the credits of the classes selected and whether a student’s selections meet graduation requirements;

All of this and more was developed by 16 to 18 year olds and all at a level of quality that I know most SharePoint consultancies would be jealous of…

Although the examples highlighted by Louis were very impressive, what Paul found more interesting were the anecdotes that Lou related about the dedication and persistence that students displayed in their work. Quoting again from Paul,

So the demos themselves were impressive enough, but that is actually not what impressed me the most. In fact, what had me hooked was not on the slide deck. It was the anecdotes that Lou told about the dedication of his students to the task and how they went about getting things done. He spoke of students working during their various school breaks to get projects completed and how they leveraged each other’s various skills and other strengths. Lou’s final slide summed his talk up brilliantly:

  • Students want to make a difference! Give them the right project and they do incredible things.
  • Make the project meaningful. Let it serve a purpose for the campus community.
  • Learn to listen. If your students have a better way, do it. If they have an idea, let them explore it.
  • Invest in success early. Make sure you have the infrastructure to guarantee uptime and have a development farm.
  • Every situation is different but there is no harm in failure. “I have not failed. I’ve found 10,000 ways that won’t work” – Thomas A. Edison

In brief:  these points highlight the fact that Lou’s primary role as director of the center is to create the conditions that make it possible for students to do great work.  The commercial-level quality of work turned out by students suggests that Lou’s knowledge on how to build high-performing teams is definitely worth capturing.

The question is: what’s the best way to do this (short of getting him to visit you and talk about his experiences)?

Seeing the forest for the trees

Paul recently interviewed Lou with the intent of documenting Lou’s experiences. The conversation was recorded on video and then “Glymafied” – i.e. the video was mapped using IBIS (see Figure 4 below).

Figure 4: Knowledge capture via Glyma

There are a few points worth noting here:

  1. The content of the entire conversation is mapped out so one can “see” the conversation at a glance.
  2. The context in which a particular point (i.e. the content of a node) is made is clarified by the connections between a node and its neighbours. Moving left from a node gives a higher level picture, moving right drills down into details.

Of course, the reader will have noted that these are core IBIS capabilities that are available in Compendium (or any other IBIS tool).  Glyma offers much more. Consider the following:

  1. Relevant documents or audio-visual media can be tagged to specific nodes to provide supplementary material. In this case the video recording was tagged to specific nodes that related to points made in the video. Clicking on the play icon attached to such a node plays the segment in which the content of the node is being discussed. This is a really nice feature as it saves the user from having to watch the whole video (or play an extended game of ffwd-rew to get to the point of interest). Moreover, this provides additional context that cannot be (or is not) captured in the map. For example, one can attach papers, links to web pages, Slideshare presentations etc. to fill in background and context.
  2. Glyma is integrated with an enterprise content management system by design. One can therefore link map and video content to the powerful built-in search and content aggregation features of these systems. For example, users would be able to enter a search from their intranet home page and retrieve not only traditional content such as documents, but also stories, reflections and anecdotes from experts such as Lou.
  3. Another critical aspect to intranet integration is the ability to provide maps as contextual navigation. Amazon’s ability to sell books that people never intended to buy is an example of the power of such navigation. The ability to execute the kinds of queries outlined in the previous point, along with contextual information such as user profile details, previous activity on the intranet and the area of an intranet the user is browsing, makes it possible to present recommendations of nodes or maps that may be of potential interest to users. Such targeted recommendations might encourage users to explore video (and other rich media) content.

Technical Aside: An interesting under-the-hood feature of Glyma is that it uses an implementation of a hypergraph database to store maps. (Note: this is a database that can store representations of graphs (maps) in which an edge can connect to more than 2 vertices). These databases enable the storing of very general graph structures. What this means is that Glyma can be extended to store any kind of map (Mind Maps, Concept Maps, Argument Maps or whatever)…and nodes can be shared across maps. This feature has not been developed as yet, but I mention it because it offers some exciting possibilities in the future.
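
To make the hypergraph idea a little more concrete, here is a rough sketch (Python, entirely my own illustration – it says nothing about Glyma’s actual storage schema) of a store in which an “edge” can span any number of nodes, which is what allows a single node to appear in more than one map:

```python
class HypergraphStore:
    """Toy hypergraph: each hyperedge (e.g. a map) can contain any number of nodes."""

    def __init__(self):
        self.nodes = {}   # node_id -> node content
        self.edges = {}   # edge_id -> set of node_ids

    def add_node(self, node_id, content):
        self.nodes[node_id] = content

    def add_edge(self, edge_id, node_ids):
        # Unlike an ordinary graph edge, a hyperedge may connect many nodes.
        self.edges[edge_id] = set(node_ids)

    def maps_containing(self, node_id):
        """Every edge (map) in which a given node appears."""
        return [eid for eid, members in self.edges.items() if node_id in members]

store = HypergraphStore()
store.add_node("q1", "What should we do about X?")
store.add_node("i1", "Do Y")
store.add_edge("governance-map", {"q1", "i1"})
store.add_edge("lessons-learned-map", {"q1"})   # the same node, reused in another map
print(store.maps_containing("q1"))  # ['governance-map', 'lessons-learned-map']
```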

To summarise: since Glyma stores all its data in an enterprise-class database, maps can be made available across an organization. It offers the ability to tag nodes with pretty much any kind of media (documents, audio/video clips etc.), and one can tag the specific parts of the media that are relevant to the content of the node (a snippet of a video, for example). Moreover, the sophisticated search capabilities of the platform enable context-aware search. Specifically, we can search for nodes by keywords or tags, and once a node of interest is located, we can also view the map(s) in which it appears. The map(s) inform us of the context(s) relating to the node. The ability to display the “contextual environment” of a piece of information is what makes Glyma really interesting.

In metaphorical terms, Glyma enables us to see the forest for the trees.

…and so, to conclude

My aim in this post has been to introduce readers to the IBIS notation and trace its history from its origins in issue mapping to recent developments in knowledge management.  The history of a technique is valuable because it gives insight into the rationale behind its creation, which leads to a better understanding of the different ways in which it can be used. Indeed, it would not be an exaggeration to say that the knowledge management applications discussed above are but an extension of Rittel’s original reasons for inventing IBIS.

I would like to close this piece with a couple of observations from my experience with IBIS:

Firstly, the real surprise for me has been that the technique can capture most written arguments and conversations, despite having only three distinct elements and a very simple grammar. Yes, it does require some thought to do this, particularly when mapping discussions in real time. However, this cognitive overhead is good because it forces the mapper to think about what’s being said instead of just writing it down blind. Thoughtful transcription is the aim of the game. When done right, this results in a map that truly reflects an understanding of a complex issue.

Secondly, although most current discussions of IBIS focus on its applications in dialogue mapping, it has a more important role to play in mapping organizational knowledge. Maps offer a powerful means to navigate the complex network of knowledge within an organisation. The (aspirational) end-goal of such an effort would be a “global” knowledge map that highlights interconnections between the different kinds of knowledge that exist within an organization. To be sure, such a map will make little sense to the human eye, but powerful search capabilities could make it navigable. To the extent that this is a feasible direction, I foresee IBIS becoming an important skill in the repertoire of knowledge management professionals.

Making sense of sensemaking – a conversation with Paul Culmsee


Introduction

Welcome to the second post in my conversations series. This time around I chat with my friend and long-time collaborator Paul Culmsee who, among many other things, is a skilled facilitator and a master of the craft of dialogue mapping (more on that below).

In an hour-long conversation recorded a couple of weeks ago, Paul and I talked about the art of sensemaking.  (Editor’s note: the conversation has been lightly edited for clarity)

What is sensemaking?

KA: Hi Paul, in this conversation I wanted to focus on sensemaking. From our association over the years, I know that’s a specialty of yours.  Incidentally, I checked out your LinkedIn profile and saw that you announce yourself as an IT veteran of many years and a sensemaker. So, to begin with, could you tell us what sensemaking is?

PC: [Laughs] First up, thanks for stalking me on LinkedIn. Well, the “IT veteran” part is easy – it’s what I’ve been doing ever since I left university in 1989. Sensemaking came a while later.

In a nutshell, sensemaking is about helping groups make sense of complex situations that might otherwise lead them into tense or adversarial conditions. A lot of projects exhibit such situations from time to time. Sensemaking seeks to help groups develop a shared understanding of these sorts of situations.

KA: OK, so what I’m hearing is that it is about helping people get clarity on an ambiguous situation or maybe even define what the problem is.

PC: Yeah absolutely…and we alluded to this in our Heretic’s book.  It is staggering when one realizes how many teams and individuals (in teams and in organisations) spend a stack of money and time without being aligned on the problem that they’re solving. Often this lack of alignment becomes evident only in the wash, long after anything can be done. In some ways it beggars belief that that could happen; that a project or initiative could go on for long without alignment, but it does happen quite often. Sensemaking seeks to eliminate that up front.

There are various tools, techniques and collaborative approaches that one can use to bring people together to air and reconcile different viewpoints.  Of course, this assumes that people genuinely want to see things improve, and in my experience this is often the case. A lot of the time, therefore, sensemaking is simply about helping groups reach a shared understanding so that subsequent actions can be taken with full commitment from everybody concerned.

KA: All that sounds very reasonable, even obvious. Why do you think this has been neglected for so long? Why  have people overlooked this?

PC: Mate, I’m glad we’re having beers as we talk about this [takes a swig].

Look I think it is often seen as an excuse to have a talk-fest, and I think that criticism is actually quite fair.  I think organisations…or, rather, the people within them…tend to have a very strong drive to move to action. The idea of stopping and thinking is seen as not being a particularly productive use of time.

Actually, if you delve into it, sensemaking has been around for years and years. In fact, pretty much anyone who is a facilitator is a sensemaker as well in that he or she seeks to help people [overcome an issue that they’re facing as a group].   The problem is that a lot of the techniques used in sensemaking are rooted in theories or philosophies that aren’t seen as being particularly practical. To a certain extent, the theories themselves are to blame. For example, the first time I heard about soft systems theory, I had no idea what the person was talking about. (Editor’s note: a theory that underlies many facilitation techniques)

KA: Yes, that’s absolutely right. Systems theory  itself has been around for a long time….since the 1950s I think.  It’s also been resurrected in various guises ever since, but has always had this reputation (perhaps unfair) of being somewhat impractical. So, I’m curious as to how you actually get around that. How do you sell what you do? (Editor’s note: Systems theory is the precursor of soft systems theory)

PC: By example. It’s really as simple as that. Take the example of dialogue mapping, which is a practical facilitation approach involving the visual capture of rationale using a graphical notation. Even that…which is a practical tool…is much easier to show by example than to explain conceptually. If I were to try to explain what it is in words, I’d have to say something like “I sit in a room and get paid to draw maps. I map the conversations and facilitate at the same time.” People might then say, “What’s that? Is it like mind mapping?” Then I have no choice but to say, “Well, yeah…but there’s a lot more to it than that.”

So I’ve long since given up on explaining it to people conceptually; it’s much easier to just show them. Moreover, in a lot of the situations where I do engagements for clients, I discourage the sponsor from making a big deal about the technique. I’d rather just let the technique “sell itself”.

KA: Yeah that rings true. You know, I was trying to write a blog post once on dialogue mapping, and realized it would be much better to tell it through a story (Editor’s note: …and the result was this post).

OK, so you’ve told us a bit about dialogue mapping, and I know it is a mainstay of your practice. Could you tell us a bit about how you came to it and what it has done for you?

PC: Oh, it’s changed my career.  In terms of what it’s done for me – well, where I am now is a direct result of my taking up that craft. And I call it a craft because it took a damn lot of practice. It is not something where you can read a book and go “Oh that makes sense,” and then expect to facilitate a group of twenty people or anything like that.

How I came to it was as follows: I had come off a large failed project and was asking myself what I could have done differently.  In hindsight, the problem was pretty obvious:  there were times when things were said by certain stakeholders and I should have gone, “Right, stop!” But I didn’t.  Of course, such mistakes are part of a learning journey.  I subsequently did some research on techniques that might have helped resolve such issues and came to dialogue mapping directly as a result of that research.

Then, through sheer luck I got to apply the technique in areas other than my discipline.  As I said earlier, I’m an IT guy and have been in IT for a long time, but I was lucky to get an early opportunity to use dialogue mapping in an area that was very different from IT (Urban planning to be precise).  I sucked at it completely in that first engagement, but did enough that the client got value out of it and asked me back.

That engagement was a sink or swim situation, and I managed to do just enough to stay afloat.  I should also say that the group I worked with really wanted to succeed: even though they were deadlocked, the group as a whole had a genuine intent to address the problem. Fortunately we were able to make a small breakthrough in the first session. We ended up doing six more sessions and had a really good outcome for the organisation.

Framing questions

KA: That’s interesting…but also a little bit scary.  A lot of people would find a situation like that quite daunting to facilitate.  In particular, when you walk into a situation where you know a group has been grappling with a problem for a long time, you first need to make sense of it yourself. How do you do that?

PC: Yeah, well as you do more of it, you gain experience of different situations and domains – for example, not-for-profit organizations, executives of a business, public sector or what have you. You then begin to notice that the patterns behind complex problems are actually quite similar across different areas. Although I can’t quite put my finger on what exactly I do, I would say that it is largely about “listening to the situation” and “asking the right questions”.

When Jeff Conklin teaches dialogue mapping, he talks about the seven different question types (Editor’s note: Jeff Conklin is the inventor of dialogue mapping. See this post for more on his question types). He really gets you to think about the questions you’re going to ask. It took me a while to realize just how important that is: the power of asking questions in the right way or framing them appropriately. Indeed, the real learning for me began when I realized this, and it happened long after I had mastered the notation.

The fact is: each situation is unique. I approach strategic planning, team development or business analysis in completely different ways. I can’t give you a generic answer on the approach, but certainly nowadays when I’m presented with a scenario, I find that there is not much that is unfamiliar. I’ve seen most of the territory now, perhaps.

KA: So it’s almost like you’ve got a “library” of patterns in which you can find a match to the situation you’re in.

PC: Yes that’s right…and I should also mention that the guys I worked with in my early days of using the craft were also sensitive to this, even though they did not practice dialogue mapping.  One of my earliest gigs was to develop a procurement strategy for a major infrastructure project. We spent half a day – from 8:30 in the morning to 1:00 in the afternoon – just trying to figure out what the first question should be.  It’s conversations like that in the early days, followed by trial and error in actual facilitation scenarios that aided my learning.

KA: That’s interesting, and I’d like to pick up on what you said earlier about the power of asking the right questions. Jeff Conklin has his seven question types which he elaborates at length in his book (and we also talk about them in the Heretic’s Guide).  However, since then, I know that your thinking on this has advanced considerably. Could you tell us a bit more about this?

PC: Yeah, if we ever do a second edition of the Heretic’s Guide, I’ll definitely be covering this kind of stuff.   But, let me try to explain some of the ideas here in brief.

To set the context, I’ll start with one of Jeff’s question types. An important question type is the deontic question, which is a question that a lot of maps start with. A deontic question asks “What ought to be done?” – for example, “What should we do about X?” The aim of such a question is to open up a conversation.

However, deontic questions can be poorly framed. To take a concrete example, say if one were to ask, “What should we do about increasing  X?” – well such a question implicitly suggests a course of action – i.e. one that increases X. A well framed deontic question doesn’t do that.  It solicits information in a neutral or open way. (Editor’s Note: For example, a well-framed alternative to the foregoing question would be, “What should we do about X?”)

All that is well and good, and is something I teach in my dialogue mapping courses, as does Jeff in his. However, I once taught dialogue mapping to a bunch of business analysts, and of course told them about the importance of asking deontic questions. Some of the guys told me that they intended to use it at work the following week. Well, I saw them again a few weeks later and naturally asked, “So how did it go?” They said, “Hey, that deontic question just didn’t work!”

I kind of realized then, and in fact I had mentioned it to them (but maybe not stressed it enough), that questions need to be framed to suit the situation. You can ask an open deontic question in a really bad way…or even lead at the wrong time with the wrong question.

The more I thought about it, the more I realized that there are patterns to [framing] questions. In other words, there are ways of asking questions that will lead to better outcomes. As an example, a deontic question might be, “What should our success indicators be on this project?” This is a perfectly valid open question (as per Conklin’s question types). However, in a workshop setting this is probably not a good question to ask because the conversation will go all over the place without reaching any consensus.  Moreover, people who don’t have tolerance for ambiguity would be uncomfortable with a question like that.

A better option would be  to reframe the question and ask something like, “If this initiative were highly successful and we look back on it after, say 2 years, how would things be different to now?” With that question what you’re saying to people is: let’s not even worry about problem definition, context, criteria and all that stuff that comes with a deontic question; instead you’re asking them to tell you about the difference between now and an aspirational future. This is easier to answer. A lot of people will say things like – we’ll have more of X and less of Y and so on.

On a related note, if you want to understand the long-term implications of “more of X” or “less of Y”, it is not helpful to ask a question like, “What do you think will happen in the long-term?” People won’t intuitively understand that, so you won’t get a useful answer. Instead it is better to ask a question like, “What behaviours do you think will change if we do all this sort of stuff?” Now, if you think about it, the immediate outcomes of projects are things like “increased awareness of something “or “better access to something”, but over time you’re looking for changes in behaviours because that’s when you know that the changes wrought by an initiative have really taken root.

Subtle reframings of this kind yield richer answers that are more meaningful to people. Moreover, when you solicit responses from a group in this manner, you’ll start to see common themes emerge. These are the sorts of subtleties I have come to understand and appreciate through my practice of sensemaking.

Obliquity

KA: That’s fascinating. So what you’re saying is that rather than asking a direct question it is better to ask an oblique one. Is that right?

PC: Yes, and I think that point is worth elaborating. You used the word “oblique” and I know you’re using it deliberately because we’ve talked about this in the past. Essentially, I think the “law of asking questions” is that the more direct the question, the less likely it is that you will get a useful answer.

Seriously, asking a question like “What should our vision be?” is a completely brainless way of getting to a vision. You’re more likely to get a useful answer from a question like “What would our organization look like three years from now, if we achieved all we are setting out to do?” The themes that come out of the answers to these kinds of question help in answering the direct question.

I’ve learnt that the question that everyone wants answered is never the one you start with. If you start with the direct question, the conversation will meander over all kinds of weird places.

I came across the notion of obliquity in an article (…and I think it was one of the rare times I sent you something instead of the other way around). In the article, the author (John Kay) made the observation that organisations that chased KPIs like earnings per share (for example) generally did not do as well as organisations that had a more holistic vision (Editor’s note: I also recommend Kay’s book on obliquity). One example Kay gives is that of Microsoft, whose stated objective in the 90s was “a PC on every desk in the world.” Microsoft achieved the earnings per share alright, but via an oblique goal.

Organisations that chase earnings per share or other financial metrics tend to be like the folks who “seek happiness” directly instead of trying to find it by, say, immersing themselves in activities that they enjoy. I guess I observed that the principle of obliquity – that things are best achieved indirectly – also applies to the art of asking questions.

KA: That makes a lot of sense. Indeed, after we exchanged notes on the article and Kay’s book, I’ve noticed this idea of obliquity popping up in all different kinds of contexts. I’m not sure why this is, but I think it has something to do with the fact that we don’t really know how the future is going to unfold, and obliquity helps open our minds up to possibilities that we would overlook if we took a “straight line from A to B” kind of approach.

PC: Yep, and that brings up an interesting aspect to oblique questions as well. You know, some people – especially those trained in a standard business school curriculum – will be surprised if you ask them an oblique question because it seemingly makes no sense. They might say, “Well, why would you ask that? What we really want to answer is this…”

Well, I’ve found a way to deal with this, and I learnt this from working with a facilitator who is a professor at one of the business schools here (in Perth).  This was in a strategic planning workshop that we co-facilitated.  Before starting, she walked up to the whiteboard and sketched out a very simple strategic planning model – literally a diagram that said here’s our vision, and the vision leads to a mission, which leads to areas of focus which, in turn, lead to processes…a simple causal diagram with a few boxes connected by arrows.

She spent only a minute or so explaining this model; she didn’t do it in any detail. Then she pointed to a particular box and said, “We’re going to talk about this particular one now.” And I don’t know why this is, but when you present a little model like that (which is familiar to the audience) and say that you’re going to focus on a particular aspect of it, people seem to become more receptive to ambiguity, and you can then get as oblique as you like. Perhaps this is because the narrative is then aligned with a mental model that is familiar to them.

What I’ve learnt, in effect, is that you can’t talk about the wonders of complex systems theory to a bunch of rational project managers. Conversely, when I’m dealing with a group of facilitators (who love all that systems theory stuff) I would never draw a management model going from vision to mission to execution. But when dealing with the corporate world, I will often use a model like that. Not to educate them – they already know the model – but purely to reduce their anxiety. After that I can ask them the questions I really want to ask. It’s a subtle trick: you put things in a familiar frame and then, once you have done that, you can get as oblique as you like.

Tying this back to a question you asked earlier about how I prepare for a facilitation session: well, I usually try to figure out the audience first. If I’m dealing with the public sector I might set the stage by talking about wicked problems, whereas with corporate clients I might start with a Strategic Planning 101 sort of presentation. Either way, I find a frame that is familiar to them and then – almost like a sleight of hand – I switch to the questions I really want to ask. Does that make sense?

KA: Yeah, so to summarise: you give them a security blanket and then scare the hell out of them [laughs]

PC: That’s actually pretty well summarized [laughs]; I like where you’re going with that…but I’d put it slightly differently. It’s actually a bit like when you’re trying to get little kids to eat something they don’t want to eat – you go, “see, here’s the choo-choo train” or something like that, and then get them to have a spoonful while they’re focusing on that. In a way it’s like creating a distraction. But the aim is really to couch things in such a way as to get to a point where you can start to have a productive dialogue. And the dialogue itself is driven by powerful questions.

From obliquity to directness

KA: By powerful, I guess you mean oblique

PC: Yeah, the oblique aspect of questions is a common thread that runs through much of what I do now. Mind you, I don’t stay oblique all the way through a session. I start obliquely because I want to unpack a problem. Eventually, though, as people start to get insights and themes begin to emerge, I become more direct; I start to ask things such as who, when etc…putting names and dates down on the map.

KA: So you get more direct in your questioning as the group starts to reach a common understanding of a problem.

PC: That’s exactly right. But there’s the other side to it (and by the way, you should have a conversation with my colleague Neil on this kind of stuff):  you typically have a mixed audience, the “left-brainers” who are rational engineer types as well as the “hippies” (the so-called “right-brainers”) who want to stay out in conceptual-land. Both groups like to stay in their comfort zone: the engineers don’t like moving to conceptual-land because they see it as a waste of time; on the other hand conceptual people don’t like moving to action because the conceptual world feels safer to them. So I sort of trick the engineers into doing conceptual stuff while also pushing conceptual guys into answering more direct questions.

Other techniques and skills

KA: Interesting. Let’s talk a bit about techniques – I know dialogue mapping is a mainstay of much of what you do. What are some of the other techniques you use [to draw people out of their comfort zones]? You mentioned soft systems theory and a few others; you seem to have quite a tool-chest of techniques to draw upon.

PC: Yeah well, when I got into mapping, I also looked at other techniques. I was interested in what else you could do, so I looked at various gamestorming techniques, graphic facilitation and, of course, many methods based on the principles of soft systems and related theories.

I use techniques from both the right-brained and left-brained ends of the spectrum…and by the way, I apologize to any neuroscientists who might be reading/listening to this because I know they hate the term left/right brained. However, I do find it useful sometimes.

Anyway, a popular technique on the right-brained end of the spectrum is Open Space, which operates almost entirely in conceptual-land. It relies on the wisdom of the crowd; there is no preset agenda, just a theme. People sit in a circle, there’s a Tibetan bell…and on the surface it all seems quite hippy. However, I’ve actually done it in construction projects where you have folks who have come off a building site, dressed in their safety gear – hard hats and all – participating in such sessions. And it does work, despite its touchy-feely, hippy image.

On the other hand, once you’ve conceptualized a project you need to get down to hard work of getting stuff done.  One of the first questions that comes up is, “How do we measure success?”  This usually boils down to defining KPIs. Now, I would never dialogue map or open space a conversation on KPIs. You might get a few themes from dialogue mapping, but definitely not enough detail. Instead, one of the things I often do is go to an online KPI library (like http://kpilibrary.com which has over 7000 KPIs) in which you can find KPIs relating to any area you can think of, ranging from project management to customer service to quality or sustainability. I’ll then print relevant ones on cards and then use a card-sorting technique in which I put people in different groups and ask them to look at specific focus areas [that emerged from the conceptualization phase], and figure out which KPIs are relevant to it.

Why do I do that? Not because I think they will find the KPI. They probably won’t. It’s more because such a process avoids those inevitable epic arguments on what a KPI actually is.

A very effective technique is to spend half a day unpacking a problem via dialogue mapping and get key themes to emerge. This “conceptualization phase” is done with the whole group. Then, when you want to drill down into detailed actions, it helps to use a divide and conquer approach. This is why I split people up into smaller groups and get them to go off and work on themes that emerged from the conceptualization phase. The aim is to get them to come up with concrete KPIs or even actions. If I’m feeling really evil, when there’s only 10 minutes left, I’ll tell them that they can present only their top four actions or KPIs. This forces them to prioritise things according to value. It’s a bit like a Delphi technique really. Finally, the groups come back together and present all their findings, which I then dialogue map. Once that is done, the larger group (together) will turn to the map and synthesise the outputs of what the smaller groups have done. This example is quite typical of the kind of stuff that I do.

Another example: I did some strategic planning work for local government – this was in the area of urban planning. Now, we did not want them to just copy someone else’s community development plan and “cookie cutter” it. So the very first thing we did was a dialogue mapping session geared towards answering a couple of questions: 1) if the community development plan for this organization was highly successful, how would things be different from how they are now? And, 2) what is unique about this particular area (shire)?

Then, in the second workshop we got some of the best community development plans from around Australia and put each one at a different table. We split people up into groups and got them to spend time at each table. Their job was to note down, on flipcharts, the pros and cons of each plan. The first iteration took about half an hour – presumably because this was the first time many of them were reading a development plan. Once the first iteration was done, people moved to the next table and so on, in round robin fashion.

By the end of that exercise, everyone was a world expert in reading community development plans. By the time people got to their third plan, they were flicking through the pages pretty fast, noting down the things they agreed or disagreed with. Then they came back together and did a synthesis of the common themes – what was good, what was bad and so on.

Finally, we dialogue mapped again, and this time the question was, “Given what we have seen in all of the other plans, what are we going to do differently to mitigate the issues we have seen with some of them?”  That pretty much nailed what they were going to do with their plan.

The need to improvise

KA: From what you’re saying it seems that almost every situation you walk into is different, and you almost have to design your approach as you go along.   I suppose you would make a guess or some tentative plans based on your knowledge of the make-up of the audience, but wouldn’t you also have to adjust a lot on-the-fly?

PC: Oh yeah, all the time. And in fact, that is more a help than a hindrance. I’ll tell you why…by example again.

Often groups will tell you what they aspire to do. They might say, “as a general principle, we will do this” or something along those lines.  For me that’s gold because I can use it on them [laughs].

In fact, I did this a couple of days ago in a workshop. Earlier in the workshop they had said, “It’s OK to make mistakes as long as we are honest with each other and upfront about it.” I totally used that on them towards the end of the workshop when I said, “Given that you guys are honest with each other, the question I’d like you to answer is – what keeps you up at night with this project?”

My colleague uses the phrase, “hang them by their own petard” when we do this kind of stuff [laughter]. I guess what we’re doing, though, is calling them out on what they espouse, and getting them to live it. If you can do that in a workshop, it is brilliant. So I’m always on the lookout for opportunities to improvise like that, particularly when it is a matter of (espoused) principle.

There are many times when I’ve been in workshops where the corporate values are hanging on a wall – in a boardroom, right –  and I’m witnessing them get completely trashed in the conversation that’s happening. So I like to hold people to account to what’s stated…and these are sneaky little subtle ways in which one can do that.

KA: I’m sure you come across situations where a certain approach doesn’t go down very well – maybe people start to get defensive or even question the approach. Does that happen, and how do you deal with that?

PC: I’ve never had a situation where people question the approach I use. I guess that’s because we’re able to deal with that as it happens. For example, if I’m going a bit too “hippie” on a group and I see that they need more structure then I’ll change my approach to suit the group and then gently nudge them back to where I want them to be.

I also co-facilitate with other people…and sometimes they’re the ones who design the workshops, or I co-design it with them.  Often it is their tolerance for ambiguity that can be a roadblock. One facilitator I work with loves emergence. This is my crass generalization, but anyone who thinks complex systems theory is just it will be happy to let a group get mired in ambiguity. The group might be struggling, but as far as the facilitator is concerned that’s OK because he or she believes that ambiguity is necessary for an emergent outcome. What they forget is that not everyone has the same level of tolerance for ambiguity.

On the other hand, I also work with highly structured facilitators who follow a set path – “we’ll do this, then we’ll do that and so on”. This approach might not go so well with people who prefer more open-ended approaches.

These sorts of experiences have been handy. When designing my own workshops, I’m the ultimate bower bird: I cherry pick whatever I need and improvise on the fly. So I tend not to worry about the risks of people not finding the workshop of value. That probably comes from a level of confidence too: we’re reasonably confident that we know our craft and have enough experience to deal with unexpected situations.

Coda – capturing organisational knowledge

KA:  Thanks for the insights into sensemaking.  Now, if you don’t mind, I’d like to switch tack and talk about something that your organization – Seven Sigma – is currently involved in. I know you guys started out as a SharePoint outfit, and you’ve been doing some interesting things in SharePoint relating to knowledge management. Could you tell us a bit about that before we wrap up?

PC: Sure. To begin with, dialogue maps are a pretty good knowledge artifact. Anyone who has used the Compendium software will know this (Editor’s note: Compendium is a free software tool that can be used for dialogue mapping). I’ve used it extensively for the last five years and have an “encyclopedia of conversations” that I have mapped. When I go and look at them, they’re as vivid to me as on the day I mapped them. So I’ve always been fascinated by the power of dialogue maps as a visceral way to capture the wisdom of a group at a point in time.

Now SharePoint is a collaborative platform that’s often used for intranets, project portals, knowledge management portals and so forth. It’s a fairly versatile platform. The Compendium software, on the other hand, is not a multi-user, collaborative platform. It’s more like Photoshop or Word in that you use it to create an artefact – a map – but if someone else wants to see the map then he or she has to install Compendium. And it can be a bit of a pain in the butt to install Compendium as it is a freebie, open source product that doesn’t really fit in an enterprise environment. We’ve therefore always wanted the ability to import maps into SharePoint, and my colleague Chris [Tomich] had already started writing some code to do that around the time we first got into dialogue mapping.

However, my own Aha moment came later; and come to think of it, the fact that we’re having beers in this conversation is relevant to this story…

I was dialogue mapping a group of executives about 4-5 years ago; it was a team-building exercise built around a lessons learned workshop. The purpose of the workshop was to improve the collaborative and team maturity of this group. [As a part of this exercise] the group was reviewing some old projects, doing a sort of retrospective lessons learned. We got to this one project, and someone complained about an organisational policy that had caused an issue on the project.  Now as it happened, there was this guy in the group who knew how this policy had come about (I knew this guy, by the way, and I also knew that he was about to retire). He said, “Oh yeah, well that happened about 30 years ago, and it was on so-and-so project.” He then proceeded to elaborate on it.

Well, I knew this guy was about to retire. I also knew that the organization does this “phased retirement” thing, where people who are about to retire write documentation about what they do, mentor their successors etc. before they leave. I remember thinking to myself, “there’s no way in hell that he would have written that down in his documentation.”  I just knew it: he had to be in that particular conversation for him to have remembered this.

Then my next thought…and as I’m mapping, I’m having this thought … “man, someone just ought to give him a beer, sit him down, and ask him about these kinds of old projects. I could map that video…how hard could that be? If one can map live conversations then surely one can map videos.”  In fact, videos should be simpler because you can pause them, which is something you can’t do with live conversations.

So that was literally the little spark. It started with me thinking about how great it would be if this guy could spend even half his time recording his reflections…and this could take different forms: maybe it could be a grad asking him questions in a mentoring scenario, or it could be another person he has worked with for years and they could reminisce over various projects. The possibilities were endless. But the basic idea was simple: it was to try to capture those sorts of water cooler or pub conversations, or those that you have at conferences. That’s where we get many of our insights – it’s the stories, the war stories, through which we learn. That kind of stuff never gets into the manuals or knowledge-base articles.

Indeed, stories are the key to those unwritten insights about, say, when it is OK to break the rules. That kind of stuff can never be captured in the processes, manuals or procedures. One of your pieces highlights this beautifully – it’s one of the parables you’ve written I think, where an experienced project manager suggested to the novice that he should be listening to the stories rather than focusing on the body of knowledge he was studying.  And that is completely true.

So that was, to coin a pun, the “glimmer of an idea” – because the product is called Glyma. The idea was to capture expert knowledge by mapping it and storing the map in SharePoint. SharePoint has a great search engine and we already knew that dialogue maps are a great way to capture conversations in a way that makes it easy to understand and navigate rationale (or the logic of a conversation).  If we could do dialogue maps live, then we sure as hell could map videos. Moreover SharePoint also offers the possibility of tagging, adding feeds etc. – the kinds of things that portals these days are good at. It occurred to us that no one had really done that before.

Sure, there are plenty of story captures, say where people record reflections on video. But because the resulting videos tend to be quite big, they are usually edited down to 15-minute “elevator pitch” type presentations. But then all the good stuff is gone; indeed, you and I have had many of these brief conversations where you’re summarising something terrific you’ve read and I’ll go, “Yeah well, that doesn’t sound so interesting to me.” The point is: you can’t compress insight into a convenient 10-minute video with nice music. So our idea was – well, don’t do that; take the video as it is and map it. Then, if you click on a node – say an idea node or a question node – Glyma will play the video from the point in time where the idea or question came up. You don’t have to sit through the entire thing. Moreover, when you do a search and get a series of results, you can click on a result and it plays that bit straight away.
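(Editor’s note: to make the node-to-video idea concrete, here is a minimal sketch of how an IBIS-style map node might carry a pointer into a recording, so that selecting the node – or a search hit – plays the video from the moment the point was raised. This is purely illustrative: the types, names and functions below are hypothetical and are not Glyma’s actual data model or API.)

```typescript
// Illustrative sketch only – hypothetical names, not Glyma's actual data model or API.

type NodeType = "question" | "idea" | "pro" | "con";

interface MapNode {
  id: string;
  type: NodeType;
  label: string;          // text shown on the map, e.g. "What keeps you up at night?"
  videoUrl: string;       // the recording this node was mapped from
  offsetSeconds: number;  // where in the recording the point was raised
}

// Play the recording from the moment the selected node was captured.
function playFromNode(node: MapNode, player: HTMLVideoElement): void {
  if (player.src !== node.videoUrl) {
    player.src = node.videoUrl; // load the right recording if it isn't already loaded
  }
  player.currentTime = node.offsetSeconds;
  void player.play();
}

// Naive keyword search over node labels; clicking a result would call playFromNode.
function searchNodes(nodes: MapNode[], term: string): MapNode[] {
  const needle = term.toLowerCase();
  return nodes.filter((n) => n.label.toLowerCase().includes(needle));
}
```

(The essential design point is simply that each node stores an offset into the source recording, so both map navigation and search land the viewer at the relevant moment rather than at the start of the video.)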

So that was really the inspiration for Glyma…and it will see the light of day very soon.

Actually we’ll be putting a beta site out early next week (Editor’s note: the site has since gone live; I urge you to check it out). By the way, Glyma has been four years in the making. One of the last things on our bucket list of things to do while running a consultancy was to put an innovative new product out and see if people like it. So that’s where we are going with that.

KA: That sounds very interesting. The timing should work out well because this conversation will be posted in a week or two as well. I wish you luck with Glyma; I’ve seen some early versions of it and it looks really good. I look forward to seeing how it does in the marketplace and what sort of reception it receives. I certainly hope it gets the reception it deserves because it is a tremendous idea.

PC: Well thank you; I appreciate your saying that…and we’ll see if you still say that once you’ve mapped this video because that might be your homework. [laughs]

KA: [laughs] Alright, great mate.  Well, thanks for your time. I think that’s been a really interesting conversation. We’ll chat about Glyma further after it’s been out for a while.

PC: Yeah absolutely.

KA: Cheers mate.

PC: See ya.

Written by K

June 18, 2014 at 7:31 am

The unspoken life of information in organisations

with 5 comments

Introduction

Many activities in organisations are driven by information. Chief among these is decision-making: when faced with a decision, those involved will seek information on the available choices and their (expected) consequences. Or so the theory goes.

In reality, information plays a role that does not quite square with this view. For instance, decision makers may expend considerable time and effort in gathering information, only to ignore it when making their choices. In this case information plays a symbolic role, signifying the competence of the decision-maker (the volume of information being a measure of competence) rather than being a means of facilitating a decision. In this post I discuss such common but unspoken uses of information in organisations, drawing on a paper by James March and Martha Feldman entitled Information in Organizations as Signal and Symbol.

Information perversity

As I have discussed in an earlier post, the standard view of decision-making is that choices are based on an analysis of their consequences and (the decision-maker’s) preferences for those consequences.  These consequences and preferences generally refer to events in the future and are therefore uncertain. The main role of information is to reduce this uncertainty.  In such a rational paradigm, one would expect that  information gathering and utilization are consistent with the process of decision making.  Among other things this implies that:

  1. The required information is gathered prior to the decision being made.
  2. All relevant information is used in the decision-making process.
  3. All available information is evaluated prior to requesting further information.
  4. Information that is not relevant to a decision is not collected.

In reality, the above expectations are often violated. For example:

  1. Information is gathered selectively after a decision has been made (only information that supports the decision is chosen).
  2. Relevant information is ignored.
  3. Requests for further information are made before all the information at hand is used.
  4. Information that has no bearing on the decision is sought.

On the face of it, such behaviour is perverse – why on earth would someone take the trouble to gather information if they are not going to use it?  As we’ll see next, there are good reasons for such “information perversity”, some of which are obvious but others that are less so.

Reasons for information perversity

There are a couple of straightforward reasons why a significant portion of the information gathered by organisations is never used. These are:

  1. Humans have bounded cognitive capacities, so there is a limit to the amount of information they can process. Anything beyond this leads to information overload.
  2. Information gathered is often unusable in that it is irrelevant to the decision that is to be made.

Although these reasons are valid in many situations, March and Feldman assert that there are other less obvious but possibly more important reasons why information gathered is not used. I describe these in some detail below.

Misaligned incentives

One of the reasons for the mountains of unused information in organisations is that certain groups of people (who may not even be users of information) have incentives to gather information regardless of its utility. March and Feldman describe a couple of scenarios in which this can happen:

  1. Mismatched interests: In most organisations the people who use information are not the same as those who gather and distribute it. Typically, information users tend to be from  business functions (finance, sales, marketing etc.) whereas gatherers/distributors are from IT. Users are after relevant information whereas IT is generally interested in volume rather than relevance. This can result in the collection of data that nobody is going to use.
  2.   “After the fact” assessment of decisions:  Decision makers know that many (most?) of their decisions will later turn out to be suboptimal. In other words,   after-the-fact assessments of their decision may lead to the realisation that those decisions ought to have been made differently. In view of this, decision makers have good reason to try to anticipate as many different outcomes as they can, which leads to them gathering more information than can be used.

Information as measurement

Often organisations collect information to measure performance or monitor their environments. For example, sales information is collected to check progress against targets, and employees are required to log their working times to ensure that they are putting in the hours they are supposed to. Information collected in such a surveillance mode is not relevant to any decision except when corrective action is required. Most of the information collected for this purpose is never used, even though it could well contain interesting insights.

Information as a means to support hidden agendas

People often use information to build arguments that support their favoured positions. In such cases it is inevitable that information will be misrepresented.  Such strategic misrepresentation (aka lying!) can cause more information to be gathered than necessary. As March and Feldman state in the paper:

Strategic misrepresentation also stimulates the oversupply of information. Competition among contending liars turns persuasion into a contest in (mostly unreliable) information. If most received information is confounded by unknown misrepresentations reflecting a complicated game played under conditions of conflicting interests, a decision maker would be curiously unwise to consider information as though it were innocent. The modest analyses of simplified versions of this problem suggest the difficulty of devising incentive schemes that yield unambiguously usable information…

As a consequence, decision makers end up not believing information, especially if it is used or generated by parties that (in the decision-makers’ view) may have hidden agendas.

The above points are true enough. However, March and Feldman suggest that there is a more subtle reason for information perversity in organisations.

The symbolic significance of information

In my earlier post on decision making in organisations I stated that:

…the official line about decision making being a rational process that is concerned with optimizing choices on the basis of consequences and preferences is not the whole story. Our decisions are influenced by a host of other factors, ranging from the rules that govern our work lives to our desires and fears, or even what happened at home yesterday. In short: the choices we make often depend on things we are only dimly aware of.

One of the central myths of modern organisations is that decision making is essentially a rational process.  In reality, decision making is often a ritualised activity consisting of going through the motions of identifying choices, their consequences and our preferences for them.  In such cases, information has a symbolic significance; it adds to the credibility of the decision. Moreover, the greater the volume of information, the greater the credibility (providing, of course, that the information is presented in an attractive format!). Such a process reaffirms the competence of those involved and reassures those in positions of authority that the right decision has been made, regardless of the validity or relevance of the information used.

Information is thus a symbol of rational decision making; it signals (or denotes) competence in decision making and that the decision made is valid.

Conclusion

In this article I have discussed the unspoken life of information in organisations – how it is used in ways that do not square with a rational process of decision making. As March and Feldman put it:

Individuals and organizations invest in information and information systems, but their investments do not seem to make decision-theory sense. Organizational participants seem to find value in information that has no great decision relevance. They gather information and do not use it. They ask for reports and do not read them. They act first and receive requested information later.

Some of the reasons for such “information perversity” are straightforward: they include limited human cognitive capacity, irrelevant information, misaligned incentives and even lying!  But above all, organisations gather information because it symbolises proper decision-making behaviour and provides assurance of the validity of decisions, regardless of whether or not decisions are actually made on a rational basis. To conclude: the official line about information spins a tale about its role in rational decision-making, but the unspoken life of information in organisations tells another story.

Written by K

June 14, 2012 at 5:55 am

“The Heretic’s Guide to Best Practices” wins bronze at the 5th Annual Axiom Business Book Awards.

with 2 comments

I’m delighted to announce that the book that Paul Culmsee and I published recently has been awarded a bronze medal at the Axiom Business Book Awards for 2012, under the category Operations Management/Lean/Continuous Improvement.

http://www.axiomawards.com

We are truly honoured that the panel found our efforts worthy of an award.

If you are interested in finding out more about the book, please check out the review by Shim Marom and the one by Scott McCrickard.

There are also a number of customer reviews on Amazon.

http://www.amazon.com/Heretics-Guide-Best-Practices-Organisations/dp/1938908406

The Heretic’s Guide is a self-published book with no big publisher marketing behind it, so we’d greatly appreciate your spreading the word!

On the ineffable tacitness of knowledge

with 7 comments

Introduction

Knowledge management (KM) is essentially about capturing and disseminating the know-how,  insights and experiences  that exist within an organisation.  Although much is expected of KM initiatives, most end up delivering document repositories that are of as much help in managing knowledge as a bus is in getting to the moon. In this post I look into the question of why KM initiatives fail, drawing on a couple of sources that explore the personal nature of knowledge.

Explicit and tacit knowledge in KM

Most KM professionals are familiar with the terms explicit and tacit knowledge.  The first refers to knowledge that can be expressed in writing or speech, whereas the second refers to knowledge that cannot.  Examples of the former include driving directions (how to get from A to B) or a musical score; examples of the latter include the ability to drive or to play a musical instrument.  This seems reasonable enough: a musician can learn how to play a piece by studying a score, but a non-musician cannot learn to play an instrument by reading a book.

In their influential book, The Knowledge-Creating Company, Ikujiro Nonaka and Hirotaka Takeuchi proposed a model of knowledge creation [1] based on their claim that: “human knowledge is created and expanded through social interaction between tacit knowledge and explicit knowledge.” It would take me too far afield to discuss their knowledge creation model in full here – see this article for a quick summary.  However, the following aspects of it are relevant to the present discussion:

  1. The two forms of knowledge (tacit and explicit) can be converted from one to the other. In particular, it is possible to convert tacit knowledge to an explicit form.
  2. Knowledge can be transferred (from person to person).

In the remainder of this article I’ll discuss why these claims aren’t entirely valid.

All knowledge has tacit and explicit elements

In a paper entitled Do we really understand tacit knowledge?, Haridimos Tsoukas discusses why Nonaka and Takeuchi’s view of knowledge is incomplete, if not incorrect. To do so, he draws upon the writings of the philosopher Michael Polanyi.

According to Polanyi, all knowledge has tacit and explicit elements. This is true even of theoretical knowledge that can be codified in symbols (mathematical knowledge, for example). Quoting from Tsoukas’ paper:

…if one takes a closer look at how theoretical (or codified) knowledge is actually used in practice, one will see the extent to which theoretical knowledge itself, far from being as objective, self-sustaining, and explicit as it is often taken to be, it is actually grounded on personal judgements and tacit commitments. Even the most theoretical form of knowledge, such as pure mathematics, cannot be a completely formalised system, since it is based for its application and development on the skills of mathematicians and how such skills are used in practice.

Mathematical proofs are written in a notation that is (supposed to be) completely unambiguous.  Yet every   mathematician will understand a proof  (in the sense of its implications rather than its veracity) in his or her own way.  Moreover, based on their personal understandings, some mathematicians will be able to derive insights that others won’t. Indeed this is how we distinguish between skilled and less skilled mathematicians.

Polanyi claimed that all knowing consists at least in part of skillful action because the knower participates in the act of understanding and assimilating what is known.

Lest this example seem too academic, let’s consider a more commonplace one taken from Tsoukas’ paper: that of a person reading a map.

Although a map is an explicit representation of location, in order to actually use a map to get from A to B a person needs to:

  1. Locate A on the map.
  2. Plot out a route from A to B.
  3. Traverse the plotted route by identifying landmarks, street names etc. in the real world and interpreting them in terms of the plotted route.

In other words, the person has to make use of his or her senses and cognitive abilities in order to use the (explicit) knowledge captured in the map. The point is that the person will do this in a way that he or she cannot fully explain to anyone else. In this sense, the person’s understanding (or knowledge) of what’s in the map manifests itself in how he or she actually goes about getting from A to B.

The nub of the matter: focal and subsidiary awareness

Let me get to the heart of the matter through another example that is especially relevant as I sit at my desk writing these words.

I ask the following question:

What is it that enables me to write these lines using my knowledge of the English language, papers on knowledge management and a host of other things that I’m not even aware of?

I’ll begin my answer by quoting yet again from Tsoukas’ paper,

 For Polanyi the starting point towards answering this question is to acknowledge that “the aim of a skilful performance is achieved by the observance of a set of rules which are not known as such to the person following them.” …Interestingly, such ignorance is hardly detrimental to [the] effective carrying out of [the]  task…

Any particular elements of the situation which may help the purpose of a mental effort are selected insofar as they contribute to the performance at hand, without the performer knowing them as they would appear in themselves. The particulars are subsidiarily known insofar as they contribute to the action performed. As Polanyi remarks, ‘this is the usual process of unconscious trial and error by which we feel our way to success and may continue to improve on our success without specifiably knowing how we do it.’

Polanyi noted that there are two distinct kinds of awareness that play a role in any (knowledge-based) action. The first one is conscious awareness of what one is doing (Polanyi called this focal awareness). The second is subsidiary awareness: the things that one is not consciously aware of but nevertheless have a bearing on the action.

Back to my example: as I write these words I’m consciously aware of the words appearing on my screen as I type, whereas I’m subsidiarily aware of a host of other things I cannot fully enumerate: my thoughts, composition skills, vocabulary and all the other things that have a bearing on my writing (my typing skills, for example).

The two kinds of awareness, focal and subsidiary, are mutually exclusive: the instant I shift my awareness from the words appearing on my screen, I lose flow and the act of writing is interrupted.  Yet, both kinds of awareness are necessary for the act of writing. Moreover, since my awareness of the subsidiary elements of writing is not conscious, I cannot describe them. The minute I shift attention to them, the nature of my awareness of them changes – they become things in their own right instead of elements that have a bearing on my writing.

In brief, the knowledge-based act of writing is composed of both conscious and subsidiary elements in an inseparable way. I can no more describe all the knowledge involved in the act than I can the full glory of a  beautiful sunset.

Wrapping up

From the above it appears that the central objective of knowledge management is essentially unattainable because all knowledge has tacit elements that cannot be “converted” or codified explicitly. We can no more capture or convert knowledge than we can “know how others know.”  Sure, one can get people to document what they do, or even capture their words and actions on media. However this does not amount to knowing what they know. In his paper, Tsoukas writes about the ineffability of tacit knowledge.  However, as I have argued,  all knowledge is ineffably tacit. I hazard that this may, at least in part, be the reason why KM initiatives fall short of their objectives.

Acknowledgement and further reading

Thanks to Paul Culmsee for getting me reading and thinking about this stuff again!  Some of the issues that I have discussed above are touched upon in the book I have written with Paul.

Finally, for those who are interested, here are some of my earlier pieces on tacit knowledge:

What is the make of that car? A tale about tacit knowledge

Why best practices are hard to practice (and what can be done about it)


Footnotes:

[1] As far as I’m aware, Nonaka and Takeuchi’s model mentioned in this article is still the gold standard in KM. In recent years, there have been a number of criticisms of the model (see this paper by Gourlay, or especially this one by Powell). Nonaka and von Krogh attempt to rebut some of the criticisms in this paper. I will leave it to interested readers to make up their own minds as to how convincing their rebuttal is.

Written by K

February 9, 2012 at 10:30 pm

On the limitations of business intelligence systems

with 7 comments

Introduction

One of the main uses of business intelligence  (BI) systems is to support decision making in organisations.  Indeed, the old term Decision Support Systems is more descriptive of such applications than the term BI systems (although the latter does have more pizzazz).   However, as Tim Van Gelder pointed out in an insightful post,  most BI tools available in the market do not offer a means to clarify the rationale behind decisions.   As he stated, “[what] business intelligence suites (and knowledge management systems) seem to lack is any way to make the thinking behind core decision processes more explicit.”

Van Gelder is absolutely right: BI tools do not support the process of decision-making directly; all they do is present data or information on which a decision can be based.  But there is more: BI systems are built on the view that data should be the primary consideration when making decisions. In this post I explore some of the (largely tacit) assumptions that flow from such a data-centric view. My discussion builds on some points made by Terry Winograd and Fernando Flores in their wonderful book, Understanding Computers and Cognition.

As we will see, the assumptions regarding the centrality of data are questionable, particularly when dealing with complex decisions. Moreover, since these assumptions are implicit in all BI systems, they highlight the limitations of using BI systems for making business decisions.

An example

To keep the discussion grounded, I’ll use a scenario to illustrate how assumptions of data-centrism can sneak into decision making. Consider a sales manager who creates sales action plans for representatives based on reports extracted from his organisation’s BI system. In doing this, he makes a number of tacit assumptions. They are:

  1. The sales action plans should be based on the data provided by the BI system.
  2. The data available in the system is relevant to the sales action plan.
  3. The information provided by the system is objectively correct.
  4. The  side-effects of basing decisions (primarily) on data are negligible.

The assumptions and why they are incorrect

Below I state some of the key assumptions of the data-centric paradigm of BI and discuss their limitations using the example of the previous section.

Decisions should be based on data alone:    BI systems promote the view that decisions can be made based on data alone.  The danger in such a view is that it overlooks social, emotional, intuitive and qualitative factors that can and should influence decisions.  For example, a sales representative may have qualitative information regarding sales prospects that cannot be inferred from the data. Such information should be factored into the sales action plan providing the representative can justify it or is willing to stand by it.

The available data is relevant to the decision being made: Another tacit assumption made by users of BI systems is that the information provided is relevant to the decisions they have to make. However, most BI systems are designed to answer specific, predetermined questions. In general these cannot cover all possible questions that managers may ask in the future.

More important is the fact that the data itself may be based on assumptions that are not known to users. For example, our sales manager may be tempted to incorporate market forecasts simply because they are available in the BI system.  However, if he chooses to use the forecasts, he will likely not take the trouble to check the assumptions behind the models that generated the forecasts.

The available data is objectively correct:  Users of BI systems tend to look upon them as a source of objective truth. One of the reasons for this is that quantitative data tends to be viewed as being more reliable than qualitative data.  However, consider the following:

  1. In many cases it is impossible to establish the veracity of quantitative data, let alone its accuracy. In extreme cases, data can be deliberately distorted or fabricated (over the last few years there have been some high profile cases of this that need no elaboration…).
  2. The imposition of arbitrary quantitative scales on qualitative data can lead to meaningless numerical measures. See my post on the limitations of scoring methods in risk analysis for a deeper discussion of this point.
  3. The information that a BI system holds is based on the subjective choices (and biases) of its designers.

In short, the data in a BI system does not represent an objective truth. It is based on subjective choices of users and designers, and thus may not be an accurate reflection of the reality it allegedly represents. (Note added on 16 Feb 2013:  See my essay on data, information and truth in organisations for more on this point).

Side-effects of data-based decisions are negligible:  When basing decisions on data, side-effects are often ignored. Although this point is closely related to the first one, it is worth making separately.  For example, judging a sales representative’s performance on sales figures alone may motivate the representative to push sales at the cost of building sustainable relationships with customers.  Another example of such behaviour is observed in call centers where employees are measured by number of calls rather than call quality (which is much harder to measure). The former metric incentivizes employees to complete calls rather than resolve issues that are raised in them. See my post entitled, measuring the unmeasurable, for a more detailed discussion of this point.

Although I have used a scenario to highlight problems of the above assumptions, they are independent of the specifics of any particular decision or system. In short, they are inherent in BI systems that are based on data – which includes most systems in operation.

Programmable and non-programmable decisions

Of course, BI systems are perfectly adequate – even indispensable – for certain situations. Examples include financial reporting (when done right!) and other operational reporting (inventory, logistics etc.).  These tend to be routine situations with clear-cut decision criteria and well-defined processes. Simply put, they involve the kinds of decisions that can be programmed.

On the other hand, many decisions cannot be programmed: they have to be made based on incomplete and/or ambiguous information that can be interpreted in a variety of ways. Examples include issues such as what an organization should do in response to increased competition or formulating a sales action plan in a rapidly changing business environment. These issues are wicked: among other things, there is a diversity of viewpoints on how they should be resolved. A business manager and a sales representative are likely to have different views on how sales action plans should be adjusted in response to a changing business environment. The shortcomings of BI systems become particularly obvious when dealing with such problems.

Some may argue that it is naïve to expect BI systems to be able to handle such problems. I agree entirely. However, it is easy to overlook the limitations of these systems, particularly when called upon to make snap decisions on complex matters. Moreover, any critical reflection regarding what BI ought to be is drowned in a deluge of vendor propaganda and advertisements masquerading as independent advice in the pages of BI trade journals.

Conclusion

In this article I have argued that BI systems have some inherent limitations as decision support tools because they focus attention on data to the exclusion of other, equally important factors.  Although the data-centric paradigm promoted by these systems is adequate for routine matters, it falls short when applied to complex decision problems.

Written by K

November 24, 2011 at 6:20 am

Inexplicit knowledge: what people know, but won’t tell

with 2 comments

Introduction

Much of the knowledge that exists in organisations remains unarticulated, in the heads of those who work at the coalface of business activities. Knowledge management professionals know this well, and use the terms explicit and tacit knowledge to distinguish between knowledge that can and can’t be communicated via language.  Incidentally, the term tacit knowledge was coined by Michael Polanyi  – and it is important to note that he used it in a sense that is very different from what it has come to mean in knowledge management. However, that’s a topic for another post.  In the present post I look at a related issue that is common in organisations: the fact that much of what people know can be made explicit, but isn’t.  Since the  discipline of knowledge management is in dire need of more jargon, I call this inexplicit knowledge. To borrow a phrase from Polanyi, inexplicit knowledge is what people know, but won’t tell.   Below, I discuss reasons why potentially explicit knowledge remains inexplicit and what can be done about it.

Why inexplicit knowledge is common

Most people would have encountered work situations in which they chose “not to tell” – remaining silent instead of sharing knowledge  that would have been helpful. Common reasons for such behaviour include:

  1. Fear of loss of ownership of the idea: People are attached to their ideas. One reason for not volunteering their ideas is the worry that someone else in the organisation (a peer or manager) might “steal” the idea. Sometimes such behaviour is institutionalised in the form of an “innovation committee” that solicits ideas, offering monetary incentives for those that are deemed the best (more on incentives below). Like most committee-based solutions, this one is a dud. A better option may be to put in place mechanisms to ensure that those who conceive and volunteer ideas are encouraged to see them through to fruition.
  2. Fear of loss of face and/or fear of reprisals: In organisational cultures that are competitive, people may fear that their ideas will be ridiculed or put down by others. Closely related to this is the fear of reprisals from management. This happens often enough, particularly when the idea challenges the status quo or those in positions of authority. One of the key responsibilities of management is to foster an environment in which people feel psychologically safe to volunteer ideas, however controversial or threatening the ideas may be.
  3. Lack of incentives:  Some people may be willing to part with their ideas, but only at a price. To address this, organisations may offer extrinsic rewards (i.e. material items such as money, gift vouchers etc.) for worthwhile ideas.  Interestingly, research has shown that non-monetary extrinsic rewards (meals, gifts etc.) are more effective than monetary ones. This makes sense – financial rewards are more easily forgotten; people are more likely to remember a meal at a top-flight restaurant than a $500 cheque. That said, it is important to note that extrinsic rewards can also lead to unintended side effects. For example, financial incentives based on quantity of contributions might lead to a glut of low-quality contributions. See the next point for a discussion of another side effect of extrinsic rewards.
  4. Wrong incentives: As I have discussed at length in my post on motivation in knowledge management projects, people will contribute their hard-earned knowledge only if they are truly engaged in their work.  Such people are intrinsically motivated (i.e. internally motivated, independent of material rewards); their satisfaction comes from their work (yes, such people do exist!).  Consequently they need little or no supervision. Intrinsic rewards are invariably non-material and they cannot be controlled by management. A surprising fact is that intrinsically motivated people can actually be turned off – even offended – by material rewards.

Psychological safety and incentives are important factors, but there is an even more important issue: the relationships between people who make up the workgroup.

Knowledge sharing and the theory of cooperative action

The work of Elinor Ostrom on collective (or cooperative) action is relevant here because knowledge sharing is a form of cooperation. According to the theory of cooperative action, there are three core relationships that promote cooperation in groups: trust, reciprocity and reputation.  Below I take a look at each of these in the context of knowledge sharing:

Trust: In the end, whether we choose to share what we know is largely a matter of trust: if we believe that others will respond positively – be it through acknowledgement or encouragement via tangible or intangible rewards –  then the chances are that we will tell what we know.  On the other hand, if the response is likely to be negative, we may prefer to remain silent.

Reciprocity: This refers to strategies that are based on treating people in the way we believe they would treat us. We are more likely to share what we know with others if we have reason to believe that they would be just as open with us.

Reputation: This refers to the views we have about the individuals we work with.  Although such views may be developed by direct observation of peoples’ behaviours, they are also greatly influenced by opinions of others. The relevance of reputation is that we are more likely to be open with people who have a good reputation.

According to Ostrom, these core relationships can be enhanced by face-to-face communication and organisational rules/ norms that promote openness. See my post on Ostrom’s work and its relevance to project management for more on this.

Summing up

One of the key challenges that organisations face is to get people working together in a cooperative manner.  Among other things this includes getting people to share their knowledge; to “tell what they know.” Unfortunately, much of this potentially explicit knowledge remains inexplicit, locked away in peoples’ heads, because there is no incentive to share or, even worse, there are factors that actively discourage people from sharing what they know. These issues can be tackled by offering employees the right incentives and creating the right environment. As important as incentives are, the latter is the more important factor:   the key to unlocking inexplicit knowledge lies in creating an environment of trust and openness.
