Archive for June 2009
The aim of an opinion piece writer is to convince his or her readers that a particular idea or point of view is reasonable or right. Typically, such pieces weave facts, interpretations and reasoning into prose, from which it can be hard to pick out the essential thread of argumentation. In an earlier post I showed how an issue map can help in clarifying the central arguments in a “difficult” piece of writing by mapping out Fred Brooks’ classic article No Silver Bullet. Note that I use the word “difficult” only because the article has, at times, been misunderstood and misquoted, not because it is particularly hard to follow. Still, Brooks’ article borders on the academic; the arguments presented therein are of interest to a relatively small group of people within the software development community. Most developers and architects aren’t terribly interested in the essential difficulties of the profession – they just want to get on with their jobs. In the present post, I develop an issue map of a piece that is of potentially wider interest to the IT community – Nicholas Carr’s 2003 article, IT Doesn’t Matter.
The main point of Carr’s article is that IT is becoming a utility, much like electricity, water or rail. As this trend towards commoditisation gains momentum, the strategic advantage offered by in-house IT will diminish, and organisations will be better served by buying IT services from “computing utility” providers than by maintaining their own IT shops. Although Carr makes a persuasive case, he glosses over a key difference between IT and other utilities (see this post for more). Despite this, many business and IT leaders have taken his words as the way things will be. It is therefore important for all IT professionals to understand Carr’s arguments. The consequences are likely to affect them some time soon, if they haven’t already.
Some preliminaries before proceeding with the map. First, the complete article is available here – you may want to have a read of it before proceeding (but this isn’t essential). Second, the discussion assumes a basic knowledge of IBIS (Issue-Based Information System) – see this post for a quick tutorial on IBIS. Third, the map is constructed using the open-source tool Compendium which can be downloaded here.
With the preliminaries out of the way, let’s get on with issue mapping Carr’s article.
So, what’s the root (i.e. central) question that Carr poses in the article? The title of the piece is “IT Doesn’t Matter” – so one possible root question is, “Why doesn’t IT matter?” But there are other candidates: “On what basis is IT an infrastructural technology?” or “Why is the strategic value of IT diminishing?” for example. From this it should be clear that there’s a fair degree of subjectivity at every step of constructing an issue map. The visual representation that I construct here is but one interpretation of Carr’s argument.
Out of the above (and many other possibilities), I choose “Why doesn’t IT matter?” as the root question. Why? Well, in my view the whole point of the piece is to convince the reader that IT doesn’t matter because it is an infrastructural technology and consequently has no strategic significance. This point should become clearer as our development of the issue map progresses.
The ideas that respond to this question aren’t immediately obvious. This isn’t unusual: as I’ve mentioned elsewhere, points can only be made sequentially – one after the other – when expressed in prose. In some cases one may have to read a piece in its entirety to figure out the elements that respond to a root (or any other) question.
In the case at hand, the response to the root question stands out clearly after a quick browse through the article. It is: IT is an infrastructural technology.
The map with the root question and the response is shown in Figure 1.
Moving on, what arguments does Carr offer for (pros) and against (cons) this idea? A reading of the article reveals one con and four pros. Let’s look at the cons first:
- IT (which I take to mean software) is complex and malleable, unlike other infrastructural technologies. This point is mentioned, in passing, on the third page of the paper: “Although more complex and malleable than its predecessors, IT has all the hallmarks of an infrastructural technology…”
The arguments supporting the idea that IT is an infrastructural technology are:
- The evolution of IT closely mirrors that of other infrastructural technologies such as electricity and rail. Although this point encompasses the other points made below, I think it merits a separate mention because the analogies are quite striking. Carr makes a very persuasive, well-researched case supporting this point.
- IT is highly replicable. This point needs no further elaboration, I think.
- IT is a transport mechanism for digital information. This is true, at least as far as network and messaging infrastructure is concerned.
- Cost effectiveness increases as IT services are shared. This is true too, provided it is understood that flexibility is lost when services are shared.
The map, incorporating the pros and cons is shown in Figure 2.
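As an aside for the programmatically inclined, the structure built so far – a root question, a responding idea, and its pros and cons – can be sketched as a simple tree. This is only an illustrative data structure of my own devising; Compendium stores its maps in its own format, and nothing below is drawn from that tool:

```python
# A minimal, hypothetical sketch of an IBIS map as a tree of typed nodes.
# Node kinds follow IBIS: "question", "idea", "pro", "con".

class Node:
    def __init__(self, kind, text):
        self.kind = kind          # IBIS node type
        self.text = text
        self.children = []        # nodes that respond to this one

    def add(self, kind, text):
        """Attach a responding node and return it, so maps can be built fluently."""
        child = Node(kind, text)
        self.children.append(child)
        return child

# Root question and the idea that responds to it
root = Node("question", "Why doesn't IT matter?")
idea = root.add("idea", "IT is an infrastructural technology")

# The one con and four pros identified above
idea.add("con", "IT is complex and malleable, unlike other infrastructural technologies")
idea.add("pro", "IT's evolution mirrors that of electricity and rail")
idea.add("pro", "IT is highly replicable")
idea.add("pro", "IT is a transport mechanism for digital information")
idea.add("pro", "Cost effectiveness increases as IT services are shared")

pros = [n for n in idea.children if n.kind == "pro"]
print(len(pros))  # prints 4
```

The point of the sketch is simply that an issue map is a tree in which every node has a type, which is what makes the argument structure explicit in a way prose cannot.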
Now that the arguments for and against the notion that IT is an infrastructural technology are laid out, let’s look at the article again, this time with an eye out for any other issues (questions) raised.
The first question is an obvious one: What are the consequences of IT being an infrastructural technology?
Another point to be considered is the role of proprietary technologies, which – by definition – aren’t infrastructural. The same holds true for custom built applications. This raises the question: if IT is an infrastructural technology, how do proprietary and custom built applications fit in?
The map, with these questions added in is shown in Figure 3.
Let’s now look at the ideas that respond to these two questions.
A point that Carr makes early in the article is that the strategic value of IT is diminishing. This is essentially a consequence of the notion that IT is an infrastructural technology. This idea is supported by the following arguments:
- IT is ubiquitous – it is everywhere, at least in the business world.
- Everyone uses it in the same way. This implies that no one gets a strategic advantage from using it.
What about proprietary technologies and custom apps? Carr reckons these are:
- Doomed to economic obsolescence. This idea is supported by the argument that these apps are too expensive and are hard to maintain.
- Related to the above, these will be replaced by generic apps that incorporate best practices. This trend is already evident in the increasing number of enterprise-type applications that are offered as services. The advantages of these are that they a) cost little, b) can be offered over the web and c) spare the client all those painful maintenance headaches.
The map incorporating these ideas and their supporting arguments is shown in Figure 4.
Finally, after painting this somewhat gloomy picture (to a corporate IT minion, such as me) Carr asks and answers the question: How should organisations deal with the changing role of IT (from strategic to operational)? His answers are:
- Reduce IT spend.
- Buy only proven technology – follow don’t lead.
- Focus on (operational) vulnerabilities rather than (strategic) opportunities.
The map incorporating this question and the ideas that respond to it is shown in Figure 5, which is also the final map (click on the graphic to view a full-sized image).
With the map completed, I’m essentially done with this post. Before closing, however, I’d like to mention a couple of general points that arise from the issue mapping of prose pieces.
Figure 5 is my interpretation of the article. I should emphasise that my interpretation may not coincide with what Carr intended to convey (in fact, it probably doesn’t). This highlights an important, if obvious, point: what a writer intends to convey in his or her writing may not coincide with how readers interpret it. Even worse, different readers may interpret a piece differently. Writers need to write with an awareness of the potential for being misunderstood. So, my first point is that issue maps can help writers clarify and improve the quality of their reasoning before they cast it in prose.
Issue maps sketch out the logical skeleton or framework of argumentative prose. As such, they can help highlight weak points of arguments. For example, in the above article Carr glosses over the complexity and malleability of software. This is a weak point of the argument, because it is a key difference between IT and traditional infrastructural technologies. Thus my second point is that issue maps can help readers visualise weak links in arguments which might have been obscured by rhetoric and persuasive writing.
To conclude, issue maps are valuable to writers and readers: writers can use issue maps to improve the quality of their arguments before committing them to writing, and readers can use such maps to understand arguments that have been thus committed.
One consequence of the increasing awareness of knowledge as an organisational asset is that many organisations have launched projects aimed at managing knowledge. Unfortunately, a large number of these efforts focus entirely on technical solutions, neglecting the need for employee participation. The latter is important; as stated in this paper, published a decade ago, “Knowledge transfer is about connection not collection, and that connection ultimately depends on choice made by individuals…” This suggests that participant motivation is a key success factor for knowledge management initiatives. A recent paper entitled, Considering Participant Motivation in Knowledge Management Projects, by Allen Whittom and Marie-Christine Roy looks at theories of motivation from the context of knowledge management projects. This post is a summary and annotated review of the paper.
Many researchers claim that the failure rate of knowledge management projects is high, but there seems to be some confusion as to just how high the figure is (see this paper, for example). In the introduction to their paper, Whittom and Roy claim that the failure rate may be higher than 80% – but they offer no supporting evidence. Still, with many independent researchers quoting figures ranging from 50 to 80%, one can take it as established that the matter merits investigation. Accordingly, many researchers have looked at causes of failure of knowledge management projects (see this paper or this one). Some specifically identify lack of participant motivation as a cause of failure (see this paper). Whittom and Roy claim that despite the work done thus far, knowledge management research does not provide any suggestions as to how motivation is to be managed in such projects. Their aim, therefore, is to:
- Present concepts from theories of motivation that are relevant to knowledge management projects.
- Propose ways in which project managers can foster participant motivation in a way that is consistent with business objectives.
These points are covered in the next two sections. The final section presents some concluding remarks.
Motivation and Knowledge Transfer
The authors define motivation as the underlying reason(s) for a person’s actions. Motivation is usually classified as extrinsic or intrinsic depending on whether its source is external or internal to the individual. People who are extrinsically motivated are driven by rewards such as bonuses or promotions. Intrinsically motivated individuals, on the other hand, are self-driven and need little supervision. Their enthusiasm, however, depends on whether their personal goals are congruent with the task at hand. This is important: their aims and objectives may not always be aligned with business goals. Further, intrinsically motivated individuals perform creative or complex tasks better than others, but this type of motivation varies greatly from one person to another and cannot be controlled by management. See my post on motivation in project management for a comprehensive discussion of extrinsic and intrinsic motivation.
The authors then discuss the link between motivation and the willingness to share knowledge. Knowledge falls into two categories: tacit and explicit. Tacit knowledge is hard to codify and communicate (e.g. a skill, such as riding a bicycle) whereas explicit knowledge can be formalised and transmitted (e.g. how to open a bank account). Tacit knowledge is in “people’s heads” and is consequently harder to capture. More often than not, though, it turns out to be more valuable than explicit knowledge. In their paper entitled, Motivation, Knowledge Transfer and Organisational Forms, Osterloh and Frey state that, “…Intrinsic motivation is crucial when tacit knowledge in and between teams must be transferred…” Following this work, Gartner researchers Morello and Caldwell proposed a model in which intrinsic motivation drives the creation and sharing of tacit knowledge which in turn drives the dissemination and use of tacit knowledge in the organisation (I couldn’t find a publicly available copy of their work – but there is an illustration of the model in Figure 1 of Whittom and Roy’s paper).
The message from motivation research is clear: intrinsic motivation is critical to the success of knowledge management projects.
Rewards and Recognition
Rewards and recognition are “levers of motivation”: they can be used to enhance and direct employee motivation towards achieving organisational goals. Reward systems are aimed at aligning individual efforts with organisational objectives. Recognition systems, on the other hand, are designed to express public appreciation for high standards of achievement or competence. These may be set according to criteria that diverge from preset objectives (for example, a public thanks for a job well done can be given irrespective of whether the job is in line with company objectives).
Rewards can be extrinsic (not related to the task) or intrinsic (related to the task), and material or non-material. Extrinsic rewards are typically material – i.e. they involve giving the recipient something tangible. Financial incentives are the most common form of extrinsic reward because they are easily administered through the pay system. Extrinsic rewards can also be non-financial (gift certificates or a meal at a nice restaurant, for example). For the same investment, non-financial rewards are found to have a more lasting effect than financial ones. This makes sense: people are more likely to remember a memorable meal than a few-hundred-dollar raise; the latter is often forgotten as soon as it comes into effect. A downside of financial rewards is that they may actually decrease intrinsic motivation (see this paper by David Beswick). Another is that they may encourage sub-standard work, particularly in cases where benchmarks are based on volume rather than quality of output.
Extrinsic rewards can also be non-material – promotions and training opportunities, for example (see this paper by Wolfgang Semar for more on non-material, extrinsic rewards).
Intrinsic rewards generally pertain to the satisfaction derived from performing a task. The moral satisfaction arising from a job done well is also a form of intrinsic reward. It should be clear that these rewards work only for intrinsically motivated individuals. Intrinsic rewards are invariably non-material and they cannot be controlled by management. However, awareness of factors influencing intrinsic motivation can help managers create the right environment for intrinsically motivated individuals. Kenneth Thomas, in his book entitled, Intrinsic Motivation at Work – Building Energy and Commitment, identifies four psychological factors that can influence intrinsic motivation. They are:
- Feelings of accomplishment: These can be enhanced by devising interesting work tasks and aligning them with employee interests.
- Feelings of autonomy: These can be enhanced by empowering employees with responsibility and authority to do their work.
- Feelings of competence: These can be enhanced by offering employees opportunities to demonstrate and enhance their expertise.
- Feelings of progress: These can be enhanced by fostering a collaborative atmosphere in which project successes are celebrated.
These factors are (to an extent) under management control. If nothing else, it is worth being aware of them so that one can avoid doing things that might reduce intrinsic motivation.
Motivation crowding and psychological contracts
The authors then examine the effects of rewards on intrinsic motivation in the context of knowledge management projects (recall that intrinsic motivation was seen to be a key success factor in knowledge management projects). They use motivation crowding theory to frame their discussion. Crowding theory suggests that intrinsic motivation can be enhanced (“crowded in”) or undermined (“crowded out”) by external rewards.
To understand motivation crowding, one has to look at how extrinsic (or external) rewards work. Basically there are two ways in which an extrinsic reward can be perceived. To quote from the paper,
External interventions, such as rewards, may influence this perception either through information or control. If people see a reward as being related to their competence (information), intrinsic motivation for the task will be encouraged or maintained. On the other hand, if they see a reward as a way to control their performance or autonomy, intrinsic motivation would be decreased.
Extrinsic rewards can have a positive or negative effect on information and control. This is best understood through an example: consider a company that announces cash incentives for the top three contributors to a knowledge database. This reward has a positive control aspect (i.e. encourages participation) but a negative information aspect (i.e. no check on quality of contributions). Consequently, the reward encourages high volume of contributions with no regard to quality. This situation typically undermines or “crowds-out” intrinsic motivation. Note that motivation “crowding out” is sometimes referred to as motivation eviction in the literature.
Crowding-out is also seen in recurring tasks. For example, if a monetary incentive is offered for a task, there will be an expectation that the incentive be offered the next time around. On the other hand, non-monetary interventions such as increased employee involvement and autonomy in project decision making can “crowd-in” or enhance intrinsic motivation.
These effects are intuitively quite obvious, but it’s interesting to see them from a social science / economics point of view. If you’d like to find out more, I highly recommend the paper, Motivation crowding theory: A survey of empirical evidence, by Bruno Frey and Reto Jegen.
The take home lesson from the above is that intrinsic motivation can sometimes be negatively affected by external rewards. Manager, beware.
Whittom and Roy also discuss the notion of psychological contracts between the employer and employee. These contracts, distinct from formal employment contracts, refer to the unstated (but implied) informal, mutual obligations pertaining to respect, autonomy, work ethic, fairness etc. An employee’s intrinsic motivation can be greatly reduced if he or she perceives that the contract has been breached. For example, if an employee’s suggestions regarding improvements to a knowledge database are ignored, she might feel undervalued. In her eyes, management (and hence the organisation) has lost credibility, and the psychological contract has been violated. In psychological contract theory, personal relationships are seen to be an important driver of intrinsic motivation: people are more likely to enjoy working in teams in which they have good relations with team members.
Practices to foster intrinsic motivation
One conclusion from the aforementioned theories is that intrinsic motivation is essential for the transfer of tacit knowledge. Accordingly, the authors suggest the following practices to maintain and enhance intrinsic motivation of employees involved in knowledge management projects:
- Avoid the use of monetary rewards: they may encourage the transfer of unimportant knowledge. Instead, use non-monetary rewards that recognize competence.
- Involve employees in the formulation of project objectives.
- Encourage team work and team bonding. A good team dynamic encourages the sharing of tacit knowledge. The technique of dialogue mapping facilitates the sharing and capture of knowledge in a team environment.
- Emphasise how the employee might benefit from the project – this is the old WIIFM factor. This needs to be done in a way that shows how the benefit is integrated into the organisation’s culture – i.e. the benefit must be a realistic and believable one, else the employee will see right through it.
- Ensure good communication between management and employees. This one is a “usual suspect” that comes up in virtually all best practice recommendations. Unfortunately it is seldom done right.
Contextual recommendations based on knowledge and motivation types
Theories of motivation indicate that, as far as motivation for knowledge sharing is concerned, one size does not fit all. The particular strategy used depends on the nature of the knowledge that is being captured (tacit or explicit), participants’ motivational drivers (intrinsic or extrinsic) and organizational resources. Based on this, the authors discuss the following contexts:
- Tacit knowledge management / intrinsic motivation: This is an ideal situation. Here the manager’s role is to support participants in achieving project objectives rather than to influence their behaviour through rewards. Extrinsic rewards should be avoided because participants are intrinsically motivated.
- Tacit knowledge management / extrinsic motivation: From the preceding discussion of motivation theories, it is clear that this is not a good situation. However, all is not lost. A manager can develop knowledge management strategies based on structured training, discussion groups etc. to help codify and transfer tacit knowledge. These strategies should highlight the project benefits (for the employee and the organisation). Further, extrinsic rewards can be offered, but their “crowding-out” effect over time should be kept in mind.
- Explicit knowledge / intrinsic motivation: Here the knowledge management aspect is easier because the knowledge is explicit. Typically, once the objectives are identified, it is clear how knowledge should be captured and organized. Obviously, structured training and tools such as Wikis and databases can help facilitate knowledge transfer. Further, these will be more effective than case (2) above, because the participants are intrinsically motivated. Recommendations, as far as rewards are concerned, are the same as in the first case.
- Explicit knowledge / extrinsic motivation: For knowledge management the same considerations apply as in case (3). However, these strategies will be less effective because employees are extrinsically motivated. For rewards management, the considerations of case (2) apply.
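The four contexts above amount to a small decision table, keyed on knowledge type and dominant motivation. The sketch below is my own compression of the authors’ recommendations, not wording from the paper, and the summaries are deliberately terse:

```python
# Hypothetical decision table summarising the four contexts discussed above.
# Keys: (knowledge type, dominant participant motivation).
# Values: one-line summary of the suggested management approach (my wording).

recommendations = {
    ("tacit", "intrinsic"):
        "Support participants in achieving objectives; avoid extrinsic rewards",
    ("tacit", "extrinsic"):
        "Use structured training and discussion groups to codify knowledge; "
        "extrinsic rewards acceptable, but beware crowding-out over time",
    ("explicit", "intrinsic"):
        "Capture knowledge via wikis and databases; avoid extrinsic rewards",
    ("explicit", "extrinsic"):
        "Capture knowledge via wikis and databases; "
        "extrinsic rewards acceptable, but beware crowding-out over time",
}

# Look up the ideal case described first
print(recommendations[("tacit", "intrinsic")])
```

The table makes the symmetry visible: the knowledge-capture strategy is set by the knowledge type, while the rewards strategy is set by the motivation type.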
As discussed above, the motivation strategy should be determined by whether the team members are intrinsically or extrinsically motivated. Unfortunately, though, the strategy is often dictated by the culture of the organization – the manager may have little say in determining it. The authors do not discuss what a manager might do in such a situation.
The paper presents no new data or analysis of existing data. As such it must be evaluated on the basis of new concepts and theoretical constructs that it presents. From this perspective there’s little that’s new in this paper. That said, project managers leading knowledge management projects might find the paper a worthwhile read because of its coverage of motivation theories (crowding theory and psychological contracts, in particular).
Let me end with an extrapolation of the above discussion to software projects. The holy grail of knowledge management initiatives is to capture tacit knowledge. By definition, this knowledge is difficult to codify. One sees something similar in requirements gathering for application software. The analyst needs to capture all the explicit and tacit process knowledge that’s in users’ heads. The former is easy to capture; the latter isn’t. As a result requirements usually do not capture tacit process knowledge. This is one aspect of what Brooks referred to as the essential problem of software design – figuring out what the software really needs to do (see this post for more on Brooks’ argument). Well designed software embodies both kinds of knowledge, so software projects are knowledge management projects in a sense. As far as motivation is concerned, therefore, the theories and conclusions sketched above should apply to software projects. An intrinsically motivated development team will improve the chances of success greatly; a trite statement perhaps, but one that may resonate with those who have had the privilege of working with such teams.
On a recent ramble through Google Scholar, I stumbled on a fascinating paper by Michael Mahoney entitled, What Makes the History of Software Hard. History can offer interesting perspectives on the practice of a profession. So it is with this paper. In this post I review the paper, with an emphasis on the insights it provides into the practice of software development.
Mahoney’s thesis is that,
The history of software is the history of how various communities of practitioners have put their portion of the world into the computer. That has meant translating their experience and understanding of the world into computational models, which in turn has meant creating new ways of thinking about the world computationally and devising new tools for expressing that thinking in the form of working programs….
In other words, software– particularly application software – embodies real world practices. As a consequence,
…the models and tools that constitute software reflect the histories of the communities that created them and cannot be understood without knowledge of those histories, which extend beyond computers and computing to encompass the full range of human activities…
This, according to Mahoney, is what makes the history of software hard.
The standard history of computing
The standard (textbook) history of computing is hardware-focused: a history of computers rather than computing. The textbook version follows a familiar tune starting with the abacus and working its way up via analog computers, ENIAC, mainframes, micros, PCs and so forth. Further, the standard narrative suggests that each of these were invented in order to satisfy a pre-existing demand, which makes their appearance almost inevitable. In Mahoney’s words,
…Just as it places all earlier calculating devices on one or more lines leading toward the electronic digital computer, as if they were somehow all headed in its direction, so too it pulls together the various contexts in which the devices were built, as if they constituted a growing demand for the invention of the computer and as if its appearance was a response to that demand.
Mahoney says that this is misleading because,
…If people have been waiting for the computer to appear as the desired solution to their problems, it is not surprising that they then make use of it when it appears, or indeed that they know how to use it…
…sets up a narrative of revolutionary impact, in which the computer is brought to bear on one area after another, in each case with radically transformative effect….”
The second point – revolutionary impact – is interesting because we still suffer its fallout: just about every issue of any trade journal has an article hyping the Next Big Computing Revolution. It seems that their writers are simply taking their cues from history. Mahoney puts it very well,
One can hardly pick up a journal in computing today without encountering some sort of revolution in the making, usually proclaimed by someone with something to sell. Critical readers recognise most of it as hype based on future promise rather than present performance…
The problem with revolutions, as Mahoney notes, is that they attempt to erase (or rewrite) history, ignoring the real continuities and connections between present and the past,
Nothing is in fact unprecedented, if only because we use precedents to recognise, accommodate and shape the new…
CIOs and other decision makers, take note!
But what about software?
The standard history of computing doesn’t say much about software,
To the extent that the standard narrative covers software, the story follows the generations of machines, with an emphasis on systems software, beginning with programming languages and touching—in most cases, just touching—on operating systems, at least up to the appearance of time-sharing. With a nod toward Unix in the 1970s, the story moves quickly to personal computing software and the story of Microsoft, seldom probing deep enough to reveal the roots of that software in the earlier period.
As far as applications software is concerned – whether in construction, airline ticketing or retail – the only accounts that exist are those of pioneering systems such as the Sabre reservation system. Typically these accounts focus on the system being built, excluding any context and connection to the past. There are some good “pioneer style” histories: an example is Scott Rosenberg’s book Dreaming in Code – an account of the Chandler software project. But these are exceptions rather than the rule.
In the revolutionary model, people react to computers. In reality, though, it’s the opposite: people figure out ways to use computers in their areas of expertise. They design and implement programs to make computers do useful things. In doing so, they make choices:
Hence, the history of computing, especially of software, should strive to preserve human agency by structuring its narratives around people facing choices and making decisions instead of around impersonal forces pushing people in a predetermined direction. Both the choices and the decisions are constrained by the limits and possibilities of the state of the art at the time, and the state of the art embodies its history to that point.
The early machines of the 1940s and 50s were almost solely dedicated to numerical computations in the mathematical and physical sciences. Thereafter, as computing became more “mainstream” other communities of practitioners started to look at how they might use computers:
These different groups saw different possibilities in the computer, and they had different experiences as they sought to realize those possibilities, often translating those experiences into demands on the computing community, which itself was only taking shape at the time.
But these different communities have their own histories and ways of doing things – i.e. their own, unique worlds. To create software that models these worlds, the worlds have to be translated into terms the computer can “understand” and work with. This translation is the process of software design. The software models thus created embody practices that have evolved over time. Hence, the models also reflect the histories of the communities that create them.
Models are imperfect
There is a gap between models and reality, though. As Mahoney states,
…Programming is where enthusiasm meets reality. The enduring experience of the communities of computing has been the huge gap between what we can imagine computers doing and what we can actually make them do.
This led to the notion of a “software crisis” and calls to reform the process of software development, which in turn gave rise to the discipline of software engineering. Many improvements resulted: better tools, more effective project management, high-level languages etc. But all these, as Brooks pointed out in his classic paper, addressed issues of implementation (writing code), not those of design (translating reality into computable representations). As Mahoney states,
…putting a portion of the world into the computer means designing an operative representation of that portion of the world that captures what we take to be its essential features. This has proved, as I say, no easy task; on the contrary it has proved difficult, frustrating and in some cases disastrous.
The problem facing the software historian is that he or she has to uncover the problem context and reality as perceived by the software designer, and thus reach an understanding of the design choices made. This is hard to do because that context is only implicit in the software artefact that the historian studies. Documentation is rarely any help here because,
…what programs do and what the documentation says they do are not always the same thing. Here, in a very real sense, the historian inherits the problems of software maintenance: the farther the program lies from its creators, the more difficult it is to discern its architecture and the design decisions that inform it.
There are two problems here:
- That software embodies a model of some aspect of reality.
- The only explanation of the model is the software itself.
As Mahoney puts it,
Legacy code is not just old code, but rather a continuing enactment, an operative representation, of the domain knowledge embodied in it. That may explain the difficulties software engineers have experienced in upgrading and replacing older systems.
Most software professionals will recognise the truth of this statement.
The legacy of legacy code
The problem is that new systems promise much, but are expensive and pose too many risks. As always, continuity must be maintained, but this is nigh impossible because no one quite understands the legacy bequeathed by legacy code: what it does, how it does it and why it was designed so. So, customers play it safe and legacy code lives on. Despite all the advances in software engineering, software migrations and upgrades remain fraught with problems.
Mahoney concludes with the following play on the word “legacy”,
This situation (the gap between the old and the new) should be of common interest to computer people and to historians. Historians will want to know how it developed over several decades and why software systems have not kept pace with advances in hardware. That is, historians are interested in the legacy. Even as computer scientists wrestle with a solution to the problem the legacy poses, they must learn to live with it. It is part of their history, and the better they understand it, the better they will be able to move on from it.
This last point should be of interest to those running software development projects in corporate IT environments (and to a lesser extent those developing commercial software). An often unstated (but implicit) requirement is that the delivered software must maintain continuity between the past and present. This is true even for systems that claim to represent a clean break from the past; one never has the luxury of a completely blank slate, as there are always arbitrary constraints placed by legacy systems. As Fred Brooks mentions in his classic article No Silver Bullet,
…In many cases, the software must conform because it is the most recent arrival on the scene. In others, it must conform because it is perceived as the most conformable. But in all cases, much complexity comes from conformation to other interfaces…
So, the legacy of legacy software is to add complexity to projects intended to replace it. Mahoney’s concluding line is therefore just as valid for project managers and software designers as it is for historians and computer scientists: project managers and software designers must learn to live with and understand this complexity before they can move on from it.
I’ll say it at the outset: once in a while there comes along a book that inspires and excites because it presents new perspectives on old, intractable problems. In my opinion, Dialogue Mapping: Building a Shared Understanding of Wicked Problems by Jeff Conklin falls into this category. This post presents an extensive summary and review of the book.
Before proceeding, I think it is only fair that I state my professional views (biases?) upfront. Some readers of this blog may have noted my leanings towards the “people side” of project management (see this post, for example). Now, that’s not to say that I don’t use methodologies and processes. On the contrary, I use project management processes in my daily work, and appreciate their value in keeping my projects (and job!) on track. My problem with processes is when they become the only consideration in managing projects. It has been my long-standing belief (supported by experience) that if one takes care of the people side of things, the right outcomes happen more easily, without undue process obsession on the part of the manager. (I should clarify that I’m not encouraging some kind of laissez-faire, process-free approach, merely one that balances both people and processes). I’ve often wondered if it is possible to meld these two elements into some kind of “people-centred process”, which leverages the collective abilities of people in a way that facilitates and encourages their participation. Jeff Conklin’s answer is a resounding “Yes!”
Dialogue mapping is a process that is aimed at helping groups achieve a shared understanding of wicked problems – complex problems that are hard to understand, let alone solve. If you’re a project manager, that might make your ears perk up; developing a shared understanding of complex issues is important in all stages of a project: at the start, all stakeholders must arrive at a shared understanding of the project goals (eg, what are we trying to achieve in this project?); in the middle, project team members may need to come to a common understanding (and resolution) of tricky implementation issues; at the end, the team may need to agree on the lessons learned in the course of the project and what could be done better next time. But dialogue mapping is not restricted to project management – it can be used in any scenario involving diverse stakeholders who need to arrive at a common understanding of complex issues. This book provides a comprehensive introduction to the technique.
Although dialogue mapping can be applied to any kind of problem – not just wicked ones – Conklin focuses on the latter. Why? Because wickedness is one of the major causes of fragmentation: the tendency of each stakeholder to see a problem from his or her particular viewpoint, ignoring other, equally valid, perspectives. The first chapter of this book discusses fragmentation and its relationship to wickedness and complexity. Fragmentation is a symptom of complexity – one would not have diverse, irreconcilable viewpoints if the issues at hand were simple. According to Conklin, fragmentation is a function of problem wickedness and social complexity, i.e. the diversity of stakeholders. Technical complexity is also a factor, but a minor one compared to the other two. All too often, project managers fall into the trap of assuming that technical complexity is the root cause of many of their problems, ignoring problem complexity (wickedness) and social complexity. The fault isn’t entirely ours; the system is partly to blame: the traditional, process-driven world is partially blind to the non-technical aspects of complexity. Dialogue mapping helps surface issues that arise from these oft-ignored dimensions of project complexity.
Early in the book, Conklin walks the reader through the solution process for a hypothetical design problem. His discussion is aimed at highlighting some limitations of the traditional approach to problem solving. The traditional approach is structured; it works methodically through gathering requirements, analysing them, formulating a solution and finally implementing it. In real life, however, people tend to dive headlong into solving the problem. Their approach is far from methodical – it typically involves jumping back and forth between hypothesis formulation, solution development, testing ideas, following hunches etc. Creative work, like design, cannot be boxed in by any methodology, waterfall or otherwise. Hence the collective angst on how to manage innovative product development projects. Another aspect of complexity arises from design polarity: what’s needed (features requested) vs. what’s feasible (features possible) – sometimes called the marketing and development views. Design polarity is often the cause of huge differences of opinion within a team; that is, it manifests itself as social complexity.
Having set the stage in the first chapter, the rest of the book focuses on describing the technique of dialogue mapping. Conklin’s contention is that fragmentation manifests itself most clearly in meetings – be they project meetings, design meetings or company board meetings. The solution to fragmentation must, therefore, focus on meetings. The solution is for the participants to develop a shared understanding of the issues at hand, and a shared commitment to a decision and action plan that addresses them. The second chapter provides an informal discussion of how these are arrived at via dialogue that takes place in meetings. Dialogue mapping provides a process – yes, it is a process – to arrive at these.
The second chapter also introduces some of the elements that make up the process of dialogue mapping. The first of these is a visual notation called IBIS (Issue Based Information System). The IBIS notation was invented by Horst Rittel, the man who coined the term wicked problem. IBIS consists of three elements depicted in Figure 1 below – Issues (or questions), Ideas (that generally respond to questions) and Arguments (for and against ideas – pros and cons) – which can be connected according to a specified grammar (see this post for a quick introduction to IBIS or see Paul Culmsee’s series of posts on best practices for a longer, far more entertaining one). Questions are at the heart of dialogues (or meetings) that take place in organisations – hence IBIS, with its focus on questions, is ideally suited to mapping out meeting dialogues.
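For readers who think in code, the IBIS grammar just described can be sketched as a small data structure. The sketch below is my own illustration, not something from the book; the class and function names are invented for this example. It encodes the rules mentioned above: ideas respond to questions, arguments (pros and cons) respond to ideas, and questions may challenge any node.

```python
from dataclasses import dataclass, field

# A minimal sketch of the IBIS grammar: three node types and the
# legal links between them. All names here are illustrative.

@dataclass
class Node:
    text: str
    children: list = field(default_factory=list)  # nodes responding to this one

class Question(Node): pass   # an issue; may respond to any node type
class Idea(Node): pass       # responds to a question
class Argument(Node):        # a pro or con; responds to an idea
    def __init__(self, text, pro=True):
        super().__init__(text)
        self.pro = pro

# Which parent types each node type may legally attach to
LEGAL = {
    Question: (Question, Idea, Argument),  # questions can challenge anything
    Idea: (Question,),                     # ideas respond only to questions
    Argument: (Idea,),                     # arguments respond only to ideas
}

def attach(parent, child):
    """Attach child to parent only if the IBIS grammar allows the link."""
    if not isinstance(parent, LEGAL[type(child)]):
        raise ValueError(
            f"{type(child).__name__} cannot respond to {type(parent).__name__}")
    parent.children.append(child)

# Build a tiny map: root question -> idea -> pro and con
root = Question("How can we improve customer service?")
idea = Idea("Introduce a 24-hour helpdesk")
attach(root, idea)
attach(idea, Argument("Faster response times", pro=True))
attach(idea, Argument("High staffing cost", pro=False))
```

The tree grows rightward from the root question, mirroring the left-to-right layout of an IBIS map in a tool such as Compendium.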
The basic idea in dialogue mapping is that a skilled facilitator maps out the core points of the dialogue in real-time, on a shared display which is visible to all participants. Participants thus see their own and the group’s collective contributions to the debate, while the facilitator fashions these into a coherent whole. Conklin believes that this can be done, no matter how complex the issues are or how diverse and apparently irreconcilable the opinions. Although I have limited experience with the technique, I believe he is right: IBIS in the hands of a skilled facilitator can help a group focus on the real issues, blowing away the conversational chaff. Although the group as a whole may not reach complete agreement, they will at least develop a real understanding of other perspectives. The third chapter, which concludes the first part of the book, is devoted to an example that illustrates this point.
The second part of the book delves into the nuts and bolts of dialogue mapping. It begins with an introduction to IBIS – which Conklin calls a “tool for all reasons.” The book provides a nice informal discussion, covering elements, syntax and conventions of the language. The coverage is good, but I have a minor quibble: one has to read and reread the chapter a few times to figure out the grammar of the language. It would have been helpful to have an overview of the grammar collected in one place (say in a diagram, like the one shown in Figure 2). Incidentally, Figures 1 and 2 also show how an IBIS map is structured: starting from a root question (placed on the left of the diagram) and building up to the right as the discussion proceeds.
A good way to gain experience with IBIS is to use it to create issue maps of arguments presented in articles. See this post for an example of an issue map based on Fred Brooks’ classic article, No Silver Bullet.
Dialogue mapping is issue mapping plus facilitation. The next chapter – the fifth one in the book – discusses facilitation skills required for dialogue mapping. The facilitator (or technographer, as the person is sometimes called) needs to be able to listen to the conversation, guess at the intended meaning, write (or update) the map and validate what’s written; then proceed through the cycle of listening, guessing, writing and validating again as the next point comes up, and so on. Conklin calls this the dialogue mapping listening cycle (see Figure 3 below). As one might imagine, this skill, which is the key to successful dialogue mapping, takes lots of practice to develop. In my experience, a good way to start is by creating IBIS maps of issues discussed in meetings involving a small number of participants. As one gains confidence through practice, one can share the display, thereby making the transition from issue mapper to dialogue mapper.
One aspect of the listening cycle is counter-intuitive – validation may require the facilitator to interrupt the speaker. Conklin emphasises that it is OK to do so as long as it is done in the service of listening. Another important point is that when capturing a point made by someone, the technographer will need to summarise or interpret the point. The interpretation must be checked with the speaker. Hence validation – and the interruption it may entail – is not just OK, it is absolutely essential. Conklin also emphasises that the facilitator should focus on a single person in each cycle – it is possible to listen to only one person at a time.
A side benefit of interruption is that it slows down the dialogue. This is a good thing because everyone in the group gets more time to consider what’s on the screen and how it relates (or doesn’t) to their own thoughts. All too often, meetings are rushed, things are done in a hurry, and creative ideas and thoughts are missed in the bargain. A deliberate slowing down of the dialogue counters this.
The final part of the book – chapters six through nine – is devoted to advanced dialogue mapping skills.
The sixth chapter presents a discussion of the types of questions that arise in most meetings. Conklin identifies seven types of questions:
Deontic: These are questions that ask what should be done in order to deal with the issue at hand. For example: What should we do to improve our customer service? The majority of root questions (i.e. starting questions) in an IBIS map are deontic.
Instrumental: These are questions that ask how something should be done. For example: How can we improve customer service? These questions generally follow on from deontic questions. Typically root questions are either deontic or instrumental.
Criterial: These questions ask about the criteria that any acceptable idea must satisfy. The answers to criterial questions typically serve as a filter for ideas that come up later. Conklin sees criterial and instrumental questions as complementary: the former specify high-level constraints (or criteria) for ideas, whereas the latter elicit nuts-and-bolts ideas on how something is to be achieved. For example, a criterial question might ask: what are the requirements for improving customer service? Or: how will we know that we have improved customer service?
Conklin makes the point that criterial questions typically connect directly to the root question. This makes sense: the main issue being discussed is usually subject to criteria or constraints. Further, ideas that respond to criterial questions (in other words, the criteria) have a correspondence with the arguments for and against ideas responding to the root question. This makes sense: the pros and cons that come up in a meeting would generally correspond to the criteria that have been stated. This isn’t an absolute requirement – there’s nothing to say that all arguments must correspond to at least one criterion – but it often serves as a check on whether a discussion is taking all constraints into account.
Conceptual: These are questions that clarify the meaning of any point that’s raised. For example, what do we mean by customer service? Conklin makes the point that many meetings go round in circles because of differences in understanding of particular terms. Conceptual questions surface such differences.
Factual: These are questions of fact. For example: what’s the average turnaround time to respond to customer requests? Often meetings will debate such questions without having any clear idea of what the facts are. Once a factual question is identified as such, it can be actioned for someone to do research on it, thereby saving a lot of pointless debate.
Background: These are questions of context surrounding the issue at hand. An example is: why are we doing this initiative to improve customer service? Ideas responding to such questions are expected to provide the context as to why something has become an issue.
Stakeholder: These are the “who” questions. An example: who should be involved in the project? Such questions can be delicate in situations where there are conflicting interests (a cross-functional project, say), but need to be asked in order to come up with a strategy to handle differences of opinion. One can’t address everyone’s concerns until one knows who all constitute “everyone”.
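As a rough aide-mémoire, Conklin’s seven question types can be captured in a small enumeration, with a few questions tagged by hand. This is my own sketch; the tags simply echo the examples given above, and the names are mine rather than the book’s.

```python
from enum import Enum

# Conklin's seven question types, as discussed above.
class QType(Enum):
    DEONTIC = "what should be done"
    INSTRUMENTAL = "how should it be done"
    CRITERIAL = "what criteria must acceptable ideas satisfy"
    CONCEPTUAL = "what do we mean by a term"
    FACTUAL = "what are the facts"
    BACKGROUND = "what is the context"
    STAKEHOLDER = "who is involved or affected"

# Hand-tagged examples, echoing those in the text above.
examples = [
    ("What should we do to improve our customer service?", QType.DEONTIC),
    ("How can we improve customer service?", QType.INSTRUMENTAL),
    ("What do we mean by customer service?", QType.CONCEPTUAL),
    ("What's the average turnaround time for customer requests?", QType.FACTUAL),
    ("Who should be involved in the project?", QType.STAKEHOLDER),
]

for question, qtype in examples:
    print(f"{qtype.name:12} {question}")
```

Tagging the questions raised in a real meeting this way is a useful exercise when practising issue mapping on one’s own.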
Following the classification of questions, Conklin discusses the concept of a dialogue meta-map – an overall pattern of how certain kinds of questions naturally follow from certain others. The reader may already be able to discern some of these patterns from the above discussion of question types. Also relevant here are artful questions – open questions that keep the dialogue going in productive directions.
The seventh chapter is entitled Three Moves of Discourse. It describes three conversational moves that propel a discussion forward, but can also upset the balance of power in the discussion and evoke strong emotions. These moves are:
- Making an argument for an idea or proposal (a Pro)
- Making an argument against an idea (a Con)
- Challenging the context of the entire discussion.
Let’s look at the first two moves to start with. In an organisation, these moves have a certain stigma attached to them: anyone making arguments for or against an idea might be seen as being opinionated or egotistical. The reason is that these moves generally involve contradicting someone else in the room. Conklin contends that dialogue mapping removes these negative connotations because the move is seen as just another node in the map. Once on the map, it is no longer associated with any person – it is objectified as an element of the larger discussion. It can be discussed or questioned just as any other node can.
Conklin refers to the last move – challenging the context of a discussion – as “grenade throwing.” This is an apt way of describing such questions because they have the potential to derail the discussion entirely. They do this by challenging the relevance of the root question itself. But dialogue mapping takes these grenades in its stride; they are simply captured as just another conversational move – i.e. a node on the map, usually a question. Better yet, in many cases further discussion shows how these questions might connect up with the rest of the map. Even if they don’t, these “grenade questions” remain on the map, in acknowledgement of the dissenter and his or her opinion. Dialogue mapping handles such googlies (curveballs to baseball aficionados) with ease, and indicates how they might connect up with the rest of the discussion – but connection is neither required nor always desirable. It is OK to disagree, as long as it is done respectfully. This is a key element of shared understanding – the participants might not agree, but they understand each other.
Related to the above is the notion of a “left hand move”. Occasionally a discussion can generate a new root question which, by definition, has to be tacked on to the extreme left of the map. Such a left hand move is extremely powerful because it generally relates two or more questions or ideas that were previously unrelated (some of them may even have been seen as a grenade).
By now it should be clear that dialogue mapping is a technique that promotes collaboration – as such it works best in situations where openness, honesty and transparency are valued. In the penultimate chapter, the author discusses some situations in which it may not be appropriate to use the technique. Among these are meetings in which decisions are made by management fiat. Other situations in which it may be helpful to “turn the display off” are those which are emotionally charged or involve interpersonal conflict. Conklin suggests that the facilitator use his or her judgement in deciding where it is appropriate and where it isn’t.
In the final chapter, Conklin discusses how decisions are reached using dialogue mapping. A decision is simply a broad consensus to mark one of the ideas on the map as the group’s decision. How does one choose the idea that is to be anointed as the decision? Well, quite obviously: the best one. Which one is that? Conklin states that the best decision is the one that has the broadest and deepest commitment to making it work. He also provides a checklist for figuring out whether a map is mature enough for a decision to be made. But the ultimate decision on when a decision (!) is to be made is up to the group. So how does one know when the time is right for a decision? Again, the book provides some suggestions here, but I’ll say no more except to hint at them by paraphrasing from the book: “What makes a decision hard is lack of shared understanding. Once a group has thoroughly mapped a problem (issues) and its potential solutions (ideas) along with their pros and cons, the decision itself is natural and obvious.”
Before closing, I should admit that my experience with dialogue mapping is minimal – I’ve done it a few times in small groups. I’m not a brilliant public speaker or facilitator, but I can confirm that it helps keep a discussion focused and moving forward. Although Conklin’s focus is on dialogue mapping, one does not need to be a facilitator to benefit from this book; it also provides a good introduction to issue mapping using IBIS. In my opinion, this alone is worth the price of admission. Further, IBIS can also be used to augment project (or organisational) memory. So this book potentially has something for you, even if you’re not a facilitator and don’t intend to use IBIS in group settings.
This brings me to the end of my long-winded summary and review of the book. My discussion, as long as it is, does not do justice to the brilliance of the book. By summarising the main points of each chapter (with some opinions and annotations for good measure!), I have attempted to convey a sense of what a reader can expect from the book. I hope I’ve succeeded in doing so. Better yet, I hope I have convinced you that the book is worth a read, because I truly believe it is.