Enterprise architects are seldom (never?) given a blank canvas on which they can draw as they please. They invariably have to begin with an installed base of systems over which they have no control. As I wrote in a piece on the legacy of legacy systems:
An often unstated (but implicit) requirement [on new systems] is that [they] must maintain continuity between the past and present. This is true even for systems that claim to represent a clean break from the past; one never has the luxury of a completely blank slate, there are always arbitrary constraints placed by legacy systems.
Indeed the system landscape of any large organization is a palimpsest, always retaining traces of what came before. Those who actually maintain systems – usually not architects – are painfully aware of this simple truth.
The IT landscape of an organization is therefore a snapshot, a picture that begins to age the instant it is taken. Practicing enterprise architects will say they know this “of course”, and pay due homage to it in their words…but often not in their actions. The conflicts and contradictions between legacy systems and their aspirational architectures are hard to deal with and hence easier to ignore. In this post, I draw a parallel between this central conundrum of enterprise architecture and the process of biological evolution.
A Batesonian perspective on evolution
I’ve recently been re-reading Mind and Nature: A Necessary Unity, a book that Gregory Bateson wrote towards the end of his life of eclectic scholarship. Tucked away in the appendix of the book is an essay lamenting the fragmentation of knowledge and the lack of transdisciplinary thinking within universities. Central to the essay is the notion of obsolescence. Bateson argued that much of what was taught in universities lagged behind the practical skills and mindsets that were needed to tackle the problems of that time. Most people would agree that this is as true today as it was in Bateson’s time, perhaps even more so.
Bateson had a very specific idea of obsolescence in mind. He suggested that the educational system is lopsided because it invariably lags behind what is needed in the “real world”. Specifically, there is a lag between the typical university curriculum and the attitudes, dispositions, knowledge and skills needed to tackle the problems of an ever-changing world. This lag is what Bateson referred to as obsolescence. Indeed, if the external world did not change there would be no lag and hence no obsolescence. As he noted:
I therefore propose to analyze the lopsided process called “obsolescence” which we might more precisely call “one-sided progress.” Clearly for obsolescence to occur there must be, in other parts of the system, other changes compared with which the obsolete is somehow lagging or left behind. In a static system, there would be no obsolescence…
This notion of obsolescence-as-lag has a direct connection with the contrasting processes of developmental and evolutionary biology. The development of an embryo is inherently conservative – it proceeds according to predetermined rules and is relatively robust to external stimuli. After birth, on the other hand, individuals are continually subject to a wide range of unpredictable external factors (e.g. climate, stress etc.). If exposed to such factors over an extended period, they may change their characteristics in response (e.g. the tanning effect of sunlight). However, these characteristics are not heritable. They are passed on (if at all) by the much slower process of natural selection. As a consequence, there is a significant lag between external stimuli and the heritability of the associated characteristics.
As Bateson puts it:
Survival depends upon two contrasting phenomena or processes, two ways of achieving adaptive action. Evolution must always, Janus-like, face in two directions: inward towards the developmental regularities and physiology of the living creature and outward towards the vagaries and demands of the environment. These two necessary components of life contrast in interesting ways: the inner development – the embryology or “epigenesis” – is conservative and demands that every new thing shall conform or be compatible with the regularities of the status quo ante. If we think of a natural selection of new features of anatomy or physiology – then it is clear that one side of this selection process will favor those new items which do not upset the old apple cart. This is minimal necessary conservatism.
In contrast, the outside world is perpetually changing and becoming ready to receive creatures which have undergone change, almost insisting upon change. No animal or plant can ever be “readymade.” The internal recipe insists upon compatibility but is never sufficient for the development and life of the organism. Always the creature itself must achieve change of its own body. It must acquire certain somatic characteristics by use, by disuse, by habit, by hardship, and by nurture. These “acquired characteristics” must, however, never be passed on to the offspring. They must not be directly incorporated into the DNA. In organisational terms, the injunction – e.g. to make babies with strong shoulders who will work better in coal mines – must be transmitted through channels, and the channel in this case is via natural external selection of those offspring who happen (thanks to the random shuffling of genes and random creation of mutations) to have a greater propensity for developing stronger shoulders under the stress of working in the coal mine.
The upshot of the above is that the genetic code of any species is inherently obsolete because it is, in at least a few ways, maladapted to its environment. This is a good thing. Sustainable and lasting change to the genome of a population should occur only through the trial-and-error process of natural selection over many generations. It is only through such a gradual process that one can be sure that a) the adaptation is necessary and b) it occurs with minimal disruption to the existing order.
…and so to enterprise architecture
In essence, the aim of enterprise architecture is to come up with a strategy and plan to move from an old system landscape to a new one. Typically, architectures are proposed based on current technology trends and extrapolations thereof. Frameworks such as The Open Group Architecture Framework (TOGAF) present a range of options for migrating from a legacy architecture.
Here’s an excerpt from Chapter 13 of the TOGAF Guide:
[The objective is to] create an overall Implementation and Migration Strategy that will guide the implementation of the Target Architecture, and structure any Transition Architectures. The first activity is to determine an overall strategic approach to implementing the solutions and/or exploiting opportunities. There are three basic approaches as follows:
- Greenfield: A completely new implementation.
- Revolutionary: A radical change (i.e., switches on, switch off).
- Evolutionary: A strategy of convergence, such as parallel running or a phased approach to introduce new capabilities.
What can we say about these options in light of the discussion of the previous sections?
Firstly, from the discussion in the introduction, it is clear that Greenfield situations can be discounted on grounds of rarity alone. So let’s look at the second option – revolutionary change – and ask if it is viable in light of the discussion of the previous section.
In the case of a particular organization, the gap between an old architecture and technology trends/extrapolations is analogous to the lag between inherited characteristics and external forces. The former resist change; the latter insist on it. The discussion of the previous section tells us that the former cannot be wished away, they are a natural consequence of “technology genes” embedded in the organization. Because this is so, changes are best introduced in a gradual way that permits adaptation through the slow and painful process of trial and error. This is why the revolutionary approach usually fails.
It follows from the above that the only viable approach to enterprise architecture is an evolutionary one. This process is necessarily gradual. Architects may wish for green fields and revolutions, but the reality is that lasting and sustainable change in an organisation’s technology landscape can only be achieved incrementally, akin to the way in which an aspiring marathon runner’s physiology adapts to the extreme demands of the sport.
The other, perhaps more subtle point made by this analogy is that a particular organization is but one member of a “species” which, in the present context, is a population of organisations that have a certain technology landscape. Clearly, a new style of architecture will be deemed a success only if it is adopted successfully by a significant number of organisations within this population. Equally clear is that this eventuality is improbable because new architectural paradigms are akin to random mutations. Most of these are rightly rejected by organizations, but only after exacting a high price. This explains why most technology fads tend to fade away.
The analogy between the evolution of biological systems and organizational technology landscapes has some interesting implications for enterprise architects. Here are a few that are worth highlighting:
- Enterprise architects are caught between a rock and a hard place: to demonstrate value they have to change things rapidly, but rapid changes are more likely to fail than succeed.
- The best chance of success lies in an evolutionary approach that accepts trial and error as a natural part of the process. The trick lies in selling that to management…and there are ways to do that.
- A corollary of (2) is that old and new elements of the landscape will necessarily have to coexist, often for periods much longer than one might expect. One must therefore design for coexistence. Above all, the focus here should be on the interfaces for these are the critical elements that enable the old and the new to “talk” to each other.
- Enterprise architects should be skeptical of cutting-edge technologies. It is almost always better to bet on proven technologies because they have the benefit of the experience of others.
- One of the consequences of an evolutionary process of trial and error is that benefits (or downsides) are often not evident upfront. One must therefore always keep an eye out for these unexpected features.
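To make the idea of designing for coexistence a little more concrete, here is a minimal sketch in Python. Everything in it is hypothetical – the `CustomerStore` interface, the `LegacyStoreAdapter`, and the fixed-width record format are invented for illustration – but it shows the general shape: new code talks to a stable interface, and a thin adapter translates between that interface and the legacy system.

```python
from abc import ABC, abstractmethod

# A stable interface that both legacy and replacement back ends satisfy.
class CustomerStore(ABC):
    @abstractmethod
    def get_customer(self, customer_id: str) -> dict:
        ...

# Adapter wrapping a legacy system that returns fixed-width text records
# (the record layout here is invented for the example).
class LegacyStoreAdapter(CustomerStore):
    def __init__(self, legacy_fetch):
        self._fetch = legacy_fetch  # a callable returning a fixed-width string

    def get_customer(self, customer_id: str) -> dict:
        record = self._fetch(customer_id)
        # Translate the legacy record into the shape new code expects.
        return {"id": record[0:8].strip(), "name": record[8:28].strip()}

# New code depends only on the interface, never on a particular back end.
def display_name(store: CustomerStore, customer_id: str) -> str:
    return store.get_customer(customer_id)["name"]

# Stubbed legacy back end for demonstration:
legacy_stub = lambda cid: f"{cid:<8}{'Jane Doe':<20}"
print(display_name(LegacyStoreAdapter(legacy_stub), "C123"))  # Jane Doe
```

The point of the sketch is that when the legacy system is eventually retired, only the adapter is thrown away; everything written against the interface survives the transition.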
Finally, it is worth pointing out that an interesting feature of all the above points is that they are consistent with the principles of emergent design.
In this article I’ve attempted to highlight a connection between the evolution of organizational technology landscapes and the process of biological evolution. At the heart of both lies a fundamental tension between inherent conservatism (the tendency to preserve the status quo) and the imperative to evolve in order to adapt to changes imposed by the environment. Maintaining the status quo is never an option; the question is how to evolve so as to ensure the best chance of success. Evolution tells us that the best approach is a gradual one, via a process of trial, error and learning.
Issue Based Information System (IBIS) is a notation invented by Horst Rittel and Werner Kunz in the early 1970s. IBIS is best known for its use in dialogue mapping, a collaborative approach to tackling wicked problems (i.e. contentious issues) in organisations. It has a range of other applications as well – capturing knowledge is a good example, and I’ll have much more to say about that later in this post.
Over the last five years or so, I have written a fair bit on IBIS on this blog and in a book that I co-authored with the dialogue mapping expert, Paul Culmsee. The present post reprises an article I wrote five years ago on the “what” and “whence” of the notation: its practical aspects – notation, grammar etc. – as well as its origins, advantages and limitations. My motivations for revisiting the piece are to revise and update the original discussion and, more importantly, to cover some recent developments in IBIS technology that open up interesting possibilities in the area of knowledge management.
To appreciate the power of IBIS, it is best to begin by understanding the context in which the notation was invented. I’ll therefore start with a discussion of the origins of the notation, followed by an introduction to it. Finally, I’ll cover its development over the last 40-odd years, focusing on the recent developments that I mentioned above.
A good place to start is where it all started. IBIS was first described in a paper entitled Issues as Elements of Information Systems, written by Horst Rittel (the man who coined the term wicked problem) and Werner Kunz in July 1970. They state the intent behind IBIS in the very first line of the abstract of their paper:
Issue-Based Information Systems (IBIS) are meant to support coordination and planning of political decision processes. IBIS guides the identification, structuring, and settling of issues raised by problem-solving groups, and provides information pertinent to the discourse.
Rittel’s preoccupation was the area of public policy and planning – which is also the context in which he originally defined the term wicked problem. Given this background, it is no surprise that Rittel and Kunz envisaged IBIS as the:
…type of information system meant to support the work of cooperatives like governmental or administrative agencies or committees, planning groups, etc., that are confronted with a problem complex in order to arrive at a plan for decision…
The problems tackled by such cooperatives are paradigm-defining examples of wicked problems. From the start, then, IBIS was intended as a tool to facilitate a collaborative approach to solving – or, better, managing – a wicked problem by helping develop a shared perspective on it.
A brief introduction to IBIS
The IBIS notation consists of the following three elements:
- Issues (or questions): these are issues that are being debated. Typically, issues are framed as questions on the lines of “What should we do about X?” where X is the issue of interest to a group. For example, in the case of a group of executives, X might be a rapidly changing market condition whereas in the case of a group of IT people, X could be an ageing system that is hard to replace.
- Ideas (or positions): these are responses to questions. For example, one of the ideas offered by the IT group above might be to replace the said system with a newer one. Typically, the whole set of ideas that respond to an issue in a discussion represents the spectrum of participant perspectives on the issue.
- Arguments: these can be Pros (arguments for) or Cons (arguments against) an idea. The complete set of arguments that respond to an idea represents the multiplicity of viewpoints on it.
In Compendium, an open-source software tool that implements IBIS, the elements described above are represented as nodes as shown in Figure 1: issues are represented by blue-green question marks; positions by yellow light bulbs; pros by green + signs and cons by red – signs. Compendium supports a few other node types, but these are not part of the core IBIS notation. Nodes can be linked only in ways specified by the IBIS grammar, as I discuss next.
The IBIS grammar can be summarized in three simple rules:
- Issues can be raised anew or can arise from other issues, positions or arguments. In other words, any IBIS element can be questioned. In Compendium notation: a question node can connect to any other IBIS node.
- Ideas can only respond to questions – i.e. in Compendium, “light bulb” nodes can only link to question nodes. The arrow pointing from the idea to the question depicts the “responds to” relationship.
- Arguments can only be associated with ideas – i.e. in Compendium, “+” and “–” nodes can only link to “light bulb” nodes (with arrows pointing to the latter).
The legal links are summarized in Figure 2 below.
Yes, it’s as simple as that.
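For readers of a programming bent, the grammar really is small enough to write down in a few lines. The sketch below (the node-type names are mine, not those of Compendium or any other IBIS tool) checks whether a proposed link between two node types is legal:

```python
# Legal IBIS links: questions may attach to anything; ideas respond only to
# questions; pros and cons attach only to ideas.
LEGAL_LINKS = {
    "question": {"question", "idea", "pro", "con"},  # anything can be questioned
    "idea": {"question"},                            # ideas respond to questions
    "pro": {"idea"},                                 # argues for an idea
    "con": {"idea"},                                 # argues against an idea
}

def is_legal_link(source: str, target: str) -> bool:
    """True if a node of type `source` may point at a node of type `target`."""
    return target in LEGAL_LINKS.get(source, set())

# A question can be raised about a con...
assert is_legal_link("question", "con")
# ...an idea can respond to a question...
assert is_legal_link("idea", "question")
# ...but an argument cannot attach directly to a question.
assert not is_legal_link("pro", "question")
```

Tools like Compendium enforce exactly this kind of constraint when you try to draw a link, which is why maps built with them cannot drift into ungrammatical structures.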
The rules are best illustrated by example – follow the links below to see some illustrations of IBIS in action:
- See this post for a simple example of dialogue mapping.
- See this post or this one for examples of argument visualisation. (Note: using IBIS to map out the structure of written arguments is called issue mapping.)
- See this one for an example Paul did with his children, which also features in our book.
Now that we know how IBIS works and have seen a few examples of it in action, it’s time to trace the history of the notation from its early days to the present.
Operation of early systems
When Rittel and Kunz wrote their paper, there were three IBIS-type systems in operation: two in government agencies (in the US, one presumes) and one in a university environment (quite possibly Berkeley, where Rittel worked). Although it seems quaint and old-fashioned now, it is no surprise that these were manual, paper-based systems; the effort and expense involved in computerizing such systems in the early 70s would have been prohibitive and the pay-off questionable.
The Rittel-Kunz paper introduced earlier also offers a short description of how these early IBIS systems operated:
An initially unstructured problem area or topic denotes the task named by a “trigger phrase” (“Urban Renewal in Baltimore,” “The War,” “Tax Reform”). About this topic and its subtopics a discourse develops. Issues are brought up and disputed because different positions (Rittel’s word for ideas or responses) are assumed. Arguments are constructed in defense of or against the different positions until the issue is settled by convincing the opponents or decided by a formal decision procedure. Frequently questions of fact are directed to experts or fed into a documentation system. Answers obtained can be questioned and turned into issues. Through this counterplay of questioning and arguing, the participants form and exert their judgments incessantly, developing more structured pictures of the problem and its solutions. It is not possible to separate “understanding the problem” as a phase from “information” or “solution” since every formulation of the problem is also a statement about a potential solution.
Even today, forty years later, this is an excellent description of how IBIS is used to facilitate a common understanding of complex problems. Moreover, the process of reaching a shared understanding (whether using IBIS or not) is one of the key ways in which knowledge is created within organizations. To foreshadow a point I will elaborate on later, using IBIS to capture the key issues, ideas and arguments, and the connections between them, results in a navigable map of the knowledge that is generated in a discussion.
Fast forward a couple of decades (and more!)
In a paper published in 1988 entitled, gIBIS: A hypertext tool for exploratory policy discussion, Conklin and Begeman describe a prototype of a graphical, hypertext-based IBIS-type system (called gIBIS) and its use in capturing design rationale (yes, despite the title of the paper, it is more about capturing design rationale than policy discussions). The development of gIBIS represents a key step between the original Rittel-Kunz version of IBIS and its more recent version as implemented in Compendium. Amongst other things, IBIS was finally off paper and on to disk, opening up a world of new possibilities.
gIBIS aimed to offer users:
- The ability to capture design rationale – the options discussed (including the ones rejected) and the discussion around the pros and cons of each.
- A platform for promoting computer-mediated collaborative design work – ideally in situations where participants were located at sites remote from each other.
- The ability to store a large amount of information and to be able to navigate through it in an intuitive way.
The gIBIS prototype proved successful enough to catalyse the development of Questmap, a commercially available software tool that supported IBIS. In a recent conversation, Jeff Conklin mentioned to me that Questmap was one of the earliest Windows-based groupware tools available on the market…and it won a best-of-show award in that category. It is interesting to note that in contrast to Questmap (which no longer exists), Compendium is a single-user, desktop application.
The primary application of Questmap was in the area of sensemaking, which is all about helping groups reach a collective understanding of complex situations that might otherwise lead them into tense or adversarial conditions. Indeed, that is precisely how Rittel and Kunz intended IBIS to be used. The key advantage offered by computerized IBIS systems was that one could map dialogues in real time, with the map representing the points raised in the conversation along with their logical connections. This proved to be a big step forward in the use of IBIS to help groups achieve a shared understanding of complex issues.
That said, although there were some notable early successes in the real-time use of IBIS in industry environments (see this paper, for example), these were not accompanied by widespread adoption of the technique. It is worth exploring the reasons for this briefly.
The tacitness of IBIS mastery
The reasons for the lack of traction of IBIS-type techniques for real-time knowledge capture are discussed in a paper by Shum et al. entitled, Hypermedia Support for Argumentation-Based Rationale: 15 Years on from gIBIS and QOC. The reasons they give are:
- For acceptance, any system must offer immediate value to the person who is using it. Quoting from the paper, “No designer can be expected to altruistically enter quality design rationale solely for the possible benefit of a possibly unknown person at an unknown point in the future for an unknown task. There must be immediate value.” Such immediate value is not obvious to novice users of IBIS-type systems.
- There is some effort involved in gaining fluency in the use of IBIS-based software tools. It is only after this that users can gain an appreciation of the value of such tools in overcoming the limitations of mapping design arguments on paper, whiteboards etc.
While the rules of IBIS are simple to grasp, the intellectual effort – or cognitive overhead – involved in using IBIS in real time includes:
- Teasing out issues, ideas and arguments from the dialogue.
- Classifying points raised into issues, ideas and arguments.
- Naming (or describing) the point succinctly.
- Relating (or linking) the point to the existing map (or anticipating how it will fit in later).
- Developing a sense for conversational patterns.
Expertise in these skills can only be developed through sustained practice, so it is no surprise that beginners find it hard to use IBIS to map dialogues. Indeed, the use of IBIS for real-time conversation mapping is a tacit skill, much like riding a bike or swimming – it can only be mastered by doing.
Making sense through IBIS
Despite the difficulties of mastering IBIS, it is easy to see that it offers considerable advantages over conventional methods of documenting discussions. Indeed, Rittel and Kunz were well aware of this. Their paper contains a nice summary of the advantages, which I paraphrase below:
- IBIS can bridge the gap between discussions and records of discussions (minutes, audio/video transcriptions etc.). IBIS sits between the two, acting as a short-term memory. The paper thus foreshadows the use of issue-based systems as an aid to organizational or project memory.
- Many elements (issues, ideas or arguments) that come up in a discussion have contextual meanings that differ from any pre-existing definitions. That is, the interpretation of points made or questions raised depends on the circumstances surrounding the discussion. Moreover, this contextual meaning often matters more than the formal meaning, and IBIS captures it very clearly – for example, a response to the question “What do we mean by X?” elicits the meaning of X in the context of the discussion, which is then captured as an idea (position). I’ll have much more to say about this towards the end of this article.
- The reasoning used in discussions is made transparent, as is the supporting (or opposing) evidence.
- The state of the argument (discussion) at any time can be inferred at a glance (unlike the case in written records). See this post for more on the advantages of visual documentation over prose.
- Often the commonality of an issue with other, similar issues is more important than its precise meaning. To quote from the paper, “…the description of the subject matter in terms of librarians or documentalists (sic) may be less significant than the similarity of an issue with issues dealt with previously and the information used in their treatment…” This is less of a problem now because of search technologies. However, search technologies are still largely based on keywords rather than context. A properly structured, context-searchable IBIS-based archive would be more useful than a conventional document-based system. As I’ll discuss in the next section, the technology for this is now available.
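To give a rough feel for what a context-searchable IBIS archive might offer, here is a toy sketch. The data structure and function are purely illustrative assumptions (not the API of any real IBIS tool): a keyword search returns each matching node together with the chain of questions and ideas that give it its meaning – the context that a keyword-only document search would miss.

```python
# Each node: id -> (type, text, parent_id). Parent links point "leftward"
# toward the question the node ultimately responds to.
NODES = {
    1: ("question", "What should we do about the ageing system?", None),
    2: ("idea", "Replace it with a new platform", 1),
    3: ("con", "Migration risk is high", 2),
}

def context_search(keyword):
    """Return (matching text, ancestor chain) pairs for a keyword search."""
    results = []
    for ntype, text, parent in NODES.values():
        if keyword.lower() in text.lower():
            chain = []
            while parent is not None:
                _, ptext, parent = NODES[parent]
                chain.append(ptext)
            # Reverse so the chain reads from the root question downward.
            results.append((text, chain[::-1]))
    return results

# A search for "risk" returns the con together with the question and idea
# it argues against, rather than an isolated snippet of text.
print(context_search("risk"))
```

The contrast with a document repository is the second element of each result: the hit arrives embedded in the argumentative structure that produced it.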
To sum up, then: although IBIS offers a means to map out arguments, what is lacking is the ability to make these maps available and searchable across an organization.
IBIS in the enterprise
It is interesting to note that Compendium, unlike its predecessor Questmap, is a single-user, desktop tool – it does not, by itself, enable the sharing of maps across the enterprise. To be sure, it is possible to work around this limitation, but the workarounds are somewhat clunky. A recent advance in IBIS technology addresses this issue rather elegantly: Seven Sigma, an Australian consultancy founded by Paul Culmsee, Chris Tomich and Peter Chow, has developed Glyma (pronounced “glimmer”): a product that makes IBIS available on collaboration platforms like Microsoft SharePoint. This is a game-changer because it enables sharing and searching of IBIS maps across the enterprise. Moreover, as we shall see below, the implications go beyond sharing and search.
Full Disclosure: As regular readers of this blog know, Paul is a good friend and we have jointly written a book and a few papers. However, at the time of writing, I have no commercial association with Seven Sigma. My comments below are based on playing with a beta version of the product that Paul was kind enough to give me access to, as well as on discussions I have had with him.
The look and feel of Glyma is much the same as Compendium (see Fig 3 above) – and the keystrokes and shortcuts are quite similar. I have trialled Glyma for a few weeks and feel that the overall experience is actually a bit better than in Compendium. For example, one can navigate through a series of maps and sub-maps using a breadcrumb trail. Another example: documents and videos are embedded within the map, so one does not need to leave the map in order to view tagged media (unless, of course, one wants to see it at a higher resolution).
I won’t go into any detail about product features etc. since that kind of information is best accessed at source – i.e. the product website and documentation. Instead, I will now focus on how Glyma addresses a longstanding gap in knowledge management systems.
Revisiting the problem of context
In most organisations, knowledge artefacts (such as documents and audio-visual media) are stored in hierarchical or relational structures (for example, a folder within a directory structure or a relational database). To be sure, stored artefacts are often tagged with metadata indicating when, where and by whom they were created, but experience tells me that such metadata is not as useful as it should be. The problem is that the context in which an artefact was created is typically not captured. Anyone who has read a document and wondered, “What on earth were the authors thinking when they wrote this?” has encountered the problem of missing context.
Context, though hard to capture, is critically important in understanding the content of a knowledge artefact. Any artefact accessed without an appreciation of the context in which it was created is liable to be misinterpreted or only partially understood.
Capturing context in the enterprise
Glyma addresses the issue of context rather elegantly at the level of the enterprise. I’ll illustrate this point with an inspiring case study on the innovative use of SharePoint in education that Paul wrote about some time ago.
The case study
Here is the backstory in Paul’s words:
Earlier this year, I met Louis Zulli Jnr – a teacher out of Florida who is part of a program called the Centre of Advanced Technologies. We were co-keynoting at a conference and he came on after I had droned on about common SharePoint governance mistakes…The majority of Lou’s presentation showcased a whole bunch of SharePoint powered solutions that his students had written. The solutions were very impressive…We were treated to examples like:
- iOS, Android and Windows Phone apps that leveraged SharePoint to display teachers’ assignments, school events and class times;
- Silverlight based application providing a virtual tour of the campus;
- Integration of SharePoint with Moodle (an open source learning platform)
- An Academic Planner web application allowing students to plan their classes, submit a schedule, have them reviewed, keep track of the credits of the classes selected and whether a student’s selections meet graduation requirements;
All of this and more was developed by 16 to 18 year olds and all at a level of quality that I know most SharePoint consultancies would be jealous of…
Although the examples highlighted by Louis were very impressive, what Paul found more interesting were the anecdotes that Lou related about the dedication and persistence that students displayed in their work. Quoting again from Paul,
So the demos themselves were impressive enough, but that is actually not what impressed me the most. In fact, what had me hooked was not on the slide deck. It was the anecdotes that Lou told about the dedication of his students to the task and how they went about getting things done. He spoke of students working during their various school breaks to get projects completed and how they leveraged each other’s various skills and other strengths. Lou’s final slide summed his talk up brilliantly:
- Students want to make a difference! Give them the right project and they do incredible things.
- Make the project meaningful. Let it serve a purpose for the campus community.
- Learn to listen. If your students have a better way, do it. If they have an idea, let them explore it.
- Invest in success early. Make sure you have the infrastructure to guarantee uptime and have a development farm.
- Every situation is different but there is no harm in failure. “I have not failed. I’ve found 10,000 ways that won’t work” – Thomas A. Edison
In brief: these points highlight the fact that Lou’s primary role as director of the center is to create the conditions that make it possible for students to do great work. The commercial-level quality of work turned out by students suggests that Lou’s knowledge on how to build high-performing teams is definitely worth capturing.
The question is: what’s the best way to do this (short of getting him to visit you and talk about his experiences)?
Seeing the forest for the trees
Paul recently interviewed Lou with the intent of documenting Lou’s experiences. The conversation was recorded on video and then “Glymafied” – i.e. the video was mapped using IBIS (see Figure 4 below).
There are a few points worth noting here:
- The content of the entire conversation is mapped out so one can “see” the conversation at a glance.
- The context in which a particular point (i.e. the content of a node) is made is clarified by the connections between a node and its neighbours. Moving left from a node gives a higher level picture, moving right drills down into details.
Of course, the reader will have noted that these are core IBIS capabilities that are available in Compendium (or any other IBIS tool). Glyma offers much more. Consider the following:
- Relevant documents or audio-visual media can be tagged to specific nodes to provide supplementary material. In this case the video recording was tagged to specific nodes that related to points made in the video. Clicking on the play icon attached to such a node plays the segment in which the content of the node is being discussed. This is a really nice feature as it saves the user from having to watch the whole video (or play an extended game of ffwd-rew to get to the point of interest). Moreover, this provides additional context that cannot be (or is not) captured in the map. For example, one can attach papers, links to web pages, Slideshare presentations etc. to fill in background and context.
- Glyma is integrated with an enterprise content management system by design. One can therefore link map and video content to the powerful built-in search and content aggregation features of these systems. For example, users would be able to enter a search from their intranet home page and retrieve not only traditional content such as documents, but also stories, reflections and anecdotes from experts such as Lou.
- Another critical aspect to intranet integration is the ability to provide maps as contextual navigation. Amazon’s ability to sell books that people never intended to buy is an example of the power of such navigation. The ability to execute the kinds of queries outlined in the previous point, along with contextual information such as user profile details, previous activity on the intranet and the area of an intranet the user is browsing, makes it possible to present recommendations of nodes or maps that may be of potential interest to users. Such targeted recommendations might encourage users to explore video (and other rich media) content.
Technical Aside: An interesting under-the-hood feature of Glyma is that it uses an implementation of a hypergraph database to store maps (Note: this is a database that can store representations of graphs, or maps, in which an edge can connect more than two vertices). These databases enable the storing of very general graph structures. What this means is that Glyma can be extended to store any kind of map (Mind Maps, Concept Maps, Argument Maps or whatever)…and nodes can be shared across maps. This feature has not been developed as yet, but I mention it because it offers some exciting possibilities in the future.
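To make the idea concrete, here is a toy sketch of a hypergraph store in which an edge can connect any number of nodes, and a node can appear in many maps. This is emphatically not Glyma’s actual schema – the class and field names below are my own invention, purely for illustration:

```python
# Toy model of a hypergraph store: an edge connects an arbitrary set
# of nodes (not just two), and a node can participate in many edges.
# Not Glyma's actual implementation - an illustrative sketch only.

class Node:
    def __init__(self, node_id, label):
        self.id = node_id
        self.label = label

class HyperEdge:
    """An edge that connects an arbitrary set of nodes."""
    def __init__(self, edge_id, node_ids, relation):
        self.id = edge_id
        self.node_ids = set(node_ids)  # two or more nodes
        self.relation = relation

class HyperGraph:
    def __init__(self):
        self.nodes = {}
        self.edges = {}

    def add_node(self, node):
        self.nodes[node.id] = node

    def add_edge(self, edge):
        self.edges[edge.id] = edge

    def maps_containing(self, node_id):
        """Every edge (and hence map fragment) a node participates in."""
        return [e for e in self.edges.values() if node_id in e.node_ids]

g = HyperGraph()
for nid, label in [("q1", "How to update the mart?"),
                   ("i1", "Run ETL more often"),
                   ("i2", "Use triggers")]:
    g.add_node(Node(nid, label))
g.add_edge(HyperEdge("e1", ["q1", "i1", "i2"], "responds-to"))
print(len(g.maps_containing("i1")))  # 1
```

The point of the structure is the last method: because edges reference arbitrary sets of nodes, asking “which maps does this node appear in?” is a straightforward lookup – which is what makes node sharing across maps feasible.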
To summarise: since Glyma stores all its data in an enterprise-class database, maps can be made available across an organization. It offers the ability to tag nodes with pretty much any kind of media (documents, audio/video clips etc.), and one can tag specific parts of the media that are relevant to the content of the node (a snippet of a video, for example). Moreover, the sophisticated search capabilities of the platform enable context aware search. Specifically, we can search for nodes by keywords or tags, and once a node of interest is located, we can also view the map(s) in which it appears. The map(s) inform us of the context(s) relating to the node. The ability to display the “contextual environment” of a piece of information is what makes Glyma really interesting.
In metaphorical terms, Glyma enables us to see the forest for the trees.
…and so, to conclude
My aim in this post has been to introduce readers to the IBIS notation and trace its history from its origins in issue mapping to recent developments in knowledge management. The history of a technique is valuable because it gives insight into the rationale behind its creation, which leads to a better understanding of the different ways in which it can be used. Indeed, it would not be an exaggeration to say that the knowledge management applications discussed above are but an extension of Rittel’s original reasons for inventing IBIS.
I would like to close this piece with a couple of observations from my experience with IBIS:
Firstly, the real surprise for me has been that the technique can capture most written arguments and conversations, despite having only three distinct elements and a very simple grammar. Yes, it does require some thought to do this, particularly when mapping discussions in real time. However, this cognitive overhead is good because it forces the mapper to think about what’s being said instead of just writing it down blind. Thoughtful transcription is the aim of the game. When done right, this results in a map that truly reflects an understanding of a complex issue.
Secondly, although most current discussions of IBIS focus on its applications in dialogue mapping, it has a more important role to play in mapping organizational knowledge. Maps offer a powerful means to navigate the complex network of knowledge within an organisation. The (aspirational) end-goal of such an effort would be a “global” knowledge map that highlights interconnections between different kinds of knowledge that exist within an organization. To be sure, such a map will make little sense to the human eye, but powerful search capabilities could make it navigable. To the extent that this is a feasible direction, I foresee IBIS becoming an important skill in the repertoire of knowledge management professionals.
Work commitments have conspired to keep this post short. Well, short compared to my usual long-winded essays at any rate. Among other things, I’m currently helping get a biggish project started while also trying to finish my current writing commitments in whatever little free time I have. Fortunately, I have a ready-made topic to write about this week: my recently published paper on the use of dialogue mapping in project management. Instead of summarizing the paper, as I usually do in my paper reviews, I’ll simply present some background to the paper and describe, in brief, my rationale for writing it.
As regular readers of this blog will know, I am a fan of dialogue mapping, a conversation mapping technique pioneered by Jeff Conklin. Those unfamiliar with the technique will find a super-quick introduction here. Dialogue mapping uses a visual notation called issue based information system (IBIS) which I have described in detail in this post. IBIS was invented by Horst Rittel as a means to capture and clarify facets of wicked problems – problems that are hard to define, let alone solve. However, as I discuss in the paper, the technique also has utility in the much more mundane day-to-day business of managing projects.
In essence, IBIS provides a means to capture questions, responses to questions and arguments for and against those responses. This, coupled with the fact that it is easy to use, makes it eminently suited to capturing conversations in which issues are debated and resolved. Dialogue mapping is therefore a great way to surface options, debate them and reach a “best for group” decision in real-time. The technique thus has many applications in organizational settings. I have used it regularly in project meetings, particularly those in which critical decisions regarding design or approach are being discussed.
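As an aside, the three elements and their grammar are simple enough to capture in a few lines of code. The sketch below is my own illustration (the class and function names are invented; IBIS itself prescribes only the element types and the legal links between them):

```python
# A minimal sketch of the IBIS grammar: questions, ideas (responses to
# questions), and arguments for or against ideas. Names are my own;
# IBIS prescribes only the three element types and legal links.

class IBISNode:
    def __init__(self, text):
        self.text = text
        self.children = []

class Question(IBISNode):
    pass

class Idea(IBISNode):
    pass

class Argument(IBISNode):
    def __init__(self, text, supports):
        super().__init__(text)
        self.supports = supports  # True = argument for, False = against

def attach(parent, child):
    """Enforce the simple IBIS grammar: ideas respond to questions,
    arguments attach to ideas, and questions may question anything."""
    ok = (isinstance(child, Idea) and isinstance(parent, Question)) \
        or (isinstance(child, Argument) and isinstance(parent, Idea)) \
        or isinstance(child, Question)
    if not ok:
        raise ValueError("not a legal IBIS link")
    parent.children.append(child)

q = Question("What approach should we take?")
i = Idea("Co-develop using internal and external staff")
attach(q, i)
attach(i, Argument("Balances cost against internal skills", supports=True))
```

The `attach` check is the entire grammar – which is precisely why the notation is quick to learn yet structured enough to keep a debate on track.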
Early last year I used the technique to kick-start a data warehousing initiative within the organisation I work for. In the paper I use this experience as a case-study to illustrate some key aspects and features of dialogue mapping that make it useful in project discussions. For completeness I also discuss why other visual notations for decision and design rationale don’t work as well as IBIS for capturing conversations in real-time. However, the main rationale for the paper is to provide a short, self-contained introduction to the technique via a realistic case-study.
Most project managers would have had to confront questions such as “what approach should we take to solve this problem?” in situations where there is not enough information to make a sound decision. In such situations, the only recourse one has is to dialogue – to talk it over with the team, and thereby reach a shared understanding of the options available. More often than not, a consensus decision emerges from such dialogue. Such a decision would be based on the collective knowledge of the team, not just that of an individual. Dialogue mapping provides a means to get to such a collective decision.
How do we choose between competing design proposals for information systems? In principle this should be done using an evaluation process based on objective criteria. In practice, though, people generally make choices based on their interests and preferences. These can differ widely, so decisions are often made on the basis of criteria that do not satisfy the interests of all stakeholders. Consequently, once a system becomes operational, complaints from stakeholder groups whose interests were overlooked are almost inevitable (Just think back to any system implementation that you were involved in…).
The point is: choosing between designs is not a purely technical issue; it also involves value judgements – what’s right and what’s wrong – or even, what’s good and what’s not. Problem is, this is a deeply personal matter – different folks have different values and, consequently, differing ideas of which design ideal is best (Note: the term ideal refers to the value judgements associated with a design). Ten years ago, Heinz Klein and Rudy Hirschheim addressed this issue in a landmark paper entitled, Choosing Between Competing Design Ideals in Information Systems Development. This post is a summary of the main ideas presented in their paper.
Design ideals and deliberation
A design ideal is not an abstract, philosophical concept. The notion of good and bad or right and wrong can be applied to the technical, economic and social aspects of a system. For example, a choice between building and buying a system has different economic and social consequences for stakeholder groups within and outside the organisation. What’s more, the competing ideals may be in conflict – developers employed in the organisation would obviously prefer to build rather than buy because their employment depends on it; management, however, may have a very different take on the issue.
The essential point that Hirschheim and Klein make is that differences in values can be reconciled only through honest discussion. They propose a deliberative approach wherein all stakeholders discuss issues in order to come to an agreement. To this end, they draw on the theories of argumentation and communicative rationality to come up with a rational framework for comparing design ideals. Since these terms are new, I’ll spend a couple of paragraphs describing them briefly.
Argumentation is essentially reasoned debate – i.e. the process of reaching conclusions via arguments that use informal logic – which, according to the definition in the foregoing link, is the attempt to develop a logic to assess, analyse and improve ordinary language (or “everyday”) reasoning. Hirschheim and Klein use the argumentation framework proposed by Stephen Toulmin to illustrate their approach.
The basic premise of communicative rationality is that rationality (or reason) is tied to social interactions and dialogue. In other words, the exercise of reason can occur only through dialogue. Such communication, or mutual deliberation, ought to result in a general agreement about the issues under discussion. Only once such agreement is achieved can there be a consensus on actions that need to be taken. See my article on rational dialogue in project environments for more on communicative rationality.
Obstacles to rational dialogue and how to overcome them
The key point about communicative rationality is that it assumes the following conditions hold:
- Inclusion: the discussion must include all stakeholders.
- Autonomy: all participants should be able to present and criticise views without fear of reprisals.
- Empathy: participants must be willing to listen to and understand claims made by others.
- Power neutrality: power differences (levels of authority) between participants should not affect the discussion.
- Transparency: participants must not indulge in strategic actions (i.e. lying!).
Clearly these are idealistic conditions, difficult to achieve in any real organisation. Klein and Hirschheim acknowledge this point, and note the following barriers to rationality in organisational decision making:
- Social barriers: These include inequalities (between individuals) in power, status, education and resources.
- Motivational barriers: This refers to the psychological cost of prolonged debate. After a period of sustained debate, people will often cave in just to stop arguing even though they may have the better argument.
- Economic barriers: Time is money: most organisations cannot afford a prolonged debate on contentious issues.
- Personality differences: How often is it that the most charismatic or articulate person gets their way, and the quiet guy in the corner (with a good idea or two) is completely overlooked?
- Linguistic barriers: This refers to the difficulty of formulating arguments in a way that makes sense to the listener. This involves, among other things, the ability to present ideas in a way that is succinct, without glossing over important issues – a skill not possessed by many.
These barriers will come as no surprise to most readers. It will be just as unsurprising that it is difficult to overcome them. Klein and Hirschheim offer the usual solutions including:
- Encourage open debate – They suggest the use of technologies that support collaboration. They can be forgiven for their optimism given that the paper was written a decade ago, but the fact of the matter is that all the technologies that have sprouted since have done little to encourage open debate.
- Implement group decision techniques – these include arrangements such as quality circles, nominal groups and constructive controversy. However, these too will not work unless people feel safe enough to articulate their opinions freely.
Even though the barriers to open dialogue are daunting, it behooves system designers to strive towards reducing or removing them. There are effective ways to do this, but that’s a topic I won’t go into here as it has been dealt with at length elsewhere.
Principles for arguments about value judgements
So, assuming the environment is right, how should we debate value judgements? Klein and Hirschheim recommend using informal logic supplemented with ethical principles. Let’s look at these two elements briefly.
Informal logic is a means to reason about human concerns. Typically, in these issues there is no clear cut notion of truth and falsity. Toulmin’s argumentation framework (mentioned earlier in this post) tells us how arguments about such issues should be constructed. It consists of the following elements:
- Claim: A statement that one party asks another to accept as true. An example would be my claim that I did not show up to work yesterday because I was not well.
- Data (Evidence): The basis on which one party expects the other to accept a claim as true. To back the claim made in the previous line, I might draw attention to my runny nose and hoarse voice.
- Warrant: The bridge between the data and the claim. Again, using the same example, a warrant would be that I look drawn today, so it is likely that I really was sick yesterday.
- Backing: Further evidence, if the warrant should prove insufficient. If my boss is unconvinced by my appearance he may insist on a doctor’s certificate.
- Qualifier: These are words that express a degree of certainty about the claim. For instance, to emphasise just how sick I was, I might tell my boss that I stayed in bed all day because I had a high fever.
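The five elements can be thought of as slots in a simple structure. Here is a sketch using the sick-day example above – the dataclass and its field names are my own illustration, not part of Toulmin’s framework itself:

```python
# Toulmin's five elements as slots in a data structure, filled in with
# the sick-day example from the text. The class is my own illustration.

from dataclasses import dataclass, field

@dataclass
class ToulminArgument:
    claim: str                                # what we ask others to accept
    data: list = field(default_factory=list)  # evidence offered for the claim
    warrant: str = ""                         # why the data supports the claim
    backing: str = ""                         # support for the warrant itself
    qualifier: str = ""                       # degree of certainty expressed

sick_day = ToulminArgument(
    claim="I did not show up to work yesterday because I was not well",
    data=["runny nose", "hoarse voice"],
    warrant="I look drawn today, so it is likely I really was sick yesterday",
    backing="a doctor's certificate, if the warrant is doubted",
    qualifier="almost certainly",
)
```

Laying an argument out this way makes the gaps obvious: an empty `warrant` or `backing` slot is a weakness an opponent can probe.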
This is all quite theoretical: when we debate issues we do not stop to think whether a statement is a warrant or a backing or something else; we just get on with the argument. Nevertheless, knowledge of informal logic can help us construct better arguments for our positions. Further, at the practical level there are computer supported deliberative techniques such as argument mapping and dialogue mapping which can assist in structuring and capturing such arguments.
The other element is ethics: Klein and Hirschheim contend that moral and ethical principles ought to be considered when value judgements are being evaluated. These principles include:
- Ought implies can – which essentially means that one morally ought to do something only if one can do it (see this paper for an interesting counterview of this principle). Taking the contrapositive of this statement, one gets “Cannot implies ought not”, which means that a design can be criticised if it involves doing something that is (demonstrably) impossible – or makes impossible demands on certain parties.
- Conflicting principles – This is best explained via an example. Consider a system that saves an organisation money but involves putting a large number of people out of work. In this case we would have an economic principle coming up against a social one. According to this principle, a design ideal based on an economic imperative can be criticised on social grounds.
- The principle of reconsideration – This implies reconsidering decisions if relevant new information becomes available. For example, if it is found that a particular design overlooked a certain group of users, then the design should be reconsidered in the light of their needs.
They also mention that new ethical and moral theories may trigger the principle of reconsideration. In my opinion, however, this is a relatively rare occurrence: how often are new ethical or moral theories proposed?
The main point made by the authors is that system design involves value judgements. Since these are largely subjective, open debate using the principles of informal logic is the best way to deal with conflicting values. Moreover, since such issues are not entirely technical, one has to use ethical principles to guide debate. These principles – not asking people to do the impossible; taking everyone’s interests into account and reconsidering decisions in the light of new information – are reasonable if not self-evident. However, as obvious as they are, they are often ignored in design deliberations. Hirschheim and Klein do us a great service by reminding us of their importance.
Most knowledge management efforts on projects focus on capturing what or how rather than why. That is, they focus on documenting approaches and procedures rather than the reasons behind them. This often leads to a situation where folks working on subsequent projects (or even the same project!) are left wondering why a particular technique or approach was favoured over others. How often have you as a project manager asked yourself questions like the following when browsing through a project archive?
- Why did project decision-makers choose to co-develop the system rather than build it in-house or outsource it?
- Why did the developer responsible for a module use this approach rather than that one?
More often than not, project archives are silent on such matters because the reasoning behind decisions isn’t documented. In this post I discuss how the Issue Based Information System (IBIS) notation can be used to fill this “rationale gap” by capturing the reasoning behind project decisions.
Note: Those unfamiliar with IBIS may want to have a browse of my post entitled The what and whence of issue-based information systems for a quick introduction to the notation and its uses. I also recommend downloading and installing Compendium, a free software tool that can be used to create IBIS maps.
Example 1: Build or outsource?
In a post entitled The Approach: A dialogue mapping story, I presented a fictitious account of how a project team member constructed an IBIS map of a project discussion (Note: dialogue mapping refers to the art of mapping conversations as they occur). The issue under discussion was the approach that should be used to build a system.
The options discussed by the team were:
- Build the system in-house.
- Outsource system development.
- Co-develop using a mix of external and internal staff.
Additionally, the selected approach had to satisfy the following criteria:
- Must be within a specified budget.
- Must implement all features that have been marked as top priority.
- Must be completed within a specified time.
The post details how the discussion was mapped in real-time. Here I’ll simply show the final map of the discussion (see Figure 1).
Although the option chosen by the group is not marked (they chose to co-develop), the figure describes the pros and cons of each approach (and elaborations of these) in a clear and easy-to-understand manner. In other words, it maps the rationale behind the decision – a person looking at the map can get a sense for why the team chose to co-develop rather than use any of the other approaches.
Example 2: Real-time updates of a data mart
In another post on dialogue mapping I described how IBIS was used to map a technical discussion about the best way to update selected tables in a data mart during business hours. For readers who are unfamiliar with the term: data marts are databases that are (generally) used purely for reporting and analysis. They are typically updated via batch processes that are run outside of normal business hours. The requirement to do real-time updates arose from a business need to see up-to-the-minute reports at specified times during the financial year.
Again, I’ll refer the reader to the post for details, and simply present the final map (see Figure 2).
Since there are a few technical terms involved, here’s a brief rundown of the options, lifted straight from my earlier post (Note: feel free to skip this detail – it is incidental to the main point of this post):
- Use our messaging infrastructure to carry out the update.
- Write database triggers on transaction tables. These triggers would update the data mart tables directly or indirectly.
- Write custom T-SQL procedures (or an SSIS package) to carry out the update (the database is SQL Server 2005).
- Run the relevant (already existing) Extract, Transform, Load (ETL) procedures at more frequent intervals – possibly several times during the day.
In this map the option chosen by the group is marked out – it was decided that no additional development was needed; the “real-time” requirement could be satisfied simply by running existing update procedures during business hours (option 4 listed above).
Once again, the reasoning behind the decision is easy to see: the option chosen offered the simplest and quickest way to satisfy the business requirement, even though the update was not really done in real-time.
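The chosen option amounts to a scheduling change rather than new development. A toy sketch of the idea (the times and function below are invented for illustration; the real solution simply reran the existing ETL procedures more often):

```python
# Option 4 in miniature: keep the existing ETL job untouched and just
# trigger it at extra times during business hours. Times are invented.
from datetime import time

existing_schedule = [time(2, 0)]  # the usual overnight batch run

def add_intraday_runs(schedule, hours):
    """Reuse the same ETL job; only the trigger times change."""
    return sorted(schedule + [time(h, 0) for h in hours])

new_schedule = add_intraday_runs(existing_schedule, [10, 13, 16])
# new_schedule now holds the overnight run plus three intraday runs
```

The appeal of the option is visible in the sketch: nothing about the job itself changes, so the risk and effort are minimal – at the cost of the updates being merely frequent rather than truly real-time.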
The above examples illustrate how IBIS captures the reasoning behind project decisions. It does so by:
- Making explicit all the options considered.
- Describing the pros and cons of each option (and elaborations thereof).
- Providing a means to explicitly tag an option as a decision.
- Optionally, providing a means to link out to external sources (documents, spreadsheets, URLs). In the second example I could have added clickable references to documents elaborating on technical detail using the external link capability of Compendium.
Issue maps (as IBIS maps are sometimes called) lay out the reasoning behind decisions in a visual, easy-to-understand way. The visual aspect is important – see this post for more on why visual representations of reasoning are more effective than prose.
I’ve used IBIS to map discussions ranging from project approaches to mathematical model building, and have found them to be invaluable when asked questions about why things were done in a certain way. Just last week, I was able to answer a question about variables used in a market segmentation model that I built almost two years ago – simply by referring back to the issue map of the discussion and the notes I had made in it.
In summary: IBIS provides a means to capture decision rationale in a visual and easy-to-understand way, something that is hard to do using other means.
Over the last year or so, I’ve used IBIS (Issue-Based Information System) to map a variety of discussions at work, ranging from design deliberations to project meetings [Note: see this post for an introduction to IBIS and this one for an example of mapping dialogues using IBIS]. Feedback from participants indicated that IBIS helps to keep the discussion focused on the key issues, thus leading to better outcomes and decisions. Some participants even took the time to learn the notation (which doesn’t take long) and try it out in their own meetings. Yet, despite their initial enthusiasm, most of them gave it up after a session or two. Their reasons are well summed up by a colleague who said, “It is just too hard to build a coherent map on the fly while keeping track of the discussion.”
My colleague’s comment points to a truth about the technique: the success of a sense-making session depends rather critically on the skill of the practitioner. The question is: how do experienced practitioners engage their audience and build a coherent map whilst keeping the discussion moving in productive directions? Al Selvin, Simon Buckingham-Shum and Mark Aakhus provide a partial answer to this question in their paper entitled, The Practice Level in Participatory Design Rationale: Studying Practitioner Moves and Choices. Specifically, they describe a general framework within which the practice of participatory design rationale (PDR) can be analysed (Note: more on PDR in the next section). This post is a discussion of some aspects of the framework and some personal reflections based on my (limited) experience.
A couple of caveats are in order before I proceed. Firstly, my discussion focuses on understanding the dimensions (or variables) that describe the act of creating design representations in real-time. Secondly, my comments and reflections on the model are based on my experiences with a specific design rationale technique – IBIS.
First up, it is worth clarifying the meaning of participatory design rationale (PDR). The term refers to the collective reasoning behind decisions that are made when a group designs an artifact. Generally such rationale involves consideration of various alternatives and why they were or weren’t chosen by the group. Typically this involves several people with differing views. Participatory design is thus an argumentative process, often with political overtones.
Clearly, since design involves deliberation and may also involve a healthy dose of politics, the process will work more effectively if it is structured. The structure should, however, be flexible: it must not constrain the choices and creativity of those involved. This is where the notation and practitioner (facilitator) come in: the notation lends structure to the discussion and the practitioner keeps it going in productive directions, yet in a way that sustains a coherent narrative. The latter is a creative process, much like an art. The representation (such as IBIS) – through its minimalist notation and grammar – helps keep the key issues, ideas and arguments firmly in focus. As Selvin noted in an earlier paper, this encourages collective creativity because it forces participants to think through their arguments more carefully than they would otherwise. Selvin coined the term knowledge art to refer to this process of developing and engaging with design representations. Indeed, the present paper is a detailed look at how practitioners create knowledge art – i.e. creative, expressive representations of the essence of design discussions. Quoting from the paper:
…we are looking at the experience of people in the role of caretakers or facilitators of such events – those who have some responsibility for the functioning of the group and session as a whole. Collaborative DR practitioners craft expressive representations on the fly with groups of people. They invite participant engagement, employing techniques like analysis, modeling, dialogue mapping, creative exploration, and rationale capture as appropriate. Practitioners inhabit this role and respond to discontinuities with a wide variety of styles and modes of action. Surfacing and describing this variety are our interests here.
The authors have significant experience in leading deliberations using IBIS and other design rationale methods. They propose a theoretical framework to identify and analyse various moves that practitioners make in order to keep the discussion moving in productive directions. They also describe various tools that they used to analyse discussions, and specific instances of the use of these tools. In the remainder of this post, I’ll focus on their theoretical framework rather than the specific case studies, as the former (I think) will be of more interest to readers. Further, I will focus on aspects of the framework that pertain to the practice – i.e. the things the practitioner does in order to keep the design representation coherent, the participants engaged and the discussion useful, i.e. moving in productive directions.
The dimensions of design rationale practice
So what do facilitators do when they lead deliberations? The key actions they undertake are best summed up in the authors’ words:
…when people act as PDR practitioners, they inherently make choices about how to proceed, give form to the visual and other representational products, help establish meanings, motives, and causality and respond when something breaks the expected flow of events, often having to invent fresh and creative responses on the spot.
This sentence summarises the important dimensions of the practice. Let’s look at each of the dimensions in brief:
- Ethics: At key points in the discussion, the practitioner is required to make decisions on how to proceed. These decisions cannot (should not!) be dispassionate or objective (as is often assumed); they need to be made with due consideration of “what is good and what is not good” from the perspective of the entire group.
- Aesthetics: This refers to the representation (map) of the discussion. As the authors put it, “All diagrammatic DR approaches have explicit and implicit rules about what constitutes a clear and expressive representation. People conversant with the approaches can quickly tell whether a particular artifact is a “good” example. This is the province of aesthetics.” In participatory design, representations are created as the discussion unfolds. The aesthetic responsibility of the practitioner is to create a map that is syntactically correct and expressive. Another aspect of the aesthetic dimension is that a “beautiful” map will engage the audience, much like a work of art.
- Narrative: One of the key responsibilities of the practitioner is to construct a coherent narrative from the diverse contributions made by the participants. Skilled practitioners pick up connections between different contributions and weave these into a coherent narrative. That said, the narrative isn’t just the practitioner’s interpretation; the practitioner has to ensure that everyone in the group is happy with the story – the story is the group’s story. A coherent narrative helps the group make sense of the discussion: specifically the issues canvassed, ideas offered and arguments for and against each of them. Building such a narrative can be challenging because design discussions often head off in unexpected directions.
- Sensemaking: During deliberations it is quite common that the group gets stuck. Progress can be blocked for a variety of reasons ranging from a lack of ideas on how to make progress to apparently irreconcilable differences of opinion on the best way to move forward. At these junctures the role of the practitioner is to break the impasse. Typically this involves conversational moves that open new ground (not considered by the group up to that point) or find ways around obstacles (perhaps by suggesting compromises or new choices). The key skill in sensemaking is the ability to improvise, which segues rather nicely into the next variable.
- Improvisation: Books such as Jeff Conklin’s classic on dialogue mapping describe some standard moves and good practices in PDR practice. In reality, however, a practitioner will inevitably encounter situations that cannot be tackled using standard techniques. In such situations the practitioner has to improvise. This could involve making unconventional moves within the representation or even using another representation altogether. These improvisations are limited only by the practitioner’s creativity and experience.
Using case studies, the authors illustrate how design rationale sessions can be analysed along the aforementioned dimensions, at both a micro and a macro level. The former involves a detailed move-by-move study of a session; the latter takes an aggregated view, based on the overall tenor of episodes consisting of several moves. I won’t say any more about the analyses here; instead, I’ll discuss the relevance of the model to the actual practice of design rationale techniques such as dialogue mapping.
Some reflections on the model
When I first heard about dialogue mapping, I felt the claims made about the technique were exaggerated: it seemed impossible that a simple notation like IBIS (which consists of just three elements and a simple grammar) could actually enhance collaboration and collective creativity of a group. With a bit of experience, I began to see that it actually did do what it claimed to do. However, I was unable to explain to others how or why it worked. In one conversation with a manager, I found myself offering hand-waving explanations about the technique – which he (quite rightly) found unconvincing. It seemed that the only way to see how or why it worked was to use it oneself. In short: I realised that the technique involved tacit rather than explicit knowledge.
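To make the point about IBIS’s simplicity concrete, here is a minimal sketch of its three elements (issues, ideas and arguments) and its grammar as a set of legal links. This is my own illustrative rendering, not code from any dialogue-mapping tool, and the link names are assumptions made for readability:

```python
# A minimal sketch of the IBIS grammar: three node types and the
# legal links between them. Names are illustrative, not drawn from
# any particular dialogue-mapping tool.

NODE_TYPES = {"issue", "idea", "argument"}

# Legal moves in the grammar:
# - an idea responds to an issue
# - an argument supports or objects to an idea
# - an issue can question any node (issue, idea or argument)
LEGAL_LINKS = {
    ("idea", "responds-to", "issue"),
    ("argument", "supports", "idea"),
    ("argument", "objects-to", "idea"),
    ("issue", "questions", "issue"),
    ("issue", "questions", "idea"),
    ("issue", "questions", "argument"),
}

def is_legal_move(source_type: str, link: str, target_type: str) -> bool:
    """Check whether a proposed move obeys the IBIS grammar."""
    return (source_type, link, target_type) in LEGAL_LINKS

# An idea may respond to an issue...
assert is_legal_move("idea", "responds-to", "issue")
# ...but an argument cannot attach directly to an issue.
assert not is_legal_move("argument", "supports", "issue")
```

The entire notation fits in a handful of lines, which is precisely the puzzle: the syntax is trivial, yet the practice built on it is anything but.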
Now, most practices – even the most mundane ones – involve a degree of tacitness. In fact, in an earlier post I have argued that the concept of best practice is flawed because it assumes that the knowledge involved in a practice can be extracted in its entirety and codified in a manner that others can understand and reproduce. This assumption is incorrect because the procedural aspects of a practice (which can be codified) do not capture everything – they miss aspects such as context and environmental influences, for instance. As a result, a practice that works in a given situation may not work in another, even though the two may be similar. So it is with PDR techniques – they work only when tailored on the fly to the situation at hand. Context is king. In contrast, the procedural aspects of PDR techniques – syntax, grammar and so on – are trivial and can be learnt in a short time.
In my opinion, the value of the model is that it attempts to articulate tacit aspects of PDR techniques. In doing so, it tells us why the techniques work in one particular situation but not in another. How so? Well, the model tells us the things that PDR practitioners worry about when they facilitate PDR sessions – they worry about the form of the map (aesthetics), the story it tells (narrative), helping the group resolve difficult issues (sensemaking), making the right choices (ethics) and stepping outside the box if necessary (improvisation). These are tacit skills: they can’t be taught via textbooks; they can only be learnt by doing. Moreover, when such techniques fail, the reason can usually be traced back to a failure (of the facilitator) along one or more of these dimensions.
Techniques to capture participatory design rationale have been around for a while. Although it is generally acknowledged that such techniques aid the process of collaborative design, it is also known that their usefulness depends rather critically on the skill of the practitioner. This being the case, it is important to know what exactly skilled practitioners do that sets them apart from novices and journeymen. The model is a first step towards this. By identifying the dimensions of PDR practice, the model gives us a means to analyse practitioner moves during PDR sessions – for example, one can say that this was a sensemaking move or that was an improvisation. Awareness of these types of moves and how they work in real life situations can help novices learn the basics of the craft and practitioners master its finer points.