Eight to Late

Sensemaking and Analytics for Organizations

Archive for March 2011

Value judgements in system design



How do we choose between competing design proposals for information systems? In principle this should be done using an evaluation process based on objective criteria. In practice, though, people generally make choices based on their interests and preferences. These can differ widely, so decisions are often made on the basis of criteria that do not satisfy the interests of all stakeholders. Consequently, once a system becomes operational, complaints from stakeholder groups whose interests were overlooked are almost inevitable (Just think back to any system implementation that you were involved in…).

The point is: choosing between designs is not a purely technical issue; it also involves value judgements – what’s right and what’s wrong – or even, what’s good and what’s not. Problem is, this is a deeply personal matter – different folks have different values and, consequently, differing ideas of which design ideal is best (Note: the term ideal refers to the value judgements associated with a design). Ten years ago, Heinz Klein and Rudy Hirschheim addressed this issue in a landmark paper entitled, Choosing Between Competing Design Ideals in Information Systems Development. This post is a summary of the main ideas presented in their paper.

Design ideals and deliberation

A design ideal is not an abstract, philosophical concept. The notion of good and bad or right and wrong can be applied to the technical, economic and social aspects of a system. For example, a choice between building and buying a system has different economic and social consequences for stakeholder groups within and outside the organisation. What’s more, the competing ideals may be in conflict – developers employed in the organisation would obviously prefer to build rather than buy because their employment depends on it; management, however, may have a very different take on the issue.

The essential point that Hirschheim and Klein make is that differences in values can be reconciled only through honest discussion. They propose a deliberative approach wherein all stakeholders discuss issues in order to come to an agreement. To this end, they draw on the theories of argumentation and communicative rationality to come up with a rational framework for comparing design ideals. Since these terms may be new to some readers, I’ll spend a couple of paragraphs describing them briefly.

Argumentation is essentially reasoned debate – i.e. the process of reaching conclusions via arguments that use informal logic – which, according to the definition in the foregoing link, is the attempt to develop a logic to assess, analyse and improve ordinary language (or “everyday”) reasoning. Hirschheim and Klein use the argumentation framework proposed by Stephen Toulmin to illustrate their approach.

The basic premise of communicative rationality is that rationality (or reason) is tied to social interactions and dialogue. In other words, the exercise of reason can  occur only through dialogue.  Such communication, or mutual deliberation, ought to result in a general agreement about the issues under discussion.  Only once such agreement is achieved can there be a consensus on actions that need to be taken.  See my article on rational dialogue in project environments for more on communicative rationality.

Obstacles to rational dialogue and how to overcome them

The key point about communicative rationality is that it assumes the following conditions hold:

  1. Inclusion: the discussion includes all stakeholders.
  2. Autonomy: all participants should be able to present and criticise views without fear of reprisals.
  3. Empathy: participants must be willing to listen to and understand claims made by others.
  4. Power neutrality: power differences (levels of authority) between participants should not affect the discussion.
  5. Transparency: participants must not indulge in strategic actions (i.e. lying!).

Clearly these are idealistic conditions, difficult to achieve in any real organisation.  Klein and Hirschheim acknowledge this point, and note the following barriers to rationality in organisational decision making:

  1. Social barriers: These include inequalities (between individuals) in power, status, education and resources.
  2. Motivational barriers: This refers to  the psychological cost of prolonged debate.  After a period of sustained debate, people will often cave in just to stop arguing even though they may have the better argument.
  3. Economic barriers: Time is money: most organisations cannot afford a prolonged debate on contentious issues.
  4. Personality differences: How often is it that the most charismatic or articulate person gets their way, and the quiet guy in the corner (with a good idea or two) is completely overlooked?
  5. Linguistic barriers: This refers to the difficulty of formulating arguments in a way that makes sense to the listener. This involves, among other things, the ability to present ideas in a way that is succinct, without glossing over the important issue – a skill not possessed by many.

These barriers will come as no surprise to most readers. It will be just as unsurprising that it is difficult to overcome them. Klein and Hirschheim offer the usual solutions including:

  1. Encourage open debate – They suggest the use of technologies that support collaboration. They can be forgiven for their optimism given that the paper was written a decade ago, but the fact of the matter is that all the technologies that have sprouted since have done little to encourage open debate.
  2. Implement group decision techniques – These include arrangements such as quality circles, nominal groups and constructive controversy. However, these too will not work unless people feel safe enough to articulate their opinions freely.

Even though the barriers to open dialogue are daunting, it behooves system designers to strive towards reducing or removing them.  There are effective ways to do this, but that’s a topic I won’t go into here as it has been dealt with at length elsewhere.

Principles for arguments about value judgements

So, assuming the environment is right, how should we debate value judgements?  Klein and Hirschheim recommend using informal logic supplemented with ethical principles. Let’s look at these two elements briefly.

Informal logic is a means to reason about human concerns. Typically, in these issues there is no clear cut notion of truth and falsity. Toulmin’s argumentation framework (mentioned earlier in this post) tells us how arguments about such issues should be constructed. It consists of the following elements:

  1. Claim: A statement that one party asks another to accept as true. An example would be my claim that I did not show up to work yesterday because I was not well.
  2. Data (Evidence): The basis on which one party expects the other to accept a claim as true. To back the claim made in the previous point, I might draw attention to my runny nose and hoarse voice.
  3. Warrant: The bridge between the data and the claim. Using the same example, a warrant would be that I look drawn today, so it is likely that I really was sick yesterday.
  4. Backing: Further evidence, if the warrant should prove insufficient. If my boss is unconvinced by my appearance he may insist on a doctor’s certificate.
  5. Qualifier: Words that express a degree of certainty about the claim. For instance, to emphasise just how sick I was, I might tell my boss that I stayed in bed all day because I had a high fever.

This is all quite theoretical: when we debate issues we do not stop to think whether a statement is a warrant or a backing or something else; we just get on with the argument. Nevertheless, knowledge of informal logic can help us construct better arguments for our positions. Further, at the practical level there are computer supported deliberative techniques such as argument mapping and dialogue mapping which can assist in structuring and capturing such arguments.

The other element is ethics: Klein and Hirschheim  contend that moral and ethical principles ought to be considered when value judgements are being evaluated. These principles include:

  1. Ought implies can – which essentially means that one morally ought to do something only if one can do it (see this paper for an interesting counterview of this principle). Taking the negation of this statement, one gets “Cannot implies ought not” which means that a design can be criticised if it involves doing something that is (demonstrably) impossible – or makes impossible demands on certain parties.
  2. Conflicting principles – This is best explained via an example. Consider a system that saves an organisation money but involves putting a large number of people out of work. In this case we would have an economic principle coming up against a social one. According to the principle, a design ideal based on an economic imperative can be criticised on social grounds.
  3. The principle of reconsideration – This implies reconsidering decisions if relevant new information becomes available. For example, if it is found that a particular design overlooked a certain group of users, then the design should be reconsidered in the light of their needs.

They also mention that new ethical and moral theories may trigger the principle of reconsideration. In my opinion, however, this is a relatively rare occurrence:  how often are new ethical or moral theories proposed?


The main point made by the authors is that system design involves value judgements. Since these are largely subjective,  open debate using the principles of informal logic is the best way to deal with conflicting values. Moreover, since such issues are not entirely technical, one has to use ethical principles to guide debate.  These principles – not asking people to do the impossible; taking everyone’s interests into account and reconsidering decisions in the light of new information – are reasonable if not self-evident. However, as obvious as they are, they are often ignored in design deliberations. Hirschheim and Klein do us a great service by reminding us of their importance.

Written by K

March 31, 2011 at 10:16 pm

Metaphors we argue by



In our professional lives we often come across people who have opinions that differ from ours. If our views matter to us, we may attempt to influence such people by presenting reasons why we think our positions are better than theirs. In response they will do the same, and so we have an argument: a debate or a discussion involving differing points of view. The point of disagreement could be just about anything – a design, a business decision or even a company dinner menu. In this post I explore the idea that the dictionary meanings of the word “argument” do not tell the whole story about what an argument actually is. In their classic work on conceptual metaphors, George Lakoff and Mark Johnson show how metaphors influence the way we view and understand human experiences (such as arguments). Below, I look at a few metaphors for argument and discuss how they influence our attitudes to the act of arguing.

Argument as war

In the very first chapter of their book, Lakoff and Johnson use the metaphor argument is war to illustrate how arguments are often viewed, practiced and experienced. Consider the following statements:

  1. He attacked my idea.
  2. I defended my position.
  3. He countered my argument.
  4. I won the argument.

These statements – and others similar to them – are often used to describe the experience of arguing. They highlight the essentially adversarial nature of debate in our society. Lakoff and Johnson suggest that this metaphor colours the way we think about and approach arguments. This makes sense – just think about the negative emotions and confrontational attitudes that people bring to meetings in which contentious matters are discussed.

However,  it doesn’t have to be that way. Consider the following metaphor…

Argument as art

In a post on collaborative knowledge creation, I discussed how a specific kind of argument – design discussion – can be seen as an act of collaborative knowledge creation. Al Selvin uses the term knowledge art to refer to this form of argument. As he sees it, knowledge art is a marriage of the two forms of discourse that make up the term. On the one hand, we have knowledge which, “… in an organizational setting, can be thought of as what is needed to perform work; the tacit and explicit concepts, relationships, and rules that allow us to know how to do what we do.” On the other, we have art which “… is concerned with heightened expression, metaphor, crafting, emotion, nuance, creativity, meaning, purpose, beauty, rhythm, timbre, tone, immediacy, and connection.”

Design discussions can be no less contentious than other arguments – in fact often they are even more so.   Nevertheless, if participants view the process as one of creation, then their emphasis is on working together to craft an aesthetically pleasing and functional design. They would then be less concerned about being right (or proving the other wrong) than about contributing what they know to make the design better. Of course, the right environment has to be in place for people to be able to work together (and that is another story in itself!), but the important point here is that there are non-adversarial metaphors for arguments, which can lead to productive rather than point-scoring debates.

Note that this metaphor does not just apply to design discussions.  Consider the following statements, which could be used in the context of any kind of argument:

  1. His words were well crafted.
  2. His ideas gave us a new perspective.
  3. He expressed his views clearly.

Among other things these statements emphasise that arguments can be conducted in a respectful and non-confrontational manner.

Argument as cooperation

One of the features of productive arguments is the way in which participants work together to make contributions that make a coherent whole. Consider the following statements:

  1. His contribution was important.
  2. His ideas complemented mine.
  3. The discussion helped us reach a shared understanding of the issue.
  4. The discussion helped us achieve a consensus.

Although this metaphor is almost the opposite of the “argument as war,” it is not hard to see that, given the right conditions and environment, arguments can actually work this way. But even if the conditions are not right, use of words that allude to cooperation can itself have a positive effect on how the argument is viewed by participants. In this sense the metaphor we use to describe the act of arguing actually influences the way we argue.

Argument as journey

This metaphor, also from Lakoff and Johnson, draws on the similarities between a journey and a debate. Consider the following statements:

  1. He outlined his arguments step-by-step.
  2. I didn’t know where he was going with that idea.
  3. We are going around in circles.

Use of the “argument as journey” metaphor sets the tone for gradual elaboration and understanding of issues as the argument unfolds. The emphasis here is on progress, as in a journey. Note that this metaphor complements the “argument as art” and “argument as cooperation” metaphors – creating a work of art can be likened to a journey and cooperation can be viewed as a collective journey. These are examples of what Lakoff and Johnson call coherent metaphors.

Argument as quest

In my view, the “argument as quest” metaphor is particularly useful for collaborative design discussions. Consider the following statements:

  1. We explored our options.
  2. We looked for the best approach.
  3. We examined our assumptions.

This metaphor, together with the one that views argument as a cooperative process, captures the essence of what collaborative design should be.

In summary

The most common metaphor for argument is the first one – argument as war. It is no surprise, then, that arguments are generally viewed in a negative way. To see that this is so, one only has to look up synonyms for the word – some of these are: disagreement, bickering, fighting, altercation etc. Positive synonyms are harder to come by – exchange was the best I could find, but even that has a negative connotation in this context (an exchange of words rather than ideas).

In their book, Lakoff and Johnson speculate what the metaphor argument as dance might entail. Here’s what they have to say:

Imagine a culture where argument is viewed as a dance, the participants are seen as performers, and the goal is to perform in a balanced and aesthetically pleasing way. In such a culture, people would view arguments differently, experience them differently, carry them out differently and talk about them differently.

They conclude that we may not even see what people in such a culture do as “arguing”; they would simply have a different mode of discourse from our adversarial one.

Lakoff and Johnson tell us that metaphors influence the way we conceptualise and structure our experiences, attitudes and actions. In this post I have discussed how different metaphors for the term argument lead to different views of and attitudes toward the act of arguing. Now, I’m no philosopher nor am I a linguist, but it seems reasonable to me that the metaphors people use when talking about the act of arguing tell us quite a bit about the attitudes they will assume in deliberations.

In short: the metaphors we argue by matter because they influence the way we argue.

Written by K

March 22, 2011 at 10:17 pm

Capturing decision rationale on projects



Most knowledge  management efforts on projects focus on capturing what or how rather than why. That is, they focus on documenting approaches and procedures rather than the reasons behind them. This often leads to a situation where folks working on subsequent projects (or even the same project!) are left wondering why a particular technique or approach was favoured over others.  How often have you as a project manager asked yourself questions like the following when browsing through a project archive?

  1. Why did project decision-makers choose to co-develop the system rather than build it in-house or outsource it?
  2. Why did the developer responsible for a module use this approach rather than that one?

More often than not, project archives are silent on such matters because the reasoning behind decisions isn’t documented. In this post I discuss how the Issue Based Information System (IBIS) notation can be used to fill this “rationale gap” by capturing the reasoning behind project decisions.

Note: Those unfamiliar with IBIS may want to browse my post entitled what and whence of issue-based information systems for a quick introduction to the notation and its uses. I also recommend downloading and installing Compendium, a free software tool that can be used to create IBIS maps.

Example 1: Build or outsource?

In a post entitled The Approach: A dialogue mapping story, I presented a fictitious account of  how a project team member constructed an IBIS map of a project discussion (Note: dialogue mapping refers to the art of mapping conversations as they occur). The issue under discussion was the approach that should be used to build a system.

The options discussed by the team were:

  1. Build the system in-house.
  2. Outsource system development.
  3. Co-develop using a mix of external and internal staff.

Additionally, the selected approach had to satisfy the following criteria:

  1. Must be within a specified budget.
  2. Must implement all features that have been marked as top priority.
  3. Must be completed within a specified time.

The post details how the discussion was mapped in real-time.  Here I’ll simply show the final map of  the discussion (see  Figure 1).

Figure 1: IBIS map for Example 1

Although the option chosen by the group is not marked (they chose to co-develop), the figure describes the pros and cons of each approach (and elaborations of these) in a clear and easy-to-understand manner.  In other words, it maps the rationale behind the decision – a  person looking at the map can get a sense for why the team chose to co-develop rather than use any of the other approaches.

Example 2: Real-time updates of a data mart

In another post on dialogue mapping I described how IBIS was used to map a technical discussion about the best way to update selected tables in a data mart during business hours.  For readers who are unfamiliar with the term: data marts are databases that are (generally) used purely for reporting and analysis.  They are typically updated via batch processes that are run outside of normal business hours.   The requirement to do real-time updates arose from a business need to see up-to-the-minute reports at specified times during the financial year.

Again, I’ll refer the reader to the post for details, and simply present the final map (see Figure 2).

Figure 2: IBIS Map for Example 2.

Since there are a few technical terms involved, here’s a brief rundown of the options, lifted straight from my earlier post (Note: feel free to skip this detail – it is incidental to the main point of this post):

  1. Use our messaging infrastructure to carry out the update.
  2. Write database triggers on transaction tables. These triggers would update the data mart tables directly or indirectly.
  3. Write custom T-SQL procedures (or an SSIS package) to carry out the update (the database is SQL Server 2005).
  4. Run the relevant (already existing) Extract, Transform, Load (ETL) procedures at more frequent intervals – possibly several times during the day.

In this map the option chosen by the group is marked out – it was decided that no additional development was needed; the “real-time” requirement could be satisfied simply by running existing update procedures during business hours (option 4 listed above).

Once again, the reasoning behind the decision is easy to see:  the option chosen offered the simplest and quickest way to satisfy the business requirement, even though the update was not really done in real-time.


The above examples illustrate how IBIS captures the reasoning behind project decisions.  It does so  by:

  1. Making explicit all the options considered.
  2. Describing the pros and cons of each option (and elaborations thereof).
  3. Providing a means to explicitly tag an option as a decision.
  4. Optionally, providing a means to link out to external sources (documents, spreadsheets, URLs). In the second example I could have added clickable references to documents elaborating on technical detail using the external link capability of Compendium.
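A rough sense of how such a map hangs together can be conveyed in code. The sketch below models the basic IBIS node types (issues, ideas, pros and cons) and tags one idea as the decision. It is my own illustration, not Compendium’s API, and the pro/con texts are invented for the example; the real map is in Figure 1:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class NodeType(Enum):
    ISSUE = "issue"  # a question to be resolved
    IDEA = "idea"    # a candidate response to an issue
    PRO = "pro"      # an argument supporting an idea
    CON = "con"      # an argument against an idea

@dataclass
class Node:
    type: NodeType
    text: str
    is_decision: bool = False    # tags an idea as the chosen option
    link: Optional[str] = None   # optional pointer to an external document or URL
    children: List["Node"] = field(default_factory=list)

# A fragment of the build-vs-outsource discussion from Example 1
issue = Node(NodeType.ISSUE, "What approach should we use to build the system?")
codev = Node(NodeType.IDEA, "Co-develop using a mix of external and internal staff",
             is_decision=True)
codev.children.append(Node(NodeType.PRO, "Balances cost against control of key features"))
codev.children.append(Node(NodeType.CON, "Requires coordination of internal and external staff"))
issue.children.append(codev)

# The rationale query a later reader would ask: which option was chosen?
chosen = [idea.text for idea in issue.children if idea.is_decision]
print(chosen)
```

The point of the structure is that the decision is stored alongside the options that were rejected and the arguments for and against each – exactly the “why” that project archives usually lack.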

Issue maps (as IBIS maps are sometimes called)  lay out the reasoning behind decisions in a visual, easy-to-understand way.  The visual aspect is important –  see this post for more on why visual representations of reasoning are more effective than prose.

I’ve used IBIS to map discussions ranging from project approaches to mathematical model building, and have found them to be invaluable when asked questions about why things were done in a certain way. Just last week, I was able to answer a question about variables used in a  market segmentation model that I built almost two years ago – simply by referring back to the issue map of the discussion and the notes I had made in it.

In summary: IBIS provides a means to capture decision rationale in a visual and easy-to-understand way,  something that is hard to do using other means.

Pathways to folly: a brief foray into non-knowledge


One of the assumptions of managerial practice is that organisational knowledge is based on valid data. Of course, knowledge is more than just data. The steps from data to knowledge and beyond are described in the much used (and misused)  data-information-knowledge-wisdom (DIKW) hierarchy. The model organises the aforementioned elements in a “knowledge pyramid” as shown in Figure 1.  The basic idea is that data, when organised in a way that makes contextual sense, equates to information which, when understood and assimilated, leads to knowledge which then, finally, after much cogitation and reflection, may lead to wisdom.

Figure 1: Data-Information-Knowledge-Wisdom (DIKW) Pyramid

In this post, I explore “evil twins” of the DIKW framework:  hierarchical models of non-knowledge. My discussion is based on a paper by Jay Bernstein, with some  extrapolations of my own. My aim is to illustrate  (in  a not-so-serious way)  that there are many more managerial pathways to ignorance and folly than there are to knowledge and wisdom.

I’ll start with a quote from the paper.  Bernstein states that:

Looking at the way DIKW decomposes a sequence of levels surrounding knowledge invites us to wonder if an analogous sequence of stages surrounds ignorance, and where associated phenomena like credulity and misinformation fit.

Accordingly he starts his argument by noting opposites for each term in the DIKW hierarchy. These are listed in the table below:

DIKW term     Opposite(s)
Data          Incorrect data, falsehood, missing data
Information   Misinformation, disinformation, guesswork
Knowledge     Delusion, unawareness, ignorance
Wisdom        Folly

This is not an exhaustive list of antonyms – only terms that make sense in the context of an “evil twin” of DIKW are listed. Note that I have added some antonyms that Bernstein does not mention. In the remainder of this post, I will focus on the possible relationships between these opposites of the terms that appear in the DIKW model.

The first thing to note is that there is generally more than one antonym for each element of the DIKW hierarchy. Further, every antonym has a different meaning from the others. For example, the absence of data is different from incorrect data, which in turn is different from a deliberate falsehood. This is no surprise – it is simply a manifestation of the principle that there are many more ways to get things wrong than there are to get them right.

An implication of the above is that there can be more than one road to folly depending on how one gets things wrong. Before we discuss these, it is best to nail down the meanings of some of the words listed above (in the sense in which they are used in this article):

Misinformation – information that is incorrect or inaccurate

Disinformation – information that is deliberately manipulated to mislead.

Delusion – false belief.

Unawareness – the state of not being fully cognisant of the facts.

Ignorance – a lack of knowledge.

Folly – foolishness, lack of understanding or sense.

The meanings of the other words in the table are clear enough and need no elaboration.

Meanings clarified, we can now look at some of the “pyramids of folly” that can be constructed from the opposites listed in the table.

Let’s start with incorrect data. Data that is incorrect will mislead, hence resulting in misinformation. Misinformed people end up with false beliefs – i.e. they are deluded. These beliefs can cause them to make foolish decisions that betray a lack of understanding or sense. This gives us the pyramid of delusion shown in Figure 2.

Figure 2: A pyramid of delusion

Similarly, Figure 3 shows a pyramid of unawareness that arises from falsehoods and Figure 4, a pyramid of ignorance that results from missing data.

Figure 3: A pyramid of unawareness

Figure 4: A pyramid of ignorance

Figures 2 through 4 are distinct pathways to folly. I  reckon many of my readers would have seen examples of these in real life situations. (Tragically, many managers who traverse these pathways are unaware that they are doing so.  This may be a manifestation of the Dunning-Kruger effect.)

There’s more though – one can get things wrong at a higher level independently of whether or not the lower levels are done right. For example, one can draw the wrong conclusions from (correct) data. This would result in the pyramid shown in Figure 5.

Figure 5: Yet another pyramid of delusion

Finally, I should mention that it’s even worse: since we are talking about non-knowledge, anything goes. Folly needs no effort whatsoever, it can be achieved without any data, information or knowledge (or their opposites).  Indeed, one can play endlessly with  antonyms and near-antonyms of the DIKW terms (including those not listed here) and come up with a plethora of pyramids, each denoting a possible pathway to folly.
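The combinatorial point is easy to make concrete. Taking one antonym per level from the table above, each combination defines a candidate pyramid of folly; the quick sketch below (my own, not Bernstein’s) counts them:

```python
from itertools import product

# Antonyms for each DIKW level, as listed in the table above
opposites = {
    "data":        ["incorrect data", "falsehood", "missing data"],
    "information": ["misinformation", "disinformation", "guesswork"],
    "knowledge":   ["delusion", "unawareness", "ignorance"],
    "wisdom":      ["folly"],
}

# Every combination of one antonym per level is a candidate pyramid of folly
pyramids = list(product(*opposites.values()))
print(len(pyramids))  # 3 * 3 * 3 * 1 = 27 distinct pathways to folly
```

Even with this short list of antonyms there are 27 pyramids, against a single DIKW pyramid – which rather neatly quantifies the claim that the roads to folly outnumber the road to wisdom.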

Written by K

March 3, 2011 at 11:13 pm
