Eight to Late

From the coalface: an essay on the early history of sociotechnical systems


The story of sociotechnical systems began a little over half a century ago, in a somewhat unlikely setting: the coalfields of Yorkshire.

The British coal industry had just been nationalised and new mechanised mining methods were being introduced in the mines. It was thought that nationalisation would sort out the chronic labour-management issues and mechanisation would address the issue of falling productivity.

But things weren’t going as planned. In the words of Eric Trist, one of the founders of the Tavistock Institute:

…the newly nationalized industry was not doing well. Productivity failed to increase in step with increases in mechanization. Men were leaving the mines in large numbers for more attractive opportunities in the factory world. Among those who remained, absenteeism averaged 20%. Labour disputes were frequent despite improved conditions of employment. – excerpted from The evolution of Socio-technical systems – a conceptual framework and an action research program, E. Trist (1980)

Trist and his colleagues were asked by the National Coal Board to come in and help. To this end, they did a comparative study of two mines that were similar except that one had high productivity and morale whereas the other suffered from low performance and had major labour issues.

Their job was far from easy: they were not welcome at the coalface because workers associated them with management and the Board.

Trist recounts that around the time the study started, there were a number of postgraduate fellows at the Tavistock Institute. One of them, Ken Bamforth, knew the coal industry well as he had been a miner himself. Postgraduate fellows who had worked in the mines were encouraged to visit their old workplaces after a year and write up their impressions, focusing on things that had changed since they had worked there. After one such visit, Bamforth reported back with news of a workplace innovation that had occurred at a newly opened seam at Haighmoor. Among other things, morale and productivity at this particular seam were high compared to other similar ones. The team’s way of working was entirely novel, a world away from the hierarchically organised set-up that was standard in most mechanised mines at the time. In Trist’s words:

The work organization of the new seam was, to us, a novel phenomenon consisting of a set of relatively autonomous groups interchanging roles and shifts and regulating their affairs with a minimum of supervision. Cooperation between task groups was everywhere in evidence; personal commitment was obvious, absenteeism low, accidents infrequent, productivity high. The contrast was large between the atmosphere and arrangements on these faces and those in the conventional areas of the pit, where the negative features characteristic of the industry were glaringly apparent. Excerpted from the paper referenced above.

To appreciate the radical nature of practices at this seam, one needs to understand the backdrop against which they occurred. To this end, it is helpful to compare the  mechanised work practices introduced in the post-war years with the older ones from the pre-mechanised era of mining.

In the days before mines were mechanised, miners would typically organise themselves into workgroups of six miners, who would cover three work shifts in teams of two. Each miner was able to do pretty much any job at the seam and so could pick up where his work-mates from the previous shift had left off. This was necessary in order to ensure continuity of work between shifts. The group negotiated the price of their mined coal directly with management and the amount received was shared equally amongst all members of the group.

This mode of working required strong cooperation and trust within the group, of course. However, as workgroups were reorganised from time to time due to attrition or other reasons, individual miners understood the importance of maintaining their reputations as reliable and trustworthy workmates. It was important to get into a good workgroup because such groups were more likely to be assigned the more productive seams. Seams were assigned by bargaining, which was typically the job of the senior miner in the group. There was considerable competition for the best seams, but this was generally kept within the bounds of civility via informal rules and rituals.

This traditional way of working could not survive mechanisation. For one, mechanised mines encouraged specialisation because they were organised like assembly lines, with clearly defined job roles each with different responsibilities and pay scales. Moreover, workers in a shift would perform only part of the extraction process leaving those from subsequent shifts to continue where work was left off.

As miners were paid by the job they did rather than the amount of coal they produced, no single group had end-to-end responsibility for the product.   Delays due to unexpected events tended to get compounded as no one felt the need to make up time. As a result, it would often happen that work that was planned for a shift would not be completed. This meant that the next shift (which could well be composed of a group with completely different skills) could not or would not start their work because they did not see it as their job to finish the work of the earlier shift. Unsurprisingly, blame shifting and scapegoating was rife.

From a supervisor’s point of view, it was difficult to maintain the same level of oversight and control in underground mining work as was possible in an assembly line. The environment underground is simply not conducive to close supervision and is also more uncertain in that it is prone to unexpected events.  Bureaucratic organisational structures are completely unsuited to dealing with these because decision-makers are too far removed from the coalface (literally!).  This is perhaps the most important insight to come out of the Tavistock coal mining studies.

As Claudio Ciborra  puts it in his classic book on teams:

Since the production process at any seam was much more prone to disorganisation due to the uncertainty and complexity of underground conditions, any ‘bureaucratic’ allocation of jobs could be easily disrupted. Coping with emergencies and coping with coping became part of workers’ and supervisors’ everyday activities. These activities would lead to stress, conflict and low productivity because they continually clashed with the technological arrangements and the way they were planned and subdivided around them.

Thus we see that the new assembly-line, bureaucracy-inspired work organisation was totally unsuited to the work environment because there was no end-to-end responsibility, and decision making was far removed from the action. In contrast, the traditional workgroup of six was able to deal with the uncertainties and complexities of underground work because team members had a strong sense of responsibility for the performance of the team as a whole. Moreover, teams were uniquely placed to deal with unexpected events because they were actually living them as they occurred and could therefore decide on the best way to deal with them.

What Bamforth found at the Haighmoor seam was that it was possible to recapture the spirit of the old ways of working by adapting these to the larger specialised groups that were necessary in the mechanised mines. As Ciborra describes it in his book:

The new form of work organisation features forty-one men who allocate themselves to tasks and shifts. Although tasks and shifts are the same as those of the conventional mechanised system, management and supervisors do not monitor, enforce and reward single task executions. The composite group takes over some of the managerial tasks, as it had in the pre-mechanised marrow group, such as the selection of group members and the informal monitoring of work…Cycle completion, not task execution, becomes a common goal that allows for mutual learning and support…There is a basic wage and a bonus linked to the overall productivity of the group throughout the whole cycle rather than a shift. The competition between shifts that plagued the conventional mechanised method is effectively eliminated…

Bamforth and Trist’s studies at Haighmoor convinced them that there were viable (and better!) alternatives to the forms of work organisation that were typical of mid-to-late 20th century workplaces. Their work led them to the insight that the best work arrangements come out of seeking a match between the technical and social elements of the modern-day workplace, and thus was born the notion of sociotechnical systems.

Ever since the assembly-line management philosophies of Taylor and Ford, there has been an increasing trend towards division of labour, bureaucratisation and mechanisation / automation of work processes.  Despite the early work of the Tavistock school and others who followed, this trend continues to dominate management practice, arguably even more so in recent years. The Haighmoor innovation described above was one of the earliest demonstrations that there is a better way.   This message has since been echoed by many academics and thinkers,  but remains largely under-appreciated or ignored by professional managers who have little idea – or have completely forgotten – what it is like to work at the coalface.

Written by K

April 7, 2015 at 10:30 pm

TOGAF or not TOGAF… but is that the question?


“The ‘Holy Grail’ of effective collaboration is creating shared understanding, which is a precursor to shared commitment.” – Jeff Conklin.

“Without context, words and actions have no meaning at all.” – Gregory Bateson.

I spent much of last week attending a class on the TOGAF Enterprise Architecture (EA) framework.  Prior experience with  IT frameworks such as PMBOK and ITIL had taught me that much depends on the instructor – a good one can make the material come alive whereas a not-so-good one can make it an experience akin to watching grass grow. I needn’t have worried: the instructor was superb, and my classmates, all of whom are experienced IT professionals / architects, livened up the proceedings through comments and discussions both in class and outside it. All in all, it was a thoroughly enjoyable and educative experience, something I cannot say for many of the professional courses I have attended.

One of the things that struck me about TOGAF is the way in which the components of the framework hang together to make a coherent whole (see the introductory chapter of the framework for an overview). To be sure, there is a lot of detail within those components, but there is a certain abstract elegance – dare I say, beauty – to the framework.

That said, TOGAF is (almost) entirely silent on the following question, which I addressed in a post late last year:

Why is Enterprise Architecture so hard to get right?

Many answers have been offered. Here are some, extracted from articles published by IT vendors and consultancies:

  • Lack of sponsorship
  • Not engaging the business
  • Inadequate communication
  • Insensitivity to culture / policing mentality
  • Clinging to a particular tool or framework
  • Building an ivory tower
  • Wrong choice of architect

(Note: the above points are taken from this article and this one)

It is interesting that the first four issues listed are related to the fact that different stakeholders in an organization have vastly different perspectives on what an enterprise architecture initiative should achieve.  This lack of shared understanding is what makes enterprise architecture a socially complex problem rather than a technically difficult one. As Jeff Conklin points out in this article, problems that are technically complex will usually have a solution that will be acceptable to all stakeholders, whereas socially complex problems will not.  Sending a spacecraft to Mars is an example of the former whereas an organization-wide ERP  (or EA!) project or (on a global scale) climate change are instances of the latter.

Interestingly, even the fifth and sixth points in the list above – framework dogma and retreating to an ivory tower – are usually consequences of the inability to manage social complexity. Indeed, that is precisely the point made in the final item in the list: enterprise architects are usually selected for their technical skills rather than their ability to deal with ambiguities that are characteristic of social complexity.

TOGAF offers enterprise architects a wealth of tools to manage technical complexity. These need to be complemented by a suite of techniques to reconcile worldviews of different stakeholder groups.  Some examples of such techniques are Soft Systems Methodology, Polarity Management, and Dialogue Mapping. I won’t go into details of these here, but if you’re interested, please have a look at my posts entitled, The Approach – a dialogue mapping story and The dilemmas of enterprise IT for brief introductions to the latter two techniques via IT-based examples.

<Advertisement > Better yet, you could check out Chapter 9 of my book for a crash course on Soft Systems Methodology, Polarity Management and Dialogue Mapping, and the chapters thereafter for a deep dive into Dialogue Mapping </Advertisement>.

Apart from social complexity, there is the problem of context – the circumstances that shape the unique culture and features of an organization.  As I mentioned in my introductory remarks, the framework is abstract – it applies to an ideal organization in which things can be done by the book. But such an organization does not exist!  Aside from unique people-related and political issues, all organisations have their own quirks and unique features that distinguish them from other organisations, even within the same domain. Despite superficial resemblances, no two pharmaceutical companies are alike. Indeed, the differences are the whole point because they are what make a particular organization what it is. To paraphrase the words of the anthropologist, Gregory Bateson, the differences are what make a difference.

Some may argue that the framework acknowledges this and encourages, even exhorts, people to tailor the framework to their needs. Sure, the word “tailor” and its variants appear almost 700 times in version 9.1 of the standard but, once again, there is no advice offered on how this tailoring should be done. And one can well understand why: it is impossible to offer any sensible advice if one doesn’t know the specifics of the organization, which includes its context.

On a related note, the TOGAF framework acknowledges that there is a hierarchy of architectures ranging from the general (foundation) to the specific (organization). However, despite this acknowledgement of diversity, in practice TOGAF tends to focus on similarities between organisations. Most of the prescribed building blocks and processes are based on assumed commonalities between the structures and processes in different organisations. My point is that, although similarities are important, architects need to focus on differences. These could be differences between the organization they are working in and the TOGAF ideal, or even between their current organization and others that they have worked with in the past (and this is where experience comes in really handy). Cataloguing and understanding these unique features – the differences that make a difference – draws attention to precisely those issues that can cause heartburn and sleepless nights later.

I have often heard arguments along the lines of “80% of what we do follows a standard process, so it should be easy for us to standardize on a framework.” These are famous last words, because some of the 20% that is different is what makes your organization unique, and is therefore worthy of attention. You might as well accept this upfront so that you get a realistic picture of the challenges early in the game.

To sum up, frameworks like TOGAF are abstractions based on an ideal organization; they gloss over social complexity and the unique context of individual organisations.  So, questions such as the one posed in the title of this post are akin to the pseudo-choice between Coke and Pepsi, for the real issue is something else altogether. As Tom Graves tells us in his wonderful blog and book, the enterprise is a story rather than a structure, and its architecture an ongoing sociotechnical drama.

Written by K

March 17, 2015 at 8:09 pm

Three types of uncertainty you (probably) overlook


Introduction – uncertainty and decision-making

Managing uncertainty – deciding what to do in the absence of reliable information – is a significant part of project management and many other managerial roles. When put this way, it is clear that managing uncertainty is primarily a decision-making problem. Indeed, as I will discuss shortly, the main difficulties associated with decision-making are related to specific types of uncertainties that we tend to overlook.

Let’s begin by looking at the standard approach to decision-making, which goes as follows:

  1. Define the decision problem.
  2. Identify options.
  3. Develop criteria for rating options.
  4. Evaluate options against criteria.
  5. Select the top rated option.
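To make steps 3 to 5 concrete, here is a minimal sketch in code, using an invented job-switch example. The options, criteria, weights and scores are all assumptions made up for illustration; as we will see shortly, real decisions rarely reduce this cleanly.

```python
# A minimal sketch of the standard decision-making process: rate options
# against weighted criteria, then select the top-rated option. All names
# and numbers are invented for illustration.

criteria_weights = {"salary": 0.5, "commute": 0.2, "growth": 0.3}

# scores[option][criterion], each rated on a 1-10 scale
scores = {
    "stay in current job": {"salary": 6, "commute": 9, "growth": 4},
    "switch to new job":   {"salary": 8, "commute": 5, "growth": 7},
}

def weighted_score(option: str) -> float:
    """Step 4: evaluate an option against all the criteria."""
    return sum(w * scores[option][c] for c, w in criteria_weights.items())

ranked = sorted(scores, key=weighted_score, reverse=True)
for option in ranked:
    print(f"{option}: {weighted_score(option):.1f}")
print("selected:", ranked[0])  # step 5: select the top-rated option
```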

As I have pointed out in this post, the above process is too simplistic for some of the complex, multifaceted decisions that we face in life and at work (switching jobs, buying a house or starting a business venture, for example). In such cases:

  1. It may be difficult to identify all options.
  2. It is often impossible to rate options meaningfully because of information asymmetry – we know more about some options than others. For example, when choosing whether or not to switch jobs, we know more about our current situation than the new one.
  3. Even when ratings are possible, different people will rate options differently – i.e. different people invariably have different preferences for a given outcome. This makes it difficult to reach a consensus.

Regular readers of this blog will know that the points listed above are characteristics of wicked problems.  It is fair to say that in recent years, a general awareness of the ubiquity of wicked problems has led to an appreciation of the limits of classical decision theory. (That said,  it should be noted that academics have been aware of this for a long time: Horst Rittel’s classic paper on the dilemmas of planning, written in 1973, is a good example. And there are many others that predate it.)

In this post  I look into some hard-to-tackle aspects of uncertainty by focusing on the aforementioned shortcomings of classical decision theory. My discussion draws on a paper by Richard Bradley and Mareile Drechsler.

This article is organised as follows: I first present an overview of the standard approach to dealing with uncertainty and discuss its limitations. Following this, I elaborate on three types of uncertainty that are discussed in the paper.

Background – the standard view of uncertainty

The standard approach to tackling uncertainty was  articulated by Leonard Savage in his classic text, Foundations of Statistics. Savage’s approach can be summarized as follows:

  1. Figure out all possible states (outcomes).
  2. Enumerate all possible actions.
  3. Figure out the consequences of actions for all possible states.
  4. Attach a value (aka preference) to each consequence.
  5. Select the course of action that maximizes value (based on an appropriately defined measure, making sure to factor in the likelihood of achieving the desired consequence).

(Note the close parallels between this process and the standard approach to decision-making outlined earlier.)

To keep things concrete it is useful to see how this process would work in a simple real-life example. Bradley and Drechsler quote the following example from Savage’s book that does just that:

…[consider] someone who is cooking an omelet and has already broken five good eggs into a bowl, but is uncertain whether the sixth egg is good or rotten. In deciding whether to break the sixth egg into the bowl containing the first five eggs, to break it into a separate saucer, or to throw it away, the only question this agent has to grapple with is whether the last egg is good or rotten, for she knows both what the consequence of breaking the egg is in each eventuality and how desirable each consequence is. And in general it would seem that for Savage once the agent has settled the question of how probable each state of the world is, she can determine what to do simply by averaging the utilities (Note: utility is basically a mathematical expression of preference or value) of each action’s consequences by the probabilities of the states of the world in which they are realised…

In this example there are two states (egg is good, egg is rotten), three actions (break egg into bowl; break egg into separate saucer to check if it is rotten; throw egg away without checking) and three consequences (spoil all the eggs; save the eggs in the bowl, and save all the eggs if the last egg is not rotten; save the eggs in the bowl and potentially waste the last egg). The problem then boils down to figuring out our preferences for the options (in some quantitative way) and the probability of the two states. At first sight, Savage’s approach seems like a reasonable way to deal with uncertainty. However, a closer look reveals major problems.
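For concreteness, here is a minimal sketch of Savage’s recipe applied to the omelet problem. The probability and utility numbers below are invented for illustration; neither Savage nor Bradley and Drechsler supply them.

```python
# A minimal sketch of Savage's small-world recipe for the omelet example.
# The probability and utility values are assumptions, not Savage's.

p_rotten = 0.1  # assumed probability that the sixth egg is rotten

# utilities[action][state]: the value attached to each consequence
utilities = {
    "break into bowl":   {"good": 6, "rotten": 0},  # six-egg omelet, or all eggs spoilt
    "break into saucer": {"good": 5, "rotten": 4},  # bowl is safe either way; extra washing up
    "throw away":        {"good": 4, "rotten": 5},  # five-egg omelet; may waste a good egg
}

def expected_utility(action: str) -> float:
    """Average the utilities of the action's consequences over the states."""
    u = utilities[action]
    return (1 - p_rotten) * u["good"] + p_rotten * u["rotten"]

for action in utilities:
    print(f"{action}: expected utility = {expected_utility(action):.2f}")

# The final step of Savage's recipe: choose the value-maximising action.
print("chosen action:", max(utilities, key=expected_utility))
```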

Problems with the standard approach

Unlike the omelet example, in real-life situations it is often difficult to enumerate all possible states or foresee all consequences of an action. Further, even if states and consequences are known, we may not know what value to attach to them – that is, we may not be able to determine our preferences for those consequences unambiguously. Even in those situations where we can, our preferences may be subject to change – witness the not uncommon situation where lottery winners end up wishing they’d never won. The standard prescription therefore works only in situations where all states, actions and consequences are known – i.e. tame situations, as opposed to wicked ones.

Before going any further, I should mention that Savage was cognisant of the limitations of his approach. He pointed out that it works only in what he called small world situations – i.e. situations in which it is possible to enumerate and evaluate all options. As Bradley and Drechsler put it,

Savage was well aware that not all decision problems could be represented in a small world decision matrix. In Savage’s words, you are in a small world if you can “look before you leap”; that is, it is feasible to enumerate all contingencies and you know what the consequences of actions are. You are in a grand world when you must “cross the bridge when you come to it”, either because you are not sure what the possible states of the world, actions and/or consequences are…

In the following three sections I elaborate on these complications, emphasizing, once again, that many real-life situations are prone to them.

State space uncertainty

The standard view of uncertainty assumes that all possible states are given as a part of the problem definition – as in the omelet example discussed earlier.  In real life, however, this is often not the case.

Bradley and Drechsler identify two distinct cases of state space uncertainty. The first one is when we are unaware that we’re missing states and/or consequences. For example, organisations that embark on a restructuring program are so focused on the cost-related consequences that they may overlook factors such as loss of morale and/or loss of talent (and the consequent loss of productivity). The second, somewhat rarer, case is when we are aware that we might be missing something but we don’t quite know what it is. All one can do here is make appropriate contingency plans based on guesses regarding possible consequences.

Figuring out possible states and consequences is largely a matter of scenario envisioning based on knowledge and practical experience. It stands to reason that this is best done by leveraging the collective experience and wisdom of people from diverse backgrounds. This is pretty much the rationale behind collective decision-making techniques such as Dialogue Mapping.

Option uncertainty

The standard approach to tackling uncertainty assumes that the connection between actions and consequences is well defined. This is often not the case, particularly for wicked problems.  For example, as I have discussed in this post, enterprise transformation programs with well-defined and articulated objectives often end up having a host of unintended consequences. At an even more basic level, in some situations it can be difficult to identify sensible options.

Option uncertainty is a fairly common feature in real-life decisions. As Bradley and Drechsler put it:

Option uncertainty is an endemic feature of decision making, for it is rarely the case that we can predict consequences of our actions in every detail (alternatively, be sure what our options are). And although in many decision situations, it won’t matter too much what the precise consequence of each action is, in some the details will matter very much.

…and unfortunately, the cases in which the details matter are precisely those in which they are hardest to figure out – i.e. wicked problems.

Preference uncertainty

An implicit assumption in the standard approach is that once states and consequences are known, people will be able to figure out their relative preferences for these unambiguously. This assumption is incorrect, as there are at least two situations in which people will not be able to determine their preferences. Firstly, there may be a lack of factual information about one or more of the states. Secondly, even when one is able to get the required facts, it can be hard to figure out how one would value the consequences.

A common example of the aforementioned situation is the job switch dilemma. In many (most?) cases in which one is debating whether or not to switch jobs, one lacks enough factual information about the new job – for example, the new boss’ temperament, the work environment etc. Further, even if one is able to get the required information, it is impossible to know how it would be to actually work there.  Most people would have struggled with this kind of uncertainty at some point in their lives. Bradley and Drechsler term this ethical uncertainty. I prefer the term preference uncertainty, as it has more to do with preferences than ethics.

Some general remarks

The first point to note is that the three types of uncertainty noted above map exactly on to the three shortcomings of classical decision theory discussed in the introduction.  This suggests a connection between the types of uncertainty and wicked problems. Indeed, most wicked problems are exemplars of one or more of the above uncertainty types.  For example, the paradigm-defining super-wicked problem of climate change displays all three types of uncertainty.

The three types of uncertainty discussed above are overlooked by the standard approach to managing uncertainty.  This happens in a number of ways. Here are two common ones:

  1. The standard approach assumes that all uncertainties can somehow be incorporated into a single probability function describing all possible states and/or consequences. This is clearly false for state space and option uncertainty: it is impossible to define a sensible probability function when one is uncertain about the possible states and/or outcomes.
  2. The standard approach assumes that preferences for different consequences are known. This is clearly not true in the case of preference uncertainty…and even for state space and option uncertainty for that matter.

In their paper, Bradley and Drechsler arrive at these three types of uncertainty from considerations different from the ones I have used above. Their approach, while more general, is considerably more involved. Nevertheless, I would recommend that interested readers take a look at it, because the authors cover a lot of things that I have glossed over or ignored altogether.

Just as an example, they show how the aforementioned uncertainties can be reduced. There is a price to be paid, however: any reduction in uncertainty results in an increase in its severity. An example might help illustrate how this comes about. Consider a situation of state space uncertainty. One can reduce – or even remove – this by defining a catch-all state (labelled, say, “all other outcomes”). It is easy to see that although one has formally reduced state space uncertainty to zero, one has increased the severity of the uncertainty because the catch-all state is but a reflection of our ignorance and our refusal to do anything about it!
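A toy sketch of this catch-all move, with invented states and numbers, shows how mechanical the reduction is:

```python
# We can assign probabilities only to the states we have foreseen (the
# numbers are invented). Lumping the rest into "all other outcomes" makes
# the state space formally complete.

known_states = {"restructure succeeds": 0.5, "key staff leave": 0.3}

catch_all = 1.0 - sum(known_states.values())  # probability mass we cannot place
states = {**known_states, "all other outcomes": catch_all}

assert abs(sum(states.values()) - 1.0) < 1e-9  # formally, no missing states
print(states)
# State space uncertainty is now zero on paper, but "all other outcomes"
# is just a label for our ignorance - the severity has gone up, not down.
```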

There are many more implications of the above. However, I’ll point out just one more that serves to illustrate the very practical implications of these uncertainties. In a post on the shortcomings of enterprise risk management, I pointed out that the notion of an organisation-wide risk appetite is problematic because it is impossible to capture the diversity of viewpoints through such a construct. Moreover, rule- or process-based approaches to risk management tend to focus only on those uncertainties that can be quantified, or conversely they assume that all uncertainties can somehow be clumped into a single probability distribution as prescribed by the standard approach to managing uncertainty. The three types of uncertainty discussed above highlight the limitations of such an approach to enterprise risk.

Conclusion

The standard approach to managing uncertainty assumes that all possible states, actions and consequences are known or can be determined. In this post I have discussed why this is not always so.  In particular, it often happens that we do not know all possible outcomes (state space uncertainty), consequences (option uncertainty) and/or our preferences for consequences (preference or ethical uncertainty).

As I was reading the paper, I felt the authors were articulating issues that I had often felt uneasy about but chose to overlook (suppress?). Generalising from one’s own experience is always a fraught affair, but I reckon we tend to deny these uncertainties because they are inconvenient – that is, they are difficult if not impossible to deal with within the procrustean framework of the standard approach. What is needed as a corrective is a recognition that the pseudo-quantitative approach that is commonly used to manage uncertainty may not be the panacea it is claimed to be. The first step towards doing this is to acknowledge the existence of the uncertainties that we (probably) overlook.

Written by K

February 25, 2015 at 9:08 pm

Conditions over causes: towards an emergent approach to building high-performance teams


Introduction

Much of the work that goes on in organisations is done by groups of people who work together in order to achieve shared objectives. Given this, it is no surprise that researchers have expended a great deal of effort in building theories about how teams work. However, as Richard Hackman noted in this paper,  more than 70 years of research (of ever-increasing sophistication) has not resulted in a true understanding of the factors that give rise to high-performing teams.  The main reason for this failure is that:

“…groups are social systems. They redefine objective reality, they create new realities (both for their members and in their system contexts), and they evolve their own purposes and strategies for pursuing those purposes. Groups are not mere assemblies of multiple cause–effect relationships; instead, they exhibit emergent and dynamic properties that are not well captured by standard causal models.”

Hackman had a particular interest in leadership as a causal factor in team performance.  One of the things he established is that leadership matters a whole lot less than is believed…or, more correctly, it matters for reasons that are not immediately obvious. As he noted:

“…60 per cent of the difference in how well a group eventually does is determined by the quality of the condition-setting pre-work the leader does. 30 per cent is determined by how the initial launch of the group goes. And only 10 per cent is determined by what the leader does after the group is already underway with its work. This view stands in stark contrast to popular images of group leadership—the conductor waving a baton throughout a musical performance or an athletic coach shouting instructions from the sidelines during a game.”

Although the numbers quoted above can be contested, the fact is that as far as team performance is concerned, conditions matter more than the quality of leadership. In this post, I draw on Hackman’s paper as well as my work (done in collaboration with Paul Culmsee) to argue that the real work of leaders is not to lead (in the conventional sense of the word) but to create the conditions in which teams can thrive.

The fundamental attribution error

Poor performance of teams is often attributed to a failure of leadership. A common example of this is when the coach of a sporting team is fired after a below-par season. On the flip side, CxOs can earn big-dollar bonuses when their companies meet or exceed their financial targets because they are seen as being directly responsible for the result.

Attributing the blame or credit for the failure or success of a team to a specific individual is called the leadership attribution error. Hackman suggested that this error is a manifestation of a human tendency to assign greater causal priority to factors that are more visible than those that are not: leaders tend to be in the limelight more than their teams and are therefore seen as being responsible for their teams’ successes and failures.

This leader-as-hero (or villain!)  perspective has fueled major research efforts aimed at pinning down those elusive leadership skills and qualities that can magically transform teams into super-performing ensembles.  This has been accompanied by a burgeoning industry of executive training programmes to impart these “scientifically proven” skills to masses of managers. These programmes, often clothed in the doublespeak of organisation culture, are but subtle methods of control that serve to establish directive approaches to leadership. Such methods rarely (if ever) result in high-performing organisations or teams.

An alternate approach to understanding team performance

The failure to find direct causal relationships between such factors and team performance led Hackman to propose a perspective that focuses on structural conditions instead. The basic idea in this alternate approach is to focus on the organisational and social conditions that enable the team to perform well.

This notion of  conditions over causes is relevant in other related areas too. Here are a couple of examples:

  1. Innovation: Most attempts to foster innovation focus on exhorting people to be creative and/or instituting innovation training programmes (causal approach). Such approaches usually result in  innovation of an incremental kind at best.  Instead, establishing a low pressure environment that enables people to think for themselves and follow-up on their ideas without fear of failure generally meets with more success (structural approach).
  2. Collaboration: Organisations generally recognise the importance of collaboration. Yet, they often attempt to foster it in the worst possible way: via the establishment of cross-functional teams without clear mandates or goals and/or forced team-building exercises that have the opposite effect to the one intended (causal approach). The alternate approach is to simplify reporting lines, encourage open communication across departments and generally make it easy for people from different specialisations to work together in informal groups (structural approach). A particularly vexing intra-departmental separation that I have come across recently is the artificial division of responsibilities between information systems development and delivery. Such a separation results in reduced collaboration and increased finger pointing.

That said, let’s take a look at Hackman’s advice on how to create an environment conducive to teamwork.  Hackman identified the following five conditions that tend to correlate well with improved team performance:

  • The group must be a real team – i.e. it must have clear boundaries (clarity as to who is a member and who isn’t), interdependence (the performance of every individual in the team must in some way depend on others in the team) and stability (membership of the team should be stable over time).
  • Compelling direction – the team must have a goal that is clear and worth pursuing. Moreover, and this is important, the team must be allowed to determine how the goal is to be achieved – the end should be prescribed, not the means.
  • The structure must enable teamwork – the team should be structured in a way that allows members to work together. This consists of a couple of factors: 1) the team must be of the right size – as small and diverse as possible (large, homogeneous teams are found to be ineffective), and 2) there must be clear norms of conduct. Note that Hackman lists these two as separate points in his paper.
  • Supportive organizational context – the team must have the organisational resources that enable it to carry out its work: for example, access to the information it needs and to technical and subject matter experts. In addition, there should be a transparent reward system that provides recognition for good work.
  • Coaching – the team must have access to a mentor or coach who understands and has the confidence of the team. Apart from helping team members tide over difficult situations, a good coach should be able to help them navigate organizational politics and identify emerging threats and opportunities that may not be obvious to them.

To reiterate, these are structural rather than causal factors in that they do not enhance team performance directly. Instead, when present, they tend to encourage behaviours that enhance team performance and suppress those that don’t. 

Another interesting point is that some of these factors are more important than others. For example, Ruth Wageman found that team design (the constitution and structure of the team) is about four times more important than coaching in affecting the team’s ability to manage itself and forty times as powerful in affecting team performance (see this paper for details). Although the numbers should not be taken at face value, Wageman’s claim reiterates the main theme of this article: that structural factors matter more than causal ones.

The notion of a holding environment

One of the things I noticed when I first read Hackman’s approach is that it has some similarities to the one that Paul and I advocated in our book, The Heretic’s Guide to Best Practices.

The Heretic’s Guide is largely about collaborative approaches to managing (as opposed to solving!) complex problems in organisations. Our claim is that the most intractable problems in organisations are consequences of social rather than technical issues. For example, the problem of determining the “right” strategy for an organisation cannot be settled on objective grounds because the individuals involved will have diverse opinions on what the organisation’s focus should be.  The process of arriving at a consensual strategy is, therefore, more a matter of dealing with this diversity than reaching an objectively right outcome.  In other words, it is largely about achieving a common view of what the strategy should be and then building a shared commitment to executing it.

The key point is that there is no set process for achieving a shared understanding of a problem. Rather, one needs to have the right environment (structure!) in which contentious issues can be discussed openly without fear.  In our book we used the term holding environment to describe a safe space in which such open dialogue can take place.

The theory of communicative rationality formulated by the German philosopher, Juergen Habermas, outlines the norms that operate within a holding environment. It would be too long a detour to discuss Habermas’ work in any detail – see this paper or chapter 7 of our book to find out more. What is important to note is that an ideal holding environment has the following norms:

  1. Inclusion
  2. Autonomy
  3. Empathy
  4. Power neutrality
  5. Transparency

Problem is, some of these are easier to achieve than others. Inclusion, autonomy and power neutrality can be encouraged by putting in place appropriate organisational structures and rules. Empathy and transparency, however, are typically up to the individual. Nevertheless, conditions that enable the former will also encourage (though not guarantee) the latter.

In our book we discuss how such a holding environment can be approximated in multi-organisational settings such as large projects.  It would take me too far afield to get into specifics of the approach here. The point I wish to make, however, is that the notion of a holding environment is in line with Hackman’s thoughts on the importance of environmental or structural factors.

In closing

Some will argue that this article merely sets up and tears down a straw man, and that modern managers are well  aware of the pitfalls of a directive approach to leading teams. Granted, much has been written about the importance of setting the right conditions (such as autonomy)…and it is possible that many managers are aware of it too. The point I would make is that this awareness, if it exists at all, has not been translated into action often enough.  As a result, the gap between the rhetoric and reality of leadership remains as wide as ever – managers talk the talk of leadership, but do not walk it.

Perhaps this is because many (most?) managers are reluctant to let go of the reins of control when they know they will be held responsible if things were to go belly-up. The few who manage to overcome their fears know that it requires the ability to trust others, as well as the courage and integrity to absorb the blame when things go wrong (as they inevitably will from time to time). These all too rare qualities are essential for the approach described here to truly take root and flourish. In conclusion, I think it is fair to say that the biggest challenges associated with building high-performance teams are ethical rather than technical ones.

Further Reading

Don’t miss Paul Culmsee’s entertaining and informative posts on the conditions over causes approach in enterprise IT and project management.

Written by K

January 29, 2015 at 9:03 pm

Beyond entities and relationships – towards an emergent approach to data modelling


Introduction – some truths about data modelling

It has been said that data is the lifeblood of business. The aptness of this metaphor became apparent to me when I was engaged in a data mapping project some years ago. The objective of that effort was to document all the data flows within the organization. The final map showed very clearly that the volume of data on the move was a good indicator of the activity of the function: the greater the volume, the more active the function. This is akin to the case of the human body, wherein organs that expend more energy tend to have a richer network of blood vessels.

Although the above analogy is far from perfect, it serves to highlight the simple fact that most business activities involve the movement and/or processing of data. Indeed, the key function of information systems that support business activities is to operate on and transfer data. It therefore matters a great deal how data is represented and stored. This is the main concern of the discipline of data modelling.

The mainstream approach to data modelling assumes that real world objects and relationships can be accurately represented by models.  As an example, a data model representing a sales process might consist of entities such as customers and products and their relationships, such as sales (customer X purchases product Y). It is tacitly assumed that objective, bias-free models of entities and relationships of interest can be built by asking the right questions and using appropriate information collection techniques.

However, things are not quite so straightforward: as professional data modellers know, real-world data models are invariably tainted by compromises between rigour and reality. This is inevitable because the process of building a data model involves at least two different sets of stakeholders whose interests are often at odds – namely, business users and data modelling professionals. The former are less interested in the purity of the model than in the business process that it is intended to support; the interests of the latter, however, are often the opposite.

This reveals a truth about data modelling that is not fully appreciated by practitioners: it is a process of negotiation rather than a search for a true representation of business reality. In other words, it is a socio-technical problem that has wicked elements. As such, data modelling ought to be based on the principles of emergent design. In this post I explore this idea, drawing on a brilliant paper by Heinz Klein and Kalle Lyytinen entitled Towards a New Understanding of Data Modelling, as well as my own thoughts on the subject.

Background

Klein and Lyytinen begin their paper by asking four questions that are aimed at uncovering the tacit assumptions underlying the different approaches to data modelling.  The questions are:

  1. What is being modelled? This question delves into the nature of the “universe” that a data model is intended to represent.
  2. How well is the result represented? This question asks if the language, notations and symbols used to represent the results are fit for purpose – i.e. whether the language and constructs used are capable of modelling the domain.
  3. Is the result valid? This asks whether the model is a correct representation of the domain being modelled.
  4. What is the social context in which the discipline operates? This question is aimed at eliciting the views of different stakeholders regarding the model: how they will use it, whether their interests are taken into account and whether they benefit or lose from it.

It should be noted that these questions are general in that they can be used to enquire into any discipline. In the next section we use these questions to uncover the tacit assumptions underlying the mainstream view of data modelling. Following that, we propose an alternate set of assumptions that address a major gap in the mainstream view.

Deconstructing the mainstream view

What is being modelled?

As Klein and Lyytinen put it, the mainstream approach to data modelling assumes that the world is given and made up of concrete objects which have natural properties and are associated with [related to] other objects.   This assumption is rooted in a belief that it is possible to build an objectively true picture of the world around us. This is pretty much how truth is perceived in data modelling: data/information is true or valid if it describes something – a customer, an order or whatever – as it actually is.

In philosophy, such a belief is formalized in the correspondence theory of truth, a term that refers to a family of theories that trace their origins back to antiquity. According to Wikipedia:

Correspondence theories claim that true beliefs and true statements correspond to the actual state of affairs. This type of theory attempts to posit a relationship between thoughts or statements on one hand, and things or facts on the other. It is a traditional model which goes back at least to some of the classical Greek philosophers such as Socrates, Plato, and Aristotle. This class of theories holds that the truth or the falsity of a representation is determined solely by how it relates to a reality; that is, by whether it accurately describes that reality.

In short: the mainstream view of data modelling is based on the belief that the things being modelled have an objective existence.

How well is the result represented?

If data models are to represent reality (as it actually is), then one also needs an appropriate means to express that reality in its entirety. In other words, data models must be complete and consistent in that they represent the entire domain and do not contain any contradictory elements. Although this level of completeness and logical rigour is impossible in practice, much research effort is expended in finding ever more complete and logically consistent notations.

Practitioners have little patience with cumbersome notations invented by theorists, so it is no surprise that the most popular modelling notation is the simplest one: the entity-relationship (ER) approach which was first proposed by Peter Chen. The ER approach assumes that the world can be represented by entities (such as customer) with attributes (such as name), and that entities can be related to each other (for example, a customer might be located at an address – here “is located at” is a relationship between the customer and address entities).  Most commercial data modelling tools support this notation (and its extensions) in one form or another.
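For illustration, here is a minimal sketch of the customer-address example in code. The dataclasses stand in for the entity definitions a modelling tool would capture; the entity and attribute names are invented.

```python
# A minimal sketch of the ER idea: entities with attributes, and a
# relationship ("is located at") expressed as a reference between entities.

from dataclasses import dataclass

@dataclass
class Address:           # entity
    address_id: int      # identifying attribute
    street: str
    city: str

@dataclass
class Customer:          # entity
    customer_id: int
    name: str
    address_id: int      # relationship: this customer "is located at" that address

home = Address(address_id=1, street="10 High St", city="Sydney")
alice = Customer(customer_id=42, name="Alice", address_id=home.address_id)
print(alice, home)
```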

To summarise: despite the fact that the most widely used modelling notation is not based on rigorous theory, practitioners generally assume that the ER notation is an appropriate vehicle to represent  what is going on in the domain of interest.

Is the result valid?

As argued above, the mainstream approach to data modelling assumes that the world of interest has an objective existence and can be represented by a simple notation that depicts entities of interest and the relationships between them.  This leads to the question of the validity of the models thus built. To answer this we have to understand how data models are constructed.

The process of model-building involves observation, information gathering and analysis – that is, it is akin to the approach used in scientific enquiry. A great deal of attention is paid to model verification, and this is usually done via interaction with subject matter experts, users and business analysts.  To be sure, the initial model is generally incomplete, but it is assumed that it can be iteratively refined to incorporate newly surfaced facts and fix errors.  The underlying belief is that such a process gets ever-closer to the truth.

In short: it is assumed that an ER model built using a systematic and iterative process of enquiry will be a valid representation of the domain of interest.

What is the social context in which the discipline operates?

From the above, one might get the impression that data modelling involves a lot of user interaction. Although this is generally true, it is important to note that the users’ roles are restricted to providing information to data modellers. The modellers then interpret the information provided by users and cast it into a model.

This brings up an important socio-political implication of the mainstream approach: data models generally support business applications that are aimed at maintaining and enhancing managerial control through automation and / or improved oversight. Underlying this is the belief that a properly constructed data model (i.e. one that accurately represents reality) can enhance business efficiency and effectiveness within the domain represented by the model.

In brief: data models are built to further the interests of specific stakeholder groups within an organization.

Summarising the mainstream view

The detailed responses to the questions above reveal that the discipline of data modelling is based on the following assumptions:

  1. The domain of interest has an objective existence.
  2. The domain can be represented using a (more or less) logical language.
  3. The language can represent the domain of interest accurately.
  4. The resulting model is based largely on a philosophy of managerial control, and can be used to drive organizational efficiency and effectiveness.

Many (most?) data management professionals will see these assumptions as being uncontroversial. However, as we shall see next, things are not quite so simple…

Motivating an alternate view of data modelling

In an earlier section I mentioned the correspondence theory of truth, which tells us that true statements are those that correspond to the actual state of affairs in the real world. A problem with correspondence theories is that they assume that: a) there is an objective reality, and b) it is perceived in the same way by everyone. This assumption is problematic, especially for issues that have a social dimension. Such issues are perceived differently by different stakeholders, each of whom will seek data that supports their point of view. The problem is that there is no way to determine which data is “objectively right.” More to the point, in such situations the very notion of “objective rightness” can be legitimately questioned.

Another issue with correspondence theories is that a piece of data can at best be an abstraction of a real-world object or event.  This is a serious problem with correspondence theories in the context of business intelligence. For example, when a sales rep records a customer call, he or she notes down only what is required by the CRM system. Other data that may well be more important is not captured or is relegated to a “Notes” or “Comments” field that is rarely if ever searched or accessed.

Another perspective on truth is offered by the so-called consensus theory, which asserts that true statements are those that are agreed to by the relevant group of people. This is often the way “truth” is established in organisations. For example, managers may choose to calculate KPIs using certain pieces of data that are deemed to be true. The problem with this is that consensus can be achieved by means that are not necessarily democratic. For example, a KPI definition chosen by a manager may be contested by an employee. Nevertheless, the employee has to accept it because organisations are not democracies. A more significant issue is that the notion of “relevant group” is problematic because there is no clear criterion by which to define relevance. Quite naturally, this leads to conflict and ill-will.

This conclusion leads one to formulate alternative answers to the four questions posed above, thus paving the way to a new approach to data modelling.

An alternate view of data management

What is being modelled?

The discussion of the previous section suggests that data models cannot represent an objective reality because there is never any guarantee that all interested parties will agree on what that reality is.  Indeed, insofar as data models are concerned, it is more useful to view reality as being socially constructed – i.e. collectively built by all those who have a stake in it.

How is reality socially constructed? Basically it is through a process of communication in which individuals discuss their own viewpoints and agree on how differences (if any) can be reconciled. The authors note that:

…the design of an information system is not a question of fitness for an organizational reality that can be modelled beforehand, but a question of fitness for use in the construction of a [collective] organizational reality…

This is more in line with the consensus theory of truth than the correspondence theory.

In brief: the reality that data models are required to represent is socially constructed.

How well is the result represented?

Given the above, it is clear that any data model ought to be subjected to validation by all stakeholders so that they can check that it actually does represent their viewpoint. This can be difficult to achieve because most stakeholders do not have the time or inclination to validate data models in detail.

In view of the above, it is clear that the ER notation and others of its ilk can represent a truth rather than the truth – that is, they are capable of representing the world according to a particular set of stakeholders (managers or users, for example). Indeed, a data model (in whatever notation) can be thought of as one possible representation of a domain. The point is, there are as many possible representations as there are stakeholder groups; in mainstream data modelling, one of these representations “wins” while the others are completely ignored. Indeed, the alternate views generally remain undocumented, so they are invariably forgotten. This suggests that a key step in data modelling would be to capture all possible viewpoints on the domain of interest in a way that makes a sensible comparison possible. Apart from helping the group reach a consensus, such documentation is invaluable in explaining to future users and designers why the data model is the way it is. Regular readers of this blog will no doubt see that the IBIS notation and dialogue mapping could be hugely helpful in this process (see the sketch below for a flavour). It would take me too far afield to explore this point in depth here, but I will do so in a future post.
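As a taste of what such an augmented notation might record, here is a rough sketch that captures a contested modelling question using the IBIS vocabulary of issues, ideas and arguments. The node types and the example content are assumptions made up for illustration, not a specification of IBIS or dialogue mapping.

```python
# A rough sketch of capturing stakeholder viewpoints alongside a data model,
# using IBIS-style nodes: issues (questions), ideas (candidate answers) and
# arguments (pros and cons attributed to stakeholders).

from dataclasses import dataclass, field

@dataclass
class Argument:
    text: str
    supports: bool      # True = pro, False = con
    stakeholder: str    # whose viewpoint this argument represents

@dataclass
class Idea:
    text: str           # a candidate answer, e.g. a modelling choice
    arguments: list[Argument] = field(default_factory=list)

@dataclass
class Issue:
    question: str       # the contested modelling question
    ideas: list[Idea] = field(default_factory=list)

issue = Issue("Should 'customer' include prospects who have never purchased?")
issue.ideas.append(Idea(
    "Yes - model prospects as customers with zero orders",
    [Argument("Simplifies campaign reporting", True, "marketing manager"),
     Argument("Inflates customer counts in financial reports", False, "finance manager")],
))
# The map records *why* the model ended up the way it did - invaluable for
# future users and designers, as noted above.
```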

In brief:  notations used by mainstream data modellers cannot capture multiple worldviews in a systematic and useful way. These conventional data modelling languages need to be augmented by notations that are capable of incorporating diverse viewpoints.

Is the result valid?

The above discussion raises a question, though: what if two stakeholders disagree on a particular point?

One answer to this lies in Juergen Habermas' theory of communicative rationality, which I have discussed in detail in this post. I'll explain the basic idea via the following excerpt from that post:

When we participate in discussions we want our views to be taken seriously. Consequently, we present our views through statements that we hope others will see as being rational – i.e. based on sound premises and logical thought. One presumes that when someone makes a claim, he or she is willing to present arguments that will convince others of the reasonableness of the claim. Others will judge the claim based on the validity of the statements the claimant makes. When the validity claims are contested, debate ensues with the aim of getting to some kind of agreement.

The philosophy underlying such a process of discourse (which is simply another word for “debate” or “dialogue”)  is described in the theory of communicative rationality proposed by the German philosopher Jurgen Habermas.  The basic premise of communicative rationality is that rationality (or reason) is tied to social interactions and dialogue. In other words, the exercise of reason can  occur only through dialogue.  Such communication, or mutual deliberation,  ought to result in a general agreement about the issues under discussion.  Only once such agreement is achieved can there be a consensus on actions that need to be taken.  Habermas refers to the  latter  as communicative action,  i.e.  action resulting from collective deliberation…

In brief: validity is not an objective matter but a subjective – or rather, an intersubjective – one. That is, validity has to be agreed upon by all parties affected by the claim.

What is the social context in which the discipline operates?

From the above it should be clear that the alternate view of data management is radically different from the mainstream approach. The difference is particularly apparent when one looks at the way in which the alternate approach views different stakeholder groups. Recall that in the mainstream view, managerial perspectives take precedence over all others because the overriding aim of data modelling (as indeed of most enterprise IT activities) is control. Yes, I am aware that it is supposed to be about enablement, but the question is: enablement for whom? In most cases, the answer is that it enables managers to control. In contrast, from the above we see that the reality depicted in a data (or any other) model is socially constructed – that is, it is based on a consensus arising from debates on the spectrum of views that people hold. Moreover, no claim has precedence over others by virtue of authority. Different interpretations of the world have to be fused together in order to build a consensually accepted world.

The social aspect is further muddied by conflicts between managers on matters of data ownership, interpretation and access. Typically, however, such matters lie outside the purview of data modellers.

In brief: the social context in which the discipline operates is that there are a wide variety of stakeholder groups, each of which may hold different views. These must be debated and reconciled.

Summarising the alternate view

The detailed responses to the questions above reveal that the alternate view of data modelling is based on the following assumptions:

  1. The domain of interest is socially constructed.
  2. The standard representations of data models are inadequate because they cannot represent multiple viewpoints. They can (and should) be supplemented by notations that can document multiple viewpoints.
  3. A valid data model is constructed through an iterative synthesis of multiple viewpoints.
  4. The resulting model is based on a shared understanding of the socially constructed domain.

Clearly, these assumptions are diametrically opposed to those of the mainstream. Let's briefly discuss their implications for the profession.

Discussion

The most important implication of the alternate view is that a data model is but one interpretation of reality. There are many possible interpretations, and the "correctness" of any particular model hinges not on some objective truth but on an agreed, best-for-group interpretation. A consequence of this is that well-constructed data models "fuse" or "bring together" at least two different interpretations – those of users and modellers. Typically there are many different groups of users, each with their own interpretation. This being the case, the onus lies on modellers – as the ones responsible for creating models – to reconcile any differences between these groups.

A further implication of the above is that it is impossible to build a consistent enterprise-wide data model.  That said, it is possible to have a high-level strategic data model that consists, say, of entities but lacks detailed attribute-level information. Such a model can be useful because it provides a starting point for a dialogue between user groups and also serves to remind modellers of the entities they may need to consider when building a detailed data model.

The mainstream view asserts that data is gathered to establish the truth. The alternate view, however, makes us aware that data models are built in such a way as to support particular agendas. Moreover, since the people who use the data are generally not those who collect or record it, a gap between assumed and actual meaning is inevitable. Once again, this emphasises the fact that the meaning of a particular piece of data is very much in the eye of the beholder.

In closing

The mainstream approach to data modelling reflects the general belief that the methods of natural sciences can be successfully applied in the area of systems development.  Although this is a good assumption for theoretical computer science, which deals with constructs such as data structures and algorithms, it is highly questionable when it comes to systems development in large organisations. In the latter case social factors dominate, and these tend to escape any logical system. This simple fact remains under-appreciated by the data modelling community and, for that matter, much of the universe of corporate IT.

The alternate view described in this post draws attention to the social and political aspects of data modelling. Although IT vendors and executives tend to give these issues short shrift, the chronically high failure rate of   data-centric initiatives (such as those aimed at building enterprise data warehouses) warns us to pause and think.  If there is anything at all to be learnt from these failures, it is that data modelling is not a science that deals with facts and models, but an art that has more to do with negotiation and law-making.

Written by K

January 6, 2015 at 9:56 pm

The danger within: internally-generated risks in projects


Introduction

In their book Waltzing with Bears, Tom DeMarco and Timothy Lister coined the phrase "risk management is project management for adults". More than a decade on, it appears that their words have been taken seriously: risk management now occupies a prominent place in the various bodies of knowledge (BOKs), and has also become a key element of project management practice.

On the other hand, if the evidence is to be believed (the oft-quoted Chaos Report, for example), IT projects continue to fail at an alarming rate. This is curious because one would have expected that a greater focus on risk management ought to have resulted in better outcomes. So, is it possible that risk management (as it is currently preached and practiced in IT project management) cannot address certain risks…or, worse, that certain risks are simply not recognized as risks?

Some time ago, I came across a paper by Richard Barber that sheds some light on this very issue. This post elaborates on the nature and importance of such “hidden” risks by drawing on Barber’s work as well as my experiences and those of my colleagues with whom I have discussed the paper.

What are internally generated risks?

The standard approach to risk is based on the occurrence of events. Specifically, risk management is concerned with identifying potential adverse events and taking steps to  reduce either their probability of occurrence or their impact. However, as Barber points out, this is a limited view of risk because it overlooks adverse conditions that are built into the project environment. A good example of this is an organizational norm that centralizes decision making at the corporate or managerial level. Such a norm would discourage a project manager from taking appropriate action when confronted with an event that demands an on-the-spot decision.  Clearly, it is wrong-headed to attribute the risk to the event because the risk actually has its origins in the norm. In other words, it is an internally generated risk.

(Note: the notion of an internally generated risk is akin to the risk as a pathogen concept that I discussed in this post many years ago.)

Barber defines an internally generated risk as one that has its origin within the project organisation or its host, and arises from [preexisting] rules, policies, processes, structures, actions, decisions, behaviours or cultures. Some other examples of such risks include:

  • An overly bureaucratic PMO.
  • An organizational culture that discourages collaboration between teams.
  • An organizational structure that has multiple reporting lines – this is what I like to call a pseudo-matrix organization :-)

These factors are similar to those that I described in my post on the systemic causes of project failure. Indeed, I am tempted to call these systemic risks because they are related to the entire system (project + organization). However, that term has already been appropriated by the financial risk community.

Since the term is relatively new, it is important to distinguish internally generated risks from other types. This is easy to do because the latter (by definition) have their origins outside the hosting organization. A good example is the risk of a vendor not delivering a module on time or, worse, going into receivership before delivering the code.

Finally, there are certain risks that are neither internally generated nor external. For example, using a new technology is inherently risky simply because it is new. Such a risk is inherent rather than internally generated or external.

Understanding the danger within

The author of the paper surveyed nine large projects with the intent of getting some insight into the nature of internally generated risks.  The questions he attempted to address are the following:

  1. How common are these risks?
  2. How significant are they?
  3. How well are they managed?
  4. What is the relationship between an organization's ability to manage such risks and its project management maturity level (i.e. the maturity of its project management processes)?

Data was gathered through group workshops and one-on-one interviews in which the author asked a number of questions that were aimed at gaining insight into:

  1. The key difficulties that project managers encountered on the projects.
  2. What they perceived to be the main barriers to project success.

The aim of the one-on-one interviews was to allow for a more private setting in which sensitive issues (politics, dysfunctional PMOs and brain-dead rules / norms) could be freely discussed.

The data gathered was studied in detail, with the intent of identifying internally generated risks. The author describes the techniques he used to minimize subjectivity and to ensure that only significant risks were considered. I will omit these details here, and instead focus on his findings as they relate to the questions listed above.

Commonality of internally generated risks

Since organizational rules and norms are often flawed, one might expect internally generated risks to be fairly common in projects. The author found that this was indeed the case for the projects he surveyed: in his words, the smallest number of non-trivial internally generated risks identified in any of the nine projects was 15, and the highest was 30! (Note: non-trivial risks were identified by eliminating those that a wide range of stakeholders agreed were unimportant.)

Unfortunately, he does not explicitly list the most common internally-generated risks that he found, although he names a few later in the article. I suspect that experienced project managers would be able to name many more.

Significance of internally generated risks

Determining the significance of these risks is tricky because one has to figure out their probability of occurrence. The impact is much easier to get a handle on, as one has a pretty good idea of the consequences of such risks should they eventuate. (Question: what happens if there is inadequate sponsorship? Answer: the project is highly likely to fail!) The author attempted to get a qualitative handle on the probability of occurrence by asking relevant stakeholders to estimate the likelihood of each risk eventuating. Based on the responses received, he found that a large fraction of the internally-generated risks were significant (high probability of occurrence and high impact).
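To make the idea concrete, here is a minimal sketch in Python (my own illustration – Barber's paper prescribes no such formula) of how qualitative likelihood and impact ratings might be combined into a significance classification:

# Illustrative qualitative risk rating - not taken from Barber's paper
LEVELS = {"low": 1, "medium": 2, "high": 3}

def significance(likelihood: str, impact: str) -> str:
    """Combine qualitative ratings into a rough significance classification."""
    score = LEVELS[likelihood] * LEVELS[impact]
    return "significant" if score >= 6 else "minor"

# e.g. inadequate sponsorship: rated highly likely and highly damaging
print(significance("high", "high"))   # -> significant
print(significance("low", "medium"))  # -> minor

Real risk registers use finer-grained scales, but the principle – significance grows with the product of likelihood and impact – is the same.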

Management of internally generated risks

To identify whether internally generated risks are well managed, the author asked the relevant project teams to look at all the significant internal risks on their project and classify them according to whether or not they had been identified by the team prior to the research. He found that in over half the cases, less than 50% of the risks had been identified. Worse, most of the risks that had been identified were not actually managed!

The relationship between project management maturity and susceptibility to internally generated risk

Project management maturity refers to the level of adoption of  standard good practices within an organization. Conventional wisdom tells us that there should be an inverse correlation between maturity levels and susceptibility to internally generated risk –  the higher the maturity level, the lower the susceptibility.

The author assessed maturity levels by interviewing various stakeholders within the organization and also by comparing the processes used within the organization to well-known standards.  The results indicated a weak negative correlation – that is, organisations with a higher level of maturity tended to have a smaller number of internally generated risks. However, as the author admits, one cannot read much into this finding as the correlation was weak.
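As an aside, a finding of this kind typically comes from correlating the two measures across the surveyed projects. Here is a toy illustration in Python – the numbers are invented, purely to show the mechanics:

from statistics import correlation  # Pearson's r; available in Python 3.10+

# Invented data: process maturity score vs. number of significant
# internally generated risks for nine hypothetical projects
maturity   = [1, 2, 2, 3, 3, 4, 4, 5, 5]
risk_count = [22, 30, 18, 25, 20, 28, 16, 24, 19]

print(round(correlation(maturity, risk_count), 2))  # -> -0.13: negative, but weak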

Discussion

The study suggests that internally generated risks are common and significant on projects. However, the small sample size also suggests that more comprehensive surveys are needed. Nevertheless, anecdotal evidence from colleagues with whom I have discussed the paper suggests that the findings are reasonably robust. Moreover, it is also clear (both from the study and my conversations) that these risks are not well managed. There is a good reason for this: internally generated risks originate in human behavior and/or dysfunctional structures, and these are difficult topics to address in an organizational setting because people are unlikely to tell those above them in the hierarchy that they (the higher-ups) are the cause of a problem. A classic example of such a risk is estimation by decree – where a project team is simply told to get it done by a certain date. Although most project managers are aware of such risks, they are reluctant to name them, for obvious reasons.

Conclusion

I suspect most project managers who work in corporate environments will have had to grapple with internally generated risks in one form or another. Although traditional risk management does not recognize these risks as risks, seasoned project managers know from experience that people, politics or even processes can pose major problems to the smooth working of projects. However, even when recognised for what they are, these risks can be hard to tackle because they lie outside a project manager's sphere of influence. They therefore tend to become those proverbial pachyderms in the room – known to all but never discussed, let alone documented…and therein lies the danger within.

Written by K

December 10, 2014 at 7:23 pm

From information to knowledge: the what and whence of issue based information systems


Issue Based Information System (IBIS) is a notation invented by Horst Rittel and Werner Kunz in the early 1970s.  IBIS is best known for its use in dialogue mapping, a collaborative approach to tackling wicked problems (i.e. contentious issues) in organisations. It has a range of other applications as well – capturing knowledge is a good example, and I’ll have much more to say about that later in this post.

Over the last five years or so, I have written a fair bit about IBIS on this blog and in a book that I co-authored with the dialogue mapping expert Paul Culmsee. The present post reprises an article I wrote five years ago on the "what" and "whence" of the notation: its practical aspects – notation, grammar etc. – as well as its origins, advantages and limitations. My motivations for revisiting the piece are to revise and update the original discussion and, more importantly, to cover some recent developments in IBIS technology that open up interesting possibilities in the area of knowledge management.

To appreciate the power of IBIS, it is best to begin by understanding the context in which the notation was invented. I'll therefore start with a discussion of the origins of the notation, followed by an introduction to it. Finally, I'll cover its development over the last 40-odd years, focusing on the recent developments mentioned above.

Wicked origins

A good place to start is where it all started. IBIS was first described in a paper entitled Issues as Elements of Information Systems, written by Horst Rittel (the man who coined the term wicked problem) and Werner Kunz in July 1970. They state the intent behind IBIS in the very first line of the abstract of their paper:

 Issue-Based Information Systems (IBIS) are meant to support coordination and planning of political decision processes. IBIS guides the identification, structuring, and settling of issues raised by problem-solving groups, and provides information pertinent to the discourse.

Rittel’s preoccupation was the area of public policy and planning – which is also the context in which he originally defined the term wicked problem. Given this background, it is no surprise that Rittel and Kunz envisaged IBIS as the:

…type of information system meant to support the work of cooperatives like governmental or administrative agencies or committees, planning groups, etc., that are confronted with a problem complex in order to arrive at a plan for decision…

The problems tackled by such cooperatives are paradigm-defining examples of wicked problems. From the start, then, IBIS was intended as a tool to facilitate a collaborative approach to solving…or better, managing a wicked problem by helping develop a shared perspective on it.

A brief introduction to IBIS

The IBIS notation consists of the following three elements:

  1. Issues (or questions): these are the issues being debated. Typically, issues are framed as questions along the lines of “What should we do about X?” where X is the issue of interest to a group. For example, in the case of a group of executives, X might be a rapidly changing market condition, whereas for a group of IT people, X could be an ageing system that is hard to replace.
  2. Ideas (or positions): these are responses to questions. For example, one of the ideas offered by the IT group above might be to replace the said system with a newer one. Typically, the whole set of ideas that respond to an issue in a discussion represents the spectrum of participant perspectives on the issue.
  3. Arguments: these can be Pros (arguments for) or Cons (arguments against) an idea. The complete set of arguments that respond to an idea represents the multiplicity of viewpoints on it. (A minimal code sketch of these elements follows below.)
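To make this concrete, here is a minimal sketch, in Python, of how the three elements might be represented. The names are mine and purely illustrative – they are not taken from Compendium or any other IBIS tool:

from dataclasses import dataclass
from typing import Optional

# The three IBIS element types (argument nodes carry a polarity: pro or con)
ISSUE, IDEA, PRO, CON = "issue", "idea", "pro", "con"

@dataclass
class Node:
    kind: str                             # one of ISSUE, IDEA, PRO or CON
    text: str                             # a succinct label, e.g. "What should we do about X?"
    responds_to: Optional["Node"] = None  # the node this one links to (None for a root issue)

A map, then, is simply a collection of such nodes, with links constrained by the grammar described next.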

Compendium is a freeware tool that can be used to create IBIS maps – it can be downloaded here.

In Compendium, the IBIS elements described above are represented as nodes as shown in Figure 1: issues are represented by blue-green question marks; positions by yellow light bulbs; pros by green + signs and cons by red – signs.  Compendium supports a few other node types, but these are not part of the core IBIS notation. Nodes can be linked only in ways specified by the IBIS grammar as I discuss next.

 

Figure 1: IBIS elements

The IBIS grammar can be summarized in three simple rules:

  1. Issues can be raised anew or can arise from other issues, positions or arguments. In other words, any IBIS element can be questioned. In Compendium notation: a question node can connect to any other IBIS node.
  2. Ideas can only respond to questions – i.e. in Compendium, “light bulb” nodes can only link to question nodes. The arrow pointing from the idea to the question depicts the “responds to” relationship.
  3. Arguments can only be associated with ideas – i.e. in Compendium, “+” and “–” nodes can only link to “light bulb” nodes (with arrows pointing to the latter).

The legal links are summarized in Figure 2 below.

Figure 2: Legal links in IBIS

Yes, it’s as simple as that.
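Simple enough, in fact, to be expressed in a few lines of code. The following sketch (again my own illustration, building on the Node class above) encodes the three rules as a table of legal link targets:

# Legal link targets for each node type, per the three rules above
LEGAL_LINKS = {
    ISSUE: {ISSUE, IDEA, PRO, CON},  # rule 1: a question can respond to any element
    IDEA:  {ISSUE},                  # rule 2: ideas can only respond to questions
    PRO:   {IDEA},                   # rule 3: arguments attach only to ideas
    CON:   {IDEA},
}

def link_is_legal(source: Node, target: Node) -> bool:
    """Check whether a link from source to target obeys the IBIS grammar."""
    return target.kind in LEGAL_LINKS[source.kind]

# Example: an idea may respond to an issue, but a pro may not
root = Node(ISSUE, "What should we do about the ageing system?")
idea = Node(IDEA, "Replace it with a newer one", responds_to=root)
assert link_is_legal(idea, root)
assert not link_is_legal(Node(PRO, "Cheaper to maintain"), root)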

The rules are best illustrated by example – follow the links below to see some illustrations of IBIS in action:

  1. See this post for a simple example of dialogue mapping.
  2. See this post or this one for examples of argument visualisation. (Note: using IBIS to map out the structure of written arguments is called issue mapping.)
  3. See this one for an example Paul did with his children; it also features in our book.

Now that we know how IBIS works and have seen a few examples of it in action, it’s time to trace the history of the notation from its early days to the present.

Operation of early systems

When Rittel and Kunz wrote their paper, there were three IBIS-type systems in operation: two in government agencies (in the US, one presumes) and one in a university environment (quite possibly Berkeley, where Rittel worked). Although it seems quaint and old-fashioned now, it is no surprise that these were manual, paper-based systems; the effort and expense involved in computerizing such systems in the early 70s would have been prohibitive and the pay-off questionable.

The Rittel-Kunz paper introduced earlier also offers a short description of how these early IBIS systems operated:

An initially unstructured problem area or topic denotes the task named by a “trigger phrase” (“Urban Renewal in Baltimore,” “The War,” “Tax Reform”). About this topic and its subtopics a discourse develops. Issues are brought up and disputed because different positions (Rittel’s word for ideas or responses) are assumed. Arguments are constructed in defense of or against the different positions until the issue is settled by convincing the opponents or decided by a formal decision procedure. Frequently questions of fact are directed to experts or fed into a documentation system. Answers obtained can be questioned and turned into issues. Through this counterplay of questioning and arguing, the participants form and exert their judgments incessantly, developing more structured pictures of the problem and its solutions. It is not possible to separate “understanding the problem” as a phase from “information” or “solution” since every formulation of the problem is also a statement about a potential solution.

 Even today, forty years later, this is an excellent description of how IBIS is used to facilitate a common understanding of complex problems.  Moreover, the process of reaching a shared understanding (whether using IBIS or not) is one of the key ways in which knowledge is created within organizations. To foreshadow a point I will elaborate on later, using IBIS to capture the key issues, ideas and arguments, and the connections between them, results in a navigable map of the knowledge that is generated in a discussion.

Fast forward a couple of decades (and more!)

In a paper published in 1988 entitled, gIBIS: A hypertext tool for exploratory policy discussion, Conklin and Begeman describe a prototype of a graphical, hypertext-based  IBIS-type system (called gIBIS) and its use in capturing design rationale (yes, despite the title of the paper, it is more about capturing design rationale than policy discussions). The development of gIBIS represents a key step between the original Rittel-Kunz version of IBIS and its more recent version as implemented in Compendium.  Amongst other things, IBIS was finally off paper and on to disk, opening up a world of new possibilities.

gIBIS aimed to offer users:

  1. The ability to capture design rationale – the options discussed (including the ones rejected) and the discussion around the pros and cons of each.
  2. A platform for promoting computer-mediated collaborative design work – ideally in situations where participants were located at sites remote from each other.
  3. The ability to store a large amount of information and to be able to navigate through it in an intuitive way.

The gIBIS prototype proved successful enough to catalyse the development of Questmap, a commercially available software tool that supported IBIS. In a recent conversation, Jeff Conklin mentioned to me that Questmap was one of the earliest Windows-based groupware tools available on the market…and it won a best-of-show award in that category. It is interesting to note that in contrast to Questmap (which no longer exists), Compendium is a single-user desktop application.

The primary application of Questmap was in the area of sensemaking which is all about helping groups reach a collective understanding of complex situations that might otherwise lead them into tense or adversarial conditions. Indeed, that is precisely how Rittel and Kunz intended IBIS to be used.  The key advantage offered by computerized IBIS systems was that one could map dialogues in real-time, with the map representing the points raised in the conversation along with their logical connections. This proved to be a big step forward in the use of IBIS to help groups achieve a shared understanding of complex issues.

That said, although there were some notable early successes in the real-time use of IBIS in industry environments (see this paper, for example), these were not accompanied by widespread adoption of the technique. It is worth exploring the reasons for this briefly.

 The tacitness of IBIS mastery

The reasons for the lack of traction of IBIS-type techniques for real-time knowledge capture are discussed in a paper by Shum et al. entitled Hypermedia Support for Argumentation-Based Rationale: 15 Years on from gIBIS and QOC. The reasons they give are:

  1. For acceptance, any system must offer immediate value to the person who is using it. Quoting from the paper, “No designer can be expected to altruistically enter quality design rationale solely for the possible benefit of a possibly unknown person at an unknown point in the future for an unknown task. There must be immediate value.” Such immediate value is not obvious to novice users of IBIS-type systems.
  2. There is some effort involved in gaining fluency in the use of IBIS-based software tools. It is only after this that users can gain an appreciation of the value of such tools in overcoming the limitations of mapping design arguments on paper, whiteboards etc.

While the rules of IBIS are simple to grasp, the intellectual effort – or cognitive overhead – involved in using IBIS in real time includes:

  1. Teasing out issues, ideas and arguments from the dialogue.
  2. Classifying points raised into issues, ideas and arguments.
  3. Naming (or describing) the point succinctly.
  4. Relating (or linking) the point to the existing map (or anticipating how it will fit in later).
  5. Developing a sense for conversational patterns.

Expertise in these skills can only be developed through sustained practice, so it is no surprise that beginners find it hard to use IBIS to map dialogues.   Indeed, the use of IBIS for real-time conversation mapping is a tacit skill, much like riding a bike or swimming – it can only be mastered by doing.

Making sense through IBIS 

Despite the difficulties of mastering IBIS, it is easy to see that it offers considerable advantages over conventional methods of documenting discussions. Indeed, Rittel and Kunz were well aware of this. Their paper contains a nice summary of the advantages, which I paraphrase below:

  1. IBIS can bridge the gap between discussions and records of discussions (minutes, audio/video transcriptions etc.). IBIS sits between the two, acting as a short-term memory. The paper thus foreshadows the use of issue-based systems as an aid to organizational or project memory.
  2. Many elements (issues, ideas or arguments) that come up in a discussion have contextual meanings that differ from any pre-existing definitions. That is, the interpretation of points made or questions raised depends on the circumstances surrounding the discussion. Often, this contextual meaning is more important than the formal meaning. IBIS captures contextual meaning in a very clear way – for example, a question such as “What do we mean by X?” elicits the meaning of X in the context of the discussion, which is then captured as an idea (position). I’ll have much more to say about this towards the end of this article.
  3. The reasoning used in discussions is made transparent, as is the supporting (or opposing) evidence.
  4. The state of the argument (discussion) at any time can be inferred at a glance (unlike the case in written records). See this post for more on the advantages of visual documentation over prose.
  5. It often happens that the commonality of an issue with other, similar issues is more important than its precise meaning. To quote from the paper, “…the description of the subject matter in terms of librarians or documentalists (sic) may be less significant than the similarity of an issue with issues dealt with previously and the information used in their treatment…” This is less of a problem now because of search technologies. However, search technologies are still largely based on keywords rather than context. A properly structured, context-searchable IBIS-based archive would be more useful than a conventional document-based system. As I’ll discuss in the next section, the technology for this is now available.

To sum up, then: although IBIS offers a means to map out arguments, what is lacking is the ability to make these maps available and searchable across an organization.

IBIS in the enterprise

It is interesting to note that Compendium, unlike its predecessor Questmap, is a single-user desktop tool – it does not, by itself, enable the sharing of maps across the enterprise. To be sure, it is possible to work around this limitation, but the workarounds are somewhat clunky. A recent advance in IBIS technology addresses the issue rather elegantly: Seven Sigma, an Australian consultancy founded by Paul Culmsee, Chris Tomich and Peter Chow, has developed Glyma (pronounced “glimmer”), a product that makes IBIS available on collaboration platforms like Microsoft SharePoint. This is a game-changer because it enables the sharing and searching of IBIS maps across the enterprise. Moreover, as we shall see below, the implications go beyond sharing and search.

Full Disclosure: As regular readers of this blog know, Paul is a good friend, and we have jointly written a book and a few papers. However, at the time of writing, I have no commercial association with Seven Sigma. My comments below are based on playing with a beta version of the product that Paul was kind enough to give me access to, as well as some discussions I have had with him.

Figure 3: IBIS in Glyma

The look and feel of Glyma is much the same as Compendium’s (see Figure 3 above), and the keystrokes and shortcuts are quite similar. I have trialled Glyma for a few weeks and feel that the overall experience is actually a bit better than in Compendium. For example, one can navigate through a series of maps and sub-maps using a breadcrumb trail. Another example: documents and videos are embedded within the map, so one does not need to leave the map in order to view tagged media (unless, of course, one wants to see it at a higher resolution).

I won’t go into any detail about product features etc. since that kind of information is best accessed at source – i.e. the product website and documentation. Instead, I will now focus on how Glyma addresses a longstanding gap in knowledge management systems.

Revisiting the problem of context

In most organisations, knowledge artefacts (such as documents and audio-visual media) are stored in hierarchical or relational structures (for example, a folder within a directory structure or a relational database). To be sure, stored artefacts are often tagged with metadata indicating when, where and by whom they were created, but experience tells me that such metadata is not as useful as it should be.  The problem is that the context in which an artefact was created is typically not captured. Anyone who has read a document and wondered, “What on earth were the authors thinking when they wrote this?” has encountered the problem of missing context.

Context, though hard to capture, is critically important in understanding the content of a knowledge artefact. Any artefact, when accessed without an appreciation of the context in which it was created is liable to be misinterpreted or only partially understood.

 Capturing context in the enterprise

Glyma addresses the issue of context rather elegantly at the level of the enterprise. I’ll illustrate this point via an inspiring case study on the innovative use of SharePoint in education that Paul wrote about some time ago.

The case study

Here is the backstory in Paul’s words:

Earlier this year, I met Louis Zulli Jnr – a teacher out of Florida who is part of a program called the Centre of Advanced Technologies. We were co-keynoting at a conference and he came on after I had droned on about common SharePoint governance mistakes…The majority of Lou’s presentation showcased a whole bunch of SharePoint powered solutions that his students had written. The solutions were very impressive…We were treated to examples like:

  • IOS, Android and Windows Phone apps that leveraged SharePoint to display teacher’s assignments, school events and class times;
  • Silverlight based application providing a virtual tour of the campus;
  • Integration of SharePoint with Moodle (an open source learning platform)
  • An Academic Planner web application allowing students to plan their classes, submit a schedule, have them reviewed, track of the credits of the classes selected and whether a student’s selections meet graduation requirements;

All of this and more was developed by 16 to 18 year olds and all at a level of quality that I know most SharePoint consultancies would be jealous of…

Although the examples highlighted by Louis were very impressive, what Paul found more interesting were the anecdotes that Lou related about the dedication and persistence that students displayed in their work. Quoting again from Paul,

So the demos themselves were impressive enough, but that is actually not what impressed me the most. In fact, what had me hooked was not on the slide deck. It was the anecdotes that Lou told about the dedication of his students to the task and how they went about getting things done. He spoke of students working during their various school breaks to get projects completed and how they leveraged each other’s various skills and other strengths. Lou’s final slide summed his talk up brilliantly:

  • Students want to make a difference! Give them the right project and they do incredible things.
  • Make the project meaningful. Let it serve a purpose for the campus community.
  • Learn to listen. If your students have a better way, do it. If they have an idea, let them explore it.
  • Invest in success early. Make sure you have the infrastructure to guarantee uptime and have a development farm.
  • Every situation is different but there is no harm in failure. “I have not failed. I’ve found 10,000 ways that won’t work” – Thomas A. Edison

In brief: these points highlight the fact that Lou’s primary role as director of the center is to create the conditions that make it possible for students to do great work. The commercial-level quality of the work turned out by his students suggests that Lou’s knowledge of how to build high-performing teams is well worth capturing.

The question is: what’s the best way to do this (short of getting him to visit you and talk about his experiences)?

Seeing the forest for the trees

Paul recently interviewed Lou with the intent of documenting Lou’s experiences. The conversation was recorded on video and then “Glymafied” – i.e. the video was mapped using IBIS (see Figure 4 below).

Figure 4: Knowledge capture via Glyma

There are a few points worth noting here:

  1. The content of the entire conversation is mapped out so one can “see” the conversation at a glance.
  2. The context in which a particular point (i.e. the content of a node) is made is clarified by the connections between a node and its neighbours. Moving left from a node gives a higher level picture, moving right drills down into details.

Of course, the reader will have noted that these are core IBIS capabilities that are available in Compendium (or any other IBIS tool).  Glyma offers much more. Consider the following:

  1. Relevant documents or audio-visual media can be tagged to specific nodes to provide supplementary material. In this case, the video recording was tagged to the specific nodes that relate to points made in the video. Clicking on the play icon attached to such a node plays the segment in which the content of the node is discussed. This is a really nice feature, as it saves the user from having to watch the whole video (or play an extended game of ffwd-rew to get to the point of interest). Moreover, this provides additional context that cannot be (or is not) captured in the map. For example, one can attach papers, links to web pages, Slideshare presentations etc. to fill in background and context.
  2. Glyma is integrated with an enterprise content management system by design. One can therefore link map and video content to the powerful built-in search and content aggregation features of these systems. For example, users would be able to enter a search from their intranet home page and retrieve not only traditional content such as documents, but also stories, reflections and anecdotes from experts such as Lou.
  3. Another critical aspect to intranet integration is the ability to provide maps as contextual navigation. Amazon’s ability to sell books that people never intended to buy is an example of the power of such navigation. The ability to execute the kinds of queries outlined in the previous point, along with contextual information such as user profile details, previous activity on the intranet and the area of an intranet the user is browsing, makes it possible to present recommendations of nodes or maps that may be of potential interest to users. Such targeted recommendations might encourage users to explore video (and other rich media) content.

Technical Aside: An interesting under-the-hood feature of Glyma is that it uses an implementation of a hypergraph database to store maps. (Note: this is a database that can store representations of graphs (maps) in which an edge can connect more than two vertices.) Such databases can store very general graph structures, which means that Glyma can be extended to store any kind of map (Mind Maps, Concept Maps, Argument Maps or whatever)…and nodes can be shared across maps. This feature has not been developed yet, but I mention it because it offers some exciting possibilities for the future.
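To give a flavour of what a hypergraph representation makes possible, here is a minimal sketch in Python (my own illustration – it says nothing about how Glyma actually implements its database):

# A minimal hypergraph: nodes in one table; each hyperedge is a set of
# node ids and, unlike an ordinary graph edge, can connect any number of nodes.
# Here each map is modelled as one hyperedge over the nodes it contains.
nodes = {
    1: "What should our knowledge management strategy be?",
    2: "Capture expert interviews as IBIS maps",
    3: "Cheaper than formal documentation",
}

hyperedges = [
    {"label": "strategy-map",  "members": {1, 2, 3}},
    {"label": "interview-map", "members": {2, 3}},
]

def maps_containing(node_id: int) -> list:
    """Find every map (hyperedge) in which a node appears - the basis
    for sharing a single node across multiple maps."""
    return [e["label"] for e in hyperedges if node_id in e["members"]]

print(maps_containing(2))  # -> ['strategy-map', 'interview-map']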

To summarise: since Glyma stores all its data in an enterprise-class database, maps can be made available across an organization. It offers the ability to tag nodes with pretty much any kind of media (documents, audio/video clips etc.), and one can tag the specific parts of the media that are relevant to the content of a node (a snippet of a video, for example). Moreover, the sophisticated search capabilities of the platform enable context-aware search. Specifically, we can search for nodes by keywords or tags and, once a node of interest is located, we can also view the map(s) in which it appears. The map(s) inform us of the context(s) relating to the node. This ability to display the “contextual environment” of a piece of information is what makes Glyma really interesting.
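Continuing the illustrative sketch from the technical aside above, context-aware search then amounts to returning matching nodes together with the maps that give them context:

def search(keyword: str) -> list:
    """Find nodes whose text matches the keyword, and pair each hit
    with the maps in which it appears (its contextual environment)."""
    hits = [(nid, text) for nid, text in nodes.items()
            if keyword.lower() in text.lower()]
    return [(text, maps_containing(nid)) for nid, text in hits]

print(search("IBIS"))  # -> [('Capture expert interviews as IBIS maps', ['strategy-map', 'interview-map'])]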

In metaphorical terms, Glyma enables us to see the forest for the trees.

…and so, to conclude

My aim in this post has been to introduce readers to the IBIS notation and trace its history from its origins in issue mapping to recent developments in knowledge management.  The history of a technique is valuable because it gives insight into the rationale behind its creation, which leads to a better understanding of the different ways in which it can be used. Indeed, it would not be an exaggeration to say that the knowledge management applications discussed above are but an extension of Rittel’s original reasons for inventing IBIS.

I would like to close this piece with a couple of observations from my experience with IBIS:

Firstly, the real surprise for me has been that the technique can capture most written arguments and conversations, despite having only three distinct elements and a very simple grammar. Yes, it does require some thought to do this, particularly when mapping discussions in real time. However, this cognitive overhead is good because it forces the mapper to think about what’s being said instead of just writing it down blind. Thoughtful transcription is the aim of the game. When done right, this results in a map that truly reflects an understanding of a complex issue.

Secondly, although most current discussions of IBIS focus on its applications in dialogue mapping, it has a more important role to play in mapping organizational knowledge. Maps offer a powerful means to navigate the complex network of knowledge within an organisation. The (aspirational) end-goal of such an effort would be a “global” knowledge map that highlights interconnections between different kinds of knowledge that exists within an organization. To be sure, such a map will make little sense to the human eye, but powerful search capabilities could make it navigable. To the extent that this is a feasible direction, I foresee IBIS becoming an important skill in the repertoire of knowledge management professionals.
