Eight to Late

Sensemaking and Analytics for Organizations

Archive for the ‘Emergent Design’ Category

Evolution, obsolescence and enterprise architecture


Introduction

Enterprise architects are seldom (never?) given a blank canvas on which they can draw as they please. They invariably have to begin with an installed base of systems over which they have no control.  As I wrote in a piece on the legacy of legacy systems:

An often unstated (but implicit) requirement [on new systems] is that [they] must maintain continuity between the past and present. This is true even for systems that claim to represent a clean break from the past; one never has the luxury of a completely blank slate, there are always arbitrary constraints placed by legacy systems.

Indeed the system landscape of any large organization is a palimpsest, always retaining traces of what came before.  Those who actually maintain systems  – usually not architects – are painfully aware of this simple truth.

The IT landscape of an organization is therefore a snapshot, a picture that begins to age the instant it is taken. Practicing enterprise architects will say they know this “of course”, and pay due homage to it in their words…but often not in their actions. The conflicts and contradictions between legacy systems and their aspirational architectures are hard to deal with and hence easier to ignore. In this post, I draw a parallel between this central conundrum of enterprise architecture and the process of biological evolution.

A Batesonian perspective on evolution

I’ve recently been re-reading Mind and Nature: A Necessary Unity, a book that Gregory Bateson wrote towards the end of his life of eclectic scholarship. Tucked away in the appendix of the book is an essay lamenting the fragmentation of knowledge and the lack of transdisciplinary thinking within universities.  Central to the essay is the notion of obsolescence. Bateson argued that much of what was taught in universities lagged behind the practical skills and mindsets that were needed to tackle the problems of that time.  Most people would agree that this is as true today as it was in Bateson’s time, perhaps even more so.

Bateson had a very specific idea of obsolescence in mind. He suggested that the educational system is lopsided because it invariably lags behind what is needed in the “real world”. Specifically, there is a lag between the typical university curriculum and the attitudes, dispositions, knowledge and skills needed to tackle the problems of an ever-changing world. This lag is what Bateson referred to as obsolescence. Indeed, if the external world did not change there would be no lag and hence no obsolescence. As he noted:

I therefore propose to analyze the lopsided process called “obsolescence” which we might more precisely call “one-sided progress.” Clearly for obsolescence to occur there must be, in other parts of the system, other changes compared with which the obsolete is somehow lagging or left behind. In a static system, there would be no obsolescence…

This notion of obsolescence-as-lag has a direct connection with the contrasting processes of developmental and evolutionary biology. The process of development of an embryo is inherently conservative – it proceeds according to predetermined rules and is relatively robust to external stimuli. On the other hand, after birth, individuals are continually subject to a wide range of external factors (e.g. climate, stress etc.) that are unpredictable. If exposed to such factors over an extended period, they may change their characteristics in response to them (e.g. the tanning effect of sunlight, adaptability etc.).  However, these characteristics are not inheritable.  They are passed on (if at all) by the much slower process of natural selection.  As a consequence, there is a significant lag between external stimuli and the inheritability of the associated characteristics.

As Bateson puts it:

Survival depends upon two contrasting phenomena or processes, two ways of achieving adaptive action. Evolution must always, Janus-like, face in two directions: inward towards the developmental regularities and physiology of the living creature and outward towards the vagaries and demands of the environment. These two necessary components of life contrast in interesting ways: the inner development-the embryology or “epigenesis”-is conservative and demands that every new thing shall conform or be compatible with the regularities of the status quo ante. If we think of a natural selection of new features of anatomy or physiology-then it is clear that one side of this selection process will favor those new items which do not upset the old apple cart. This is minimal necessary conservatism.

In contrast, the outside world is perpetually changing and becoming ready to receive creatures which have undergone change, almost insisting upon change. No animal or plant can ever be “readymade.” The internal recipe insists upon compatibility but is never sufficient for the development and life of the organism. Always the creature itself must achieve change of its own body. It must acquire certain somatic characteristics by use, by disuse, by habit, by hardship, and by nurture. These “acquired characteristics” must, however, never be passed on to the offspring. They must not be directly incorporated into the DNA. In organisational terms, the injunction – e.g. to make babies with strong shoulders who will work better in coal mines- must be transmitted through channels, and the channel in this case is via natural external selection of those offspring who happen (thanks to the random shuffling of genes and random creation of mutations) to have a greater propensity for developing stronger shoulders under the stress of working in coal mine.

The upshot of the above is that the genetic code of any species is inherently obsolete because it is, in at least a few ways, maladapted to its environment.  This is a good thing. Sustainable and lasting change to the genome of a population should occur only through the trial-and-error process of natural selection over many generations. It is only through such a gradual process that one can be sure that a) the adaptation is necessary and b) it occurs with minimal disruption to the existing order.

…and so to enterprise architecture

In essence, the aim of enterprise architecture is to come up with a strategy and plan to move from an old system landscape to a new one. Typically, architectures are proposed based on current technology trends and extrapolations thereof. Frameworks such as The Open Group Architecture Framework (TOGAF) present a range of options for migrating from a legacy architecture.

Here’s an excerpt from Chapter 13 of the TOGAF Guide:

[The objective is to] create an overall Implementation and Migration Strategy that will guide the implementation of the Target Architecture, and structure any Transition Architectures. The first activity is to determine an overall strategic approach to implementing the solutions and/or exploiting opportunities. There are three basic approaches as follows:

  • Greenfield: A completely new implementation.
  • Revolutionary: A radical change (i.e., switch on, switch off).
  • Evolutionary: A strategy of convergence, such as parallel running or a phased approach to introduce new capabilities.

What can we say about these options in light of the discussion of the previous sections?

Firstly, from the discussion in the introduction, it is clear that Greenfield situations can be discounted on grounds of rarity alone.  So let’s look at the second option – revolutionary change – and ask if it is viable in light of the discussion of the previous section.

In the case of a particular organization, the gap between an old architecture and technology trends/extrapolations is analogous to the lag between inherited characteristics and external forces. The former resist change; the latter insist on it.  The discussion of the previous section tells us that the former cannot be wished away; they are a natural consequence of the “technology genes” embedded in the organization. Because this is so, changes are best introduced in a gradual way that permits adaptation through the slow and painful process of trial and error. This is why the revolutionary approach usually fails.

It follows from the above that the only viable approach to enterprise architecture is an evolutionary one. This process is necessarily gradual. Architects may wish for green fields and revolutions, but the reality is that lasting and sustainable change in an organisation’s technology landscape can only be achieved incrementally, akin to the way in which an aspiring marathon runner’s physiology adapts to the extreme demands of the sport.

The other, perhaps more subtle point made by this analogy is that a particular organization is but one member of a “species” which, in the present context, is a population of organisations that have a certain technology landscape. Clearly, a new style of architecture will be deemed a success only if it is adopted successfully by a significant number of organisations within this population. Equally clear is that this eventuality is improbable because new architectural paradigms are akin to random mutations. Most of these are rightly rejected by organizations, but only after exacting a high price. This explains why most technology fads tend to fade away.

Some consequences

The analogy between the evolution of biological systems and organizational technology landscapes has some interesting implications for enterprise architects. Here are a few that are worth highlighting:

  1. Enterprise architects are caught between a rock and a hard place: to demonstrate value they have to change things rapidly, but rapid changes are more likely to fail than succeed.
  2. The best chance of success lies in an evolutionary approach that accepts trial and error as a natural part of the process. The trick lies in selling that to management…and there are ways to do that.
  3. A corollary of (2) is that old and new elements of the landscape will necessarily have to coexist, often for periods much longer than one might expect. One must therefore design for coexistence. Above all, the focus here should be on the interfaces, for these are the critical elements that enable the old and the new to “talk” to each other.
  4. Enterprise architects should be skeptical of cutting-edge technologies. It is almost always better to bet on proven technologies because they have the benefit of the experience of others.
  5. One of the consequences of an evolutionary process of trial and error is that benefits (or downsides) are often not evident upfront. One must therefore always keep an eye out for these unexpected features.

Finally, it is worth pointing out that an interesting feature of all the above points is that they are consistent with the principles of emergent design.

Wrapping up

In this article I’ve attempted to highlight a connection between the evolution of organizational technology landscapes and the process of biological evolution. At the heart of both lies a fundamental tension between inherent conservatism (the tendency to preserve the status quo) and the imperative to evolve in order to adapt to changes imposed by the environment. There is no question that maintaining the status quo is never an option. The question is how to evolve in order to ensure the best chance of success. Evolution tells us that the best approach is a gradual one, via a process of trial, error and learning.

Written by K

December 16, 2015 at 7:26 am

Setting up an internal data analytics practice – some thoughts from a wayfarer


Introduction

This year has been hugely exciting so far: I’ve been exploring and playing with various techniques that fall under the general categories of data mining and text analytics. What’s been particularly satisfying is that I’ve been fortunate to find meaningful applications for these techniques within my organization.

Although I have a fair way to travel yet, I’ve learnt that common wisdom about data analytics – especially the stuff that comes from software vendors and high-end consultancies – can be misleading, even plain wrong. Hence this post in which I dispatch some myths and share a few pointers on establishing data analytics capabilities within an organization.

Busting a few myths

Let’s get right to it by taking a critical look at a few myths about setting up an internal data analytics practice.

  1. Requires high-end technology and a big budget: this myth is easy to bust because I can speak from recent experience. No, you do not need cutting-edge technology or an oversized budget.   You can get started with an outlay of $0 – yes, that’s right, for free!  All you need is the open-source statistical package R (check out this section of my article on text mining for more on installing and using R) and the willingness to roll up your sleeves and learn (more about this later).  No worries if you prefer to stick with familiar tools – you can even begin with Excel. A minimal R sketch follows this list.
  2. Needs specialist skills: another myth floating around is that you need PhD-level knowledge in statistics or applied mathematics to do practical work in analytics. Sorry, but that’s plain wrong. You do need a PhD to do research in analytics and develop your own algorithms, but not if you want to apply algorithms written by others. Yes, you will need to develop an understanding of the algorithms you plan to use, a feel for how they work and the ability to tell whether the results make sense. There are many good resources that can help you develop these skills – see, for example, the outstanding books by James, Witten, Hastie and Tibshirani and by Kuhn and Johnson.
  3. Must have sponsorship from the top: this one is possibly a little more controversial than the previous two. It could be argued that it is impossible to gain buy-in for a new capability without sponsorship from top management. However, in my experience, it is OK to start small by finding potential internal “customers” for analytics services through informal conversations with folks in different functions. I started by having informal conversations with managers in two different areas: IT infrastructure and sales / marketing.  I picked these two areas because I knew that they had several gigabytes of under-exploited data – a good bit of it unstructured – and a lot of open questions that could potentially be answered (at least partially) via methods of data and text analytics.  It turned out I was right. I’m currently doing a number of proofs of concept and small projects in both these areas.  So you don’t need sponsorship from the top as long as you can get buy-in from people who have problems they believe you can solve. If you deliver, they may even advocate your cause to their managers.
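By way of illustration of the first point, here is a minimal sketch in base R, using no paid tools and no extra packages. The file name and column names (service_calls.csv, category, date) are invented placeholders; substitute an extract from one of your own systems:

    # Load a CSV extract and take a first exploratory look (base R only)
    calls <- read.csv("service_calls.csv", stringsAsFactors = FALSE)

    str(calls)       # column names, types and sample values
    summary(calls)   # basic descriptive statistics for each column

    # How do calls break down by category?
    sort(table(calls$category), decreasing = TRUE)

    # A quick visual check of call volumes over time
    calls$date <- as.Date(calls$date)
    barplot(table(calls$date), xlab = "Date", ylab = "Number of calls")

Nothing here is sophisticated, and that is the point: a few lines of free software against an existing data extract are enough to start a useful conversation.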

A caveat is in order at this point:  my organization is not the same as yours, so you may well need to follow a different path from mine. Nevertheless, I do believe that it is always possible to find a way to start without needing permission or incurring official wrath.  In that spirit, I now offer some suggestions to help kick-start your efforts.

Getting started

As the truism goes, the hardest part of any new effort is getting started.  The first thing to keep in mind is to start small. This is true even if you have official sponsorship and a king-sized budget. It is very tempting to spend a lot of time garnering management support for investing in high-end technology.  Don’t do it!  Do the following instead:

  1. Develop an understanding of the problems faced by people you plan to approach: The best way to do this is to talk to analysts or frontline managers. In my case, I was fortunate to have access to some very savvy analysts in IT service management and marketing who gave me a slew of interesting ideas to pursue. A word of advice: it is best not to talk to senior managers until you have a few concrete results that you can quantify in terms of dollar values.
  2. Invest time and effort in understanding analytics algorithms and gaining practical experience with them: As mentioned earlier, I started with R – and I believe it is the best choice, not just because it is free but also because there are a host of packages available to tackle just about any analytics problem you might encounter.  There are some excellent free resources available to get you started with R (check out this listing on the r-statistics blog, for example). It is important that you start cutting code as you learn. This will help you build a repertoire of techniques and approaches as you progress. If you get stuck when coding, chances are you will find a solution on the wonderful stackoverflow site.
  3. Evangelise, evangelise, evangelise: You are, in effect, trying to sell an idea to people within your organization. You therefore have to identify people who might be able to help you and then convince them that your idea has merit. The best way to do the latter is to have concrete examples of problems that you have tackled. This is a chicken-and-egg situation in that you can’t have any examples until you gain support.  I got support by approaching people I know well. I found that most – no, all – of them were happy to provide me with interesting ideas and access to their data.
  4. Begin with small (but real) problems: It is important to start with the “low-hanging fruit” – the problems that would take the least effort to solve. However, it is equally important to address real problems, i.e. those that matter to someone.
  5. Leverage your organisation’s existing data infrastructure: From what I’ve written thus far, I may have given you the impression that the tools of data analytics stand separate from your existing data infrastructure. Nothing could be further from the truth. In reality, I often do the initial work (basic preprocessing and exploratory analysis) using my organisation’s relational database infrastructure. Relational databases have sophisticated analytical extensions to SQL as well as efficient bulk data cleansing and transport facilities. Using these makes good sense, particularly if your R installation is on a desktop or laptop computer, as it is in my case. Moreover, many enterprise database vendors now offer add-on options that integrate R with their products. This gives you the best of both worlds – relational and analytical capabilities on an enterprise-class platform. The sketch following this list illustrates this kind of database-plus-R workflow.
  6. Build relationships with the data management team: Remember the work you are doing falls under the ambit of the group that is officially responsible for managing data in your organization. It is therefore important that you keep them informed of what you’re doing.  Sooner or later your paths will cross, and you want to be sure that there are no nasty surprises (for either side!) at that point. Moreover, if you build connections with them early, you may even find that the data management team supports your efforts.
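To make points 2 and 5 above more concrete, here is a hedged sketch of the kind of workflow described: the database does the heavy lifting of aggregation, and R applies a standard algorithm to the summarised result. It assumes the DBI and odbc packages are installed and that an ODBC data source for your database exists; the DSN, table and column names below are hypothetical placeholders for your own environment.

    library(DBI)    # generic database interface for R
    library(odbc)   # ODBC backend (assumed available for your database)

    # Connect via an existing ODBC data source; "warehouse" is a made-up DSN
    con <- dbConnect(odbc::odbc(), dsn = "warehouse")

    # Let the database do the initial aggregation; the table and columns
    # below are placeholders for your own schema
    tickets <- dbGetQuery(con, "
      SELECT   assignment_group,
               AVG(resolution_hours) AS avg_resolution,
               COUNT(*)              AS ticket_count
      FROM     service_tickets
      GROUP BY assignment_group")
    dbDisconnect(con)

    # Apply a standard algorithm from base R to the summarised data:
    # k-means clustering of support groups by workload and resolution time
    features <- scale(tickets[, c("avg_resolution", "ticket_count")])
    set.seed(42)                        # make the clustering reproducible
    groups   <- kmeans(features, centers = 3)

    # Attach cluster labels back to the groups for inspection and discussion
    tickets$cluster <- groups$cluster
    tickets[order(tickets$cluster), ]

The division of labour matters more than the specific algorithm: the database handles volume, while R (on a humble laptop) handles the analysis of the much smaller summarised dataset.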

Having waxed verbose, I should mention that my effort is a work in progress and I do not know where it will lead. Nevertheless, I offer these suggestions as a wayfarer who is considerably further down the road from where he started.

Parting thoughts

You may have noticed that I’ve refrained from using the overused and over-hyped term “Big Data” in this piece. This is deliberate. Indeed, the techniques I have been using have nothing to do with the size of the datasets. To be honest, I’ve applied them to datasets ranging from a few thousand to a few hundred thousand records, both of which qualify as Very Small Data in today’s world.

Your vendor will be only too happy to sell you Big Data infrastructure that will set you back a good many dollars. However, the chances are good that you do not need it right now.  You’ll be much better off going back to them after you hit the limits of your current data processing infrastructure. Moreover, you’ll also be better informed about your needs then.

You may also be wondering why I haven’t said much about the composition of the analytics team (barring the point about not needing PhD statisticians) and how it should be organized.  The reason I haven’t done so is that I believe the right composition and organizational structure will emerge from the initial projects done and feedback received from internal customers. The resulting structure will be better suited to the organization than one that is imposed upfront.  Long time readers of this blog might recognize this as a tenet of emergent design.

Finally, I should reiterate that my efforts are still very much in progress and I know not where they will lead. However, even if they go nowhere, I would have learnt something about my organization and picked up a useful, practical skill. And that is good enough for me.

Written by K

September 3, 2015 at 8:28 pm

From the coalface: an essay on the early history of sociotechnical systems


The story of sociotechnical systems began a little over half a century ago, in a somewhat unlikely setting: the coalfields of Yorkshire.

The British coal industry had just been nationalised and new mechanised mining methods were being introduced in the mines. It was thought that nationalisation would sort out the chronic labour-management issues and mechanisation would address the issue of falling productivity.

But things weren’t going as planned. In the words of Eric Trist, one of the founders of the Tavistock Institute:

…the newly nationalized industry was not doing well. Productivity failed to increase in step with increases in mechanization. Men were leaving the mines in large numbers for more attractive opportunities in the factory world. Among those who remained, absenteeism averaged 20%. Labour disputes were frequent despite improved conditions of employment.   – excerpted from, The evolution of Socio-technical systems – a conceptual framework and an action research program, E. Trist (1980)

Trist and his colleagues were asked by the National Coal Board to come in and help. To this end, they did a comparative study of two mines that were similar except that one had high productivity and morale whereas the other suffered from low performance and had major labour issues.

Their job was far from easy: they were not welcome at the coalface because workers associated them with management and the Board.

Trist recounts that around the time the study started, there were a number of postgraduate fellows at the Tavistock Institute. One of them, Ken Bamforth, knew the coal industry well as he had been a miner himself.  Postgraduate fellows who had worked in the mines were encouraged to visit their old workplaces after a year and write up their impressions, focusing on things that had changed since they had worked there.   After one such visit, Bamforth reported back with news of a workplace innovation that had occurred at a newly opened seam at Haighmoor. Among other things, morale and productivity at this particular seam were high compared to other, similar seams.  The team’s way of working was entirely novel, a world away from the hierarchically organised set-up that was standard in most mechanised mines at the time. In Trist’s words:

The work organization of the new seam was, to us, a novel phenomenon consisting of a set of relatively autonomous groups interchanging roles and shifts and regulating their affairs with a minimum of supervision. Cooperation between task groups was everywhere in evidence; personal commitment was obvious, absenteeism low, accidents infrequent, productivity high. The contrast was large between the atmosphere and arrangements on these faces and those in the conventional areas of the pit, where the negative features characteristic of the industry were glaringly apparent. Excerpted from the paper referenced above.

To appreciate the radical nature of practices at this seam, one needs to understand the backdrop against which they occurred. To this end, it is helpful to compare the  mechanised work practices introduced in the post-war years with the older ones from the pre-mechanised era of mining.

In the days before mines were mechanised, miners would typically organise themselves into workgroups of six miners, who would cover three work shifts in teams of two. Each miner was able to do pretty much any job at the seam and so could pick up where his work-mates from the previous shift had left off. This was necessary in order to ensure continuity of work between shifts. The group negotiated the price of their mined coal directly with management and the amount received was shared equally amongst all members of the group.

This mode of working required strong cooperation and trust within the group, of course.  However, as workgroups were reorganised from time to time due to attrition or other reasons, individual miners understood the importance of maintaining their reputations as reliable and trustworthy workmates. It was important to get into a good workgroup because such groups were more likely to get more productive seams to work on. Seams were assigned by bargaining, which was typically the job of the senior miner in the group. There was considerable competition for the best seams, but this was generally kept within the bounds of civility via informal rules and rituals.

This traditional way of working could not survive mechanisation. For one, mechanised mines encouraged specialisation because they were organised like assembly lines, with clearly defined job roles each with different responsibilities and pay scales. Moreover, workers in a shift would perform only part of the extraction process leaving those from subsequent shifts to continue where work was left off.

As miners were paid by the job they did rather than the amount of coal they produced, no single group had end-to-end responsibility for the product.   Delays due to unexpected events tended to get compounded as no one felt the need to make up time. As a result, it would often happen that work that was planned for a shift would not be completed. This meant that the next shift (which could well be composed of a group with completely different skills) could not or would not start their work because they did not see it as their job to finish the work of the earlier shift. Unsurprisingly, blame shifting and scapegoating was rife.

From a supervisor’s point of view, it was difficult to maintain the same level of oversight and control in underground mining work as was possible in an assembly line. The environment underground is simply not conducive to close supervision and is also more uncertain in that it is prone to unexpected events.  Bureaucratic organisational structures are completely unsuited to dealing with these because decision-makers are too far removed from the coalface (literally!).  This is perhaps the most important insight to come out of the Tavistock coal mining studies.

As Claudio Ciborra  puts it in his classic book on teams:

Since the production process at any seam was much more prone to disorganisation due to the uncertainty and complexity of underground conditions, any ‘bureaucratic’ allocation of jobs could be easily disrupted. Coping with emergencies and coping with coping became part of workers’ and supervisors’ everyday activities. These activities would lead to stress, conflict and low productivity because they continually clashed with the technological arrangements and the way they were planned and subdivided around them.

Thus we see that the new bureaucratic, assembly-line-inspired work organisation was totally unsuited to the work environment because there was no end-to-end responsibility and decision making was far removed from the action. In contrast, the traditional workgroup of six was able to deal with the uncertainties and complexities of underground work because team members had a strong sense of responsibility for the performance of the team as a whole. Moreover, teams were uniquely placed to deal with unexpected events because they were actually living them as they occurred and could therefore decide on the best way to deal with them.

What Bamforth found at the Haighmoor seam was that it was possible to recapture the spirit of the old ways of working by adapting these to the larger specialised groups that were necessary in the mechanised mines. As Ciborra describes it in his book:

The new form of work organisation features forty-one men who allocate themselves to tasks and shifts. Although tasks and shifts are those of the conventional mechanised system, management and supervisors do not monitor, enforce and reward single task executions. The composite group takes over some of the managerial tasks, as it had in the pre-mechanised marrow group, such as the selection of group members and the informal monitoring of work…Cycle completion, not task execution, becomes a common goal that allows for mutual learning and support…There is a basic wage and a bonus linked to the overall productivity of the group throughout the whole cycle rather than a shift.  The competition between shifts that plagued the conventional mechanised method is effectively eliminated…

Bamforth and Trist’s studies of Haighmoor convinced them that there were viable (and better!) alternatives to the forms of work organisation that were typical of mid to late 20th century workplaces.  Their work led them to the insight that the best work arrangements come out of seeking a match between the technical and social elements of the modern day workplace, and thus was born the notion of sociotechnical systems.

Ever since the assembly-line management philosophies of Taylor and Ford, there has been an increasing trend towards division of labour, bureaucratisation and mechanisation / automation of work processes.  Despite the early work of the Tavistock school and others who followed, this trend continues to dominate management practice, arguably even more so in recent years. The Haighmoor innovation described above was one of the earliest demonstrations that there is a better way.   This message has since been echoed by many academics and thinkers,  but remains largely under-appreciated or ignored by professional managers who have little idea – or have completely forgotten – what it is like to work at the coalface.

Written by K

April 7, 2015 at 10:30 pm

Conditions over causes: towards an emergent approach to building high-performance teams


Introduction

Much of the work that goes on in organisations is done by groups of people who work together in order to achieve shared objectives. Given this, it is no surprise that researchers have expended a great deal of effort in building theories about how teams work. However, as Richard Hackman noted in this paper,  more than 70 years of research (of ever-increasing sophistication) has not resulted in a true understanding of the factors that give rise to high-performing teams.  The main reason for this failure is that:

“…groups are social systems. They redefine objective reality, they create new realities (both for their members and in their system contexts), and they evolve their own purposes and strategies for pursuing those purposes. Groups are not mere assemblies of multiple cause–effect relationships; instead, they exhibit emergent and dynamic properties that are not well captured by standard causal models.”

Hackman had a particular interest in leadership as a causal factor in team performance.  One of the things he established is that leadership matters a whole lot less than is believed…or, more correctly, it matters for reasons that are not immediately obvious. As he noted:

“…60 per cent of the difference in how well a group eventually does is determined by the quality of the condition-setting pre-work the leader does. 30 per cent is determined by how the initial launch of the group goes. And only 10 per cent is determined by what the leader does after the group is already underway with its work. This view stands in stark contrast to popular images of group leadership—the conductor waving a baton throughout a musical performance or an athletic coach shouting instructions from the sidelines during a game.”

Although the numbers quoted above can be contested, the fact is that as far as team performance is concerned, conditions matter more than the quality of leadership. In this post, I draw on Hackman’s paper as well as my work (done in collaboration with Paul Culmsee) to argue that the real work of leaders is not to lead (in the conventional sense of the word) but to create the conditions in which teams can thrive.

The fundamental attribution error

Poor performance of teams is often attributed to a failure of leadership. A common example of this is when the coach of a sporting team is fired after a below-par season. On the flip side, CxOs can earn big dollar bonuses when their companies make or exceed their financial targets because they are seen as being directly responsible for the result.

Attributing the blame or credit for the failure or success of a team to a specific individual is called the leadership attribution error. Hackman suggested that this error is a manifestation of a human tendency to assign greater causal priority to factors that are more visible than those that are not: leaders tend to be in the limelight more than their teams and are therefore seen as being responsible for their teams’ successes and failures.

This leader-as-hero (or villain!)  perspective has fueled major research efforts aimed at pinning down those elusive leadership skills and qualities that can magically transform teams into super-performing ensembles.  This has been accompanied by a burgeoning industry of executive training programmes to impart these “scientifically proven” skills to masses of managers. These programmes, often clothed in the doublespeak of organisation culture, are but subtle methods of control that serve to establish directive approaches to leadership. Such methods rarely (if ever) result in high-performing organisations or teams.

An alternate approach to understanding team performance

The failure to find direct causal relationships between such factors and team performance led Hackman to propose a perspective that focuses on structural conditions instead. The basic idea in this alternate approach is to focus on the organisational and social conditions that enable the team to perform well.

This notion of  conditions over causes is relevant in other related areas too. Here are a couple of examples:

  1. Innovation: Most attempts to foster innovation focus on exhorting people to be creative and/or instituting innovation training programmes (causal approach). Such approaches usually result in innovation of an incremental kind at best.  Instead, establishing a low-pressure environment that enables people to think for themselves and follow up on their ideas without fear of failure generally meets with more success (structural approach).
  2. Collaboration: Organisations generally recognise the importance of collaboration. Yet, they attempt to foster it in the worst possible way: via the establishment of cross-functional teams without clear mandates or goals and/or forced team-building exercises that have the opposite effect to the one intended (causal approach).  The alternate approach is to simplify reporting lines, encourage open communication across departments and generally make it easy for people from different specialisations to work together in informal groups (structural approach). A particularly vexing intra-departmental separation that I have come across recently is the artificial division of responsibilities between information systems development and delivery. Such a separation results in reduced collaboration and increased finger-pointing.

That said, let’s take a look at Hackman’s advice on how to create an environment conducive to teamwork.  Hackman identified the following five conditions that tend to correlate well with improved team performance:

  • The group must be a real team – i.e. it must have clear boundaries (clarity as to who is a member and who isn’t), interdependence (the performance of every individual in the team must in some way depend on others in the team) and stability (membership of the team should be stable over time).
  • Compelling direction – the team must have a goal that is clear and worth pursuing. Moreover, and this is important, the team must be allowed to determine how the goal is to be achieved – the end should be prescribed, not the means.
  • The structure must enable teamwork – the team should be structured in a way that allows members to work together. This consists of a couple of factors: 1) the team must be of the right size – as small and diverse as possible (large, homogenous teams are found to be ineffective), and 2) there must be clear norms of conduct. Note that Hackman lists these two as separate points in his paper.
  • Supportive organizational context – the team must have the organisational resources that enable it to carry out its work. For example, access to the information needed for the team to carry out its work and access to technical and subject matter experts.  In addition, there should be a transparent reward system that provides recognition for good work.
  • Coaching – the team must have access to a mentor or coach who understands and has the confidence of the team. Apart from helping team members tide over difficult situations, a good coach should be able to help them navigate organizational politics and identify emerging threats and opportunities that may not be obvious to them.

To reiterate, these are structural rather than causal factors in that they do not enhance team performance directly. Instead, when present, they tend to encourage behaviours that enhance team performance and suppress those that don’t. 

Another interesting point is that some of these factors are more important than others. For example, Ruth Wageman found that team design (the constitution and structure of the team) is about four times more important than coaching in affecting the team’s ability to manage itself and forty times as powerful in affecting team performance (see this paper for details). Although the numbers should not be taken at face value, Wageman’s claim reiterates the main theme of this article: that structural factors matter more than causal ones.

The notion of a holding environment

One of the things I noticed when I first read Hackman’s approach is that it has some similarities to the one that Paul and I advocated in our book, The Heretic’s Guide to Best Practices.

The Heretic’s Guide is largely about collaborative approaches to managing (as opposed to solving!) complex problems in organisations. Our claim is that the most intractable problems in organisations are consequences of social rather than technical issues. For example, the problem of determining the “right” strategy for an organisation cannot be settled on objective grounds because the individuals involved will have diverse opinions on what the organisation’s focus should be.  The process of arriving at a consensual strategy is, therefore, more a matter of dealing with this diversity than reaching an objectively right outcome.  In other words, it is largely about achieving a common view of what the strategy should be and then building a shared commitment to executing it.

The key point is that there is no set process for achieving a shared understanding of a problem. Rather, one needs to have the right environment (structure!) in which contentious issues can be discussed openly without fear.  In our book we used the term holding environment to describe a safe space in which such open dialogue can take place.

The theory of communicative rationality formulated by the German philosopher, Juergen Habermas, outlines the norms that operate within a holding environment. It would be too long a detour to discuss Habermas’ work in any detail – see this paper or chapter 7 of our book to find out more. What is important to note is that an ideal holding environment has the following norms:

  1. Inclusion
  2. Autonomy
  3. Empathy
  4. Power neutrality
  5. Transparency

Problem is, some of these are easier to achieve than others. Inclusion, autonomy and power neutrality can be encouraged by putting in place appropriate organisational structures and rules. Empathy and transparency, however, are typically up to the individual. Nevertheless, conditions that enable the former will also encourage (though not guarantee) the latter.

In our book we discuss how such a holding environment can be approximated in multi-organisational settings such as large projects.  It would take me too far afield to get into specifics of the approach here. The point I wish to make, however, is that the notion of a holding environment is in line with Hackman’s thoughts on the importance of environmental or structural factors.

In closing

Some will argue that this article merely sets up and tears down a straw man, and that modern managers are well  aware of the pitfalls of a directive approach to leading teams. Granted, much has been written about the importance of setting the right conditions (such as autonomy)…and it is possible that many managers are aware of it too. The point I would make is that this awareness, if it exists at all, has not been translated into action often enough.  As a result, the gap between the rhetoric and reality of leadership remains as wide as ever – managers talk the talk of leadership, but do not walk it.

Perhaps this is because many (most?) managers are reluctant to let go of the reins of control when they know they will be held responsible if things go belly-up.  The few who manage to overcome their fears know that it requires the ability to trust others, as well as the courage and integrity to absorb the blame when things go wrong (as they inevitably will from time to time). These all too rare qualities are essential for the approach described here to truly take root and flourish.  In conclusion, I think it is fair to say that the biggest challenges associated with building high-performance teams are ethical rather than technical ones.

Further Reading

Don’t miss Paul Culmsee’s entertaining and informative posts on the conditions over causes approach in enterprise IT and project management.

Written by K

January 29, 2015 at 9:03 pm

Beyond entities and relationships – towards an emergent approach to data modelling


Introduction – some truths about data modelling

It has been said that data is the lifeblood of business.  The aptness of this metaphor became apparent to me when I was engaged in a data mapping project some years ago. The objective of that effort was to document all the data flows within the organization.   The final map showed very clearly that the volume of data on the move was a good indicator of the activity of the function: the greater the volume, the more active the function. This is akin to the human body, wherein organs that expend more energy tend to have a richer network of blood vessels.

Although the above analogy is far from perfect, it serves to highlight the simple fact that most business activities involve the movement and/or processing of data.  Indeed, the key function of information systems that support business activities is to operate on and transfer data.   It therefore matters a great deal how data is represented and stored. This is the main concern of the discipline of data modelling.

The mainstream approach to data modelling assumes that real world objects and relationships can be accurately represented by models.  As an example, a data model representing a sales process might consist of entities such as customers and products and their relationships, such as sales (customer X purchases product Y). It is tacitly assumed that objective, bias-free models of entities and relationships of interest can be built by asking the right questions and using appropriate information collection techniques.

However, things are not quite so straightforward: as professional data modellers know, real-world data models are invariably tainted by compromises between rigour and reality.  This is inevitable because the process of building a data model involves at least two different sets of stakeholders whose interests are often at odds – namely, business users and data modelling professionals. The former are less interested in the purity of the model than in the business process that it is intended to support; the interests of the latter, however, are often the opposite.

This reveals a truth about data modelling that is not fully appreciated by practitioners: that it is a process of negotiation rather than a search for a true representation of business reality. In other words, it is a socio-technical problem that has wicked elements.  As such then, data modelling ought to be based on the principles of emergent design. In this post I explore this idea drawing on a brilliant paper by Heinz Klein and Kalle Lyytinen entitled, Towards a New Understanding of Data Modelling as well as my own thoughts on the subject.

Background

Klein and Lyytinen begin their paper by asking four questions that are aimed at uncovering the tacit assumptions underlying the different approaches to data modelling.  The questions are:

  1. What is being modelled? This question delves into the nature of the “universe” that a data model is intended to represent.
  2. How well is the result represented? This question asks if the language, notations and symbols used to represent the results are fit for purpose – i.e. whether the language and constructs used are capable of modelling the domain.
  3. Is the result valid? This asks whether the model is a correct representation of the domain that is being modelled.
  4. What is the social context in which the discipline operates? This question is aimed at eliciting the views of different stakeholders regarding the model: how they will use it, whether their interests are taken into account and whether they benefit or lose from it.

It should be noted that these questions are general in that they can be used to enquire into any discipline. In the next section we use these questions to uncover the tacit assumptions underlying the mainstream view of data modelling. Following that, we propose an alternate set of assumptions that address a major gap in the mainstream view.

Deconstructing the mainstream view

What is being modelled?

As Klein and Lyytinen put it, the mainstream approach to data modelling assumes that the world is given and made up of concrete objects which have natural properties and are associated with [related to] other objects.   This assumption is rooted in a belief that it is possible to build an objectively true picture of the world around us. This is pretty much how truth is perceived in data modelling: data/information is true or valid if it describes something – a customer, an order or whatever – as it actually is.

In philosophy, such a belief is formalized in the correspondence theory of truth, a term that refers to a family of theories that trace their origins back to antiquity. According to Wikipedia:

Correspondence theories claim that true beliefs and true statements correspond to the actual state of affairs. This type of theory attempts to posit a relationship between thoughts or statements on one hand, and things or facts on the other. It is a traditional model which goes back at least to some of the classical Greek philosophers such as Socrates, Plato, and Aristotle. This class of theories holds that the truth or the falsity of a representation is determined solely by how it relates to a reality; that is, by whether it accurately describes that reality.

In short: the mainstream view of data modelling is based on the belief that the things being modelled have an objective existence.

How well is the result represented?

If data models are to represent reality (as it actually is), then one also needs an appropriate means to express that reality in its entirety. In other words, data models must be complete and consistent in that they represent the entire domain and do not contain any contradictory elements. Although this level of completeness and logical rigour is impossible in practice, much research effort is expended in finding ever more complete and logically consistent notations.

Practitioners have little patience with cumbersome notations invented by theorists, so it is no surprise that the most popular modelling notation is the simplest one: the entity-relationship (ER) approach which was first proposed by Peter Chen. The ER approach assumes that the world can be represented by entities (such as customer) with attributes (such as name), and that entities can be related to each other (for example, a customer might be located at an address – here “is located at” is a relationship between the customer and address entities).  Most commercial data modelling tools support this notation (and its extensions) in one form or another.
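To ground the notation, here is a small sketch of how the customer/address example above typically ends up being realised in a relational database, expressed in R using the DBI and RSQLite packages (the sample data is invented purely for illustration): each entity becomes a table, attributes become columns, and the “is located at” relationship becomes a foreign key that queries traverse via a join.

    library(DBI)
    library(RSQLite)

    con <- dbConnect(RSQLite::SQLite(), ":memory:")   # throwaway in-memory database

    # Entities become tables; attributes become columns
    dbExecute(con, "CREATE TABLE address  (address_id  INTEGER PRIMARY KEY,
                                           street TEXT, city TEXT)")
    dbExecute(con, "CREATE TABLE customer (customer_id INTEGER PRIMARY KEY,
                                           name TEXT,
                                           address_id  INTEGER REFERENCES address)")

    # Invented sample data, purely for illustration
    dbExecute(con, "INSERT INTO address  VALUES (1, '12 High St', 'Sydney')")
    dbExecute(con, "INSERT INTO customer VALUES (100, 'Acme Pty Ltd', 1)")

    # The 'is located at' relationship is traversed via a join
    dbGetQuery(con, "SELECT c.name, a.street, a.city
                     FROM   customer c
                     JOIN   address  a ON a.address_id = c.address_id")

    dbDisconnect(con)

Note that the sketch already embodies one particular reading of the domain – a point that becomes important in the alternate view discussed later in this post.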

To summarise: despite the fact that the most widely used modelling notation is not based on rigorous theory, practitioners generally assume that the ER notation is an appropriate vehicle to represent  what is going on in the domain of interest.

Is the result valid?

As argued above, the mainstream approach to data modelling assumes that the world of interest has an objective existence and can be represented by a simple notation that depicts entities of interest and the relationships between them.  This leads to the question of the validity of the models thus built. To answer this we have to understand how data models are constructed.

The process of model-building involves observation, information gathering and analysis – that is, it is akin to the approach used in scientific enquiry. A great deal of attention is paid to model verification, and this is usually done via interaction with subject matter experts, users and business analysts.  To be sure, the initial model is generally incomplete, but it is assumed that it can be iteratively refined to incorporate newly surfaced facts and fix errors.  The underlying belief is that such a process gets ever-closer to the truth.

In short: it is assumed that an ER model built using a systematic and iterative process of enquiry will be a valid representation of the domain of interest.

What is the social context in which the discipline operates?

From the above, one might get the impression that data modelling involves a lot of user interaction. Although this is generally true, it is important to note that the users’ roles are restricted to providing information to data modellers. The modellers then interpret the information provided by users and cast it into a model.

This brings up an important socio-political implication of the mainstream approach: data models generally support business applications that are aimed at maintaining and enhancing managerial control through automation and / or improved oversight. Underlying this is the belief that a properly constructed data model (i.e. one that accurately represents reality) can enhance business efficiency and effectiveness within the domain represented by the model.

In brief: data models are built to further the interests of specific stakeholder groups within an organization.

Summarising the mainstream view

The detailed responses to the questions above reveal that the discipline of data modelling is based on the following assumptions:

  1. The domain of interest has an objective existence.
  2. The domain can be represented using a (more or less) logical language.
  3. The language can represent the domain of interest accurately.
  4. The resulting model is based largely on a philosophy of managerial control, and can be used to drive organizational efficiency and effectiveness.

Many (most?) data management professionals will see these assumptions as being uncontroversial.  However, as we shall see next, things are not quite so simple…

Motivating an alternate view of data modelling

In an earlier section I mentioned the correspondence theory of truth, which tells us that true statements are those that correspond to the actual state of affairs in the real world.   A problem with correspondence theories is that they assume that: a) there is an objective reality, and b) it is perceived in the same way by everyone. These assumptions are problematic, especially for issues that have a social dimension. Such issues are perceived differently by different stakeholders, each of whom will seek data that supports their point of view. The problem is that there is no way to determine which data is “objectively right.” More to the point, in such situations the very notion of “objective rightness” can be legitimately questioned.

Another issue with correspondence theories is that a piece of data can at best be an abstraction of a real-world object or event.  This is a serious problem with correspondence theories in the context of business intelligence. For example, when a sales rep records a customer call, he or she notes down only what is required by the CRM system. Other data that may well be more important is not captured or is relegated to a “Notes” or “Comments” field that is rarely if ever searched or accessed.

Another perspective on truth is offered by the so-called consensus theory, which asserts that true statements are those that are agreed to by the relevant group of people. This is often the way “truth” is established in organisations. For example, managers may choose to calculate KPIs using certain pieces of data that are deemed to be true.  The problem with this is that consensus can be achieved by means that are not necessarily democratic. For example, a KPI definition chosen by a manager may be contested by an employee.  Nevertheless, the employee has to accept it because organisations are not democracies. A more significant issue is that the notion of “relevant group” is problematic because there is no clear criterion by which to define relevance.  Quite naturally, this leads to conflict and ill-will.

This conclusion leads one to formulate alternative answers to the four questions posed above, thus paving the way to a new approach to data modelling.

An alternate view of data management

What is being modelled?

The discussion of the previous section suggests that data models cannot represent an objective reality because there is never any guarantee that all interested parties will agree on what that reality is.  Indeed, insofar as data models are concerned, it is more useful to view reality as being socially constructed – i.e. collectively built by all those who have a stake in it.

How is reality socially constructed? Basically it is through a process of communication in which individuals discuss their own viewpoints and agree on how differences (if any) can be reconciled. The authors note that:

…the design of an information system is not a question of fitness for an organizational reality that can be modelled beforehand, but a question of fitness for use in the construction of a [collective] organizational reality…

This is more in line with the consensus theory of truth than the correspondence theory.

In brief: the reality that data models are required to represent is socially constructed.

How well is the result represented?

Given the above, it is clear that any data model ought to be subjected to validation by all stakeholders so that they can check that it actually does represent their viewpoint. This can be difficult to achieve because most stakeholders do not have the time or inclination to validate data models in detail.

In view of the above, it is clear that the ER notation and others of its ilk can represent a truth rather than the truth – that is, they are capable of representing the world according to a particular set of stakeholders (managers or users, for example). Indeed, a data model (in whatever notation) can be thought of as one possible representation of a domain. The point is, there are as many possible representations as there are stakeholder groups, and in mainstream data modelling one of these representations “wins” while the others are completely ignored. Indeed, the alternate views generally remain undocumented and so are invariably forgotten. This suggests that a key step in data modelling would be to capture all possible viewpoints on the domain of interest in a way that makes a sensible comparison possible.  Apart from helping the group reach a consensus, such documentation is invaluable for explaining to future users and designers why the data model is the way it is. Regular readers of this blog will no doubt see that the IBIS notation and dialogue mapping could be hugely helpful in this process. It would take me too far afield to explore this point here, but I will do so in a future post.

In brief:  notations used by mainstream data modellers cannot capture multiple worldviews in a systematic and useful way. These conventional data modelling languages need to be augmented by notations that are capable of incorporating diverse viewpoints.

Is the result valid?

The above discussion raises a question, though: what if two stakeholders disagree on a particular point?

One answer to this lies in Juergen Habermas' theory of communicative rationality, which I have discussed in detail in this post. I'll explain the basic idea via the following excerpt from that post:

When we participate in discussions we want our views to be taken seriously. Consequently, we present our views through statements that we hope others will see as being rational – i.e. based on sound premises and logical thought. One presumes that when someone makes a claim, he or she is willing to present arguments that will convince others of the reasonableness of the claim. Others will judge the claim based on the validity of the statements the claimant makes. When the validity claims are contested, debate ensues with the aim of getting to some kind of agreement.

The philosophy underlying such a process of discourse (which is simply another word for “debate” or “dialogue”)  is described in the theory of communicative rationality proposed by the German philosopher Jurgen Habermas.  The basic premise of communicative rationality is that rationality (or reason) is tied to social interactions and dialogue. In other words, the exercise of reason can  occur only through dialogue.  Such communication, or mutual deliberation,  ought to result in a general agreement about the issues under discussion.  Only once such agreement is achieved can there be a consensus on actions that need to be taken.  Habermas refers to the  latter  as communicative action,  i.e.  action resulting from collective deliberation…

In brief: validity is not an objective matter but a subjective – or rather an intersubjective – one. That is, validity has to be agreed between all parties affected by the claim.

What is the social context in which the discipline operates?

From the above it should be clear that the alternate view of data management is radically different from the mainstream approach. The difference is particularly apparent when one looks at the way in which the alternate approach views different stakeholder groups. Recall that in the mainstream view, managerial perspectives take precedence over all others because the overriding aim of data modelling (as indeed of most enterprise IT activities) is control. Yes, I am aware that it is supposed to be about enablement, but the question is: enablement for whom? In most cases, the answer is that it enables managers to control. In contrast, from the above we see that the reality depicted in a data (or any other) model is socially constructed – that is, it is based on a consensus arising from debates on the spectrum of views that people hold. Moreover, no claim has precedence over others by virtue of authority. Different interpretations of the world have to be fused together in order to build a consensually accepted view of the world.

The social aspect is further muddied by conflicts between managers on matters of data ownership, interpretation and access. Typically, however, such matters lie outside the purview of data modellers.

In brief: the social context in which the discipline operates is that there are a wide variety of stakeholder groups, each of which may hold different views. These must be debated and reconciled.

Summarising the alternate view

The detailed responses to the questions above reveal that the alternate view of data modelling is based on the following assumptions:

  1. The domain of interest is socially constructed.
  2. The standard representations of data models are inadequate because they cannot represent multiple viewpoints. They can (and should) be supplemented by notations that can document multiple viewpoints.
  3. A valid data model is constructed through an iterative synthesis of multiple viewpoints.
  4. The resulting model is based on a shared understanding of the socially constructed domain.

Clearly these assumptions are diametrically opposed to those of the mainstream. Let's briefly discuss their implications for the profession.

Discussion

The most important implication of the alternate view is that a data model is but one interpretation of reality. As such, there are many possible interpretations of reality and the “correctness” of any particular model hinges not on some objective truth but on an agreed, best-for-group interpretation. A consequence of the above is that well-constructed data models “fuse” or “bring together” at least two different interpretations – those of users and modellers. Typically there are many different groups of users, each with their own interpretation. This being the case, it is clear that the onus lies on modellers to reconcile any differences between these groups as they are the ones responsible for creating models.

A further implication of the above is that it is impossible to build a consistent enterprise-wide data model.  That said, it is possible to have a high-level strategic data model that consists, say, of entities but lacks detailed attribute-level information. Such a model can be useful because it provides a starting point for a dialogue between user groups and also serves to remind modellers of the entities they may need to consider when building a detailed data model.
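To make this concrete, here is a purely illustrative sketch (the entities are invented for the example, not drawn from any particular organisation) of what such an entity-only strategic model might look like when written down:

```python
# A hypothetical entity-only "strategic" data model: just the entities and the
# relationships that stakeholders agree matter, with no attribute-level detail.

strategic_model = {
    "entities": ["Customer", "Product", "Order", "Invoice"],
    "relationships": [
        ("Customer", "places", "Order"),
        ("Order", "contains", "Product"),
        ("Order", "is billed via", "Invoice"),
    ],
}

# Attribute-level detail is negotiated later, one stakeholder group at a time,
# using this skeleton as the starting point for dialogue.
for source, verb, target in strategic_model["relationships"]:
    print(f"{source} {verb} {target}")
```

The value of such a skeleton lies less in the notation than in the conversations it provokes about which entities matter and why.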

The mainstream view asserts that data is gathered to establish the truth. The alternate view, however, makes us aware that data models are built in ways that support particular agendas. Moreover, since the people who use the data are often not those who collect or record it, a gap between assumed and actual meaning is inevitable. Once again, this emphasises the fact that the meaning of a particular piece of data is very much in the eye of the beholder.

In closing

The mainstream approach to data modelling reflects the general belief that the methods of natural sciences can be successfully applied in the area of systems development.  Although this is a good assumption for theoretical computer science, which deals with constructs such as data structures and algorithms, it is highly questionable when it comes to systems development in large organisations. In the latter case social factors dominate, and these tend to escape any logical system. This simple fact remains under-appreciated by the data modelling community and, for that matter, much of the universe of corporate IT.

The alternate view described in this post draws attention to the social and political aspects of data modelling. Although IT vendors and executives tend to give these issues short shrift, the chronically high failure rate of   data-centric initiatives (such as those aimed at building enterprise data warehouses) warns us to pause and think.  If there is anything at all to be learnt from these failures, it is that data modelling is not a science that deals with facts and models, but an art that has more to do with negotiation and law-making.

Written by K

January 6, 2015 at 9:56 pm

From information to knowledge: the what and whence of issue based information systems

with 3 comments

Issue Based Information System (IBIS) is a notation invented by Horst Rittel and Werner Kunz in the early 1970s.  IBIS is best known for its use in dialogue mapping, a collaborative approach to tackling wicked problems (i.e. contentious issues) in organisations. It has a range of other applications as well – capturing knowledge is a good example, and I’ll have much more to say about that later in this post.

Over the last five years or so, I have written a fair bit on IBIS on this blog and in a book that I co-authored with the dialogue mapping expert, Paul Culmsee. The present post reprises an article I wrote five years ago on the "what" and "whence" of the notation: its practical aspects – notation, grammar etc. – as well as its origins, advantages and limitations. My motivations for revisiting the piece are to revise and update the original discussion and, more importantly, to cover some recent developments in IBIS technology that open up interesting possibilities in the area of knowledge management.

To appreciate the power of IBIS, it is best to begin by understanding the context in which the notation was invented. I'll therefore start with a discussion of the origins of the notation, followed by an introduction to it. Finally, I'll cover its development over the last 40-odd years, focusing on the recent developments that I mentioned above.

Wicked origins

A good place to start is where it all started. IBIS was first described in a paper entitled, Issues as Elements of Information Systems, written by Horst Rittel (the man who coined the term wicked problem) and Werner Kunz in July 1970. They state the intent behind IBIS in the very first line of the abstract of their paper:

 Issue-Based Information Systems (IBIS) are meant to support coordination and planning of political decision processes. IBIS guides the identification, structuring, and settling of issues raised by problem-solving groups, and provides information pertinent to the discourse.

Rittel’s preoccupation was the area of public policy and planning – which is also the context in which he originally defined the term wicked problem. Given this background, it is no surprise that Rittel and Kunz envisaged IBIS as the:

…type of information system meant to support the work of cooperatives like governmental or administrative agencies or committees, planning groups, etc., that are confronted with a problem complex in order to arrive at a plan for decision…

The problems tackled by such cooperatives are paradigm-defining examples of wicked problems. From the start, then, IBIS was intended as a tool to facilitate a collaborative approach to solving…or better, managing a wicked problem by helping develop a shared perspective on it.

A brief introduction to IBIS

The IBIS notation consists of the following three elements:

  1. Issues (or questions): these are the matters being debated. Typically, issues are framed as questions along the lines of “What should we do about X?” where X is the issue of interest to a group. For example, in the case of a group of executives, X might be a rapidly changing market condition, whereas in the case of a group of IT people, X could be an ageing system that is hard to replace.
  2. Ideas (or positions): these are responses to questions. For example, one of the ideas offered by the IT group above might be to replace the said system with a newer one. Typically, the whole set of ideas that respond to an issue in a discussion represents the spectrum of participant perspectives on the issue.
  3. Arguments: these can be Pros (arguments for) or Cons (arguments against) an idea. The complete set of arguments that respond to an idea represents the multiplicity of viewpoints on it.

Compendium is a freeware tool that can be used to create IBIS maps – it can be downloaded here.

In Compendium, the IBIS elements described above are represented as nodes as shown in Figure 1: issues are represented by blue-green question marks; positions by yellow light bulbs; pros by green + signs and cons by red – signs.  Compendium supports a few other node types, but these are not part of the core IBIS notation. Nodes can be linked only in ways specified by the IBIS grammar as I discuss next.

 

Figure 1: IBIS elements

The IBIS grammar can be summarized in three simple rules:

  1. Issues can be raised anew or can arise from other issues, positions or arguments. In other words, any IBIS element can be questioned. In Compendium notation: a question node can connect to any other IBIS node.
  2. Ideas can only respond to questions – i.e. in Compendium, “light bulb” nodes can only link to question nodes. The arrow pointing from the idea to the question depicts the “responds to” relationship.
  3. Arguments can only be associated with ideas – i.e. in Compendium, “+” and “–” nodes can only link to “light bulb” nodes (with arrows pointing to the latter).

The legal links are summarized in Figure 2 below.

Figure 2: Legal links in IBIS

Yes, it’s as simple as that.
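For readers who prefer code to diagrams, the grammar is compact enough to capture in a few lines. The sketch below is a minimal illustration in Python – the node kinds and function names are my own, not part of Compendium or any other IBIS tool – that encodes the three linking rules as a simple validation check:

```python
# A minimal sketch of the IBIS grammar: three element types (with arguments
# split into pros and cons) and a check for which "responds to" links are legal.

from dataclasses import dataclass

@dataclass
class Node:
    label: str
    kind: str  # "issue", "idea", "pro" or "con"

def link_is_legal(source: Node, target: Node) -> bool:
    """Return True if a link from source to target is allowed by the IBIS grammar."""
    if source.kind == "issue":
        return True                      # rule 1: any element can be questioned
    if source.kind == "idea":
        return target.kind == "issue"    # rule 2: ideas respond only to issues
    if source.kind in ("pro", "con"):
        return target.kind == "idea"     # rule 3: arguments attach only to ideas
    return False

# A tiny map fragment
question = Node("What should we do about the ageing system?", "issue")
idea = Node("Replace it with a newer system", "idea")
objection = Node("Replacement cost is prohibitive", "con")

assert link_is_legal(idea, question)           # legal
assert link_is_legal(objection, idea)          # legal
assert not link_is_legal(objection, question)  # illegal: cons cannot attach to issues
```

Tools such as Compendium enforce essentially this kind of check when you draw a link, which is why it is hard to produce a syntactically invalid map.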

The rules are best illustrated by example – follow the links below to see some illustrations of IBIS in action:

  1. See this post for a simple example of dialogue mapping.
  2. See this post or this one for examples of argument visualisation. (Note: using IBIS to map out the structure of written arguments is called issue mapping.)
  3. See this one for an example Paul did with his children. This example also made an appearance in our book.

Now that we know how IBIS works and have seen a few examples of it in action, it’s time to trace the history of the notation from its early days to the present.

Operation of early systems

When Rittel and Kunz wrote their paper, there were three IBIS-type systems in operation: two in government agencies (in the US, one presumes) and one in a university environment (quite possibly Berkeley, where Rittel worked). Although it seems quaint and old-fashioned now, it is no surprise that these were manual, paper-based systems; the effort and expense involved in computerizing such systems in the early 70s would have been prohibitive and the pay-off questionable.

The Rittel-Kunz paper introduced earlier also offers a short description of how these early IBIS systems operated:

An initially unstructured problem area or topic denotes the task named by a “trigger phrase” (“Urban Renewal in Baltimore,” “The War,” “Tax Reform”). About this topic and its subtopics a discourse develops. Issues are brought up and disputed because different positions (Rittel’s word for ideas or responses) are assumed. Arguments are constructed in defense of or against the different positions until the issue is settled by convincing the opponents or decided by a formal decision procedure. Frequently questions of fact are directed to experts or fed into a documentation system. Answers obtained can be questioned and turned into issues. Through this counterplay of questioning and arguing, the participants form and exert their judgments incessantly, developing more structured pictures of the problem and its solutions. It is not possible to separate “understanding the problem” as a phase from “information” or “solution” since every formulation of the problem is also a statement about a potential solution.

 Even today, forty years later, this is an excellent description of how IBIS is used to facilitate a common understanding of complex problems.  Moreover, the process of reaching a shared understanding (whether using IBIS or not) is one of the key ways in which knowledge is created within organizations. To foreshadow a point I will elaborate on later, using IBIS to capture the key issues, ideas and arguments, and the connections between them, results in a navigable map of the knowledge that is generated in a discussion.

Fast forward a couple of decades (and more!)

In a paper published in 1988 entitled, gIBIS: A hypertext tool for exploratory policy discussion, Conklin and Begeman describe a prototype of a graphical, hypertext-based  IBIS-type system (called gIBIS) and its use in capturing design rationale (yes, despite the title of the paper, it is more about capturing design rationale than policy discussions). The development of gIBIS represents a key step between the original Rittel-Kunz version of IBIS and its more recent version as implemented in Compendium.  Amongst other things, IBIS was finally off paper and on to disk, opening up a world of new possibilities.

gIBIS aimed to offer users:

  1. The ability to capture design rationale – the options discussed (including the ones rejected) and the discussion around the pros and cons of each.
  2. A platform for promoting computer-mediated collaborative design work – ideally in situations where participants were located at sites remote from each other.
  3. The ability to store a large amount of information and to be able to navigate through it in an intuitive way.

The gIBIS prototype proved successful enough to catalyse the development of Questmap, a commercially available software tool that supported IBIS. In a recent conversation, Jeff Conklin mentioned to me that Questmap was one of the earliest Windows-based groupware tools available on the market…and it won a best-of-show award in that category. It is interesting to note that in contrast to Questmap (which no longer exists), Compendium is a single-user desktop application.

The primary application of Questmap was in the area of sensemaking which is all about helping groups reach a collective understanding of complex situations that might otherwise lead them into tense or adversarial conditions. Indeed, that is precisely how Rittel and Kunz intended IBIS to be used.  The key advantage offered by computerized IBIS systems was that one could map dialogues in real-time, with the map representing the points raised in the conversation along with their logical connections. This proved to be a big step forward in the use of IBIS to help groups achieve a shared understanding of complex issues.

That said, although there were some notable early successes in the real-time use of IBIS in industry environments (see this paper, for example), these were not accompanied by widespread adoption of the technique. It is worth exploring the reasons for this briefly.

 The tacitness of IBIS mastery

The reasons for the lack of traction of IBIS-type techniques for real-time knowledge capture are discussed in a paper by Shum et al. entitled, Hypermedia Support for Argumentation-Based Rationale: 15 Years on from gIBIS and QOC. The reasons they give are:

  1. For acceptance, any system must offer immediate value to the person who is using it. Quoting from the paper, “No designer can be expected to altruistically enter quality design rationale solely for the possible benefit of a possibly unknown person at an unknown point in the future for an unknown task. There must be immediate value.” Such immediate value is not obvious to novice users of IBIS-type systems.
  2. There is some effort involved in gaining fluency in the use of IBIS-based software tools. It is only after this that users can gain an appreciation of the value of such tools in overcoming the limitations of mapping design arguments on paper, whiteboards etc.

While the rules of IBIS are simple to grasp, the intellectual effort – or cognitive overhead – of using IBIS in real time involves:

  1. Teasing out issues, ideas and arguments from the dialogue.
  2. Classifying points raised into issues, ideas and arguments.
  3. Naming (or describing) the point succinctly.
  4. Relating (or linking) the point to the existing map (or anticipating how it will fit in later)
  5. Developing a sense for conversational patterns.

Expertise in these skills can only be developed through sustained practice, so it is no surprise that beginners find it hard to use IBIS to map dialogues.   Indeed, the use of IBIS for real-time conversation mapping is a tacit skill, much like riding a bike or swimming – it can only be mastered by doing.

Making sense through IBIS 

Despite the difficulties of mastering IBIS, it is easy to see that it offers considerable advantages over conventional methods of documenting discussions. Indeed, Rittel and Kunz were well aware of this. Their paper contains a nice summary of the advantages, which I paraphrase below:

  1. IBIS can bridge the gap between discussions and records of discussions (minutes, audio/video transcriptions etc.). IBIS sits between the two, acting as a short-term memory. The paper thus foreshadows the use of issue-based systems as an aid to organizational or project memory.
  2. Many elements (issues, ideas or arguments) that come up in a discussion have contextual meanings that differ from any pre-existing definitions. That is, the interpretation of points made or questions raised depends on the circumstances surrounding the discussion. Indeed, this contextual meaning is often more important than the formal meaning. IBIS captures the former in a very clear way – for example, a response to the question “What do we mean by X?” elicits the meaning of X in the context of the discussion, which is then captured as an idea (position). I’ll have much more to say about this towards the end of this article.
  3. The reasoning used in discussions is made transparent, as is the supporting (or opposing) evidence.
  4. The state of the argument (discussion) at any time can be inferred at a glance (unlike the case in written records). See this post for more on the advantages of visual documentation over prose.
  5. Often the commonality of an issue with other, similar issues may be more important than its precise meaning. To quote from the paper, “…the description of the subject matter in terms of librarians or documentalists (sic) may be less significant than the similarity of an issue with issues dealt with previously and the information used in their treatment…” This is less of a problem now because of search technologies. However, search technologies are still largely based on keywords rather than context. A properly structured, context-searchable IBIS-based archive would be more useful than a conventional document-based system. As I’ll discuss in the next section, the technology for this is now available.

To sum up, then: although IBIS offers a means to map out arguments, what has been lacking is the ability to make these maps available and searchable across an organization.

IBIS in the enterprise

It is interesting to note that Compendium, unlike its predecessor Questmap, is a single-user, desktop tool – it does not, by itself, enable the sharing of maps across the enterprise. To be sure, it is possible to work around this limitation, but the workarounds are somewhat clunky. A recent advance in IBIS technology addresses this issue rather elegantly: Seven Sigma, an Australian consultancy founded by Paul Culmsee, Chris Tomich and Peter Chow, has developed Glyma (pronounced “glimmer”): a product that makes IBIS available on collaboration platforms like Microsoft SharePoint. This is a game-changer because it enables sharing and searching of IBIS maps across the enterprise. Moreover, as we shall see below, the implications of this go beyond sharing and search.

Full Disclosure: As regular readers of this blog know, Paul is a good friend and we have jointly written a book and a few papers. However, at the time of writing, I have no commercial association with Seven Sigma. My comments below are based on playing with a beta version of the product that Paul was kind enough to give me access to, as well as some discussions I have had with him.

Figure 3: IBIS in Glyma

The look and feel of Glyma is much the same as Compendium (see Figure 3 above) – and the keystrokes and shortcuts are quite similar. I have trialled Glyma for a few weeks and feel that the overall experience is actually a bit better than in Compendium. For example, one can navigate through a series of maps and sub-maps using a breadcrumb trail. Another example: documents and videos are embedded within the map, so one does not need to leave the map in order to view tagged media (unless, of course, one wants to see it at a higher resolution).

I won’t go into any detail about product features etc. since that kind of information is best accessed at source – i.e. the product website and documentation. Instead, I will now focus on how Glyma addresses a longstanding gap in knowledge management systems.

Revisiting the problem of context

In most organisations, knowledge artefacts (such as documents and audio-visual media) are stored in hierarchical or relational structures (for example, a folder within a directory structure or a relational database). To be sure, stored artefacts are often tagged with metadata indicating when, where and by whom they were created, but experience tells me that such metadata is not as useful as it should be.  The problem is that the context in which an artefact was created is typically not captured. Anyone who has read a document and wondered, “What on earth were the authors thinking when they wrote this?” has encountered the problem of missing context.

Context, though hard to capture, is critically important in understanding the content of a knowledge artefact. Any artefact, when accessed without an appreciation of the context in which it was created is liable to be misinterpreted or only partially understood.

 Capturing context in the enterprise

Glyma addresses the issue of context rather elegantly at the level of the enterprise. I’ll illustrate this point via an inspiring case study on the innovative use of SharePoint in education that Paul wrote about some time ago.

The case study

Here is the backstory in Paul’s words:

Earlier this year, I met Louis Zulli Jnr – a teacher out of Florida who is part of a program called the Centre of Advanced Technologies. We were co-keynoting at a conference and he came on after I had droned on about common SharePoint governance mistakes…The majority of Lou’s presentation showcased a whole bunch of SharePoint powered solutions that his students had written. The solutions were very impressive…We were treated to examples like:

  • IOS, Android and Windows Phone apps that leveraged SharePoint to display teacher’s assignments, school events and class times;
  • Silverlight based application providing a virtual tour of the campus;
  • Integration of SharePoint with Moodle (an open source learning platform)
  • An Academic Planner web application allowing students to plan their classes, submit a schedule, have them reviewed, track of the credits of the classes selected and whether a student’s selections meet graduation requirements;

All of this and more was developed by 16 to 18 year olds and all at a level of quality that I know most SharePoint consultancies would be jealous of…

Although the examples highlighted by Louis were very impressive, what Paul found more interesting were the anecdotes that Lou related about the dedication and persistence that students displayed in their work. Quoting again from Paul,

So the demos themselves were impressive enough, but that is actually not what impressed me the most. In fact, what had me hooked was not on the slide deck. It was the anecdotes that Lou told about the dedication of his students to the task and how they went about getting things done. He spoke of students working during their various school breaks to get projects completed and how they leveraged each other’s various skills and other strengths. Lou’s final slide summed his talk up brilliantly:

  • Students want to make a difference! Give them the right project and they do incredible things.
  • Make the project meaningful. Let it serve a purpose for the campus community.
  • Learn to listen. If your students have a better way, do it. If they have an idea, let them explore it.
  • Invest in success early. Make sure you have the infrastructure to guarantee uptime and have a development farm.
  • Every situation is different but there is no harm in failure. “I have not failed. I’ve found 10,000 ways that won’t work” – Thomas A. Edison

In brief:  these points highlight the fact that Lou’s primary role as director of the center is to create the conditions that make it possible for students to do great work.  The commercial-level quality of work turned out by students suggests that Lou’s knowledge on how to build high-performing teams is definitely worth capturing.

The question is: what’s the best way to do this (short of getting him to visit you and talk about his experiences)?

Seeing the forest for the trees

Paul recently interviewed Lou with the intent of documenting Lou’s experiences. The conversation was recorded on video and then “Glymafied” – i.e. the video was mapped using IBIS (see Figure 4 below).

Figure 4: Knowledge capture via Glyma

There are a few points worth noting here:

  1. The content of the entire conversation is mapped out so one can “see” the conversation at a glance.
  2. The context in which a particular point (i.e. the content of a node) is made is clarified by the connections between a node and its neighbours. Moving left from a node gives a higher level picture, moving right drills down into details.

Of course, the reader will have noted that these are core IBIS capabilities that are available in Compendium (or any other IBIS tool).  Glyma offers much more. Consider the following:

  1. Relevant documents or audio-visual media can be tagged to specific nodes to provide supplementary material. In this case, the video recording was tagged to the specific nodes that relate to points made in the video. Clicking on the play icon attached to such a node plays the segment in which the content of the node is being discussed. This is a really nice feature as it saves the user from having to watch the whole video (or play an extended game of ffwd-rew to get to the point of interest). Moreover, this provides additional context that cannot be (or is not) captured in the map. For example, one can attach papers, links to web pages, Slideshare presentations etc. to fill in background and context.
  2. Glyma is integrated with an enterprise content management system by design. One can therefore link map and video content to the powerful built-in search and content aggregation features of these systems. For example, users would be able to enter a search from their intranet home page and retrieve not only traditional content such as documents, but also stories, reflections and anecdotes from experts such as Lou.
  3. Another critical aspect of intranet integration is the ability to provide maps as contextual navigation. Amazon’s ability to sell books that people never intended to buy is an example of the power of such navigation. The ability to execute the kinds of queries outlined in the previous point, along with contextual information such as user profile details, previous activity on the intranet and the area of the intranet the user is browsing, makes it possible to present recommendations of nodes or maps that may be of potential interest to users. Such targeted recommendations might encourage users to explore video (and other rich media) content.

Technical Aside: An interesting under-the-hood feature of Glyma is that it uses an implementation of a hypergraph database to store maps. (Note: this is a database that can store representations of graphs (maps) in which an edge can connect more than two vertices.) Such databases can store very general graph structures. What this means is that Glyma could be extended to store any kind of map (Mind Maps, Concept Maps, Argument Maps or whatever)…and nodes can be shared across maps. This capability has not been developed as yet, but I mention it because it offers some exciting possibilities for the future.
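To give a feel for what this means, here is a minimal, purely illustrative sketch of a hypergraph-style store – the class and method names are my own and bear no relation to Glyma’s actual implementation – in which a map is a hyperedge grouping an arbitrary number of nodes, so the same node can belong to several maps:

```python
# Illustrative only: a map is modelled as a hyperedge over any number of nodes,
# which lets a single node be shared across maps.

from dataclasses import dataclass, field

@dataclass
class MapNode:
    node_id: str
    label: str

@dataclass
class HyperEdge:
    """A map, modelled as a hyperedge connecting an arbitrary set of nodes."""
    map_name: str
    node_ids: set = field(default_factory=set)

class HypergraphStore:
    def __init__(self):
        self.nodes = {}   # node_id -> MapNode
        self.maps = {}    # map_name -> HyperEdge

    def add_node(self, node: MapNode, map_name: str) -> None:
        self.nodes[node.node_id] = node
        self.maps.setdefault(map_name, HyperEdge(map_name)).node_ids.add(node.node_id)

    def maps_containing(self, node_id: str) -> list:
        """All maps a node appears in - its contextual environment."""
        return [m.map_name for m in self.maps.values() if node_id in m.node_ids]

store = HypergraphStore()
shared = MapNode("n1", "Invest in success early")
store.add_node(shared, "Lou interview")
store.add_node(shared, "Team-building lessons")
print(store.maps_containing("n1"))  # ['Lou interview', 'Team-building lessons']
```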

To summarise: since Glyma stores all its data in an enterprise-class database, maps can be made available across an organization.  It offers the ability to tag nodes with pretty much any kind of media (documents, audio/video clips etc.), and one can tag specific parts of the media that are relevant to the content of the node (a snippet of a video, for example). Moreover, the sophisticated search capabilities of the platform enable context aware search.  Specifically, we can search for nodes by keywords or tags, and once a node of interest is located, we can also view the map(s) in which it appears. The map(s) inform us of the context(s) relating to the node. The ability to display the “contextual environment” of a piece of information is what makes Glyma really interesting.
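A similarly rough sketch of the context-aware search described above might look like the following – again a toy under my own naming assumptions rather than Glyma’s API. Nodes carry optional media tags (with start and end offsets marking the relevant snippet of a video), and a keyword search returns each matching node together with the maps that supply its context:

```python
# Toy illustration of context-aware search: a keyword lookup returns matching
# nodes along with the maps (contexts) in which they appear.

from dataclasses import dataclass, field

@dataclass
class MediaTag:
    url: str
    start_seconds: int  # start of the relevant snippet
    end_seconds: int

@dataclass
class TaggedNode:
    node_id: str
    text: str
    media: list = field(default_factory=list)

nodes = {}   # node_id -> TaggedNode
maps = {}    # map_name -> set of node_ids

def add(node: TaggedNode, map_name: str) -> None:
    nodes[node.node_id] = node
    maps.setdefault(map_name, set()).add(node.node_id)

def search(keyword: str):
    """Return (node, containing_maps) pairs for nodes whose text matches the keyword."""
    hits = [n for n in nodes.values() if keyword.lower() in n.text.lower()]
    return [(n, [m for m, ids in maps.items() if n.node_id in ids]) for n in hits]

node = TaggedNode("n7", "Give students the right project and they do incredible things",
                  media=[MediaTag("https://example.org/lou-interview", 310, 395)])
add(node, "Lou interview")
for hit, contexts in search("project"):
    print(hit.text, "->", contexts)
```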

In metaphorical terms, Glyma enables us to see the forest for the trees.

…and so, to conclude

My aim in this post has been to introduce readers to the IBIS notation and trace its history from its origins in issue mapping to recent developments in knowledge management.  The history of a technique is valuable because it gives insight into the rationale behind its creation, which leads to a better understanding of the different ways in which it can be used. Indeed, it would not be an exaggeration to say that the knowledge management applications discussed above are but an extension of Rittel’s original reasons for inventing IBIS.

I would like to close this piece with a couple of observations from my experience with IBIS:

Firstly, the real surprise for me has been that the technique can capture most written arguments and conversations, despite having only three distinct elements and a very simple grammar. Yes, it does require some thought to do this, particularly when mapping discussions in real time. However, this cognitive overhead is good because it forces the mapper to think about what’s being said instead of just writing it down blind. Thoughtful transcription is the aim of the game. When done right, this results in a map that truly reflects an understanding of a complex issue.

Secondly, although most current discussions of IBIS focus on its applications in dialogue mapping, it has a more important role to play in mapping organizational knowledge. Maps offer a powerful means to navigate the complex network of knowledge within an organisation. The (aspirational) end-goal of such an effort would be a “global” knowledge map that highlights interconnections between different kinds of knowledge that exists within an organization. To be sure, such a map will make little sense to the human eye, but powerful search capabilities could make it navigable. To the extent that this is a feasible direction, I foresee IBIS becoming an important skill in the repertoire of knowledge management professionals.

Emergent design in enterprise IT

with 7 comments

Introduction

Over the last few months, I’ve published a number of posts in which the term emergent design makes a cameo appearance (see this article or this interview, for example). Some readers may have noticed that although the term is used in various contexts in those articles/interviews, it is not explicitly defined anywhere. This is deliberate. Emergent design is…well, emergent, so a detailed definition is neither necessary nor useful – provided one can describe a set of guidelines for its practice. My main aim in this post is to do just that. To keep things concrete, I will discuss the guidelines in the context of the often bizarre world of enterprise IT, a domain that epitomizes top-down, plan-based design.

(Note: Before going any further a couple of clarifications are in order. Firstly, the word emergent as used here has nothing to do with emergence in complex systems. Secondly, the guidelines provided here are a starting point, not a comprehensive list.)

The wickedness of enterprise IT

Most IT initiatives in large organisations are planned, designed and executed in a top-down manner, with little or no attempt to understand the existing culture and/or on-the-ground realities. This observation applies not only to enterprise software projects, such as those involving Collaboration or Customer Relationship Management platforms, but also to design- and process-driven IT functions like architecture and service management.

Top down approaches are liable to fail because enterprise IT displays many of the characteristics of wicked problems.  In particular, organization-wide IT initiatives:

  1. Are one-shot operations – for example, an ERP system is simply too expensive to implement over and over again.
  2. Have no stopping rule – enterprise IT systems are never completely done; there are always things to be fixed and additional features to be implemented.
  3. Are highly contentious – whether or not an initiative is good, or even necessary, depends on who you ask.
  4. Could be done in other, possibly “better”, ways – and the problem is that one person’s “better” is another one’s “worse”!
  5. Are essentially unique – and don’t let vendors or Big $$$ consultants tell you otherwise!

These characteristics make enterprise IT a socially complex problem – that is, different stakeholder groups have different perceptions of the problem that the initiative is intended to address.   The most important implication of social complexity is that it cannot be tackled using rational methods of planning, design and implementation that are taught in schools, propagated in books, or evangelized by standards authorities and assorted snake oil salespersons.

Enter emergent design

The term emergent design was coined by David Cavallo in the context of technology-driven education reforms in indigenous cultures (the original paper by David Cavallo can be accessed here). Cavallo observed that traditional systems engineering approaches that attempt to change an educational system in a top-down manner fail primarily because they do not take into account the unique features of local cultures. Instead, he found that using the existing culture as a starting point from which to work towards systemic change offered a much better chance of the new ways taking root. In his words, “[the] adoption and implementation of new methodologies needs to be based in, and grow from, the existing culture.”

Cavallo’s words hold the key to understanding emergent design in the context of enterprise IT: any enterprise IT initiative necessarily affects many stakeholders, and should therefore start by taking their concerns seriously.

“Ah, so it’s like Agile software development,” concludes the Agilista.

Not quite. As I will discuss in the remainder of this post, although emergent design shares a number of features with Agile methods, there’s considerably more to it than that. That said, chances are that good Agile coaches are emergent design practitioners without knowing it. This is something that will become apparent as we go on.

Guidelines for emergent design

I have, for a while, been thinking about what emergent design means in the context of enterprise IT. Among other things, I have been looking at how it might be applied to a wide variety of initiatives that are traditionally planned upfront – things such as offshoring, and enterprise-wide projects such as data warehouse or enterprise resource planning initiatives.

In one of those serendipitous occurrences, last week I happened re-read an old series of articles entitled Confessions of a post-SharePoint Architect written by  my friend, the ace sensemaker and emergent entrepreneur, Paul Culmsee. Although the series focuses on emergent design principles in the context of the Microsoft SharePoint platform, many of the points that Paul makes apply to enterprise IT in general.  In addition to material drawn from Paul’s blog, I also borrow from a few posts on my blog. In the interests of space I have provided only a brief overview of the points because they have been elaborated elsewhere. The original pieces fill in a lot of relevant detail, so please do follow the links if you have the inclination and the time.

With that said, let’s get to it.

Be a midwife rather than an expert

In a paper entitled, On the planning crisis, Horst Rittel  (the man who coined the term wicked problem) wrote:

You do not learn in school how to deal with wicked problems…expertise and ignorance is distributed over all participants in a wicked problem. There is a symmetry of ignorance among those who participate because nobody knows better by virtue of his degrees or his status. There are no experts (which is irritating for experts), and if experts there are, they are only experts in guiding the process of dealing with a wicked problem, but not for the subject matter of the problem.

The first guideline of emergent design is to realize that no one is an expert – not you nor your Big $$$ consultant. The only way to build a robust and lasting system or process is for everyone to put their heads together and work towards it in a collaborative fashion, dispensing with the pretense that one can outsource one’s thinking to the “expert”. Indeed, the role of the “expert” is to create and foster the conditions for such collaboration to occur. Paul and I elaborate on this point at length in our book and this paper (summarized in this post).

In brief, the knowledge required to successfully implement an enterprise system is distributed across all stakeholders (analysts, consultants, architects and, above all, users). Pulling all this together into a coherent whole has more to do with facilitation and people skills than technology.

Ensure that governance is about enablement rather than control

Most organisations have onerous procedures to ensure that people do the right thing – the poor system lead drowns in a ream of documentation that she is required to “read and understand”; things have to be documented according to certain standards etc. etc. All these procedures are aimed at keeping people on the straight and narrow path of IT righteousness.

I submit that most governance regimes within organisations encourage a checkbox-based compliance mentality aimed at ensuring that people comply in letter, but not necessarily in spirit (actually, never in spirit). As Paul mentions in this post, governance ought to be about enablement rather than compliance or control.

There’s a very simple test to tell one from the other: when you come across a procedure such as an SOP or a methodology that you are required to follow, ask yourself this question: does this help me do my job?

If the answer is positive, the procedure is an enabler; if not, it is likely a control that is primarily intended as a CYA mechanism.

Do not penalize people for learning

The main rationale behind iterative and incremental approaches to software development is that they encourage (and take advantage of) continuous learning. Incremental increases in functionality are easier to test exhaustively, and errors are also easier to correct. Reviews and retrospectives also tend to be more focused, leading to a better chance of lessons actually being learnt. Thanks to the Agile movement, this is now well known and understood in mainstream IT.

However, learning is not just a matter of using iterative/incremental methodologies; one also needs to build an environment that encourages it. This is a trickier matter because it depends on things that are outside an individual manager’s control; indeed, it has more to do with the entire IT function or even the organization. In an organisation with a strong blame culture, the culture tends to win against pretty much any methodology, agile or otherwise. Blame cultures preclude learning because mistakes are punished and people are scapegoated as a result. Check out this article on learning organizations for more on this topic, and this post for a more nuanced (realistic?) view.

Having said that about the importance of learning, it is worth noting that there are some situations where learning is less important – namely, work that can be planned and scripted in detail up front. It is important to be able to distinguish between the two types of situations…which brings us to the next point.

Understand the difference between complicated and complex initiatives

Requirements analysis is one of the first activities in traditional system development. Most enterprise IT initiatives that are driven by a vendor or consultant will have many sessions for this at the front-end of an engagement. Enterprise wisdom tells us that things need to be specified in detail at the start. The rationale behind this is to set requirements in stone so that the entire project can be planned in detail before actual implementation begins.   Such an approach is fine if one knows for sure:

  1. How the future is going to unfold and has appropriate contingencies in place for adverse events;
  2. That users have a clear idea of what they want, and
  3. That requirements will not change (or will change minimally).

It is obvious that this approach will be disastrous if any of the above assumptions are incorrect. Unfortunately, it is more often the case that the assumptions do not hold, as evidenced by the innumerable IT projects that have failed owing to inadequate risk management, lack of scope clarity and/or uncontrolled change.

So how does one distinguish between initiatives that can be planned in detail upfront and those that can’t?

The distinction is best illustrated via an example: consider a project to replace a fixed line phone system by VoIP versus an ERP project. The first project has a fixed set of requirements across different groups. The second one, in contrast, involves diverse stakeholder groups, each with their own unique expectations of the system. Both projects are complicated from the technology point of view, but the second one has elements of wickedness arising from social complexity.  Consequently, the two projects cannot be run in the same way.  In particular, the first one can be planned in detail upfront while the second one cannot.  Borrowing from David Snowden’s Cynefin framework, we call the first type of project complicated and the second one complex. You need to understand which kind of initiative you are dealing with before deciding which project management approach would be appropriate.

Beware of platitudinous goals

The enterprise IT marketplace is largely buzzword driven. The in-vogue buzzwords at the time of writing are the cloud and big data. Buzzwords, while sounding “right”, are actually platitudes – words that are devoid of meaning because different people interpret and use them differently. The use of platitudes therefore results in confusion rather than clarity. For example, your information security guy may be wary of the cloud because he sees it as a potential security risk, whereas a business user may view it positively because it liberates her from the clutches of a ponderous IT department. (Check out this video for a cautionary fable about a poorly thought out cloud strategy.)

People tend to use platitudes as mental shortcuts to avoid thinking things through and coming up with their own opinions. It is therefore pointless to ask a person who uses a platitude to clarify what he or she means: they have not thought it through and will therefore be unable to give you a good answer.

The best way to deconstruct a platitude is via an oblique approach that is best illustrated through an example.

Say someone tells you that they want to improve efficiency (a rather common platitude). Asking them to define efficiency is not a good way to go because the answer you get is likely to be couched in generalities such as higher productivity and performance – words that are world-class platitudes in their own right! Instead, it is better to ask them what difference would be apparent if efficiency were to improve. To answer this question, they have to come down from platitude-land and start thinking about concrete, measurable outcomes.

Use open questions to broaden perspectives

A more general point to note from the foregoing is that the framing of questions matters, particularly when one wants people to come up with ideas. For example, instead of asking people what they like (or dislike) about a particular approach, it is generally better to ask them what aspects of the approach are important to them. The latter question is neutrally framed, so it does not impose any constraints on their thinking.

Another good way to get people thinking about possibilities is to ask them what they would like to do if there were no constraints (such as budget or time, say).  Conversely, if you encounter a constraining factor (like a company policy), it is sometimes helpful to ask what the intent behind the policy is.

If posed in the right way and in the right environment, answers to such questions get people to think beyond their immediate concerns and focus on purposes and outcomes instead.

Check out Paul’s posts on powerful questions to find out more about these perspective-expanding questions.

Understand the need for different types of thinking

One of the ironies of enterprise IT initiatives is that the most important decisions on projects have to be made when the information available is the least. As I wrote in the introduction to this paper,

The early stages of projects are fraught with ambiguity. Yet, it is at this “front end” of projects that the most important decisions have to be made. Front-end decisions are hard because there is:

  • uncertainty about scope, i.e. about what needs to be done;
  • uncertainty about rationale, i.e. why it needs to be done; and
  • uncertainty about approach, i.e. how it should be done.

Arguably, the lack of clarity regarding any of these can sow the seeds of failure in the early stages of a project.

The standard approach is to treat uncertainty as a problem that can be solved through convergent thinking – i.e. the kind of thinking that assumes a problem has a single “correct answer.” However, project uncertainty has a social dimension; different people have different perceptions of things like scope and rationale, as well as different coping mechanisms for ambiguity. So, project uncertainty is a wicked problem that has no single “right answer.” This can cause anxiety for some. One therefore needs to begin with divergent thinking, which is largely about generating ideas and options and move to convergent thinking only when:

  1. The group has a shared understanding of the main issues.
  2. An adequate set of options has been generated.

As I alluded to above, people tend to show a preference for one type of thinking over the other. The strength of collaborative problem solving lies precisely in the fact that a group is likely to have a mix of the two types of thinkers.

It is perhaps obvious, but still worth mentioning that the other standard way to deal with uncertainty is to impose a solution by diktat or governance. Clearly such approaches are doomed to fail because they suppress the problem instead of solving it.

Consider long term consequences

It is an unfortunate fact of life that cost tends to be the ultimate arbiter on IT decisions. Vendors know this, and take advantage of it when crafting their proposals. The contract goes to the lowest bidder and the rest, as they say, is tragedy. Although cost is an important criterion in IT decisions, making it the overriding concern is a recipe for future disappointment.

The general lesson to draw from this is that one must consider the longer-term consequences of one’s choices. This can be hard to do because the distant future is less salient than the present or the immediate future. A good way to look beyond immediate concerns (such as cost) is to use the solution after next principle proposed by Gerald Nadler and Shozo Hibino in their book, Breakthrough Thinking. The basic idea behind the principle is to get people to focus on the goals that lie beyond the immediate goal. The process of thinking about and articulating longer-term goals can often provide insights into potential problems with the current goals and/or how they are being achieved.

Build in spare capacity

In his book on antifragility, Nassim Nicholas Taleb points out that the opposite of fragility is not robustness or resilience; rather, it is the ability to thrive on or benefit from uncertainty. There is no word in the English language to describe such behavior, and that is what led him to coin the term antifragile.

In a post inspired by the book, I outlined the elements of an antifragile IT strategy. One of the key points of such a strategy is the assumption that, despite our best laid plans, it is inevitable that something important will have been overlooked. It is therefore important to build in some spare capacity to deal with unexpected events and demands. Unfortunately, experience tells me that many enterprise IT systems operate at the limits of their capacity, with little or nothing in reserve. This is a disaster waiting to happen.

Design so as to increase your future choices

This is perhaps the most important point in my list because it encapsulates all the other points. I have adapted it from Heinz von Foerster’s ethical imperative which states that one should always act so as to increase the number of choices in the future.  This principle is useful as a tiebreaker between two designs that are close in all other respects. However, there is more to it than just that.  I have found it particularly useful in making decisions regarding IT outsourcing and software as a service. There is very little critical scrutiny of the benefits of these as claimed by vendors and advisories. This principle can help you see through the fog of marketing rhetoric and advisory hype.

Parting thoughts

One of the paradoxes of life is that the harder we strive for something – money, happiness or whatever – the more unattainable it seems to become. Indeed, some of the most financially successful people (Bill Gates and Warren Buffett, for example) became rich by doing what they loved. Their financial success was a happy byproduct of their engagement in their work. The economist John Kay formalized this paradoxical notion in his concept of obliquity – that certain goals are best attained via indirect means.

If you have been patient enough to read through this piece, you will have noted that some of the guidelines listed above have a hint of obliquity about them. This is no surprise; indeed it is inevitable in a design approach that values people over processes and improvisation (or even serendipity) over planning.

I usually conclude my posts with a summary of the main message. For reasons that should be obvious I will not do that here. Instead, I will end by pointing out yet another feature of emergent design that you have likely figured out already: the guidelines listed above are actually domain neutral; they can be applied to pretty much any area of endeavour. This is no surprise: wicked problems aren’t domain specific and therefore neither are the techniques to deal with them.  For example, see my interview with Neil Preston for a perspective on emergent design in organizational change and <advertisement> my book co-authored with Paul </advertisement> for ways in which it can be applied to domains as diverse as town planning and knowledge management.

…and now I really must stop for I have gone on too long.  Many thanks for your patience!

Acknowledgement

My thanks go out to Paul Culmsee for his feedback on a draft version of this post.

Written by K

October 28, 2014 at 9:04 pm
