Eight to Late

Sensemaking and Analytics for Organizations

Posts Tagged ‘risk management’

Scapegoats and systems: contrasting approaches to managing human error in organisations


Much can be learnt about an organization by observing what management does when things go wrong. One reaction is to hunt for a scapegoat, someone who can be held responsible for the mess. The other is to take a systemic view that focuses on finding the root cause of the issue and figuring out what can be done to prevent it from recurring. In a highly cited paper published in 2000, James Reason compared and contrasted the two approaches to error management in organisations. This post is an extensive summary of the paper.

The author gets to the point in the very first paragraph:

The human error problem can be viewed in two ways: the person approach and the system approach. Each has its model of error causation and each model gives rise to quite different philosophies of error management. Understanding these differences has important practical implications for coping with the ever present risk of mishaps in clinical practice.

Reason’s paper was published in the British Medical Journal, hence its focus on the practice of medicine. His arguments and conclusions, however, have a much wider relevance, as evidenced by the diverse areas in which his paper has been cited.

The person approach – which, I think, is more accurately called the scapegoat approach – is based on the belief that errors can and should be traced back to an individual or group, and that the party responsible should then be held to account for the error. This is the approach taken in organisations that are colloquially referred to as having a “blame culture.”

To an extent, looking around for a scapegoat is a natural emotional reaction to an error. The oft-unstated reason behind scapegoating, however, is to avoid management responsibility. As the author tells us:

People are viewed as free agents capable of choosing between safe and unsafe modes of behaviour.  If something goes wrong, it seems obvious that an individual (or group of individuals) must have been responsible. Seeking as far as possible to uncouple a person’s unsafe acts from any institutional responsibility is clearly in the interests of managers. It is also legally more convenient

However, the scapegoat approach has a couple of serious problems that hinder effective risk management.

Firstly, an organization depends on its frontline staff to report any problems or lapses. Clearly, staff will do so only if they feel it is safe to do so – something that is simply not possible in an organization that takes a scapegoat approach. The author suggests that the Chernobyl disaster can be attributed to the lack of a “reporting culture” within the erstwhile Soviet Union.

Secondly, and perhaps more importantly, the focus on a scapegoat leaves the underlying cause of the error unaddressed. As the author puts it, “by focusing on the individual origins of error it [the scapegoat approach] isolates unsafe acts from their system context.” As a consequence, the scapegoat approach overlooks systemic features of errors – for example, the empirical fact that the same kinds of errors tend to recur within a given system.

The system approach accepts that human errors will happen. However, in contrast to the scapegoat approach, it views these errors as being triggered by factors that are built into the system. So, when something goes wrong, the system approach focuses on the procedures that were being followed rather than the people who were executing them. This shift in focus makes a world of difference.

The system approach looks for generic reasons why errors or accidents occur. Organisations usually have a series of measures in place to prevent errors – e.g. alarms, procedures, checklists, trained staff etc. Each of these measures can be looked upon as a “defensive layer” against error. However, as the author notes, each defensive layer has holes which can let errors “pass through” (more on how the holes arise a bit later).  A good way to visualize this is as a series of slices of Swiss Cheese (see Figure 1).

The important point is that the holes on a given slice are not at a fixed position; they keep opening, closing and even shifting around, depending on the state of the organization. An error slips through when the ephemeral holes on the different layers temporarily line up.
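The paper describes this mechanism purely in words, but it is easy to illustrate with a toy simulation. The following minimal Python sketch is my own illustration, not something from Reason’s paper: it assumes, purely for the sake of the example, that each defensive layer independently has a hole open with some small probability (the made-up parameter p_hole), and it counts how often the holes happen to line up across every layer at once.

```python
import random


def simulate_swiss_cheese(n_layers=4, p_hole=0.1, n_trials=1_000_000, seed=42):
    """Toy simulation of the Swiss Cheese model (illustrative only).

    Each defensive layer is assumed to have a 'hole' open with
    probability p_hole at the instant an error arrives. The error
    reaches the outside world only if every layer has a hole open
    at the same time -- i.e. the holes momentarily line up.
    """
    random.seed(seed)
    breaches = 0
    for _ in range(n_trials):
        # The error gets through only if all layers fail simultaneously.
        if all(random.random() < p_hole for _ in range(n_layers)):
            breaches += 1
    return breaches / n_trials


if __name__ == "__main__":
    for layers in (1, 2, 3, 4):
        rate = simulate_swiss_cheese(n_layers=layers)
        print(f"{layers} layer(s): breach rate ~ {rate:.6f}")
```

Even this toy version makes the point of layered defences: each extra slice cuts the breach rate by roughly a factor of p_hole. The big simplification is the independence assumption – in a real organisation, latent conditions such as understaffing or unrealistic targets tend to open holes in several layers at once, which is precisely why Reason directs attention at the system rather than at individuals.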

There are two reasons why holes arise in defensive layers:

  1. Active errors: These are unsafe acts committed by individuals. Active errors could be violations of set procedures or momentary lapses. The scapegoat approach focuses on identifying the active error and the person responsible for it. However, as the author points out, active errors are almost always caused by conditions built into the system, which brings us to…
  2. Latent conditions: These are flaws that are built into the system. The author uses the term resident pathogens to describe these – a nice metaphor that I have explored in a paper review I wrote some years ago. These “pathogens” are usually baked into the system by poor design decisions and flawed procedures on the one hand, and ill-thought-out management decisions on the other. Manifestations of the former include faulty alarms, unrealistic or inconsistent procedures or poorly designed equipment; manifestations of the latter include things such as unrealistic targets, overworked staff and the lack of funding for appropriate equipment.

The important thing to note is that latent conditions can lie dormant for a long period before they are noticed. Typically, a latent condition comes to light only when an error caused by it occurs… and only if the organization does a root cause analysis of the error – something that is simply not done in an organization that takes a scapegoat approach.

The author draws a nice analogy that clarifies the link between active errors and latent conditions:

…active failures are like mosquitoes. They can be swatted one by one, but they still keep coming. The best remedies are to create more effective defences and to drain the swamps in which they breed. The swamps, in this case, are the ever present latent conditions.

“Draining the swamp” is not a simple task. The author draws upon studies of high-performance organisations (combat units, nuclear power plants and air traffic control centres) to understand how they minimised active errors by reducing system flaws. He notes that these organisations:

  1. Accept that errors will occur despite standardised procedures, and train their staff to deal with and learn from them.
  2. Practice responses to known error scenarios and try to imagine new ones on a regular basis.
  3. Delegate responsibility and authority, especially in crisis situations.
  4. Do a root cause analysis of any error that occurs and address the underlying problem by changing the system if needed.

In contrast, an organisation that takes a scapegoat approach assumes that standardisation will eliminate errors, ignores the possibility of novel errors occurring, centralises control and, above all, focuses on finding scapegoats instead of fixing the system.

Acknowledgement:

Figure 1 was taken from the Patient Safety Education website of Duke University Hospital.

Further reading:

The Swiss Cheese model was first proposed in 1991. It has since been applied in many areas. Here are a couple of recent applications and extensions of the model to project management:

  1. Stephen Duffield and Jon Whitty use the Swiss Cheese model as a basis for their model of Systemic Lessons Learned and Knowledge Captured (SLLKC model) in projects.
  2. In this post, Paul Culmsee extends the SLLKC model to incorporate aspects relating to teams and collaboration.

Written by K

July 29, 2014 at 8:43 pm

Elephants in the room: seven reasons why project risks are ignored


Project managers know from experience that projects can go wrong because of events that weren’t foreseen. Some of these may be unforeseeable – that is, they could not have been anticipated given what was known prior to their occurrence. On the other hand, it is surprisingly common for known risks to be ignored. The metaphor of the elephant in the room is appropriate here because these risks are quite obvious to outsiders, but apparently not to those involved in the project. This is a strange state of affairs because:

  1. Those involved in the project are best placed to “see the elephant”.
  2. They are directly affected when the elephant goes on a rampage – i.e. when the risk eventuates.

This post discusses reasons why these metaphorical pachyderms are ignored by those who most need to recognize their existence.

Let’s get right into it then – seven reasons why risks are ignored on projects:

1. Let sleeping elephants lie: This is a situation in which stakeholders are aware of the risk, but don’t do anything about it in the hope that it will not eventuate. Consequently, they have no idea how to handle it if it does. Unfortunately, as Murphy assures us, sleeping elephants will wake at the most inconvenient moment.

2. It’s not my elephant: This is a situation where no one is willing to take responsibility for managing the risk. This game of “pass the elephant” is resolved by handing charge of the elephant to a reluctant mahout.

3. Deny the elephant’s existence: This often manifests itself as a case of collective (and wilful) blindness to obvious risks. No one acknowledges the risk, perhaps out of fear that they will be handed responsibility for it (see point 2 above).

4. The elephant has powerful friends: This is a pathological situation where some stakeholders (often those with clout) actually increase the likelihood of a risk through bad decisions. A common example of this is the imposition of arbitrary deadlines, based on fantasy rather than fact.

5. The elephant might get up and walk away: This is wishful thinking, where the team assumes that the risk will magically disappear. This is the “hope and pray” method of risk management, quite common in some circles.

6. The elephant’s not an elephant: This is a situation where a risk is mistaken for an opportunity. Yes, this does happen. An example is when a new technology is used on a project: some team members may see it as an opportunity, but in reality it may pose a risk.

7. The elephant’s dead: This is exemplified by the response, “that is no longer a problem,” when asked about the status of a risk.  The danger in these situations is that the elephant may only be fast asleep, not dead.

Risks that are ignored are the metaphorical pachyderms in the room. Ignoring them is easy because it involves no effort whatsoever. However, it is a strategy fraught with danger: once these risks eventuate, they can – like those apparently invisible elephants – run amok and wreak havoc on projects.

Written by K

January 20, 2011 at 10:52 pm
