Eight to Late

Sensemaking and Analytics for Organizations

Archive for the ‘Project Management’ Category

The map and the territory – a project manager’s reflections on the Seven Bridges Walk

with one comment

Korzybski’s aphorism about the gap between the map and the territory tells a truth that is best understood by walking the territory.

The map

Some weeks ago my friend John and I did the Seven Bridges Walk, a 28 km affair organised annually by the NSW Cancer Council.  The route loops around a section of the Sydney shoreline, taking in north shore and city vistas, traversing seven bridges along the way. I’d been thinking about doing the walk for some years but couldn’t find anyone interested enough to commit a Sunday.  A serendipitous conversation with John a few months ago changed that.

John and I are both in reasonable shape as we are keen bushwalkers. However, the ones we do are typically in the 10 – 15 km range.  Seven Bridges, being about double that, presented a higher order challenge.  The best way to allay our concerns was to plan. We duly got hold of a map and worked out a schedule based on an average pace of 5 km per hour (including breaks), a figure that seemed reasonable at the time (Figure 1 – click on images to see full sized versions).

Figure 1:The map, the plan

Some key points:

  1. We planned to start around 7:45 am at Hunters Hill Village and have our first break at Lane Cove Village, 5 to 6 km from the starting point. Our estimated time for this section was about an hour.
  2. The plan was to take the longer, more interesting route (marked in green), which covered bushland and parks rather than roads. The detours begin at sections of the walk marked as “Decision Points” on the map and add a couple of kilometers to the walk, making it a round 30 km overall.
  3. If needed, we would stop at the 9 or 11 km mark (Wollstonecraft or Milson’s Point) for another break before heading on towards the city.
  4. We figured it would take us 4 to 5 hours (including breaks) to do the 18 km from Hunters Hill to Pyrmont Village in the heart of the city, so lunch would be between noon and 1 pm.
  5. The backend of the walk, the ~ 10 km from Pyrmont to Hunters Hill, would be covered at an easier pace in the afternoon. We thought this section would take us 2.5 to 3 hours giving us a finish time of around 4 pm.

A planned finish time of 4 pm meant we had enough extra time in hand if we needed it. We were very comfortable with what we’d charted out on the map.

The territory

We started on time and made our first crossing at around 8am:  Fig Tree Bridge, about a kilometer from the starting point.  John took this beautiful shot from one end, the yellow paintwork and purple Jacaranda set against the diffuse light off the Lane Cove River.

Figure 2: Lane Cove River from Fig Tree Bridge

Looking city-wards from the middle of the bridge, I got this one of a couple of morning kayakers.

Figure 3: Morning kayakers on the river

Scenes such as these convey a sense of what it was like to experience the territory, something a map cannot do.  The gap between the map and the territory is akin to the one between a plan and a project; the lived experience of a project is very different from the plan, and is also unique to each individual. Jon Whitty and Bronte van der Hoorn explore this at length in a fascinating paper that relates the experience of managing a project to the philosophy of Martin Heidegger.

The route then took us through a number of steep (but mercifully short) sections in the Lane Cove and Wollstonecraft area.  On researching these later, I was gratified to find that three are featured in the Top 10 Hill runs in Lane Cove.   Here’s a Google Street View shot of the top ranked one.  Though it doesn’t look like much, it’s not the kind of gradient you want to encounter in a long walk.

Figure 4: A bit of a climb

As we negotiated these sections, it occurred to me that part of the fun lay in not knowing they were coming up.  It’s often better not to anticipate challenges that are an unavoidable feature of the territory and deal with them as they arise.  Just to be clear, I’m talking  about routine challenges that are part of the territory, not those that are avoidable or have the potential to derail a project altogether.

It was getting to be time for that planned first coffee break. When drawing up our plan, we had assumed that all seven starting points (marked in blue in the map in Figure 1) would have cafes.  Bad assumption: the starting points were set off from the main commercial areas. In retrospect, this makes good sense: you don’t want to have thousands of walkers traipsing through a small commercial area, disturbing the peace of locals enjoying a Sunday morning coffee. Whatever the reason, the point is that a taken-for-granted assumption turned out to be wrong; we finally got our first coffee well past the 10 km mark.

Post coffee, as we continued city-wards through Lavender Street we got this unexpected view:

Figure 5: Harbour Bridge from Lavender St.

The view was all the sweeter because we realised we were close to the Harbour, well ahead of schedule (it was a little after 10 am).

The Harbour Bridge is arguably the most recognisable Sydney landmark.  So instead of yet another stereotypical shot of it, I took one that shows a walker’s perspective while crossing it:

Figure 6: A pedestrian’s view of The Bridge

The barbed wire and mesh fencing detract from what would be an absolutely breathtaking view. According to this report, the fence has been in place for safety reasons since 1934!  And yes, as one might expect, it is a sore point with tourists who come from far and wide to see the bridge.

Descriptions of things – which are but maps of a kind – often omit details that are significant. Sometimes this is done to downplay negative aspects of the object or event in question. How often have you, as a project manager, “dressed-up” reports to your stakeholders?  Not outright lies, but stretching the truth. I’ve done it often enough.

The section south of The Bridge took us through parks surrounding the newly developed Barangaroo precinct, which hugs the northern shoreline of the Sydney central business district. Another kilometer, and we were at crossing #3, the Pyrmont Bridge in Darling Harbour:

Figure 7: Pyrmont Bridge

Though almost an hour and half ahead of schedule, we took a short break for lunch at Darling Harbour before pressing on to Balmain and Anzac Bridge.  John took this shot looking upward from Anzac Bridge:

Figure 8: View looking up from Anzac Bridge

Commissioned in 1995,  it replaced the Glebe Island Bridge, an electrically operated swing bridge constructed in 1903, which remained the main route from the city out to the western suburbs for over 90 years! As one might imagine, as the number of vehicles in the city increased many-fold from the 60s onwards, the old bridge became a major point of congestion. The Glebe Island Bridge,  now retired, is a listed heritage site.

Incidentally, this little nugget of history was related to me by John as we walked this section of the route. It’s something I would almost certainly have missed had he not been with me that day. Journeys, real and metaphoric, are often enriched by travelling companions who point out things or fill in context  that would otherwise be passed over.

Once past Anzac Bridge, the route took us off the main thoroughfare through the side streets of Rozelle. Many of these are lined by heritage buildings. Rozelle is in the throes of change as it is going to be impacted by a major motorway project.

The project reflects a wider problem in Australia: the relative neglect of public transport compared to road infrastructure. The counter-argument is that the relatively small population of the country makes the capital investments and running costs of public transport prohibitive. A wicked problem with no easy answers, but I do believe that the more sustainable option, though more expensive initially, will prove to be the better one in the long run.

Wicked problems are expected in large infrastructure projects that affect thousands of stakeholders, many of whom will have diametrically opposing views.  What is less well appreciated is that even much smaller projects – say IT initiatives within a large organisation – can have elements of wickedness that can trip up the unwary.  This is often magnified by management decisions  made on the basis of short-term expediency.

From the side streets of Rozelle, the walk took us through Callan Park, which was the site of a psychiatric hospital from 1878 to 1994 (see this article for a horrifying history of asylums in Sydney).  Some of the asylum buildings are now part of the Sydney College of The Arts.  Pending the establishment of a trust to manage ongoing use of the site, the park is currently managed by the NSW Government in consultation with the local municipality.

Our fifth crossing of the day was Iron Cove Bridge.  The cursory shot I took while crossing it does not do justice to the view; the early afternoon sun was starting to take its toll.

Figure 9: View from Iron Cove Bridge

The route then took us about a kilometer and a half through the backstreets of Drummoyne to the penultimate crossing: Gladesville Bridge, whose claim to fame is that it was for many years the longest single span concrete arch bridge in the world (another historical vignette that came to me via John). It has since been superseded by the Qinglong Railway Bridge in China.

By this time I was feeling quite perky, cheered perhaps by the realisation that we were almost done.  I took time to compose perhaps my best shot of the day as we crossed Gladesville Bridge.

Figure 10: View from Gladesville Bridge

…and here’s one of the aforementioned arch, taken from below the bridge:

Figure 11: A side view of Gladesville Bridge

The final crossing, Tarban Creek Bridge, was a short 100 metre walk from Gladesville Bridge. We lingered mid-bridge to take a few shots as we realised the walk was coming to an end; the finish point was a few hundred metres away.

Figure 12: View from Tarban Creek Bridge

We duly collected our “Seven Bridges Completed” stamp at around 2:30 pm and headed to the local pub for a celebratory pint.

Figure 13: A well-deserved pint

Wrapping up

Gregory Bateson once wrote:

“We say the map is different from the territory. But what is the territory? Operationally, somebody went out with a retina or a measuring stick and made representations which were then put upon paper. What is on the paper map is a representation of what was in the retinal representation of the [person] who made the map; and as you push the question back, what you find is an infinite regress, an infinite series of maps. The territory never gets in at all. The territory is [the thing in itself] and you can’t do anything with it. Always the process of representation will filter it out so that the mental world is only maps of maps of maps, ad infinitum.”

One might think that a solution lies in making ever more accurate representations, but that is an exercise in futility. Indeed, as Borges pointed out in a short story:

“… In that Empire, the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province. In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it. The following Generations, who were not so fond of the Study of Cartography as their Forebears had been, saw that that vast map was Useless…”

Apart from being impossibly cumbersome, a complete map of a territory is impossible because a representation can never be the real thing. The territory remains forever ineffable; every encounter with it is unique and has the potential to  reveal new perspectives.

This is as true for a project as it is for a walk or any other experience.

Written by K

November 27, 2017 at 1:33 pm

Posted in Project Management

The law of requisite variety and its implications for enterprise IT

with 4 comments

Introduction

There are two facets to the operation of IT systems and processes in organisations: governance, which covers the standards and regulations associated with a system or process; and execution, which relates to steering the actual work of the system or process in specific situations.

An example might help clarify the difference:

The purpose of project management is to keep projects on track. There are two aspects to this: one pertaining to the project management office (PMO), which is responsible for standards and regulations associated with managing projects in general, and the other relating to the day-to-day work of steering a particular project. The two sometimes work at cross-purposes. For example, successful project managers know that much of their work is about navigating their projects through the potentially treacherous terrain of their organisations, an activity that sometimes necessitates working around, or even breaking, rules set by the PMO.

Governance and steering share a common etymological root: the word kybernetes, which means steersman in Greek. It also happens to be the root of cybernetics, the science of regulation or control. In this post, I apply a key principle of cybernetics to a couple of areas of enterprise IT.

Cybernetic systems

An oft quoted example of a cybernetic system is a thermostat, a device that regulates temperature based on inputs from the environment.  Most cybernetic systems are way more complicated than a thermostat. Indeed, some argue that the Earth is a huge cybernetic system. A smaller scale example is a system consisting of a car + driver wherein a driver responds to changes in the environment thereby controlling the motion of the car.

Cybernetic systems vary widely not just in size, but also in complexity. A thermostat is concerned only with the ambient temperature, whereas the driver of a car has to worry about a lot more (e.g. the weather, traffic, the condition of the road, kids squabbling in the back seat). In general, the more complex the system and its processes, the larger the number of variables associated with it. Put another way, complex systems must be able to deal with a greater variety of disturbances than simple systems.

The law of requisite variety

It turns out there is a fundamental principle – the law of requisite variety – that governs the capacity of a system to respond to changes in its environment. The law is a quantitative statement about the different types of responses that a system needs to have in order to deal with the range of disturbances it might experience.

According to this paper, the law of requisite variety asserts that:

The larger the variety of actions available to a control system, the larger the variety of perturbations it is able to compensate.

Mathematically:

V(E) ≥ V(D) – V(R) – K

Where V represents variety, E the essential variable(s) to be controlled, D the disturbance, R the regulation and K the passive capacity of the system to absorb shocks. The terms are explained in brief below:

  • V(E) represents the variety of desired outcomes for the controlled environmental variable: the desired temperature range in the case of the thermostat, successful outcomes (i.e. projects delivered on time and within budget) in the case of a project management office.
  • V(D) represents the variety of disturbances the system can be subjected to (the ways in which the temperature can change, the external and internal forces on a project).
  • V(R) represents the variety of ways in which a disturbance can be regulated (the regulator in a thermostat, the project tracking and corrective mechanisms prescribed by the PMO).
  • K represents the buffering capacity of the system – i.e. stored capacity to deal with unexpected disturbances.
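The inequality can be made concrete with a toy calculation. The sketch below uses invented numbers (the counts and the helper name are mine, not from any paper) and treats "variety" naively as a count of distinct states, just to show how enlarging the response repertoire V(R) shrinks the variety of unregulated outcomes:

```python
# Toy illustration of the requisite variety inequality V(E) >= V(D) - V(R) - K.
# "Variety" is treated here as a simple count of distinct states/responses;
# all figures are invented for illustration.

def residual_variety(v_disturbance: int, v_regulation: int, buffering: int) -> int:
    """Lower bound on the variety of outcomes the system cannot regulate away."""
    return max(0, v_disturbance - v_regulation - buffering)

# Hypothetical system: 100 distinct disturbance types, a repertoire of
# 80 distinct responses, and passive capacity to absorb 5 more.
print(residual_variety(100, 80, 5))      # 15 disturbance types remain uncompensated

# Doubling the response repertoire drives the residual variety to zero:
# "only variety can absorb variety".
print(residual_variety(100, 160, 5))     # 0
```

The point of the sketch is only the direction of the effect: for a fixed set of disturbances, the outcomes can be confined to a small desired set only by enlarging the regulator's repertoire or the buffer.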

I won’t say any more about the law of requisite variety as it would take me too far afield; the interested and technically minded reader is referred to the link above or this paper for more.

Implications for enterprise IT

In plain English, the law of requisite variety states that only “variety can absorb variety.”  As stated by Anthony Hodgson in an essay in this book, the law of requisite variety:

…leads to the somewhat counterintuitive observation that the regulator must have a sufficiently large variety of actions in order to ensure a sufficiently small variety of outcomes in the essential variables E. This principle has important implications for practical situations: since the variety of perturbations a system can potentially be confronted with is unlimited, we should always try to maximize its internal variety (or diversity), so as to be optimally prepared for any foreseeable or unforeseeable contingency.

This is entirely consistent with our intuitive expectation that the best way to deal with the unexpected is to have a range of tools and approaches at one’s disposal.

In the remainder of this piece, I’ll focus on the implications of the law for an issue that is high on the list of many corporate IT departments: the standardization of  IT systems and/or processes.

The main rationale behind standardizing an IT process is to handle all possible demands (or use cases) via a small number of predefined responses. When put this way, the connection to the law of requisite variety is clear: a request made upon a function such as a service desk or project management office (PMO) is a disturbance, and the way the function regulates or responds to it determines the outcome.

Requisite variety and the service desk

A service desk is a good example of a system that can be standardized. Although users may initially complain about having to log a ticket instead of calling Nathan directly, in time they get used to it, and may even start to see the benefits…particularly when Nathan goes on vacation.

The law of requisite variety tells us that successful standardization requires all possible demands made on the system to be known and covered by the V(R) term in the equation above. In the case of a service desk this is dealt with by a hierarchy of support levels. 1st level support deals with routine calls (incidents and service requests in ITIL terminology) such as system access and simple troubleshooting. Calls that cannot be handled by this tier are escalated to the 2nd and 3rd levels as needed. The assumption here is that, between them, the three support tiers should be able to handle the majority of calls.

Slack (the K term) relates to unexploited capacity. Although needed in order to deal with unexpected surges in demand, slack is expensive to carry when one doesn’t need it. Given this, it makes sense to incorporate such scenarios into the repertoire of standard system responses (i.e. the V(R) term) whenever possible. One way to do this is to anticipate surges in demand and hire temporary staff to handle them. Another is to deal with infrequent scenarios outside the system – i.e. deem them out of scope for the service desk.

Service desk standardization is thus relatively straightforward to achieve provided:

  • The kinds of calls that come in are largely predictable.
  • The work can be routinized.
  • All non-routine work – such as an application enhancement request or a demand for a new system – is dealt with outside the system via (say) a change management process.
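The tiered regulation described above can be sketched as a simple routing rule: each support level covers a known subset of call types, and anything outside the combined repertoire is deemed out of scope (i.e. handled outside the system). The call-type names and tier assignments below are invented for illustration:

```python
# A minimal sketch of tiered service-desk regulation. Each tier's repertoire
# is a set of call types it can handle; calls outside all repertoires fall
# outside the standardized system (e.g. into change management).

TIERS = {
    1: {"password_reset", "system_access", "basic_troubleshooting"},
    2: {"application_error", "data_fix"},
    3: {"infrastructure_fault"},
}

def route(call_type: str) -> str:
    """Return the lowest tier whose repertoire covers the call, else flag it out of scope."""
    for level in sorted(TIERS):
        if call_type in TIERS[level]:
            return f"tier {level}"
    return "out of scope"  # non-routine work, e.g. an enhancement request

print(route("password_reset"))        # tier 1
print(route("infrastructure_fault"))  # tier 3
print(route("new_system_request"))    # out of scope
```

Standardization works here precisely because the union of the tiers' repertoires (the V(R) term) covers nearly all the disturbances the desk actually sees.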

All this will be quite unsurprising and obvious to folks working in corporate IT. Now  let’s see what happens when we apply the law to a more complex system.

Requisite variety and the PMO

Many corporate IT leaders see the establishment of a PMO as a way to control costs and increase efficiency of project planning and execution.   PMOs attempt to do this by putting in place governance mechanisms. The underlying cause-effect assumption is that if appropriate rules and regulations are put in place, project execution will necessarily improve.  Although this sounds reasonable, it often does not work in practice: according to this article, a significant fraction of PMOs fail to deliver on the promise of improved project performance. Consider the following points quoted directly from the article:

  • “50% of project management offices close within 3 years (Association for Project Mgmt)”
  • “Since 2008, the correlated PMO implementation failure rate is over 50% (Gartner Project Manager 2014)”
  • “Only a third of all projects were successfully completed on time and on budget over the past year (Standish Group’s CHAOS report)”
  • “68% of stakeholders perceive their PMOs to be bureaucratic     (2013 Gartner PPM Summit)”
  • “Only 40% of projects met schedule, budget and quality goals (IBM Change Management Survey of 1500 execs)”

The article goes on to point out that the main reason for the statistics above is a gap between what a PMO does and what the business expects it to do. For example, according to the Gartner review quoted in the article, over 60% of the stakeholders surveyed believe their PMOs are overly bureaucratic. I can’t vouch for the veracity of the numbers as I cannot find the original paper. Nevertheless, anecdotal evidence (via various articles and informal conversations) suggests that a significant number of PMOs fail.

There is a curious contradiction between the case of the service desk and that of the PMO. In the former, process and methodology seem to work whereas in the latter they don’t.

Why?

The answer, as you might suspect, has to do with variety.  Projects and service requests are very different beasts. Among other things, they differ in:

  • Duration: A project typically runs over many months, whereas a service request has a lifetime of days.
  • Technical complexity: A project involves many (initially ill-defined) technical tasks that have to be coordinated and whose outputs have to be integrated. A service request typically consists of one (or a small number) of well-defined tasks.
  • Social complexity: A project involves many stakeholder groups with diverse interests and opinions. A service request typically involves considerably fewer stakeholders, with limited conflicts of opinions/interests.

It is not hard to see that these differences make for greater variety in projects than in service requests. The reason standardization (usually) works for service desks but (often) fails for PMOs is that PMOs are subjected to a greater variety of disturbances than service desks.

The key point is that the increased variety in the case of the PMO precludes standardisation. As the law of requisite variety tells us, there are two ways to deal with variety: regulate it or adapt to it. Most PMOs take the regulation route, leading to over-regulation and outcomes that are less than satisfactory. This is exactly what is reflected in the complaint about PMOs being overly bureaucratic. The simple and obvious solution is for PMOs to be more flexible – specifically, they must be able to adapt to the ever-changing demands made upon them by their organisations’ projects. In terms of the law of requisite variety, PMOs need the capacity to change the system response, V(R), on the fly. In practice this means recognising the uniqueness of requests and avoiding the reflex, cookie-cutter responses that characterise bureaucratic PMOs.

Wrapping up

The law of requisite variety is a general principle that applies to any regulated system.  In this post I applied the law to two areas of enterprise IT – service management and project governance – and  discussed why standardization works well  for the former but less satisfactorily for the latter. Indeed, in view of the considerable differences in the duration and complexity of service requests and projects, it is unreasonable to expect that standardization will work well for both.  The key takeaway from this piece is therefore a simple one: those who design IT functions should pay attention to the variety that the functions will have to cope with, and bear in mind that standardization works well only if variety is known and limited.

Written by K

December 12, 2016 at 9:00 pm

Improving decision-making in projects

with 5 comments

An irony of organisational life is that the most important decisions on projects (or any other initiatives) have to be made at the start, when ambiguity is at its highest and information availability lowest. I recently gave a talk at the Pune office of BMC Software on improving decision-making in such situations.

The talk was recorded and simulcast to a couple of locations in India. The folks at BMC very kindly sent me a copy of the recording with permission to publish it on Eight to Late. Here it is:


Based on the questions asked and the feedback received, I reckon that a number of people found the talk  useful. I’d welcome your comments/feedback.

Acknowledgements: My thanks go out to Gaurav Pal, Manish Gadgil and Mrinalini Wankhede for giving me the opportunity to speak at BMC, and to Shubhangi Apte for putting me in touch with them. Finally, I’d like to thank the wonderful audience at BMC for their insightful questions and comments.

The Risk – a dialogue mapping vignette

with one comment

Foreword

Last week, my friend Paul Culmsee conducted an internal workshop in my organisation on the theme of collaborative problem solving. Dialogue mapping is one of the tools he introduced during the workshop.  This piece, primarily intended as a follow-up for attendees,  is an introduction to dialogue mapping via a vignette that illustrates its practice (see this post for another one). I’m publishing it here as I thought it might be useful for those who wish to understand what the technique is about.

Dialogue mapping uses a notation called Issue Based Information System (IBIS), which I have discussed at length in this post. For completeness, I’ll begin with a short introduction to the notation and then move on to the vignette.

A crash course in IBIS

The IBIS notation consists of the following three elements:

  1. Issues (or questions): these are the matters being debated. Typically, issues are framed as questions along the lines of “What should we do about X?” where X is the issue of interest to a group. For example, in the case of a group of executives, X might be a rapidly changing market condition, whereas in the case of a group of IT people, X could be an ageing system that is hard to replace.
  2. Ideas (or positions): these are responses to questions. For example, one of the ideas offered by the IT group above might be to replace the said system with a newer one. Typically, the whole set of ideas that respond to an issue in a discussion represents the spectrum of participant perspectives on the issue.
  3. Arguments: these can be Pros (arguments for) or Cons (arguments against) an idea. The complete set of arguments that respond to an idea represents the multiplicity of viewpoints on it.

Compendium is a freeware tool that can be used to create IBIS maps – it can be downloaded here.

In Compendium, IBIS elements are represented as nodes, as shown in Figure 1: issues are represented by blue-green question marks, positions by yellow light bulbs, pros by green + signs and cons by red – signs. Compendium supports a few other node types, but these are not part of the core IBIS notation. Nodes can be linked only in ways specified by the IBIS grammar, as I discuss next.

Figure 1: IBIS node types

The IBIS grammar can be summarized in three simple rules:

  1. Issues can be raised anew or can arise from other issues, positions or arguments. In other words, any IBIS element can be questioned.  In Compendium notation:  a question node can connect to any other IBIS node.
  2. Ideas can only respond to questions– i.e. in Compendium “light bulb” nodes can only link to question nodes. The arrow pointing from the idea to the question depicts the “responds to” relationship.
  3. Arguments can only be associated with ideas – i.e. in Compendium “+” and “–” nodes can only link to “light bulb” nodes (with arrows pointing to the latter).

The legal links are summarized in Figure 2 below.

Figure 2: Legal links in IBIS

 

…and that’s pretty much all there is to it.
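The three rules above are compact enough to express as a small link-validity table. The sketch below is my own encoding of the grammar, not part of Compendium or any IBIS tool; a link (source → target) is read as "source responds to / argues about / questions target":

```python
# The IBIS grammar as a link validator. Node kinds: "issue", "idea",
# "pro", "con". LEGAL_TARGETS[k] is the set of node kinds a node of
# kind k may point at.

LEGAL_TARGETS = {
    "issue": {"issue", "idea", "pro", "con"},  # rule 1: any element can be questioned
    "idea": {"issue"},                         # rule 2: ideas respond only to issues
    "pro": {"idea"},                           # rule 3: arguments attach only to ideas
    "con": {"idea"},
}

def link_is_legal(source_kind: str, target_kind: str) -> bool:
    """Check whether the IBIS grammar permits a link source -> target."""
    return target_kind in LEGAL_TARGETS[source_kind]

print(link_is_legal("idea", "issue"))  # True: an idea responds to a question
print(link_is_legal("pro", "issue"))   # False: an argument cannot attach to an issue
print(link_is_legal("issue", "pro"))   # True: even an argument can be questioned
```

A tool enforcing these rules would reject, say, an attempt to hang a con directly off a question, which is exactly the discipline that keeps a dialogue map readable.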

The interesting (and powerful) aspect of IBIS is that the essence of any debate or discussion can be captured using these three elements. Let me try to convince you of this claim via a vignette from a discussion on risk.

 The Risk – a Dialogue Mapping vignette

“Morning all,” said Rick, “I know you’re all busy people so I’d like to thank you for taking the time to attend this risk identification session for Project X.  The objective is to list the risks that we might encounter on the project and see if we can identify possible mitigation strategies.”

He then asked if there were any questions. The head waggles around the room indicated that there were none.

“Good. So here’s what we’ll do,”  he continued. “I’d like you all to work in pairs and spend 10 minutes thinking of all possible risks and then another 5 minutes prioritising.  Work with the person on your left. You can use the flipcharts in the breakout area at the back if you wish to.”

Twenty minutes later, most people were done and back in their seats.

“OK, it looks as though most people are done…Ah, Joe, Mike have you guys finished?” The two were still working on their flip-chart at the back.

“Yeah, be there in a sec,” replied Mike, as he tore off the flip-chart page.

“Alright,” continued Rick, after everyone had settled in. “What I’m going to do now is ask you all to list your top three risks. I’d also like you tell me why they are significant and your mitigation strategies for them.” He paused for a second and asked, “Everyone OK with that?”

Everyone nodded, except Helen who asked, “isn’t it important that we document the discussion?”

“I’m glad you brought that up. I’ll make notes as we go along, and I’ll do it in a way that everyone can see what I’m writing. I’d like you all to correct me if you feel I haven’t understood what you’re saying. It is important that  my notes capture your issues, ideas and arguments accurately.”

Rick turned on the data projector, fired up Compendium and started a new map.  “Our aim today is to identify the most significant risks on the project – this is our root question”  he said, as he created a question node. “OK, so who would like to start?”

 

 

Figure 3: The root question

 

“Sure, we’ll start,” said Joe easily. “Our top risk is that the schedule is too tight. We’ll hit the deadline only if everything goes well, and everyone knows that they never do.”

“OK,” said Rick, as he entered Joe and Mike’s risk as an idea connecting to the root question. “You’ve also mentioned a point that supports your contention that this is a significant risk – there is absolutely no buffer.” Rick typed this in as a pro connecting to the risk. He then looked up at Joe and asked, “Have I understood you correctly?”

“Yes,” confirmed Joe.

 

Figure 4: Map in progress

 

“That’s pretty cool,” said Helen from the other end of the table, “I like the notation, it makes reasoning explicit. Oh, and I have another point in support of Joe and Mike’s risk – the deadline was imposed by management before the project was planned.”

Rick began to enter the point…

“Oooh, I’m not sure we should put that down,” interjected Rob from compliance. “I mean, there’s not much we can do about that can we?”

…Rick finished the point as Rob was speaking.

 

Figure 5: Two pros for the idea

 

“I hear you Rob, but I think  it is important we capture everything that is said,” said Helen.

“I disagree,” said Rob. “It will only annoy management.”

“Slow down, guys,” said Rick. “I’m going to capture Rob’s objection as ‘this is a management-imposed constraint rather than a risk’. Are you OK with that, Rob?”

Rob nodded his assent.

 

Figure 6: A con enters the picture

“I think it is important we articulate what we really think, even if we can’t do anything about it,” continued Rick. “There’s no point going through this exercise if we don’t say what we really think. I want to stress this point, so I’m going to add honesty and openness as ground rules for the discussion. Since ground rules apply to the entire discussion, they connect directly to the primary issue being discussed.”

Figure 7: A “criterion” that applies to the analysis of all risks

 

“OK, so are there any other points that anyone would like to add to the ones made so far?” queried Rick as he finished typing.

He looked up. Most of the people seated around the table shook their heads, indicating that there weren’t.

“We haven’t spoken about mitigation strategies. Any ideas?” asked Rick, as he created a question node marked “Mitigation?” connecting to the risk.

 

Figure 8: Mitigating the risk

“Yeah well, we came up with one,” said Mike. “We think the only way to reduce the time pressure is to cut scope.”

“OK,” said Rick, entering the point as an idea connecting to the “Mitigation?” question. “Did you think about how you are going to do this?” He entered the question “How?” connecting to Mike’s point.

Figure 9: Mitigating the risk

 

“That’s the problem,” said Joe, “I don’t know how we can convince management to cut scope.”

“Hmmm…I have an idea,” said Helen slowly…

“We’re all ears,” said Rick.

“…Well…you see a large chunk of time has been allocated for building real-time interfaces to assorted systems – HR, ERP etc. I don’t think these need to be real-time – they could be done monthly…and if that’s the case, we could schedule a simple job or even do them manually for the first few months. We can push those interfaces to phase 2 of the project, well into next year.”

There was a silence in the room as everyone pondered this point.

“You know, I think that might actually work, and would give us an extra month…maybe even six weeks for the more important upstream stuff,” said Mike. “Great idea, Helen!”

“Can I summarise this point as – identify interfaces that can be delayed to phase 2?” asked Rick, as he began to type it in as a mitigation strategy. “…and if you and Mike are OK with it, I’m going to combine it with the ‘Cut Scope’ idea to save space.”

“Yep, that’s fine,” said Helen. Mike nodded OK.

Rick deleted the “How?” node connecting to the “Cut scope” idea, and edited the latter to capture Helen’s point.

Figure 10: Mitigating the risk

“That’s great in theory, but who is going to talk to the affected departments? They will not be happy,” asserted Rob. One could always count on compliance to throw in a reality check.

“Good point,” said Rick as he typed that in as a con, “and I’ll take the responsibility of speaking to the department heads about this,” he continued, entering the idea into the map and marking it as an action point for himself. “Is there anything else that Joe, Mike…or anyone else would like to add here?” he added, as he finished.

Figure 11: Completed discussion of first risk (click to view larger image)

“Nope,” said Mike, “I’m good with that.”

“Yeah me too,” said Helen.

“I don’t have anything else to say about this point,” said Rob, “but it would be great if you could give us a tutorial on this technique. I think it could be useful for summarising the rationale behind our compliance regulations. Folks have been complaining that they don’t understand the reasoning behind some of our rules and regulations.”

“I’d be interested in that too,” said Helen, “I could use it to clarify user requirements.”

“I’d be happy to do a session on the IBIS notation and dialogue mapping next week. I’ll check your availability and send an invite out… but for now, let’s focus on the task at hand.”

The discussion continued…but the fly on the wall was no longer there to record it.

Afterword

I hope this little vignette illustrates how IBIS and dialogue mapping can aid collaborative decision-making / problem solving by making diverse viewpoints explicit. That said, this is a story, and the problem with stories is that things go the way the author wants them to. In real life, conversations can go off on unexpected tangents, making them really hard to map. So, although it is important to gain expertise in using the software, it is far more important to practice mapping live conversations. The latter is an art that requires considerable practice. I recommend reading Paul Culmsee’s series on the practice of dialogue mapping or <advertisement> Chapter 14 of The Heretic’s Guide to Best Practices</advertisement> for more on this point.

Nevertheless, there are many other ways of using IBIS that do not require as much skill. Some of these include mapping the central points of written arguments (what’s sometimes called issue mapping) and even decisions on personal matters.

To sum up: IBIS is a powerful means to clarify options and lay them out in an easy-to-follow visual format. Often this is all that is required to catalyse a group decision.
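For readers who like to tinker, the structure of an IBIS map is simple enough to capture in a few lines of code. The sketch below is my own illustration (it is not Compendium’s data model or API): three kinds of nodes – questions, ideas, and pro/con arguments – linked into a tree, rebuilding the opening moves of the vignette’s map.

```python
from dataclasses import dataclass, field

# IBIS has three node types: questions (issues), ideas (responses to
# questions), and arguments (pros / cons attached to ideas).
@dataclass
class Node:
    kind: str                      # "question", "idea", "pro" or "con"
    text: str
    children: list = field(default_factory=list)

    def add(self, kind, text):
        """Attach a child node and return it, so maps can be built fluently."""
        child = Node(kind, text)
        self.children.append(child)
        return child

    def render(self, indent=0):
        """Return the map as a list of indented outline lines."""
        lines = [" " * indent + f"[{self.kind}] {self.text}"]
        for child in self.children:
            lines.extend(child.render(indent + 2))
        return lines

# Rebuild the start of the vignette's map:
root = Node("question", "What are the most significant risks on the project?")
risk = root.add("idea", "The schedule is too tight")
risk.add("pro", "There is absolutely no buffer")
risk.add("pro", "Deadline imposed before the project was planned")
risk.add("con", "A management-imposed constraint rather than a risk")

print("\n".join(root.render()))
```

Even this toy version makes the key property of IBIS visible: every argument hangs off a specific idea, and every idea answers a specific question, so the reasoning structure is explicit rather than buried in meeting minutes.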

Sherlock Holmes and the case of the management fetish

with 2 comments

As narrated by Dr. John Watson, M.D.

As my readers are undoubtedly aware,  my friend Sherlock Holmes is widely feted for his powers of logic and deduction.  With all due modesty, I can claim to have played a small part in publicizing his considerable talents, for I have a sense for what will catch the reading public’s fancy and, perhaps more important, what will not. Indeed, it could be argued  that his fame is in no small part due to the dramatic nature of the exploits which I have chosen to publicise.

Management consulting, though far more lucrative than criminal investigation, is not nearly as exciting. Consequently my work has become that much harder since Holmes reinvented himself as a management expert. Nevertheless, I am firmly of the opinion that the long-standing myths exposed by his recent work more than make up for any lack of suspense or drama.

A little known fact is that many of Holmes’ insights into flawed management practices have come after the fact, by discerning common themes that emerged from different cases. Of course this makes perfect sense:  only after seeing the same (or similar) mistake occur in a variety of situations can one begin to perceive an underlying pattern.

The conversation I had with him last night  is an excellent illustration of this point.

We were having dinner at Holmes’ Baker Street abode  when, apropos of nothing, he remarked, “It’s a strange thing, Watson, that our lives are governed by routine. For instance, it is seven in the evening, and here we are having dinner, much like we would on any other day.”

“Yes, it is,” I said, intrigued by his remark.  Dabbing my mouth with a napkin, I put down my fork and waited for him to say more.

He smiled. “…and do you think that is a good thing?”

I thought about it for a minute before responding. “Well, we follow routine because we like…or need… regularity and predictability,” I said. “Indeed, as a medical man, I know well that our bodies have built-in clocks that drive us to do things – such as eat and sleep – at regular intervals. That apart, routines give us a sense of comfort and security in an unpredictable world. Even those who are adventurous have routines of their own. I don’t think we have a choice in the matter, it’s the way humans are wired.” I wondered where the conversation was going.

Holmes cocked an eyebrow. “Excellent, Watson!” he said. “Our propensity for routine is quite possibly a consequence of our need for security and comfort ….but what about the usefulness of routines – apart from the sense of security we get from them?”

“Hmmm…that’s an interesting question. I suppose a routine must have a benefit, or at least a perceived benefit…else it would not have been made into a routine.”

“Possibly,” said Holmes, “but let me ask you another question. You remember the case of the failed projects, do you not?”

“Yes, I do,” I replied. Holmes’ abrupt conversational U-turns no longer disconcert me, I’ve become used to them over the years. I remembered the details of the case like it had happened yesterday…indeed I should, as it was I who wrote the narrative!

“Did anything about the case strike you as strange?” he inquired.

I mulled over the case, which (in hindsight) was straightforward enough. Here are the essential facts:

The organization suffered from a high rate of project failure (about 70% as I recall). The standard prescription – project post-mortems followed by changes in processes aimed at addressing the top issues revealed – had failed to resolve the issue. Holmes’ insightful diagnosis was that the postmortems identified symptoms, not causes.  Therefore the measures taken to fix the problems didn’t work because they did not address the underlying cause. Indeed, the measures were akin to using brain surgery to fix a headache.  In the end, Holmes concluded that the failures were a consequence of flawed organizational structures and norms.

Of course flawed structures and norms are beyond the purview of a mere project or  program manager. So Holmes’ diagnosis, though entirely correct, did not help Bryant (the manager who had consulted us).

Nothing struck me as unduly strange as I went over the facts mentally. “No,” I replied, “but what on earth does that have to do with routine?”

He smiled. “I will explain presently, but I have yet another question for you before I do so.  Do you remember one of our earliest management consulting cases – the affair of the terminated PMO?”

I replied in the affirmative.

“Well then, you see the common thread running through the two cases, don’t you?” Seeing my puzzled look, he added, “Think about it for a minute, Watson, while I go and fetch dessert.”

He went into the kitchen, leaving me to ponder his question.

The only commonality I could see was the obvious one – both cases were related to the failure of PMOs. (Editor’s note: PMO = Project Management Office)

He returned with dessert a few minutes later. “So, Watson,” he said as he sat down, “have you come up with anything?”

I told him what I thought.

“Capital, Watson! Then you will, no doubt, have asked yourself the obvious next question.”

I saw what he was getting at. “Yes! The question is: can this observation be generalised? Do the majority of PMOs fail?”

“Brilliant, Watson. You are getting better at this by the day.” I know Holmes does not intend to sound condescending, but the sad fact is that he often does. “Let me tell you,” he continued, “research suggests that 50% of PMOs fail within three years of being set up. My hypothesis is that the failure rate would be considerably higher if the timeframe were increased to five or seven years. What’s even more interesting is that there is a single overriding complaint about PMOs: the majority of stakeholders surveyed felt that their PMOs are overly bureaucratic, and generally hinder project work.”

“But isn’t that contrary to the aim of a  PMO – which, as I understand, is to facilitate project work?” I queried.

“Excellent, my dear Watson. You are getting close to the heart of the matter.

“I am?”  To be honest, I was a little lost.

“Ah Watson, don’t tell me you do not see it,” said Holmes exasperatedly.

“I’m afraid you’ll have to explain,” I replied curtly. Really, he could be insufferable at times.

“I shall do my best. You see, there is a fundamental contradiction between the stated mission and actual operation of a typical PMO.  In theory, they are supposed to facilitate projects, but as far as executive management is concerned this is synonymous with overseeing and controlling projects. What this means is that in practice, PMOs inevitably end up policing project work rather than facilitating it.”

I wasn’t entirely convinced. “Maybe the reason PMOs fail is that organisations do not implement them correctly,” I said.

“Ah, the famous escape clause used by purveyors of best practices – if our best practice doesn’t work, it means you aren’t implementing it correctly. Pardon me while I choke on my ale, because that is utter nonsense.”

“Why?”

“Well, one would expect after so many years, these so-called implementation errors would have been sorted out. Yet we see the same poor outcomes over and over again,” said Holmes.

“OK, but then why are PMOs still so popular with management?”

“Now we come to the crux of the matter, Watson,” he said, a tad portentously. “They are popular for the reasons we spoke of at the start of this conversation – comfort and security.”

“Comfort and security? I have no idea what you’re talking about.”

“Let me try explaining this in another way,” he said. “When you were a small child, you must have had some object that you carried around everywhere…a toy, perhaps…did you not?”

“I’m not sure I should tell you this, Holmes, but yes, I had a blanket.”

“A security blanket, I would never have guessed, Watson,” smiled Holmes. “…but as it happens, that’s a perfect example, because PMOs and the methodologies they enforce are security blankets. They give executives and frontline managers a sense that they are doing something concrete and constructive to manage uncertainty…even though they actually aren’t. PMOs are popular, not because they work (and indeed, we’ve seen they don’t) but because they help managers contain their anxiety about whether things will turn out right. I would not be exaggerating if I said that PMOs and the methodologies they evangelise are akin to lucky charms or fetishes.”

“That’s a strong statement to make on rather slim grounds,” I said dubiously.

“Is it? Think about it, Watson,” he shot back, with a flash of irritation. “Many (though I should admit, not all) PMOs and methodologies prescribe excruciatingly detailed procedures to follow and templates to fill in when managing projects. For many (though again, not all) project managers, managing a project is synonymous with following these rituals. Such managers attempt to force-fit reality into standardised procedures and documents. But tell me, Watson – how can such project management by ritual work when no two projects are the same?”

“Hmm….”

“That is not all, Watson,” he continued, before I could respond, “PMOs and methodologies enable people to live in a fantasy world where everything seems to be under control. Methodology fetishists will not see the gap between their fantasy world and reality, and will therefore miss opportunities to learn. They follow rituals that give them security and an illusion of efficiency, but at the price of a genuine engagement with people and projects.”

“I’ll have to think about it,” I said.

“You do that,” he replied, as he pushed back his chair and started to clear the table. Unlike him, I had a lot more than dinner to digest. Nevertheless, I rose to help him as I do every day.

Evening conversations at 221B Baker Street are seldom boring. Last night was no exception.

Acknowledgement:

This tale was inspired by David Wastell’s brilliant paper, The fetish of technique: methodology as social defence (abstract only).

Written by K

April 29, 2015 at 8:37 pm

The danger within: internally-generated risks in projects

with 2 comments

Introduction

In their book, Waltzing with Bears, Tom DeMarco and Timothy Lister coined the phrase, “risk management is project management for adults”. Twenty years on, it appears that their words have been taken seriously: risk management now occupies a prominent place in project management bodies of knowledge (BOKs), and has also become a key element of project management practice.

On the other hand, if the evidence is to be believed (as per the oft-quoted Chaos Report, for example), IT projects continue to fail at an alarming rate. This is curious because one would have expected that a greater focus on risk management ought to have resulted in better outcomes. So, is it possible that risk management (as it is currently preached and practiced in IT project management) cannot address certain risks…or, worse, that certain risks are simply not recognized as risks?

Some time ago, I came across a paper by Richard Barber that sheds some light on this very issue. This post elaborates on the nature and importance of such “hidden” risks by drawing on Barber’s work as well as my experiences and those of my colleagues with whom I have discussed the paper.

What are internally generated risks?

The standard approach to risk is based on the occurrence of events. Specifically, risk management is concerned with identifying potential adverse events and taking steps to  reduce either their probability of occurrence or their impact. However, as Barber points out, this is a limited view of risk because it overlooks adverse conditions that are built into the project environment. A good example of this is an organizational norm that centralizes decision making at the corporate or managerial level. Such a norm would discourage a project manager from taking appropriate action when confronted with an event that demands an on-the-spot decision.  Clearly, it is wrong-headed to attribute the risk to the event because the risk actually has its origins in the norm. In other words, it is an internally generated risk.

(Note: the notion of an internally generated risk is akin to the risk as a pathogen concept that I discussed in this post many years ago.)

Barber defines an internally generated risk as one that has its origin within the project organisation or its host, and arises from [preexisting] rules, policies, processes, structures, actions, decisions, behaviours or cultures. Some other examples of such risks include:

  • An overly bureaucratic PMO.
  • An organizational culture that discourages collaboration between teams.
  • An organizational structure that has multiple reporting lines – this is what I like to call a pseudo-matrix organization 🙂

These factors are similar to those that I described in my post on the systemic causes of project failure. Indeed, I am tempted to call these systemic risks because they are related to the entire system (project + organization). However, that term has already been appropriated by the financial risk community.

Since the term is relatively new, it is important to draw distinctions between internally generated and other types of risks. It is easy to do so because the latter (by definition) have their origins outside the hosting organization. A good example of the latter is the risk of a vendor not delivering a module on time or worse, going into receivership prior to delivering the code.

Finally, there are certain risks that are neither internally generated nor external. For example, using a new technology is inherently risky simply because it is new. Such a risk is inherent rather than internally generated or external.
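The three-way distinction between internally generated, external and inherent risks can be made concrete in a simple risk register. The sketch below is my own illustration (Barber’s paper does not prescribe a data format); the example risks are taken from the post itself:

```python
from enum import Enum

class Origin(Enum):
    INTERNAL = "internally generated"  # rules, structures or culture of the host org
    EXTERNAL = "external"              # originates outside the hosting organisation
    INHERENT = "inherent"              # intrinsic to the work itself

# A tiny risk register using examples mentioned in the text.
risks = [
    ("Overly bureaucratic PMO", Origin.INTERNAL),
    ("Culture discourages collaboration between teams", Origin.INTERNAL),
    ("Vendor fails to deliver module on time", Origin.EXTERNAL),
    ("New, unproven technology", Origin.INHERENT),
]

# Group the register by origin - useful because internally generated risks
# call for organisational change rather than project-level mitigation.
by_origin = {}
for name, origin in risks:
    by_origin.setdefault(origin, []).append(name)

for origin, names in by_origin.items():
    print(f"{origin.value}: {names}")
```

The grouping step is the point of the exercise: once risks are tagged by origin, it becomes obvious which ones lie within the project manager’s sphere of influence and which require escalation.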

Understanding the danger within

The author of the paper surveyed nine large projects with the intent of getting some insight into the nature of internally generated risks.  The questions he attempted to address are the following:

  1. How common are these risks?
  2. How significant are they?
  3. How well are they managed?
  4. What is the relationship between the ability of an organization to manage such risks and its project management maturity level (i.e. the maturity of its project management processes)?

Data was gathered through group workshops and one-on-one interviews in which the author asked a number of questions that were aimed at gaining insight into:

  1. The key difficulties that project managers encountered on the projects.
  2. What they perceived to be the main barriers to project success.

The aim of the one-on-one interviews was to allow for a more private setting in which sensitive issues (politics, dysfunctional PMOs and brain-dead rules / norms) could be freely discussed.

The data gathered was studied in detail, with the intent of identifying internally generated risks. The author describes the techniques he used to minimize subjectivity and to ensure that only significant risks were considered. I will omit these details here, and instead focus on his findings as they relate to the questions listed above.

Commonality of internally generated risks

Since organizational rules and norms are often flawed, one might expect that internally generated risks would be fairly common in projects. The author found that this was indeed the case with the projects he surveyed: in his words, the smallest number of non-trivial internally generated risks identified in any of the nine projects was 15, and the highest was 30!  Note: the identification of non-trivial risks was done by eliminating those risks that a wide range of stakeholders agreed as being unimportant.

Unfortunately, he does not explicitly list the most common internally generated risks that he found, though he does name a few later in the article. I suspect that experienced project managers would be able to name many more.

Significance of internally generated risks

Determining the significance of these risks is tricky because one has to figure out their probability of occurrence.  The impact is much easier to get a handle on, as one has a pretty good idea of the consequences of such risks should they eventuate. (Question: What happens if there is inadequate sponsorship? Answer: the project is highly likely to fail!).   The author attempted to get a qualitative handle on the probability of occurrence by asking relevant stakeholders to estimate the likelihood of occurrence. Based on the responses received, he found that a large fraction of the internally-generated risks are significant (high probability of occurrence and high impact).
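Barber’s qualitative assessment amounts to the standard probability-impact scoring used in most risk registers. The sketch below shows one common way of doing this; the 1–5 scale and the “significant” threshold are my own illustrative assumptions, not values from the paper:

```python
# Qualitative probability-impact scoring, a common way of ranking risks.
# The level values and the threshold below are illustrative assumptions,
# not taken from Barber's paper.
LEVELS = {"low": 1, "medium": 3, "high": 5}

def significance(probability: str, impact: str) -> int:
    """Score = probability level x impact level (range 1..25)."""
    return LEVELS[probability] * LEVELS[impact]

def is_significant(probability: str, impact: str, threshold: int = 15) -> bool:
    """A risk is deemed significant if its score meets the threshold."""
    return significance(probability, impact) >= threshold

# Example: inadequate sponsorship, rated high probability / high impact.
print(significance("high", "high"))    # 25
print(is_significant("high", "high"))  # True
print(is_significant("low", "high"))   # False: 5 is below the threshold
```

The caveat in the text applies to any scheme like this: the impact side of the product is usually easy to defend, but the probability ratings are subjective estimates gathered from stakeholders, so the scores rank risks rather than measure them.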

Management of internally generated risks

To identify whether internally generated risks are well managed, the author asked relevant project teams to look at all the significant internal risks on their project and classify them as to whether or not they had been identified by the project team prior to the research. He found that in over half the cases, less than 50% of the risks had been identified. However, most of the risks that were identified were not managed!

The relationship between project management maturity and susceptibility to internally generated risk

Project management maturity refers to the level of adoption of  standard good practices within an organization. Conventional wisdom tells us that there should be an inverse correlation between maturity levels and susceptibility to internally generated risk –  the higher the maturity level, the lower the susceptibility.

The author assessed maturity levels by interviewing various stakeholders within the organization and also by comparing the processes used within the organization to well-known standards.  The results indicated a weak negative correlation – that is, organisations with a higher level of maturity tended to have a smaller number of internally generated risks. However, as the author admits, one cannot read much into this finding as the correlation was weak.

Discussion

The study suggests that internally generated risks are common and significant on projects. However, the small sample size also suggests that more comprehensive surveys are needed. Nevertheless, anecdotal evidence from colleagues with whom I discussed the paper suggests that the findings are reasonably robust. Moreover, it is also clear (both from the study and my conversations) that these risks are not well managed. There is a good reason for this: internally generated risks originate in human behaviour and/or dysfunctional structures. These tend to be difficult topics to address in an organizational setting because people are unlikely to tell those above them in the hierarchy that they (the higher-ups) are the cause of a problem. A classic example of such a risk is estimation by decree – where a project team is told to just get it done by a certain date. Although most project managers are aware of such risks, they are reluctant to name them for obvious reasons.

Conclusion

I suspect most project managers who work in corporate environments will have had to grapple with internally generated risks in one form or another. Although traditional risk management does not recognize these risks as risks, seasoned project managers know from experience that people, politics or even processes can pose major problems to the smooth working of projects. However, even when recognised for what they are, these risks can be hard to tackle because they lie outside a project manager’s sphere of influence. They therefore tend to become those proverbial pachyderms in the room – known to all but never discussed, let alone documented…and therein lies the danger within.

Written by K

December 10, 2014 at 7:23 pm

Scapegoats and systems: contrasting approaches to managing human error in organisations

with 5 comments

Much can be learnt about an organization by observing what management does when things go wrong.  One reaction is to hunt for a scapegoat, someone who can be held responsible for the mess.  The other is to take a systemic view that focuses on finding the root cause of the issue and figuring out what can be done in order to prevent it from recurring.  In a highly cited paper published in 2000, James Reason compared and contrasted the two approaches to error management in organisations. This post is an extensive summary of the paper.

The author gets to the point in the very first paragraph:

The human error problem can be viewed in two ways: the person approach and the system approach. Each has its model of error causation and each model gives rise to quite different philosophies of error management. Understanding these differences has important practical implications for coping with the ever present risk of mishaps in clinical practice.

Reason’s paper was published in the British Medical Journal and hence his focus on the practice of medicine. His arguments and conclusions, however, have a much wider relevance as evidenced by the diverse areas in which his paper has been cited.

The person approach – which, I think is more accurately called the scapegoat approach – is based on the belief that any errors can and should be traced back to an individual or a group, and that the party responsible should then be held to account for the error. This is the approach taken in organisations that are colloquially referred to as having a “blame culture.”

To an extent, looking around for a scapegoat is a natural emotional reaction to an error. The oft unstated reason behind scapegoating, however, is to avoid management responsibility.  As the author tells us:

People are viewed as free agents capable of choosing between safe and unsafe modes of behaviour.  If something goes wrong, it seems obvious that an individual (or group of individuals) must have been responsible. Seeking as far as possible to uncouple a person’s unsafe acts from any institutional responsibility is clearly in the interests of managers. It is also legally more convenient

However, the scapegoat approach has a couple of serious problems that hinder effective risk management.

Firstly, an organization depends on its frontline staff to report any problems or lapses. Clearly, staff will do so only if they feel that it is safe to do so – something that is simply not possible in an organization that takes a scapegoat approach. The author suggests that the Chernobyl disaster can be attributed to the lack of a “reporting culture” within the erstwhile Soviet Union.

Secondly, and perhaps more important, is that the focus on a scapegoat leaves the underlying cause of the error unaddressed. As the author puts it, “by focusing on the individual origins of error it [the scapegoat approach] isolates unsafe acts from their system context.” As a consequence, the scapegoat approach overlooks systemic features of errors – for example, the empirical fact that the same kinds of errors tend to recur within a given system.

The system approach accepts that human errors will happen. However, in contrast to the scapegoat approach, it views these errors as being triggered by factors that are built into the system. So, when something goes wrong, the system approach focuses on the procedures that were used rather than the people who were executing them. This difference from the scapegoat approach makes a world of difference.

The system approach looks for generic reasons why errors or accidents occur. Organisations usually have a series of measures in place to prevent errors – e.g. alarms, procedures, checklists, trained staff etc. Each of these measures can be looked upon as a “defensive layer” against error. However, as the author notes, each defensive layer has holes which can let errors “pass through” (more on how the holes arise a bit later).  A good way to visualize this is as a series of slices of Swiss Cheese (see Figure 1).

The important point is that the holes on a given slice are not at a fixed position; they keep opening, closing and even shifting around, depending on the state of the organization.  An error occurs when the ephemeral holes on different layers temporarily line up to “let an error through”.
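The “holes lining up” idea lends itself to a simple simulation. The sketch below is my own illustration (not from Reason’s paper): it assumes each defensive layer independently fails to catch an error with some small probability, and estimates how often an error slips through every layer at once.

```python
import random

def error_slips_through(hole_probs, rng):
    """An error passes only if it finds a hole in *every* defensive layer."""
    return all(rng.random() < p for p in hole_probs)

def breach_rate(hole_probs, trials=100_000, seed=42):
    """Monte Carlo estimate of the chance that all the holes line up."""
    rng = random.Random(seed)
    breaches = sum(error_slips_through(hole_probs, rng) for _ in range(trials))
    return breaches / trials

# Four defensive layers (alarms, procedures, checklists, trained staff),
# each assumed to have a 10% chance of a hole when an error arrives.
layers = [0.1, 0.1, 0.1, 0.1]
print(breach_rate(layers))  # close to 0.1**4 = 0.0001
```

The toy model makes the argument for defence in depth concrete: each extra independent layer multiplies the breach probability down, which is why removing a “redundant” checklist or alarm can raise the error rate far more than intuition suggests. It also hints at why latent conditions are so dangerous: a flaw that widens the holes in several layers at once breaks the independence assumption that makes stacked defences effective.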

There are two reasons why holes arise in defensive layers:

  1. Active errors: These are unsafe acts committed by individuals. Active errors could be violations of set procedures or momentary lapses. The scapegoat approach focuses on identifying the active error and the person responsible for it. However, as the author points out, active errors are almost always caused by conditions built into the system, which brings us to…
  2. Latent conditions: These are flaws that are built into the system. The author uses the term resident pathogens to describe these – a nice metaphor that I have explored in a paper review I wrote some years ago. These “pathogens” are usually baked into the system by poor design decisions and flawed procedures on the one hand, and ill-thought-out management decisions on the other. Manifestations of the former include faulty alarms, unrealistic or inconsistent procedures or poorly designed equipment; manifestations of the latter include things such as unrealistic targets, overworked staff and the lack of  funding for appropriate equipment.

The important thing to note is that latent conditions can lie dormant for a long period before they are noticed. Typically, a latent condition comes to light only when an error caused by it occurs…and only if the organization does a root cause analysis of the error – something that is simply not done in an organization that takes a scapegoat approach.

The author draws a nice analogy that clarifies the link between active errors and latent conditions:

…active failures are like mosquitoes. They can be swatted one by one, but they still keep coming. The best remedies are to create more effective defences and to drain the swamps in which they breed. The swamps, in this case, are the ever present latent conditions.

“Draining the swamp” is not a simple task.  The author draws upon studies of high performance organisations (combat units, nuclear power plants and air traffic control centres) to understand how they minimised active errors by reducing system flaws. He notes that these organisations:

  1. Accept that errors will occur despite standardised procedures, and train their staff to deal with and learn from them.
  2. Practice responses to known error scenarios and try to imagine new ones on a regular basis.
  3. Delegate responsibility and authority, especially in crisis situations.
  4. Do a root cause analysis of any error that occurs and address the underlying problem by changing the system if needed.

In contrast, an organisation that takes a scapegoat approach assumes that standardisation will eliminate errors, ignores the possibility of novel errors occurring, centralises control and, above all, focuses on finding scapegoats instead of fixing the system.

Acknowledgement:

Figure 1 was taken from the Patient Safety Education website of Duke University Hospital.

Further reading:

The Swiss Cheese model was first proposed in 1991. It has since been applied in many areas. Here are a couple of recent applications and extensions of the model to project management:

  1. Stephen Duffield and Jon Whitty use the Swiss Cheese model as a basis for their model of Systemic Lessons Learned and Knowledge Captured (SLLKC model) in projects.
  2. In this post, Paul Culmsee extends the SLLKC model to incorporate aspects relating to teams and collaboration.

Written by K

July 29, 2014 at 8:43 pm
