Eight to Late

Sensemaking and Analytics for Organizations

Archive for June 2010

Doing the right project is as important as doing the project right

with 6 comments

Introduction

Many high profile projects fail because they succeed. This paradoxical statement is true because many projects are ill-conceived efforts directed at achieving goals that have little value or relevance to their host organisations.  Project management focuses on ensuring that the project goals are achieved in an efficient manner. The goals themselves are often “handed down from above”, so the relevance or appropriateness of these is “out of scope” for the discipline of project management.  Yet, the prevalence of projects of dubious value suggests that more attention needs to be paid to “front-end” decision making in projects – that is, decision making in the early stages, in which the initiative is just an idea.  A paper by Terry Williams and Knut Samset entitled, Issues in front-end decision making on Projects looks at the problems associated with formulating the “right” project. This post is a summary and review of the paper.

What is the front-end phase of the project?  According to Williams and Samset, “The front-end phase commences when the initial idea is conceived and proceeds to generate information, consolidate stakeholders’ views and positions, and arrive at the final decision as to whether or not to finance the project.”

Decisions made in the early stages of a project are usually more consequential than those made later on. Most major parameters – scope, funding, timelines and so on – are more or less set in stone by the time a project is initiated. The problem is that these decisions are made at a time when the availability of relevant information is at its lowest. This highlights the role of sound judgement and estimation in decision making. Furthermore, these decisions may have long-term consequences for the organisation, so due care needs to be given to the alignment of the project concept with the organisation’s strategic goals. Finally, as the cliché goes, the only constant is change: organisations exist in ever-changing environments, so projects need to have the right governance structures in place to help navigate through this turbulence. The paper is an exploration of some of these issues as they relate to front-end decision making in projects.

Defining the project concept

Williams and Samset define the term concept as a mental construction that outlines how a problem will be solved or a need satisfied. Note that although the definition seems to imply that the term concept equates to technical approach, it is more than that. The project concept also includes considerations of the outcomes and their impact on the organisation and its environment.

The authors emphasise that organisations should conceive several distinct concepts prior to initiating the project. To this end, they recommend having a clearly defined “concept definition phase” where the relevant stakeholders create and debate different concepts. Choosing the right concept is critical because it determines how the project will be carried out, what the end result is and how it affects the organisation. The authors also stress that the concept should be determined on the basis of the required outcome rather than the current (undesirable) situation.

When success leads to failure

This is the point alluded to in the introduction: a project may produce the required outcomes, but still be considered a failure because the outcomes are not aligned with the organisation’s strategy.  Such situations almost always arise because the project concept was not right. The authors describe an interesting example of such a project, which I’ll quote directly from the paper:

One such example is an onshore torpedo battery built inside the rocks on the northern coast of Norway in 2004 (Samset, 2008a). The facility was huge and complex, designed to accommodate as many as 150 military personnel for up to three months at a time. It was officially opened as planned and without cost overrun. It was closed down one week later by Parliamentary decision. Clearly, a potential enemy would not expose its ships to such an obvious risk: the concept had long since been overtaken by political, technological, and military development. What was quite remarkable was that this project, which can only be characterized as strategic failure, got little attention in the media, possibly because it was a success in tactical terms.

A brilliant example of a successful project that failed! The point, of course, is that although the strategic aspects of projects are considered to be outside the purview of project management, they must be given due consideration when the project is conceptualized. The result of a project must be effective for the organisation; the efficiency of project execution matters less.

Shooting straight – aligning the project to strategic goals

Aligning projects with strategic goals is difficult because the organizational and social ramifications of a project are seldom clear at the start. This is because the problem may be inherently complex – for example, no one can foresee the implications of an organizational restructure (no, not even those expensive consultants who claim to be able to). Further, and perhaps more important, is the issue of social complexity: stakeholders have diverse, often irreconcilable, views on what needs to be done, let alone how it should be done. These two factors combine to make most organizational issues wicked problems.

Wicked problems have no straightforward solutions, so it is difficult if not impossible to ensure alignment to organizational strategy. There are several techniques that can be used to make sense of wicked problems. I’ve discussed one of these – dialogue mapping – in several prior posts. Paul Culmsee and I have elaborated on this and other techniques to manage wicked problems in our book, The Heretic’s Guide to Best Practices.
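
For readers unfamiliar with the technique: dialogue mapping captures a conversation as an IBIS map of questions, the ideas that respond to them, and arguments for and against those ideas. The fragment below is a purely illustrative sketch of that structure (it is not taken from the paper or from our book), written in Python with invented content:

    # A minimal sketch of an IBIS map, the notation that underlies dialogue mapping.
    # Node kinds: a question (issue), ideas that respond to it, and arguments
    # (pros and cons) attached to ideas. All content below is invented.

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        kind: str          # "question", "idea", "pro" or "con"
        text: str
        children: list["Node"] = field(default_factory=list)

    root = Node("question", "How should we improve order-processing efficiency?")
    idea = Node("idea", "Replace the legacy order-processing system")
    idea.children += [
        Node("pro", "Removes known processing bottlenecks"),
        Node("con", "High cost and long lead time"),
    ]
    root.children.append(idea)

    def show(node: Node, depth: int = 0) -> None:
        """Print the map as an indented outline."""
        print("  " * depth + f"[{node.kind}] {node.text}")
        for child in node.children:
            show(child, depth + 1)

    show(root)

The value of such a map lies less in the data structure than in the facilitated conversation that builds it, but even this toy example shows how questions, options and arguments are kept visible and connected.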

One also has to recognize that the process of alignment is messier still because politics and self-interest play a role, particularly when the stakes are high. Further, at the individual level, decisions are never completely objective and are also subject to cognitive bias – which brings me to the next point…

Judgement and the formulation of organizational strategy

Formulating organizational strategy depends on making informed and accurate judgements about the future. Further, since strategies typically cover the mid to long term, one has to also allow some flexibility for adjustments along the way as one’s knowledge improves.

That’s all well and good, but it doesn’t take into account the fact that decision making isn’t a wholly rational process – the humans who make the decisions are, at best, boundedly rational (sometimes rational, sometimes not). Bounded rationality manifests itself through cognitive biases – errors of perception that can lead us to make incorrect judgements. See my post on the role of cognitive bias in project failure for more on how these biases have affected high profile projects.

The scope for faulty decision making (via cognitive bias or any other mechanism) is magnified when one deals with wicked problems. There are a number of reasons for this including:

  1. Cause-effect relationships are often unclear.
  2. No one has complete understanding of the problem (the problem itself is often unclear).
  3. Social factors come into play (Is it possible to make an “unbiased” decision about a proposed project that is going to affect one’s livelihood?)
  4. As a consequence of points 1 through 3, stakeholders perceive the problem (and its solution) differently.

It is worth pointing out that project planning is generally “less wicked” than strategy formulation because the former involves more clear cut goals (even though they may be wrong-headed). There is more scope for wickedness in the latter because there are many more unknowns and “unknowables.”

Why estimates are incorrect

Cost is a major factor in deciding whether or not a project should go ahead. Unfortunately, this is another front-end decision: one which needs to be made when there isn’t enough information available. In his book, Facts and Fallacies of Software Engineering, Robert Glass names poor estimation as one of the top causes of project failure. This is not to say that things haven’t improved. For example, Agile methods, which advocate incremental/iterative development, continually refine initial estimates based on actual, measurable progress.
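
As a rough illustration of what refining estimates from measured progress means in practice, here is a minimal sketch (my own, with invented numbers, not from the paper) that re-projects the remaining work from the throughput actually observed so far:

    # Minimal sketch: re-projecting remaining work from measured progress.
    # All figures are invented for illustration.

    def remaining_iterations(total_scope: float, completed_per_iteration: list[float]) -> float:
        """Project the number of iterations still needed, assuming the average
        throughput observed so far ("yesterday's weather") continues."""
        done = sum(completed_per_iteration)
        velocity = done / len(completed_per_iteration)   # average observed throughput
        remaining = max(total_scope - done, 0.0)
        return remaining / velocity

    # Example: 120 units of scope; three iterations delivered 18, 22 and 20 units.
    print(remaining_iterations(120, [18, 22, 20]))   # roughly 3 more iterations at this rate

The point is not the arithmetic, which is trivial, but that the estimate is grounded in observed delivery rather than upfront guesswork.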

Techniques such as reference class forecasting have been proposed to improve estimation for projects where incremental approaches are not possible (infrastructural projects, for example). However, this technique is subject to the reference class problem.
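
In essence, reference class forecasting corrects an optimistic “inside view” estimate using the distribution of actual outcomes from a class of comparable completed projects. The sketch below illustrates the idea only; the overrun figures are invented and the procedure is a simplification of the formal method:

    # Minimal sketch of the idea behind reference class forecasting: uplift a base
    # estimate by a chosen percentile of historical cost-overrun ratios drawn from
    # comparable completed projects. The figures below are invented.

    def rcf_estimate(base_estimate: float, overrun_ratios: list[float],
                     percentile: float = 0.8) -> float:
        """Apply the overrun ratio at the given percentile of the reference class."""
        ratios = sorted(overrun_ratios)
        index = min(int(percentile * len(ratios)), len(ratios) - 1)
        return base_estimate * ratios[index]

    # Reference class: actual cost / estimated cost for similar past projects.
    past_overruns = [1.05, 1.10, 1.20, 1.25, 1.40, 1.45, 1.60, 1.80, 2.10, 2.50]
    print(rcf_estimate(10_000_000, past_overruns, percentile=0.8))   # budget with ~80% cover

Of course, the reference class problem mentioned above is precisely about whether such a set of “similar past projects” can be chosen in a defensible way.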

Finally, all the aforementioned techniques assume that reliable information on which estimates can be based is a) available and b) used correctly.  But this is just where the problem lies: the two key factors that lead to poor estimation are a) the lack of knowledge about what exactly the work entails and b) the fact that people may misunderstand or even misrepresent the information if it is available.

Governance in an ever-changing environment

A negative consequence of the quest for organizational agility and flexibility is that organizational environments are turbulent. The main point of the paper is that traditional project management (as laid out in frameworks such as PMBOK) is ill-suited to such environments. As the authors state:

The key point in this article, however, is that the environment in which most projects operate is complex and turbulent, and conventional project management is not well suited to such conditions, despite the attraction of project organization to companies in fast-moving environments seeking agility and responsiveness…

Yet, ironically, this uncertainty is the reason for the spectacular growth in adoption of project management methodologies (see this post for a discussion of a relevant case study).

For project management to be truly useful, it must be able to cope with and adapt to turbulent environments. To this end, it may be best to view project management as a set of activities that emerge from a real need rather than an arbitrary imposition dictated by methodologies that are divorced from reality. This is nothing new: iterative/incremental methods, which advocate adaptation of methods to suit the environment, are a step in this direction.

Adaptive methods are obviously easier to apply on smaller projects than larger ones. However, one could argue that the need for flexibility and adaptability is even greater on massive megaprojects than on smaller ones. A major consequence of a changing environment is that people’s views on what needs to be done diverge. Recent work in applying dialogue mapping to large project environments shows that it is possible to reduce this divergence. Getting people on the “same page” is, I believe, the first step to successful governance, particularly in unstable environments.

Lack of information

The most important decisions on projects have to be made upfront, when little or no reliable information is available. The authors argue that the lack of information can actually be a benefit in front-end decision making, for the following reasons:

  1. Too much information can lead to confusion and analysis paralysis.
  2. Information can get out of date quickly – forecasts based on outdated information can be worse than useless because they can mislead.
  3. It is often hard to distinguish between important and irrelevant information. The distinction may only become clear as the project proceeds.

Be that as it may, one cannot deny that front-end decision making is hampered by the lack of relevant information. The real problem, though, is that decisions are often made by those who cannot tell the difference between what’s important and what’s not.

Conclusion

The article is an excellent summary of the major impediments in front-end decision making on projects. Such decisions have a major impact on how the project unfolds, yet they are often made with little or no consideration of what exactly the project will do for the organisation, or what its impact will be.

In my experience, front-end decisions are invariably made in an ad-hoc manner, rooted more in hope and fantasy than reality.  A first step to ensuring that organizations do the right project is to ensure that all stakeholders have a common understanding of the goals of the project – that is, what needs to be done. The next is to ensure a common understanding of how those goals will be achieved. Such stakeholder alignment is best achieved using communication-centric, collaborative techniques such as dialogue mapping. Only then,  after ensuring that one is doing the right project,  does it make sense to focus on doing the project right.

Beyond Best Practices: a paper review and the genesis of a collaboration

with 10 comments

Introduction

The fundamental premise behind best practices  is that it is possible to reproduce the successes of those who excel by imitating them. At first sight this assumption seems obvious and uncontroversial. However, most people who have lived through an implementation of a best practice know that following such prescriptions does not guarantee success. Actually, anecdotal evidence suggests the contrary:  that most attempts at implementing best practices fail. This paradox remains unnoticed by managers and executives who continue to commit their organisations to implementing best practices that are, at best, of dubious value.

Why do best practices fail? There has been a fair bit of research on the shortcomings of best practices, and the one thing it tells us is that there is no simple answer to this question. In this post I’ll discuss this issue, drawing upon an old (but still very relevant) paper by Jonathan Wareham and Han Gerrits entitled, De-Contextualising Competence: Can Best Practice be Bundled and Sold. Note that I will not cover the paper in its entirety; my discussion will focus only on those aspects that relate to the question raised above.

I may as well say it here:  I have a secondary aim  (or more accurately,  a vested interest)  in discussing this paper.  Over the last few months Paul Culmsee and I have been working on a book that discusses reasons why best practices fail and proposes some practical techniques to address their shortcomings.  I’ll end this post with a brief discussion of the background and content of the book (see this post for Paul’s take on the book). But let’s look at the paper first…

Background

On the first page of the paper the authors state:

Although the concept of ‘imitating excellent performers’ may seem quite banal at first glance, the issue, as we will argue, is not altogether that simple after deeper consideration. Accordingly, the purpose of the paper is to explore many of the fundamental, often unquestioned, assumptions which underlie the philosophy and application of Business Best Practice transfer. In illuminating the central empirical and theoretical problems of this emerging discipline, we hope to refine our expectations of what the technique can yield, as well as contribute to theory and the improvement of practice.

One of the most valuable aspects of the paper is that it lists some of the implicit assumptions that are often glossed over by consultants and  others who sell  and implement best practice methodologies.  It turns out that these assumptions are not valid in most practical situations, which renders the practices themselves worthless.

The implicit assumptions

According to Wareham and Gerrits, the unstated premises behind best practices include:

  1. Homogeneity of organisations: Most textbooks and courses on best practices present the practices as though they have an existence that is independent of organizational context.  Put another way: they assume that all organisations are essentially the same. Clearly, this isn’t the case – organisations are defined by their differences.
  2. Universal yardstick: Best practices assume that there is a universal definition of what’s best, that what’s best for one is best for all others. This assumption is clearly false as organisations have different (dare I say, unique) environments, objectives and strategies. How can a universal definition of “best” fit all?
  3. Transferability: Another tacit assumption in the best practice business is that practices can be transplanted onto receiving organisations wholesale. Sure, in recent years it has been recognized that such transplants are successful only if a) the recipient organisation undertakes the changes necessary for the transplant to work and b) the practice itself is adapted to the recipient organisation. The point is that in most successful cases, the change or adaptation is so great that the practice no longer resembles the original best practice. This is an important point – to have a hope in hell of working, best practices have to be adapted extensively. It is also worth mentioning that such adaptations will succeed only if they are made in consultation with those who will be affected by the practices. I’ll say more about this later in this post.
  4. Alienability and stickiness: These are concepts that relate to the possibility of extracting relevant knowledge pertaining to a best practice from a source and transferring it without change to a recipient. Alienability refers to the possibility of extracting relevant knowledge from the source. Alienability is difficult because best practice knowledge is often tacit, and is therefore difficult to codify. Stickiness refers to the willingness of the recipient to learn this knowledge, and his or her ability to absorb it. Stickiness highlights the importance of obtaining employee buy-in before implementing best practices. Unfortunately most best practice implementations gloss over the issues of alienability and stickiness.
  5. Validation: Wareham and Gerrits contend that best practices are rarely validated. More often than not, recipient organisations simply believe that they will work, based on their consultants’ marketing spiel. See this short piece by Paul Strassmann for more on the dangers of doing so.

What does  “best” mean anyway?

After listing the implicit assumptions, Wareham and Gerrits argue that the conceptual basis for defining a particular practice as being “best” is weak. Their argument hinges on the observation that it is impossible to attribute the superior performance of a firm to specific managerial practices. Why? Well, because one cannot perform a control experiment to see what would happen if those practices weren’t used.

Related to the above is the somewhat subtle point that it is impossible to say, with certainty, whether practices, as they exist within model organisations, are consequences of well-thought out managerial action or whether they are merely adaptations to changing environments. If the latter were true, then there is no best practice, because the practices as they exist in model organisations are essentially random responses to organizational stimuli.

Wareham and Gerrits also present an economic perspective on best practice acquisition and transfer, but I’ll omit this as it isn’t of direct relevance to the question of why best practices fail.

Implications

The authors draw the following conclusions from their analysis:

  1. The very definition of best practices is fraught with pitfalls.
  2. Environmental factors have a significant effect on the evolution and transfer(ability) of “best” practices. Consequently, what works in one organisation may not work in another.

So, can anything be salvaged? Wareham and Gerrits think so. They suggest an expanded view of best practices which includes things such as:

  1. Using best practices as guides for learning new technologies or new ways of working.
  2. Using best practices to generate creative insight into how business processes work in practice.
  3. Using best practices as a guide for change – that is, following the high-level steps, but not necessarily the detailed prescriptions.

These are indeed sensible and reasonable statements. However, they  are much weaker than the usual hyperbole-laden claims that accompany best practices.

Discussion

Wareham and Gerrits focus on the practices themselves, not the problems they are used to solve. In my opinion, another key reason why best practices fail is that they are applied without a comprehensive understanding of the problem that they are intended to address.

I’ll clarify this using an example: in a quest to improve efficiency, an organisation might go through a major restructure. All too often, such organisations will not think through all the consequences of the restructuring (what are the long-term consequences of outsourcing certain functions, for instance?). The important point to realize is that a comprehensive understanding of the consequences is possible only if all stakeholders – management and employees – are involved in planning the restructure. Unfortunately, such a bottom-up approach is rarely taken because of the effort involved, and the wrong-headed perception that chaos may ensue from management actually talking to people on the metaphorical shop floor. So most organizations take a top-down approach, dictating what will be done, with little or no employee involvement.

Organisations focus on how to achieve a particular end. The end itself, the reasons for wanting to achieve it and the consequences of doing so remain unexplored; it is assumed that these are obvious to all stakeholders. To put it aphoristically: organisations focus on the “how”, not on the “what” or the “why”.

The heart of the matter

The key to understanding why best practices do not work is to realise that many organizational problems are wicked problems: i.e., problems that are hard to define, let alone solve (see this paper for a comprehensive discussion of wicked problems). Let’s look at organizational efficiency, for example. What does it really mean to improve organizational efficiency? More to the point, how can one arrive at a generally agreed way to improve organizational efficiency? By generally agreed, I mean a measure that all stakeholders understand and agree on. Note that “efficiency” is just an example here – the same holds for most other matters of strategic importance to organizations: organisational strategy is a wicked problem.

Since wicked problems are hard to pin down (because they mean different things to different people), the first step to solving them is to ensure that all stakeholders have a common (or shared) understanding of what the problem is. The next step  is to achieve a shared commitment to solving that problem. Any technique that could help achieve a shared understanding of wicked problems and commitment to solving them  would  truly deserve to be called the one best practice to rule them all.

The genesis of a collaboration

About a year ago, in a series of landmark posts entitled The One Best Practice to Rule Them All, Paul Culmsee wrote about his search for a practical method to manage wicked problems.  In the articles he made a convincing case that dialogue mapping can help a diverse group of stakeholders achieve a shared understanding of such problems.  Paul’s writings inspired me to learn dialogue mapping and use it at work. I was impressed – here, finally, was a technique that didn’t claim to be a best practice, but had the potential to address some of the really complex problems that organisations face.

Since then, Paul and I have had several conversations about the failure of best practices in tackling issues ranging from organizational change to project management. Paul is one of those rare practitioners with an excellent grounding in both theory and practice. I learnt a lot from him in those conversations. Among other things, he told me about his experiences in using dialogue mapping to tackle apparently intractable problems (see this case study from Paul’s company, for example).

Late last year, we thought of writing up some of the things we’d been talking about in a series of joint blog posts. Soon we realised that we had much more to say than would fit into a series of posts – we probably had  enough for a book.  We’re a few months into writing that book, and are quite pleased with the way it’s turning out.

Here’s a very brief summary of the book. The first part analyses why best practices fail.  Our analysis  touches  upon diverse areas like organizational rhetoric, cognitive bias, memetics and scientific management (topics that both Paul and I have written about on our blogs).  The second part of the book presents a series of case studies that illustrate some techniques that address complex problems that organizations face. The case studies are based on our experiences in using dialogue mapping and other techniques to tackle wicked problems relating to organizational strategy and project management.  The techniques we discuss go beyond the rhetoric of best practices – they work because they use a bottom-up approach that takes into account the context and environment in which the problems live.

Now, Paul writes way better than I do. For one, his writing is laugh-out-loud funny; mine isn’t. Those who have read his work and mine may be wondering how our very different styles will combine. I’m delighted to report that the book is way more conversational and entertaining than my blog posts. However, I should also emphasise that we are trying to be as rigorous as we can by backing up our claims with references to research papers and/or case studies.

We’re learning a lot in the process of writing, and are enthused and excited about the book. Please stay tuned – we’ll post occasional updates on how it is progressing.

Update (16 June 2010):

An excerpt from the book has been published here.

Update (27 Nov 2011):

The book, which has a new title, is currently in the final round of proofs. Hopefully it will be available for pre-order in a month or two.

Update (05 Dec 2011):

It’s out!

Get your copy via Amazon or Book Depository.

The e-book can be obtained from iUniverse (PDF or Epub formats) or Amazon (Kindle).

 

Operational and strategic risks on projects

with 4 comments

Introduction

Risk management is an important component of all project management frameworks and methodologies, so most project managers are well aware of the need to manage risks on their projects. However, most books and training courses offer little or no guidance about the relative importance of different categories of risks.   One useful way to look at risks is by whether they pose operational or strategic threats. The former category includes risks that impact project execution and the latter those that affect project goals. A recent paper entitled, Categorising Risks in Seven Large Projects – Which Risks do the Projects Focus On?, looks at how strategic and operational risks are treated in typical, real-life projects. This post is a summary and review of the paper.

Operational and strategic risks

For the purpose of their study, the authors of the paper categorise risks as follows:

  1. Operational risk: A risk that affects a project deliverable.
  2. Short term strategic risk: A risk that impacts an expected outcome of the project. That is, the results expected directly from a deliverable. For example, an order processing system (deliverable) might be expected to reduce processing time by 50% on average (outcome).
  3. Long term strategic risk: A risk that affects the strategic goal that the project is intended to address. For example, an expected  strategic outcome of a new order processing system  might be to boost sales by 25% over the next 2 years.

It is also necessary to define unambiguous criteria by which risks can be assigned to one of the above categories. The authors use the following criteria to classify risks:

  1. A risk is an operational risk if it can impact a deliverable that is set out in the project definition (scope document, charter etc.) or delivery contract.
  2. A risk is a short-term strategic risk if it can have an effect on functionality that is not clearly mentioned in the project documentation, but is required in order to achieve the project objectives.
  3. A risk is considered to be a long-term strategic risk if it affects the long-term goals of the project and does not fall into the prior two categories.

The authors use the third category as a catch-all bucket for risks that do not fall into the first two categories.
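
To make the criteria concrete, here is a small illustrative sketch of how a single risk register entry might be assigned to a category along these lines (my own illustration; the authors classified risks manually from project documentation and interviews):

    # Illustrative classification of a single risk, per the criteria above.
    # The boolean flags would come from reading the risk against the project's
    # scope documents, contracts and stated objectives.

    def classify_risk(impacts_documented_deliverable: bool,
                      impacts_undocumented_but_required_functionality: bool,
                      impacts_long_term_goals: bool) -> str:
        if impacts_documented_deliverable:
            return "operational"
        if impacts_undocumented_but_required_functionality:
            return "short-term strategic"
        if impacts_long_term_goals:
            return "long-term strategic"
        return "unclassified"

    # Example: a risk that threatens functionality needed to achieve the project's
    # objectives, but which is not spelled out in the scope document.
    print(classify_risk(False, True, False))   # short-term strategic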

Methodology

The authors collected data from the risk registers of seven large projects. Prior to data collection, they conducted interviews with relevant project personnel to get an understanding of the goals and context of each project. Further interviews were conducted, as needed, mainly to clarify points that came up in the analysis.

A point to note is that the projects studied were all in progress, but in different phases ranging from initiation to  closure.

Results and discussion

The authors’ findings can be summed up in a line: the overwhelming majority of risks were operational. The fraction of risks classified as long-term strategic was less than 0.5% of the total (over 1300 risks were classified in all).

Why is the number of strategic risks so low? The authors offer the following reasons:

  1. Strategic risks do not occur while a project is in progress: The authors argue that this is plausible because strategic risks are (or should be) handled prior to a project being given the go-ahead.  This makes sense, so in a well-vetted project strategic risks will occur only if there are substantial changes in the hosting organisation and/or its environment.
  2. Long term strategic risks are not the project’s responsibility: This is a view taken by most project management methodologies: a project exists only to achieve its stated objectives; its long-term impact is irrelevant. Put another way, the focus is on efficiency, not (organisational) effectiveness (I’ll say more about this in a future post). The authors recommend that project risk managers be aware of strategic issues, even though these are traditionally outside the purview of the project. Why? Well, because such issues can have a major impact on how the project is perceived by the organisation.
  3. Strategic risks are mainly the asset owner’s (or sponsor’s) responsibility: According to conventional management wisdom strategic risks are the responsibility of management, not the project team. In contrast, the authors suggest that the project team is perhaps better placed to identify some strategic risks long before they come to management’s attention.  From personal experience I can vouch that this is true, but would add that it can be difficult to raise awareness of these risks in a politically acceptable way.

Conclusion

The main point that the article makes is that strategic risks, though often ignored, can have a huge effect on projects and how they are viewed by the larger organisation. It is therefore important that these risks are identified and escalated to sponsors and other decision makers in a timely manner. This is a message that organisations would do well to heed, particularly those that have a “shoot the messenger” culture which discourages honest and open communication about such risks.

Written by K

June 2, 2010 at 10:29 pm
