Eight to Late

Sensemaking and Analytics for Organizations

Archive for December 2011

Models and messes in management – from best practices to appropriate practices


Scientific models and management

Physicists build mathematical models that represent selected aspects of reality. These models are based on a mix of existing knowledge, observations, intuition and mathematical virtuosity. A good example of such a model is Newton’s law of gravity, according to which the gravitational force between two objects (planets, apples or whatever) is proportional to the product of their masses and inversely proportional to the square of the distance between them. The model was a brilliant generalization based on observations made by Newton and others (Johannes Kepler, in particular), supplemented by Newton’s insight that the force that keeps the planets revolving around the sun is the same as the one that made that mythical apple fall to earth. In essence, Newton’s law tells us that planetary motions are caused by gravity and it tells us – very precisely – the effects of the cause. In short: it embodies a cause-effect relationship.
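In symbols, the law says that the attractive force F between two masses m_1 and m_2 separated by a distance r is

F = \displaystyle\frac{G m_1 m_2}{r^2}

where G is the universal gravitational constant.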

[Aside: The validity of a physical model depends on how well it stands up to the test of reality.  Newton’s law of gravitation is remarkably successful in this regard:  among many other things, it is the basis of orbital calculations for all space missions.  The mathematical model expressed by Newton’s law is thus an established scientific principle. That said, it should be noted that models of the physical world are always subject to revision in the light of new information.  For example, Newton’s law of gravity has been superseded by Einstein’s general theory of relativity.  Nevertheless for most practical applications it remains perfectly adequate.]

Given the spectacular success of modeling in the physical and natural sciences, it is perhaps unsurprising that early management theorists attempted to follow the same approach. Frederick Taylor stated this point of view quite clearly in the introduction to his classic monograph, The Principles of Scientific Management. Here are the relevant lines:

This paper has been written…to prove that the best management is a true science, resting upon clearly defined laws, rules and principles, as a foundation. And further to show that the fundamental principles of scientific management  are applicable to all human activities, from our simplest individual activities to the work of great corporations, which call for the most elaborate cooperation. And briefly, through a series of illustrations, to convince the reader that whenever these principles are correctly applied, results must follow which are truly astounding…

From this it appears that Taylor’s intent was to prove that management could be reduced to a set of principles that govern all aspects of work in organizations.

The question is: how well did it work?

The origin of best practices

Over time, Taylor’s words were used to justify the imposition of one-size-fits-all management practices that ignored human individuality and the uniqueness of organisations. Although Taylor was aware of these factors, he believed commonalities were more important than differences. This thinking is alive and well to this day: although Taylor’s principles are no longer treated as gospel, their spirit lives on in the notion of standardized best practices.

There is now a plethora of standards or best practices for just about any area of management. They are often sold using scientific-sounding language – terms such as principles and proof. Consider the following passage taken from the Official PRINCE2 site:

Because PRINCE2 is generic and based on proven principles, organisations adopting the method as a standard can substantially improve their organisational capability and maturity across multiple areas of business activity – business change, construction, IT, mergers and acquisitions, research, product development and so on.

There are a couple of other things worth noting in the above passage. First, there is an implied cause-effect relationship between the “proven principles” and improvements in “organizational capability and maturity across multiple areas of business activity.”    Second, as alluded to above, the human factor is all but factored out – there is an implication that this generic standard can be implemented by anyone anywhere and the results will inevitably be as “truly astounding” as Taylor claimed.

Why best practices are not the best

There are a number of problems with the notion of a best practice.  I discuss these briefly below.

First, every organisation is unique. Yes, much is made of commonalities between organisations, but it is the differences that make them unique. Arguably, it is also the differences that give organisations their edge. As Stanley Deetz mentioned in his 2003 Becker lecture:

In today’s world unless you have exceptionally low labor costs, competitive advantage comes from high creativity, highly committed employees and the ability to customize products.  All require a highly involved, participating workforce.  Creativity requires letting differences make a difference.  Most high-end companies are more dependent on the social and intellectual capital possessed by employees than financial investment.

Thoughtless standardization through the use of best practices is a sure way to lose those differences that could make a difference.

Second, in their paper entitled De-Contextualising Competence: Can Business Best Practice be Bundled and Sold, Jonathan Wareham and Han Gerrits pointed out that organisations operate in vastly varying cultural and social environments. It is difficult to see how a one-size-fits-all best practice approach could work across such diverse settings.

Third, Wareham and Gerrits also pointed out that best practice is often tacit and socially embedded. This invalidates the notion that it can be transferred from an organisation in which it works to another without substantial change. Context is all-important.

Lastly, best practices are generally implemented in response to a perceived problem. However, they often address the symptoms rather than the root cause of the problem. For example, a project management process may attempt to improve delivery through better estimation and planning, while the underlying cause – which may be poor communication or a dysfunctional relationship between users and the IT department – remains unaddressed.

In his 2003 Becker lecture, Stanley Deetz illustrated this point via the following fable:

… about a company formed by very short people.  Since they were all short and they wanted to be highly efficient and cut costs, they chose to build their ceiling short and the doorways shorter so that they could have more work space in the same building.  And, they were in fact very successful.  As they became more and more successful, however, it became necessary for them to start hiring taller people. And, as they hired more and more tall people, they came to realize that tall people were at a disadvantage at this company because they had to walk around stooped over.  They had to duck to go through the doorways and so forth.  Of course, they hired organizational consultants to help them with the problem.

Initially they had time-and-motion experts come in. These experts taught teams of people how to walk carefully.  Tall members learned to duck in stride so that going through the short doors was minimally inconvenient. And they became more efficient by learning how to walk more properly for their environment. Later, because this wasn’t working so well, they hired psychological consultants.  These experts taught greater sensitivity to the difficulties of tall members of the organization.   Long-term short members learned tolerance knowing that the tall people would come later to meetings, would be somewhat less able to perform their work well.  They provided for tall people networks for support…

The parable is an excellent illustration of how best practices can  end up addressing symptoms rather than causes.

Ambiguity + the human factor = a mess

Many organisational problems are ambiguous in that cause-effect relationships are unclear. Consequently, different stakeholders can have wildly different opinions as to what the root cause of a problem is. Moreover, there is no way to conclusively establish the validity of a particular point of view. For example, executives may see a delay in a project as being due to poor project management whereas the project manager might see it as being a consequence of poor scope definition or unreasonable timelines.  The cause depends on who you ask and there is no way to establish who is right! Unlike problems in physics, organisational problems have a social dimension.

The visionary Horst Rittel coined the evocative term wicked problem to describe problems that involve many stakeholder groups with diverse and often conflicting perspectives. This makes such problems messy. Indeed, Russell Ackoff referred to wicked problems as messes. In his words, “every problem interacts with other problems and is therefore part of a set of interrelated problems, a system of problems…. I choose to call such a system a mess.”

Consider an example that is quite common in organisations:  the question of how to improve efficiency. Management may frame this issue in terms of tighter managerial control and launch a solution that involves greater oversight.  In contrast, a workgroup within the organisation may see their efficiency being impeded by bureaucratic control that results from increased oversight, and  thus may believe that the road to efficiency lies in giving workgroups greater autonomy.  In this case there is a clear difference between the aims of management (to exert greater control) and  those of workgroups (to work autonomously). Ideally, the two ought to talk it over and come up with a commonly agreed approach. Unfortunately they seldom do.  The power structure in organisations being what it is, management’s solution usually prevails and, as a consequence, workgroup morale plummets. See this post for an interesting case study on one such situation.

Summing up: a need for appropriate practice, not best practice

The great attraction of best practices, and one of the key reasons for their popularity, is that they offer apparently straightforward solutions to complex problems. However, such problems typically have a social dimension because they affect different stakeholders in different ways.   They are messes whose definition depends on who you ask. So there is no agreement on what the problem is, let alone its solution.  This fact by itself limits the utility of the best practice approach to organisational problem solving. Purveyors of best practices may use terms like “proven”, “established”, “measurable” etc. to lend an air of scientific respectability to their wares, but the truth is that unless all stakeholders have a shared understanding of the problem and a shared commitment to solving it, the practice will fail.

In our recently published book, The Heretic’s Guide to Best Practices, Paul Culmsee and I describe in detail the issues with the best practice approach to organisational problem-solving. More importantly, we provide a practical approach that can help you work with stakeholders to achieve a shared understanding of a problem and a shared commitment to a commonly agreed course of action. The methods we discuss can be used in small settings or large ones, so you will find the book useful regardless of where you sit in your organisation’s hierarchy. In essence, our book is a manifesto for replacing the concept of best practice with that of appropriate practice – practice with a human face that is appropriate for you, your organisation and your particular situation.

Book Announcement: The Heretic’s Guide to Best Practices


After two years of chewing up most of my free time, I’m delighted and relieved (in about equal measure) to announce that the book I’ve written with Paul Culmsee is finally out: The Heretic’s Guide to Best Practices is now available through Amazon (also on Kindle) and iUniverse.

In this post I present the first couple of paragraphs from the book to give you a flavour of the content and style. I also include some quotes from reviewers who read drafts of the book.

The first few paragraphs

The following lines are taken from the start of the book:

Have you ever noticed that infomercials trying to sell you the latest ab-sculpting, fat burning, home fitness device with three easy credit card payments, always start with questions designed in such a way that the answer is invariably “Yes”? We have too—so as a tribute to these infomercials, we are starting this book with some seriously loaded questions.

  • Have you ever had the feeling that something is not quite right in your workplace, yet you cannot articulate why?
  • Are you required to perform tasks that you instinctively feel are of questionable value?
  • Have you ever questioned an approach, only to be told that it is a best practice and therefore cannot be questioned?
  • Have you ever sighed and blamed the ills of your organisation on “culture” or “that’s just the way things are done here”?
  • Have you ever lamented to others that “If only we got ourselves organised”, we would stop chasing our tails and being so reactive?

If you answered “No” to these questions then, seriously—you are holding the wrong book. What’s more, if you manage staff and you answered ”No” to these questions, then chances are  your staff gave you this book to read in the hope that you might learn a few home truths.

For those who said a hopefully emphatic “Yes!”—and we are hoping that’s a fair chunk of our readers—this book might offer you some answers and put some names to some of the things that make your organisational “spider senses” tingle. Bear in mind, you are not going to get any glib “Seven Steps to Organisational Nirvana” type stuff here. Instead, you are about to undertake a varied and, at times, heretical journey into the fun-filled world of organisational problem solving. Not only will this book provide you with some juicy ammunition in relation to organisational debates about the validity of best practices, but the practical tools and approaches that we cover might also give you some insights in how to improve things.

Quotes from reviewers

Here are some pre-publication quotes from reviewers who read draft versions of the book:

“In Paul and Kailash I have found kindred spirits who understand how messed up most organizations are, and how urgent it is that organizations discover what Buddhists call ‘expedient means’—not more ‘best practices’ or better change management for the enterprise, but transparent methods and theories that are simple to learn and apply, and that foster organizational intelligence as a natural expression of individual intelligence. This book is a bold step forward on that path, and it has the wonderful quality, like a walk at dawn through a beautiful park, of presenting profound insights with humor, precision, and clarity.”

Jeff Conklin, Director, Cognexus Institute

“Hugely enjoyable, deeply reflective, and intensely practical. This book is about weaving human artistry and improvisation, with appropriate methods and technologies, in order to pool collective intelligence and wisdom under pressure.”

Simon Buckingham Shum, Knowledge Media Institute, The Open University, UK.

“This is a terrific piece of work: important, insightful, and very entertaining. Culmsee and Awati have produced a refreshing take on the problems that plague organisations, the problems that plague attempts to fix organisations, and what can be done to make things better. If you’re trying to deal with wicked problems in your organisation, then drop everything and read this book.”

Tim Van Gelder, Principal Consultant, Austhink Consulting

“This book has been a brilliantly fun read. Paul and Kailash interweave forty years of management theory using entertaining and engaging personal stories.  These guys know their stuff and demonstrate how it can be used via real world examples.

As a long time blogger, lecturer and consultant/practitioner I have always been served well by contrarian approaches, and have sought stories and case studies to understand the reasons why my methods have worked.  This book has helped me understand why I have been effective in dealing with complex business problems. Moreover, it has encouraged me to delve into the foundations of various management practices…”

Craig Brown, Director, Evaluator

“Paul and Kailash have written a book that largely mirrors what I have learned through my own (sometimes painful) experience: at the foundation of every technical solution there should be a clear understanding of the business problem. It amazes me how many projects proceed without this basic planning building block, and yet the percentage of projects that fail remains fairly constant. This book provides an informative and entertaining look at the role of the business analyst, with guidance on how to improve your problem-solving skills.”

Christian Buckley, Director, Product Evangelism at Axceler

“Paul and Kailash have done a fantastic job of pulling together many areas of research and presenting this in an accessible and compelling way.  They walk you through their discovery process, helping you gain a real understanding of the way things work but more importantly why things work, and then apply these in the real world.  If you have ever been told something is a best practice, you owe it to yourself to read this book.”

Andrew Woodward,  Founder and CEO,  21Apps

We think there is something in the book for most professionals, regardless of where they sit in their organisation’s hierarchy. But in the end it is your opinion that counts. We would love to know what you think of our effort, and look forward to hearing from you – either via a comment on our blogs or a review on bookseller websites.

Note (added on 31 Jan 2012):

Check out the customer reviews on Amazon.

On the accuracy of group estimates


Introduction

The essential idea behind group estimation is that an estimate made by a group is likely to be more accurate than one made by an individual in the group. This notion is the basis for the Delphi method and its variants. In this post, I use arguments involving probabilities to gain some insight into the conditions under which group estimates are more accurate than individual ones.

An insight from conditional probability

Let’s begin with a simple group estimation scenario.

Assume we have two individuals of similar skill who have been asked to provide independent estimates of some quantity, say  a project task duration. Further, let us assume that each individual has a probability p of making a correct estimate.

Based on the above, the probability that they both make a correct estimate, P(\textnormal{both correct}),  is:

P(\textnormal{both correct}) = p*p = p^2.

This is a consequence of our assumption that the individual estimates are independent of each other.

Similarly,  the probability that they both get it wrong, P(\textnormal{both wrong}), is:

P(\textnormal{both wrong}) = (1-p)*(1-p) = (1-p)^2.

Now we can ask the following question:

What is the probability that both individuals make the correct estimate if we know that they have both made the same estimate?

This can be figured out using Bayes’ Theorem, which in the context of the question can be stated as follows:

P(\textnormal{both correct\textbar same estimate})= \displaystyle{\frac{ P(\textnormal{same estimate\textbar both correct})*P(\textnormal{both correct})}{ P(\textnormal{same estimate})}}

In the above equation, P(\textnormal{both correct\textbar same estimate}) is the probability that both individuals get it right given that they have made the same estimate (which is  what we want to figure out). This is an example of a conditional probability – i.e.  the probability that an event occurs given that another, possibly related event has already occurred.  See this post for a detailed discussion of conditional probabilities.

Similarly, P(\textnormal{same estimate\textbar both correct}) is the conditional probability that both estimators make the same estimate given that they are both correct. This probability is 1.

Question: Why? 

Answer: If both estimators are correct then they must have made the same estimate (i.e. they must both be within an acceptable range of the right answer).

Finally, P(\textnormal{same estimate}) is the probability that both make the same estimate. This is simply the sum of the probabilities that both get it right and both get it wrong. Expressed in terms of p, this is p^2+(1-p)^2.
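Putting these pieces together, Bayes’ theorem reduces to a simple closed form (the two worked cases below are just this expression evaluated for particular values of p):

P(\textnormal{both correct\textbar same estimate})= \displaystyle\frac{p^2}{p^2+(1-p)^2}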

Now let’s apply Bayes’ theorem to the following two cases:

  1. Both individuals are good estimators – i.e. they have a high probability of making a correct estimate. We’ll assume they both have a 90% chance of getting it right (p=0.9).
  2. Both individuals are poor estimators – i.e. they have a low probability of making a correct estimate. We’ll assume they both have a 30% chance of getting it right (p=0.3).

Consider the first case. The probability that both estimators get it right given that they make the same estimate is:

P(\textnormal{both correct\textbar same estimate})= \displaystyle\frac{1*0.9*0.9}{0.9*0.9+0.1*0.1}= \displaystyle \frac{0.81}{0.82}= 0.9878

Thus we see that the group estimate has a significantly better chance of being right than the individual ones:  a probability of 0.9878 as opposed to 0.9.

In the second case, the probability that both get it right is:

P(\textnormal{both correct\textbar same estimate})= \displaystyle \frac{1*0.3*0.3}{0.3*0.3+0.7*0.7}= \displaystyle \frac{0.09}{0.58}= 0.155

The situation is completely reversed: the group estimate has a much smaller chance of being right than an  individual estimate!
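For readers who want to try other values of p, here is a minimal Python sketch of the calculation (the function name and structure are mine, added purely for illustration):

def group_agreement_accuracy(p):
    # Probability that two independent estimators, each correct with
    # probability p, are both correct given that they made the same estimate.
    both_correct = p * p                       # P(both correct) = p^2
    both_wrong = (1 - p) * (1 - p)             # P(both wrong) = (1-p)^2
    same_estimate = both_correct + both_wrong  # P(same estimate)
    return both_correct / same_estimate        # Bayes' theorem

print(group_agreement_accuracy(0.9))  # roughly 0.9878
print(group_agreement_accuracy(0.3))  # roughly 0.155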

In summary:  estimates provided by a group consisting of individuals of similar ability working independently are more likely to be right (compared to individual estimates) if the group consists of  competent estimators and more likely to be wrong (compared to individual estimates) if the group consists of  poor estimators.

Assumptions and complications

I have made a number of simplifying assumptions in the above argument. I discuss these below with some commentary.

  1. The main assumption is that individuals work independently. This assumption is not valid for many situations. For example, project estimates are often made  by a group of people working together.  Although one can’t work out what will happen in such situations using the arguments of the previous section, it is reasonable to assume that given the right conditions, estimators will use their collective knowledge to work collaboratively.   Other things being equal,  such collaboration would lead a group of skilled estimators to reinforce each others’ estimates (which are likely to be quite similar) whereas less skilled ones may spend time arguing over their (possibly different and incorrect) guesses.  Based on this, it seems reasonable to conjecture that groups consisting of good estimators will tend to make even better estimates than they would individually whereas those consisting of poor estimators have a significant chance of making worse ones.
  2. Another assumption is that an estimate is either good or bad. In reality there is a range that is neither good nor bad, but may be acceptable.
  3. Yet another assumption is that an estimator’s ability can be accurately quantified using a single numerical probability. This is fine provided the number actually represents the person’s estimation ability for the situation at hand. However, such probabilities are typically evaluated on the basis of past estimates. The problem is that every situation is unique, and history may not be a good guide to the situation at hand. The best way to address this is to involve people with diverse experience in the estimation exercise. This will almost always lead to a significant spread of estimates, which may then have to be refined by debate and negotiation.

Real-life estimation situations have a number of other complications.  To begin with, the influence that specific individuals have on the estimation process may vary – a manager who is  a poor estimator may, by virtue of his position, have a greater influence than others in a group. This will skew the group estimate by a factor that cannot be estimated.  Moreover, strategic behaviour may influence estimates in a myriad other ways. Then there is the groupthink factor  as well.

…and I’m sure there are many others.

Finally I should mention that group estimates can depend on the details of the estimation process. For example, research suggests that under certain conditions competition can lead to better estimates than cooperation.

Conclusion

In this post I have attempted to make some general inferences regarding the validity of group estimates based on arguments involving conditional probabilities. The arguments suggest that, all other things being equal, a collective estimate from a bunch of skilled estimators will generally be better than their individual estimates whereas an estimate from a group of less skilled estimators will tend to be worse than their individual estimates. Of course, in real life, there are a host of other factors  that can come into play:  power, politics and biases being just a few. Though these are often hidden, they can  influence group estimates in inestimable ways.

Acknowledgement

Thanks go out to George Gkotsis and Craig Brown for their comments which inspired this post.

Written by K

December 1, 2011 at 5:16 am
