Eight to Late

Sensemaking and Analytics for Organizations

Archive for the ‘Causality’ Category

On the shortcomings of cause-effect based models in management


Introduction

Business schools perpetuate the myth that the outcomes of changes in organizations can be managed using  models that  are rooted in the scientific-rational mode of enquiry. In essence, such models assume that all important variables that affect an outcome  (i.e. causes) are known and that the relationship between these variables and the outcomes (i.e.  effects) can be represented accurately by simple models.   This is the nature of explanation in the hard sciences such as physics and is pretty much the official line adopted by mainstream management research and teaching – a point I have explored at length in an earlier post.

Now it is far from obvious that a mode of explanation that works for physics will also work for management. In fact, there is enough empirical evidence that most cause-effect based management models do not work in the real world. Many front-line employees and middle managers need no proof because they have likely lived through failures of such models in their organisations – for example, when the unintended consequences of organisational change swamp its intended (or predicted) effects.

In this post I look at the missing element in management models – human intentions –  drawing on this paper by Sumantra Ghoshal which explores  three different modes of explanation that were elaborated by Jon Elster in this book.  My aim in doing this is to highlight the key reason why so many management initiatives fail.

Types of explanations

According to Elster, the nature of what we can reasonably expect from an explanation differs in the natural and social sciences. Furthermore, within the natural sciences, what constitutes an explanation differs in the physical and biological sciences.

Let’s begin with the difference between physics and biology first.

The dominant mode of explanation in physics (and other sciences that deal with inanimate matter) is causal – i.e. it deals with causes and effects as I have described in the introduction. For example, the phenomenon of gravity is explained as being caused by the presence of  matter, the precise relationship being expressed via Newton’s Law of Gravitation (or even more accurately, via Einstein’s General Theory of Relativity).  Gravity is  “explained” by these models because they tell us that it is caused by the presence of matter. More important, if we know the specific configuration of matter in a particular problem, we can accurately predict the effects of gravity – our success in sending unmanned spacecraft to Saturn or Mars depends rather crucially on this.

In biology, the nature of explanation is somewhat different. When studying living creatures we don’t look for causes and effects. Instead we look for explanations based on function. For example,  zoologists do not need to ask how amphibians came to have webbed feet; it is enough for them to know that webbed feet are an adaptation that affords amphibians a survival advantage. They need look no further than this explanation because it is consistent with the Theory of Evolution – that changes in organisms occur by chance, and those that survive do so because they offer the organism a survival advantage. There is no need to look for a deeper explanation in terms of cause and effect.

In social sciences the situation is very  different indeed. The basic unit of explanation in the social sciences is the individual. But an individual is different from an inanimate object or even a non-human organism that reacts to specific stimuli in predictable ways. The key difference is that human actions are guided by intentions, and any explanation of social phenomena ought to start from these intentions.

For completeness I should mention that functional and causal explanations are sometimes possible within the social sciences and management. Typically functional explanations are possible in tightly controlled environments.  For example,  the behaviour and actions of people working within large bureaucracies or assembly lines can be understood on the basis of function. Causal explanations are even rarer, because they are possible only when  focusing on the collective behaviour of large, diverse populations in which the effects of individual intentions are swamped by group diversity. In such special cases, people can indeed be treated as molecules or atoms.

Implications for management

There are a couple of interesting implications of restoring intentionality to its rightful place in management studies.

Firstly, as Ghoshal states in his paper:

Management theories at present are overwhelmingly causal or functional in their modes of explanation. Ethics or morality, however, are mental phenomena. As a result they have had to be excluded from our theories and from the practices that such theories have shaped.  In other words, a precondition for making business studies a science as well as a consequence of the resulting belief in determinism has been the explicit denial of any role of moral or ethical considerations in the practice of management

Present-day management studies exclude considerations of morals and ethics, except, possibly, as a separate course that has little relation to the other subjects that form part of the typical business school curriculum. Recognising the role of intentionality restores ethical and moral considerations to where they belong – centre stage in management theory and practice.

Secondly, recognizing the role of intentions in determining people's actions helps us see that organizational changes that “start from where people are” have a much better chance of succeeding than those that are initiated top-down with little or no consultation with rank and file employees. Unfortunately, the large majority of organizational change initiatives still start from the wrong place – the top.

Summing up

Most management practices that are taught in business schools and practiced by the countless graduates of these programs are rooted in the belief that certain actions (causes) will lead to specific, desired outcomes (effects). In this article I have discussed how explanations based on cause-effect models, though good for understanding the behaviour of molecules and possibly even mice, are misleading in the world of humans. To achieve sustainable and enduring outcomes in organisations, one has to start from where people are, and to do that one has to begin by taking their opinions and aspirations seriously.

Written by K

January 3, 2013 at 9:46 pm

Free Will – a book review


Did I write this review because I wanted to, or is it because my background and circumstances compelled me to?

Some time ago, the answer to this question would have been obvious to me but after reading Free Will by Sam Harris, I’m not so sure.

In brief: the book makes the case that the widely accepted notion of free will is little more than an illusion because our (apparently conscious) decisions originate in causes that lie outside of our conscious control.

Harris begins by noting that the notion of free will is based on the following assumptions:

  1. We could have behaved differently than we actually did in the past.
  2. We are the originators of our present thoughts and actions.

Then, in the space of eighty-odd pages (perhaps no more than 15,000 words), he argues that both assumptions are incorrect and looks into some of the implications of his arguments.

The two assumptions are actually interrelated:  if it is indeed true that we are not the originators of our present thoughts and actions then it is unlikely that we could have behaved differently than we did in the past.

A key part of Harris’ argument is the scientifically established fact that we are consciously aware of only a small fraction of the activity that takes place in our brains. This has been demonstrated (conclusively?) by some elegant experiments in neurophysiology.   For example:

  • Activity in the brain’s motor cortex can be detected 300 milliseconds before a person “decides” to move, indicating that the thought about moving arises before the subject is aware of it.
  • Magnetic resonance scanning of certain brain regions can reveal the choice that will be made by a person 7 to 10 seconds before the person consciously makes the decision.

These and other similar experiments pose a direct challenge to the notion of free will: if my brain has already decided on a course before I am aware of it, how can I claim to be the author of my decisions and, more broadly, my destiny? As Harris puts it:

…I cannot decide what I will think next or intend until a thought or intention arises. What will my next mental state be? I do not know – it just happens. Where is the freedom in that?

The whole notion of free will, he argues, is based on the belief that we control our thoughts and actions. Harris notes that although we may feel that we are in control of the decisions we make, this is but an illusion: we feel that we are free, but this freedom is illusory because our actions are already “decided” before they appear in our consciousness. To be sure, there are causes underlying our thoughts and actions, but the majority of these lie outside our awareness.

If we accept the above then the role that luck plays in determining our genes, circumstances, environment and attitudes cannot be overstated. Although we may choose to believe that we make our destinies, in reality we don’t.  Some people may invoke demonstrations of willpower – conscious mental effort to do certain things – as proof against Harris’ arguments. However, as Harris notes,

You can change your life and yourself through effort and discipline – but you have whatever capacity for effort and discipline you have in this moment, and not a scintilla more (or less). You are either lucky in this department or you aren’t – and you can’t make your own luck.

Although I may choose to believe that I made the key decisions in my life, a little reflection reveals the tenuous nature of this belief.  Sure,   some decisions I have made resulted in experiences that I would not have had otherwise.   Some of those experiences undoubtedly changed my outlook on life, causing me to do things I would not have done had I not undergone those experiences.  So to that extent, those original choices changed my life.

The question is: could I have decided differently when making those original choices?

Or, considering an even more immediate example:   could I have chosen not to write this review? Or, having written it, could I have chosen not to publish it?

Harris tells us that this question is misguided because you will do what you do. As he states,

…you can do what you decide to do – but you cannot decide what you will decide to do.

We feel that we are free to decide, but the decision we make is the one we make. If we choose to believe that we are free to decide, we are free to do so. However, this is an illusion because our decisions arise from causes that we are unaware of. This is the central point of Harris’ argument.

There are important moral and ethical implications of the loss of free will. For example, what happens to the notion of moral responsibility for actions that might harm others? Harris argues that we do not need to invoke the notion of free will in order to condemn such actions – as he tells us, what we condemn in others is the conscious intent to do harm.

Harris is careful to note that his argument against free will does not amount to a laissez-faire approach wherein people are free to do whatever comes to their minds, regardless of consequences for society. As he writes:

….we must encourage people to work to the best of their abilities and discourage free riders wherever we can. And it is wise to hold people responsible for their actions when doing so influences their behavior and brings benefits to society….[however this does not need the] illusion of free will. We need only acknowledge that efforts matter and that people can change. [However] we do not change ourselves precisely – because we have only ourselves with which to do the changing -but we continually influence, and are influenced by, the world around us and the world within us. [italics mine]

Before closing I should mention some shortcomings of the book:

Firstly, Harris does not offer detailed support for his argument. Much of what he claims depends on the results of experimental research in neurophysiology demonstrating the lag between the genesis of a thought in our brains and our conscious awareness of it, yet he describes only a handful of experiments in detail. That said, there are references to many others in the notes.

Secondly, those with training in philosophy may find the book superficial, as Harris does not discuss alternate perspectives on free will. Such a discussion would have provided the balance that some critics have taken him to task for omitting (see this analysis or this review, for example).

Although the book has the shortcomings I’ve noted, I have to say I enjoyed it because it made me think.  More specifically, it made me think about the way I think.  Maybe it will do the same for you, maybe not  – what happens in your case  may depend on thoughts that are beyond your control.

Written by K

October 28, 2012 at 9:45 pm

On the nonlinearity of organisational phenomena


Introduction

Some time ago I wrote a post entitled, Models and Messes – from best practices to appropriate practices, in which I described the deep connection between the natural sciences and 20th century management.  In particular, I discussed how early management theorists took inspiration from physics. Quoting from that post:

Given the spectacular success of mathematical modeling in the physical and natural sciences, it is perhaps unsurprising that early management theorists attempted to follow the same approach. Frederick Taylor stated this point of view quite clearly in the introduction to his classic monograph, The Principles of Scientific Management…Taylor's intent was to prove that management could be reduced to a set of principles that govern all aspects of work in organizations.

In Taylor’s own words, his goal was to “prove that the best management is a true science, resting upon clearly defined laws, rules and principles, as a foundation. And further to show that the fundamental principles of scientific management  are applicable to all human activities…

In the earlier post I discussed how organisational problems elude so-called scientific solutions because they are ambiguous and have a human dimension.  Now I continue the thread, introducing a concept from physics that has permeated much of management thinking, much to the detriment of managerial research and practice. The concept is that of linearity. Simply put, linearity is a mathematical expression of the idea that complex systems can be analysed in terms of their (simpler) components.  I explain this notion in more detail in the following sections.

The post is organised as follows: I begin with a brief introduction to linearity in physics and then describe its social science equivalent.  Following this, I discuss a paper that points out some pitfalls of linear thinking in organisational research and (by extrapolation) to management practice.

Linearity in physics and mathematics

A simplifying assumption underlying much of classical physics is that of equilibrium or stability. A characteristic of a system in equilibrium is that it tends to resist change.  Specifically, if such a system is disturbed, it tends to return to its original state. Of course, physics also deals with systems that are not in equilibrium – the weather, or  a spacecraft on its way to Mars  are examples of such systems.  In general, non-equilibrium systems are described by more complex mathematical models than equilibrium systems.

Now, complex mathematical models – such as those describing the dynamics of weather or even the turbulent flow of water – can only be solved numerically using computers. The key complicating factor in such models is that they consist of many interdependent variables that are combined in complex ways. 19th and early 20th century physicists, who had no access to computers, had to resort to some tricks in order to make the mathematics of such systems tractable. One of the most common simplifying tricks was to treat the system as being linear. Linear systems have mathematical properties that roughly translate to the following in physical terms:

  1. Cause is proportional to effect (or output is proportional to input). This property is called homogeneity.
  2. Any complex effect can be expressed as a sum of a well-defined number of simpler effects. This property is often referred to as additivity, but I prefer the term decomposability. This notion of decomposability is also called the principle of superposition.

In contrast, real-life systems (such as the weather) tend to be described by mathematical equations that do not satisfy the above conditions. Such systems are called nonlinear.

Linear systems are well-understood, predictable and frankly, a bit boring –   they hold no surprises and cannot display novel behaviour. The evolution of linear systems is constrained by the equations and initial conditions (where they start from). Once these are known, their future state is completely determined.  Linear systems  cannot display the  range of behaviours that are typical of complex systems. Consequently, when a complex system is converted into a linear one by simplifying the mathematical model, much of the interesting behaviour of the system is lost.
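To make the two properties above concrete, here is a minimal Python sketch (my own illustration, not from the original post) that checks homogeneity and additivity for a linear map and shows that both fail for a simple nonlinear one:

```python
import numpy as np

def linear(x):
    # A linear system: the output is a fixed matrix acting on the input.
    A = np.array([[2.0, 1.0],
                  [0.0, 3.0]])
    return A @ x

def nonlinear(x):
    # A simple nonlinear system: the components interact multiplicatively.
    return np.array([x[0] * x[1], x[0] ** 2])

def satisfies_linearity(f, x, y, a=2.5):
    """Check homogeneity f(a*x) == a*f(x) and additivity f(x+y) == f(x) + f(y)."""
    homogeneous = np.allclose(f(a * x), a * f(x))
    additive = np.allclose(f(x + y), f(x) + f(y))
    return homogeneous and additive

x = np.array([1.0, 2.0])
y = np.array([-0.5, 4.0])

print(satisfies_linearity(linear, x, y))     # True: superposition holds
print(satisfies_linearity(nonlinear, x, y))  # False: the whole is not the sum of its parts
```

The point of the sketch is simply that decomposability is a special property, not a given: the moment the components of a system interact, the behaviour of the whole can no longer be assembled from the behaviour of its parts.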

Linearity in organisational theories

It turns out that many organizational theories are based on assumptions of equilibrium (i.e. that organisations are stable) and linearity (i.e. that the socio-economic forces on the organisation are small). Much like the case of physical systems, such models will predict only small changes about the stable state – i.e. that “business as usual” will continue indefinitely. In a paper published in 1988, Andrew Abbott coined the term General Linear Reality (GLR) to describe this view of reality. GLR is based on the following assumptions:

  1. The world consists of unchanging entities which have variable attributes (e.g. a fixed organisation with a varying number of employees).
  2. Small changes to attributes can have only small effects, and effects are manifested as changes to existing attributes.
  3. A given attribute can have only one causal effect – i.e. a single cause has a single effect.
  4. The sequence of events has no effect on the outcome.
  5. Entities and attributes are independent of each other (i.e. no correlation)

The connection between GLR and linearity in physics is quite evident in these assumptions.

The world isn’t linear

But reality isn't linear – it is very non-linear, as many managers learn the hard way. The problem is that the tools they are taught in management schools do not equip them to deal with situations in which entities change through feedback effects, or in which small causes produce disproportionately large effects (to mention just a couple of common non-linear phenomena).
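As a simple illustration of how a small cause can produce a disproportionately large effect, consider the logistic map, a textbook example of nonlinear feedback (this sketch is my own illustration, not drawn from the paper discussed below):

```python
# The logistic map x -> r * x * (1 - x) feeds each output back in as the next input.
# In its chaotic regime (r = 4), two trajectories that start almost identically
# diverge completely within a few dozen iterations.

def logistic_trajectory(x0, r=4.0, steps=40):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2000000)
b = logistic_trajectory(0.2000001)  # the "small cause": a difference of one part in ten million

for step in (0, 10, 20, 30, 40):
    print(step, abs(a[step] - b[step]))
# The gap grows from 1e-7 to order 1: a disproportionately large effect from a tiny cause.
```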

Nevertheless, management research is catching up with reality. For example, in a paper entitled Organizing Far From Equilibrium: Nonlinear changes in organizational fields, Alan Meyer, Vibha Gaba and Kenneth Colwell highlight limitations of the GLR paradigm. The paper describes three research projects that were aimed at studying how large organisations adapt to change. Typically when researchers plan such studies, they tacitly make GLR assumptions regarding cause-effect, independence etc. In the words of Meyer, Gaba and Colwell:

In accord with the canons of general linear reality, as graduate students each of us learned to partition the research process into sequential stages: conceptualizing, designing, observing, analyzing, and reporting. During the conceptual and design stages, researchers are enjoined to make choices that will remain in effect throughout the inquiry. They are directed, for instance, to identify theoretical models, select units and levels of analysis, specify dependent and independent variables, choose sampling frames, and so forth. During the subsequent stages of observation, analysis, and reporting, these parameters are immutable. To change them on the fly could contaminate data or be interpreted as scientific fraud. Stigma attached to “post hoc theorizing,” “data mining” and “dust-bowl empiricism” are handed down from one generation of GLR researchers to the next.

Whilst the studies were in progress, however, each of the organisations that they were studying underwent large, unanticipated changes: in one case employees went on a mass strike; in another, the government changed regulations regarding competition; and in the third, boom-bust cycles caused massive changes in the business environment. The important point is that these changes invalidated GLR assumptions completely. When such “game-changing” forces are in play, it is all but impossible to define a sensible equilibrium state to which organisations can adapt.

In the last two decades, a growing body of research has shown that organizations are complex systems that display emergent behaviour. Mainstream management practice is yet to catch up with these developments, but the signs are good: in the last few years there have been articles dealing with some of these issues in management journals that often grace the bookshelves of CEOs and senior executives.

To conclude

Mainstream management principles are based on a linear view of reality, a view that is inspired by scientific management and 19th century physics.  In reality, however, organisations evolve in ways that are substantially different from those implied by simplistic cause-effect relationships embodied in linear models.  The sciences have moved on, recognizing that most real-world phenomena are nonlinear, but much of organisational research and management practice remains mired in a linear world.  In view of this it isn’t surprising that many management “best” practices taught in business schools don’t work in the real world.

Related posts:

Models and messes – from best practices to appropriate practices

Cause and effect in management

On the origin of power laws in organizational phenomena

Written by K

July 10, 2012 at 10:48 pm

Models and messes in management – from best practices to appropriate practices


Scientific models and management

Physicists build mathematical models that represent selected aspects of reality. These models are based on a mix of existing knowledge, observations, intuition and mathematical virtuosity.  A good example of such a model is  Newton’s law of gravity  according to which the gravitational force between two objects (planets,  apples or whatever) varies in inverse proportion to the square of the distance between them. The model was a brilliant generalization based on observations made by Newton and others (Johannes Kepler, in particular), supplemented by Newton’s insight that the force that keeps the planets revolving round the sun is the same as the one that made that mythical apple  fall to earth.   In essence Newton’s law tells us that planetary motions are caused by gravity and it tells us – very precisely – the effects of the cause.  In short: it embodies a cause-effect relationship.
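For reference, the law can be stated compactly as

```latex
F = G \, \frac{m_1 m_2}{r^2}
```

where F is the gravitational force between two bodies of masses m1 and m2, r is the distance between them, and G is the universal gravitational constant. Given the masses and their separation, the force – and hence the motion – follows.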

[Aside: The validity of a physical model depends on how well it stands up to the test of reality.  Newton’s law of gravitation is remarkably successful in this regard:  among many other things, it is the basis of orbital calculations for all space missions.  The mathematical model expressed by Newton’s law is thus an established scientific principle. That said, it should be noted that models of the physical world are always subject to revision in the light of new information.  For example, Newton’s law of gravity has been superseded by Einstein’s general theory of relativity.  Nevertheless for most practical applications it remains perfectly adequate.]

Given the spectacular success of modeling in the physical and natural sciences, it is perhaps unsurprising that early management theorists attempted to follow the same approach. Frederick Taylor stated this point of view quite clearly in the introduction to his classic monograph, The Principles of Scientific Management. Here are the relevant lines:

This paper has been written…to prove that the best management is a true science, resting upon clearly defined laws, rules and principles, as a foundation. And further to show that the fundamental principles of scientific management  are applicable to all human activities, from our simplest individual activities to the work of great corporations, which call for the most elaborate cooperation. And briefly, through a series of illustrations, to convince the reader that whenever these principles are correctly applied, results must follow which are truly astounding…

From this it appears that Taylor’s intent was to prove that management could be reduced to a set of principles that govern all aspects of work in organizations.

The question is: how well did it work?

The origin of best practices

Over time, Taylor's words were used to justify the imposition of one-size-fits-all management practices that ignored human individuality and the uniqueness of organisations. Although Taylor was aware of these factors, he believed commonalities were more important than differences. This thinking is alive and well to this day: although Taylor's principles are no longer treated as gospel, their spirit lives on in the notion of standardized best practices.

There is now a plethora of standards or best practices for just about any area of management. They are often sold using scientific language – terms such as principles and proof. Consider the following passage taken from the Official PRINCE2 site:

Because PRINCE2 is generic and based on proven principles, organisations adopting the method as a standard can substantially improve their organisational capability and maturity across multiple areas of business activity – business change, construction, IT, mergers and acquisitions, research, product development and so on.

There are a couple of other things worth noting in the above passage. First, there is an implied cause-effect relationship between the “proven principles” and improvements in “organizational capability and maturity across multiple areas of business activity.”    Second, as alluded to above, the human factor is all but factored out – there is an implication that this generic standard can be implemented by anyone anywhere and the results will inevitably be as “truly astounding” as Taylor claimed.

Why best practices are not the best

There are a number of problems with the notion of a best practice.  I discuss these briefly below.

First, every organisation is unique. Yes, much is made of commonalities between organisations, but it is the differences that make them unique. Arguably, it is also the differences that give organisations their edge. As Stanley Deetz mentioned in his 2003 Becker lecture:

In today’s world unless you have exceptionally low labor costs, competitive advantage comes from high creativity, highly committed employees and the ability to customize products.  All require a highly involved, participating workforce.  Creativity requires letting differences make a difference.  Most high-end companies are more dependent on the social and intellectual capital possessed by employees than financial investment.

Thoughtless standardization through the use of best practices is a sure way to lose those differences that could make a difference.

Second, in their paper entitled De-Contextualising Competence: Can Business Best Practice be Bundled and Sold?, Jonathan Wareham and Han Gerrits pointed out that organisations operate in vastly varying cultural and social environments. It is difficult to see how best practices, with their one-and-a-half-size-fits-all approach, would work.

Third, Wareham and Gerrits also pointed out that best practice is often tacit and socially embedded. This invalidates the notion that it can be transferred from an organization in which it works to another without substantial change. Context is all-important.

Lastly, best practices are generally implemented in response to a perceived problem. However, they often address the symptoms rather than the root cause of the problem. For example, a project management process may attempt to improve delivery by better estimation and planning. However, the underlying cause – which may be poor communication or a dysfunctional relationship between users and the IT department – remains unaddressed.

In his 2003 Becker lecture, Stanley Deetz illustrated this point via the following fable:

… about a company formed by very short people.  Since they were all short and they wanted to be highly efficient and cut costs, they chose to build their ceiling short and the doorways shorter so that they could have more work space in the same building.  And, they were in fact very successful.  As they became more and more successful, however, it became necessary for them to start hiring taller people. And, as they hired more and more tall people, they came to realize that tall people were at a disadvantage at this company because they had to walk around stooped over.  They had to duck to go through the doorways and so forth.  Of course, they hired organizational consultants to help them with the problem.

Initially they had time-and-motion experts come in. These experts taught teams of people how to walk carefully.  Tall members learned to duck in stride so that going through the short doors was minimally inconvenient. And they became more efficient by learning how to walk more properly for their environment. Later, because this wasn’t working so well, they hired psychological consultants.  These experts taught greater sensitivity to the difficulties of tall members of the organization.   Long-term short members learned tolerance knowing that the tall people would come later to meetings, would be somewhat less able to perform their work well.  They provided for tall people networks for support…

The parable is an excellent illustration of how best practices can  end up addressing symptoms rather than causes.

Ambiguity + the human factor = a mess

Many organisational problems are ambiguous in that cause-effect relationships are unclear. Consequently, different stakeholders can have wildly different opinions as to what the root cause of a problem is. Moreover, there is no way to conclusively establish the validity of a particular point of view. For example, executives may see a delay in a project as being due to poor project management whereas the project manager might see it as being a consequence of poor scope definition or unreasonable timelines.  The cause depends on who you ask and there is no way to establish who is right! Unlike problems in physics, organisational problems have a social dimension.

The visionary Horst Rittel coined the evocative term wicked problem to describe problems that involve many stakeholder groups with diverse and often conflicting perspectives. This makes such problems messy. Indeed, Russell Ackoff referred to wicked problems as messes. In his words, “every problem interacts with other problems and is therefore part of a set of interrelated problems, a system of problems…. I choose to call such a system a mess.”

Consider an example that is quite common in organisations:  the question of how to improve efficiency. Management may frame this issue in terms of tighter managerial control and launch a solution that involves greater oversight.  In contrast, a workgroup within the organisation may see their efficiency being impeded by bureaucratic control that results from increased oversight, and  thus may believe that the road to efficiency lies in giving workgroups greater autonomy.  In this case there is a clear difference between the aims of management (to exert greater control) and  those of workgroups (to work autonomously). Ideally, the two ought to talk it over and come up with a commonly agreed approach. Unfortunately they seldom do.  The power structure in organisations being what it is, management’s solution usually prevails and, as a consequence, workgroup morale plummets. See this post for an interesting case study on one such situation.

Summing up: a need for appropriate practice, not best practice

The great attraction of best practices, and one of the key reasons for their popularity, is that they offer apparently straightforward solutions to complex problems. However, such problems typically have a social dimension because they affect different stakeholders in different ways.   They are messes whose definition depends on who you ask. So there is no agreement on what the problem is, let alone its solution.  This fact by itself limits the utility of the best practice approach to organisational problem solving. Purveyors of best practices may use terms like “proven”, “established”, “measurable” etc. to lend an air of scientific respectability to their wares, but the truth is that unless all stakeholders have a shared understanding of the problem and a shared commitment to solving it, the practice will fail.

In our recently published book entitled The Heretic's Guide to Best Practices, Paul Culmsee and I describe in detail the issues with the best practice approach to organisational problem-solving. More important, we provide a practical approach that can help you work with stakeholders to achieve a shared understanding of a problem and a shared commitment to a commonly agreed course of action. The methods we discuss can be used in small settings or larger ones, so you will find the book useful regardless of where you sit in your organisation's hierarchy. In essence our book is a manifesto for replacing the concept of best practice with that of appropriate practice – practice with a human face that is appropriate for you in your organisation and particular situation.

Processes and intentions: a note on cause and effect in project management


Introduction

In recent years the project work-form has become an increasingly popular mode of organizing activities in many industries. As the projectization bandwagon has gained momentum there have been few questions asked about the efficacy of project management methodologies. Most practitioners take this as a given and move on to seek advice on how best to implement project management processes. Industry analysts and consultants are, of course, glad to oblige with reams of white papers, strategy papers or whatever else they choose to call them (see this paper by Gartner Research, for example).

Although purveyors of methodologies do not claim that their methods guarantee project success, they imply that there is a positive relationship between the two. For example, the PMBOK Guide tells us that, “…the application of appropriate knowledge, processes, skills, tools and techniques can have a significant impact on project success” (see page 4 of the 4th Edition). This post is a brief exploration of the nature of that cause-effect relationship.

Necessary but not sufficient

In a post on cause and effect in management, I discussed how  the link between high-level management actions and their claimed outcomes is tenuous. The basic reason for this is that there are several factors that can affect organizational outcomes and it is impossible to know beforehand which of these factors are significant. The large number of  factors, coupled with the fact that  controlled experimentation is impossible in organizations, makes it impossible to establish with certainty that a particular managerial action will lead to a desired outcome.

Most project managers are implicitly aware of this – they know that factors extrinsic to their projects can often spell the difference between success and failure. A mid-project change in organizational priorities is a classic example of such a factor. The effect of such factors can be accounted for using a concept of causality proposed by the philosopher Edgar Singer, and popularised by the management philosopher Russell Ackoff. In a paper entitled Systems, Messes and Interactive Planning, Ackoff had this to say about cause and effect in systems – i.e. entities that interact with each other and their environment, much like  organizations and projects do:

It will be recalled that in the Machine Age cause-effect was the central relationship in terms of which all actions and interactions were explained. At the turn of this century the distinguished American philosopher of science E.A. Singer, Jr. noted that cause-effect was used in two different senses. First… a cause is a necessary and sufficient condition for its effect. Second, it was also used when one thing was taken to be necessary but not sufficient for the other. To use Singer’s example, an acorn is necessary but not sufficient for an oak; various soil and weather conditions are also necessary. Similarly, a parent is necessary but not sufficient for his or her child. Singer referred to this second type of cause-effect as producer-product. It has also been referred to since as probabilistic or nondeterministic cause effect.

The role of intentions

A key point is that one cannot ignore the role of human intentions in management. As Sumantra Ghoshal wrote  in a classic paper:

 The basic building block in the social sciences, the elementary unit of explanation is individual action guided by some intention. In the presence of such intentionality functional [and causal] theories are suspect, except under some special and relatively rare circumstances, because there is no general law in the social sciences comparable to [say] the law of natural selection in biology

A producer-product view has room for human intentions and choices. As Ackoff stated in his paper on systems and messes,

Singer went on to show why studies that use the producer-product relationship were compatible with, but richer than, studies that used only deterministic cause-effect. Furthermore, he showed that a theory of explanation based on producer-product permitted objective study of functional, goal-seeking and purposeful behavior. The concepts free will and choice were no longer incompatible with mechanism; hence they need no longer be exiled from science.

A producer-product view of cause and effect in project management recognizes that there will be a host of factors that affect project outcomes, many of which are beyond a manager’s ken and control. Further, and possibly more important, it acknowledges the role played by intentions of individuals who work on projects.  Let’s take a closer look at these two points.

Processes and intentions

To begin, it is worth noting that project management lore is rife with projects that failed despite the use of formal project management processes. Worse, in many cases it appears that failure is a consequence of over-zealous adherence to methodology (see my post entitled The myth of the lonely project for a detailed example). In such cases the failure is often attributed to causes other than the processes being used – common reasons include lack of organizational readiness and/or improper implementation of the methodology from which the processes are derived. However, these causes can usually be traced back to a lack of employee buy-in – that is, a failure to get front-line project teams and managers to believe in the utility of the proposed processes and to want to use them. It stands to reason that people will use processes only if they believe they will help. So the first action should be to elicit buy-in from the people who will be required to use the proposed processes. The most obvious way to do this is by seeking their input in formulating the processes.

Most often, though, processes are “sold” to employees in a very superficial way (see this post for a case in point). Worse still, many organizations do not even bother getting buy-in: the powers that be simply mandate the process, with employees having little or no say in how processes are to be used or implemented. This approach is doomed to fail because – as I have discussed in this post – standards, methodologies and best practices capture only superficial aspects of processes. The details required to make processes work can be provided only by project managers and others who work at the coal-face of projects. Consequently, employee buy-in shouldn't be an afterthought; it should be the centerpiece of any strategy to implement a methodology. Buy-in and input are essential to gaining employee commitment, and employee commitment is absolutely essential for the processes to take root.

..and so to sum up

Project management processes are necessary for project success, but they are far from sufficient. Employee intentions and buy-in are critical factors that are often overlooked when implementing such processes. As a first step to addressing this, project management processes should be designed and implemented in a way that makes sense to those who work on projects. Those who miss this point are setting themselves up for failure.

Written by K

September 1, 2011 at 3:12 am

Cause and effect in management


Introduction

Management schools and gurus tell us that specific managerial actions will lead to desirable consequences – witness the prescriptions for success in books such as Good to Great or In Search of Excellence. But can one really attribute success (or failure) to specific actions?  A cause-effect relationship is often assumed, but in reality the causal connection between strategic management actions and organisational outcomes is tenuous.  This post, based on a paper by Glenn Shafer entitled Causality and Responsibility, is an exploration of the causal connection between managerial actions and their (assumed) consequences.

Note that the discussion below applies to strategic – or “big picture” – management decisions, not operational ones. In the latter, cause and effect is generally quite clear cut. For example, the decision to initiate a project sets in motion several processes that have fairly predictable outcomes.  However, taking a big picture view,  initiating a project (or even the successful completion of one) does not imply that the strategic aims of the project will be met. It is the latter point that is of interest here – the causal connection between a strategic decision and its assumed outcome.

Shafer’s paper deals with causality and responsibility in legal deliberations: specifically, the process by which judges and juries reach their verdict as to whether the accused (person or entity) is actually responsible (in a causal sense) for the outcome they are charged with. In short, did the actions of the accused cause the outcome?  The arguments Shafer makes are quite general, and have applicability to any discipline. In the following paragraphs I’ll look at a couple of the key points he makes and outline their implications for cause and effect in management actions.

Deterministic cause-effect relationships

The first point that Shafer makes is that we should infer that a particular action causes a particular outcome only if it is improbable that the outcome could have happened without the action preceding it. In Shafer’s words:

…we are on safe ground in attributing responsibility if we do so based on our knowledge of impossibilities. It is not surprising, therefore, that the classical legal concept of cause – necessary and sufficient cause – is defined in terms of impossibility. According to this concept, an action causes an event if the event must happen (it is impossible for it not to happen) when the action is taken and cannot happen (it is impossible for it to happen) if the action is not taken.

This is, in fact, what legal arguments attempt to do: they attempt to prove, beyond reasonable doubt, that the crime occurred because of the defendant's actions.

The reason that impossibilities are a better way of “proving” causal relationships is that such relationships cannot be invalidated as our knowledge of the situation increases, provided the knowledge that we already have is valid. In other words, once something is deemed impossible (using valid knowledge), it remains so even if we get to know more about the situation. In contrast, if something is deemed possible in the light of existing knowledge, it can be rendered false by a single contradictory fact.

The implication of the above for cause and effect in management is clear: a manager can (should!) claim responsibility for a particular outcome only if:

  1. The outcome must (almost always) happen if the managerial action occurs.
  2. It is highly unlikely that the outcome could have occurred without the action occurring prior to it.

Seen in this light, many of the prescriptions laid out in management bestsellers are little better than herpetological oleum.

Probabilistic cause-effect relationships

Of course, deterministic cause-effect relationships aren’t the norm in management –  only the supremely confident (foolhardy?) would claim that a specific managerial action will always lead to a specific organisational outcome.  This begs the question: what about probabilistic relationships? That is, what can we say about claims that a particular action results in a particular effect, but only in a fraction of the instances in which the action occurs?

To address this question, Shafer makes the important point that probabilities not close to zero or one have no meaning in isolation. They have meaning only within a system, and their meaning derives from the impossibility of a successful gambling strategy – the fact that, with probability close to one, no one can make a substantial amount of money betting at the odds given by the probabilities. The last part of the previous statement is a consequence of how probabilities are validated empirically. In Shafer's words:

We validate a system of probabilities empirically by performing statistical tests. Each such test checks whether observations have some overall property that the system says they are practically certain to have. It checks, in other words, on whether observations diverge from the probabilistic model in a way that the model says is practically (approximately) impossible. In Probability and Finance: It’s Only a Game, Vovk and I argue that both the applications of probability and the classical limit theorems (the law of large numbers, the central limit theorem, etc.) can be most clearly understood and most elegantly explained if we treat these asserted practical impossibilities as the basic meaning of a probabilistic or statistical model, from which all other mathematical and practical conclusions are to be derived.  I cannot go further into the argument of the book here, but I do want to emphasize one of its consequences: because the empirical validity of a system of probabilities involves only the approximate impossibilities it implies, it is only these approximate impossibilities that we should expect to see preserved in a deeper causal structure. Other probabilities, those not close to zero or one, may not be preserved and hence cannot claim the causal status.
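To make Shafer's point about validation-by-impossibility a little more tangible, here is a small Python sketch (my own illustration, not from Shafer's paper): a claimed probability is tested by checking whether the observed frequency stays within a band that the model says it is practically certain to stay within.

```python
import random

def consistent_with_model(p_claimed, observations, tolerance_sds=4):
    """Reject the claimed probability if the observed frequency deviates from it
    by more than a few standard errors - an event the model itself deems
    practically impossible for a large sample (the law of large numbers)."""
    n = len(observations)
    freq = sum(observations) / n
    std_err = (p_claimed * (1 - p_claimed) / n) ** 0.5
    return abs(freq - p_claimed) <= tolerance_sds * std_err

random.seed(42)
# Data actually generated with success probability 0.6 ...
data = [1 if random.random() < 0.6 else 0 for _ in range(10_000)]

print(consistent_with_model(0.6, data))  # True: the claim survives the test
print(consistent_with_model(0.8, data))  # False: the claim asserts a practical impossibility
```

The only thing the test can check is the near-certainties (probabilities close to one) implied by the model; the intermediate probabilities themselves are never observed directly.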

An implication of the above is that probabilities not close to zero or one are not fundamental properties of the system/situation; they are subject to change as our knowledge of the situation/system improves. A simple example may serve to explain this point.  Consider the following hypothetical claim from a software vendor:

“80% of our customers experience an increase in sales after implementing our software system.”

Presumably, the marketing department responsible for this statement has the data to back it up. Despite that, the increase in sales for a particular customer cannot (should not!) be attributed to the software. Why?  Well, for the following reasons:

  1. The particular customer may differ in important ways from those used in estimating the probability. This is a manifestation of the reference class problem.
  2. Most statistical studies of the kind used in marketing or management studies are enumerative, not analytical – i.e. they can be used to classify data, but not to establish cause-effect relationships. See my post entitled Enumeration or Analysis for more on the differences between enumerative and analytical studies.

There is an underlying reason for the above which I’ll discuss next.

The root of the problem  – too many variables

The point made above – that outcomes cannot be attributed to actions unless the probabilities involved are close to zero or one – is a consequence of the fact that most organisational outcomes are the result of several factors. It is therefore incorrect to attribute an outcome to a single factor (such as farsighted managerial action). Nancy Cartwright makes this point in her paper entitled Causal Laws and Effective Strategies, where she states that a cause ought to increase the frequency of its purported outcome, but that this increase can be masked by other causal factors that have not been taken into account. She uses the somewhat dated (and therefore incorrect) example of the relationship between smoking and heart disease. However, it serves to illustrate the point, so I'll quote it below:

…a cause ought to increase the frequency of its effect. But this fact may not show up in the probabilities if other causes are at work. Background correlations between the purported cause and other causal factors may conceal the increase in probability which would otherwise appear. A simple example will illustrate. It is generally supposed that smoking causes heart disease. Thus, we may expect that the probability of heart disease on smoking is greater than otherwise (K’s note: i.e. the conditional probability of heart disease given that the person is a smoker, P(H/S),  is greater than the probability of heart disease in the general population, P(H)). This expectation is mistaken, however. Even if it is true that smoking causes heart disease, the expected increase in probability will not appear if smoking is correlated with a sufficiently strong preventative, say exercising. To see why this is so, imagine that exercising is more effective at preventing heart disease than smoking at causing it. Then in any population where smoking and exercising are highly enough correlated,  it can be true that P(H/S) = P(H), or even P(H/S) < P(H).  For the population of smokers also contains a good many exercisers, and when the two are in combination, the exercising tends to dominate….
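A quick simulation makes the masking effect concrete. In the invented population below (numbers chosen purely for illustration), smoking genuinely raises the risk of heart disease within every subgroup, but because smokers are also far more likely to exercise, the overall conditional probability P(H/S) comes out no higher than P(H):

```python
import random

random.seed(1)
population = []
for _ in range(100_000):
    smokes = random.random() < 0.5
    # The confounder: in this invented population, smokers are far more likely to exercise.
    exercises = random.random() < (0.9 if smokes else 0.1)
    # Smoking raises the risk of heart disease; exercise lowers it more strongly.
    risk = 0.10 + (0.05 if smokes else 0.0) - (0.08 if exercises else 0.0)
    disease = random.random() < risk
    population.append((smokes, disease))

p_h = sum(d for _, d in population) / len(population)
smokers = [d for s, d in population if s]
p_h_given_s = sum(smokers) / len(smokers)

print(f"P(H)   = {p_h:.3f}")          # around 0.085
print(f"P(H/S) = {p_h_given_s:.3f}")  # around 0.078: lower, despite the causal effect of smoking
```

The causal effect of smoking has not gone away; it is simply hidden by the correlation between smoking and a strong preventative, exactly as Cartwright describes.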

In the case of strategic outcomes, it is impossible to know all the factors involved. Moreover, the factors are often interdependent and subject to positive feedback (see my previous post for more on this). So the problem is even worse than implied by Cartwright’s example.

Conclusions

The implications of the above can be summarised as follows: the efficacy of most strategic managerial actions is questionable because the probabilities involved in such claims are rarely close to zero or one. This shouldn't be a surprise: most organisational outcomes are consequences of several factors acting in concert, many of which combine in unpredictable ways. Given this, it is unreasonable to expect that managerial actions will result in predictable organisational outcomes. That said, it is only natural to claim responsibility for desirable outcomes and shift the blame for undesirable ones, just as it is to seek simplistic solutions to difficult organisational problems. Hence the insatiable market for management snake oil.


Written by K

August 5, 2010 at 10:09 pm

On the origin of power laws in organizational phenomena


Introduction

Uncertainty is a fact of organizational life –   managers often have to make decisions based on uncertain or incomplete information. Typically such decisions are based on a mix of intuition, experience and blind guesswork or “gut feel”.  In recent years, probabilistic (or statistical) techniques have entered mainstream organizational practice. These have enabled managers to base their decisions and consequent actions on something more than mere subjective judgement – or so the theory goes.

Much of the statistical analysis in organisational theory and research is based on the assumption that the variables of interest have a Normal (aka Gaussian) distribution. That is, the probability of a variable taking on a particular value can be reckoned from the familiar bell-shaped curve. In a paper entitled Beyond Gaussian averages: redirecting organizational science towards extreme events and power laws, Bill McKelvey and Pierpaolo Andriani suggest that many (if not most) organizational variables aren't normally distributed, but are better described by power law or fat-tailed (aka long-tailed or heavy-tailed) distributions. If correct, this has major consequences for quantitative analysis in many areas of organizational theory and practice. To quote from their paper:

Quantitative management researchers tend to presume Gaussian (normal) distributions with matching statistics – for evidence, study any random sample of their current research. Suppose this premise is mostly wrong. It follows that (1) publication decisions based on Gaussian statistics could be mistaken, and (2) advice to managers could be misguided.

Managers generally assume that their actions will not have extreme outcomes. However, if organisational phenomena exhibit power law behaviour, it is possible that seemingly minor actions could have disproportionate results. It is therefore important to understand how such  extreme outcomes  can come about. This post, based on the aforementioned paper and some of the references therein discusses a couple of general mechanisms via which power laws can  arise in organizational phenomena.

I’ll begin by outlining the main differences between normal and power law distributions, and then present a few social phenomena that display power law behaviour. Following that, I get to my main point – a discussion of general mechanisms that underlie power-law type behaviour in organisational phenomena. I conclude by outlining the implication of power-law phenomena for managerial actions and their (intended) outcomes.

Power laws vs. the Normal distribution

Probabilistic variables that are described by the normal distribution tend to take on values that cluster around the average, with the probability dropping off to zero rapidly on either side of the average. In contrast, for long-tailed distributions there is a small but significant probability that the variable will take on a value that is very far from the average (what is sometimes called a black swan event). Long-tailed distributions are often described by power laws. In such cases, the probability of a variable taking a value x is described by a function of the form x^(-α), where α is called the power law exponent. A well-known power law distribution in business and marketing theory is the Pareto distribution. An important characteristic of power law distributions is that they can have infinite variances and unstable means (depending on the exponent), implying that outliers cannot be ignored and that averages can be meaningless.
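To get a feel for how differently the two families behave, here is a short Python sketch (an illustration of the general point, not something taken from the paper) comparing a normal sample with a sample from a Pareto distribution whose exponent implies an infinite variance:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

normal = rng.normal(loc=10, scale=2, size=n)
# Classical Pareto with tail exponent alpha = 1.5: finite mean, infinite variance.
pareto = rng.pareto(1.5, size=n) + 1

for name, sample in [("normal", normal), ("pareto", pareto)]:
    top = np.sort(sample)[-n // 1000:]   # the largest 0.1% of observations
    print(f"{name:7s} mean = {sample.mean():6.2f}   max = {sample.max():10.1f}   "
          f"top 0.1% share of total = {top.sum() / sample.sum():.1%}")

# Typical output: the normal sample's maximum is only about twice its mean and the top
# 0.1% of observations contribute a fraction of a percent of the total, whereas the
# Pareto sample's maximum dwarfs its mean and the top 0.1% contribute on the order of
# ten percent of the total. Extreme events dominate, and the sample variance never
# settles down as more data is collected.
```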

Power laws in social phenomena

In their paper McKelvey and Andriani mention a number of examples of power laws in natural and social phenomena. Examples of the latter include:

  1. The sizes of US firms: the probability that a firm is greater than size N (where N is the number of employees) is inversely proportional to N.
  2. The number of criminal acts committed by individuals: the frequency of conviction is a power law function of the ranked number of convictions.
  3. Information access on the Web: The access rate of new content on the web decays with time according to a power law.
  4. Frequency of family names: Frequency of family names has a power law dependence on family size (number of people with the same family name).

Given the ubiquity of power laws in social phenomena, McKelvey and Andriani suggest that they may be common in organizational phenomena as well. If this is so, managerial decisions based on the assumption of normality could be wildly incorrect. In effect, such an assumption treats extreme events as aberrations and ignores them. But extreme events have extreme business implications and hence must be factored into any sensible analysis.

If power laws are indeed as common as claimed, there must be some common underlying mechanism(s) that give rise to them.  We look at a couple of these in the following sections.

Positive feedback

In a classic paper entitled The Second Cybernetics: Deviation-Amplifying Mutual Causal Processes, published in 1963, Magoroh Maruyama pointed out that small causes can have disproportionate effects if they are amplified through positive feedback. Audio feedback is a well-known example of this process. What is, perhaps, less well appreciated is that mutually dependent deviation-amplifying processes can cause qualitative changes in the phenomenon of interest. A classic example is the phenomenon of a run on a bank: as people withdraw money in bulk, the likelihood of bank insolvency increases, thus causing more people to make withdrawals. The qualitative change at the end of this positive feedback cycle is, of course, the bank going bust.
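A toy simulation of the bank run makes the point concrete (Python; the reserve level, feedback rule and starting withdrawals are all invented for illustration and are not taken from Maruyama or from McKelvey and Andriani). All three runs start with withdrawals that are tiny relative to the bank's reserves, yet only some of them tip into a self-reinforcing run:

def bank_run(initial_withdrawal, reserves=100.0, max_rounds=200):
    # Toy deviation-amplifying loop: each round's withdrawals scale with how
    # depleted the reserves already look (the perceived risk of insolvency).
    withdrawn, total, rounds = initial_withdrawal, initial_withdrawal, 0
    while total < reserves and rounds < max_rounds:
        perceived_risk = total / reserves
        withdrawn *= 0.9 + perceived_risk  # base factor 0.9: absent panic, withdrawals taper off
        total += withdrawn
        rounds += 1
    return total, rounds, total >= reserves

for start in (0.1, 1.0, 2.0):
    total, rounds, bust = bank_run(start)
    outcome = (f"bank fails after {rounds} rounds" if bust
               else f"run fizzles out (total withdrawn {total:.1f})")
    print(f"initial withdrawal {start:>4}: {outcome}")

The specific numbers do not matter; what matters is the shape of the outcome. Below a threshold the deviation damps out; above it the feedback takes over and the result differs qualitatively, not just in degree.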

Maruyama also draws attention to the fact that the law of causality – that similar causes lead to similar effects – needs to be revised in light of positive feedback effects. To quote from his paper:

A sacred law of causality in the classical philosophy stated that similar conditions produce similar effects. Consequently, dissimilar results were attributed to dissimilar conditions. Many scientific researches were dictated by this philosophy. For example, when a scientist tried to find out why two persons under study were different, he looked for a difference in their environment or in their heredity. It did not occur to him that neither environment nor heredity may be responsible for the difference – He overlooked the possibility that some deviation-amplifying interactional process in their personality and in their environment may have produced the difference.

In the light of the deviation-amplifying mutual causal process, the law of causality is now revised to state that similar conditions may result in dissimilar products. It is important to note that this revision is made without the introduction of indeterminism and probabilism. Deviation-amplifying mutual causal processes are possible even within the deterministic universe, and make the revision of the law of causality even within the determinism. Furthermore, when the deviation-amplifying mutual causal process is combined with indeterminism, here again a revision of a basic law becomes necessary. The revision states:

A small initial deviation, which is within the range of high probability, may develop into a deviation of very low probability or more precisely, into a deviation which is very improbable within the framework of probabilistic unidirectional causality.

The effect of positive feedback can be further amplified if the variable of interest is made up of several interdependent (rather than independent) effects. We’ll look at what this means next.

Interdependence, not independence

Typically we invoke probabilities when we are uncertain about outcomes. As an example from project management, the uncertainty in the duration of a project task can be modeled using a probability distribution. In this case the probability distribution is a characterization of our uncertainty regarding how long it is going to take to complete the task. Now, the accuracy of one's predictions depends on whether the probability distribution is a good representation of (the yet to materialize) reality. Where does the distribution come from? Generally one fits the data to an assumed distribution. This is an important point: the fit is an assumption – one can fit historical data to any reasonable distribution, but one can never be sure that it is the right one. To get the form of the distribution from first principles one has to understand the mechanism behind the quantity of interest. To do that one has to first figure out what the quantity depends on. It is hard to do this for organisational phenomena because they depend on several factors.
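To make the "fit is an assumption" point concrete, here is a small sketch (Python with scipy; the "historical" durations are simulated, and the 40-day threshold is an arbitrary illustrative choice) that fits the same sample of task durations to two plausible candidate distributions and then asks each of them about an extreme duration:

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Pretend these are historical task durations in days; here they happen to be
# simulated from a lognormal distribution, but the analyst does not know that.
durations = rng.lognormal(mean=2.0, sigma=0.6, size=200)

# Fit two candidate distributions to the same data.
ln_params = stats.lognorm.fit(durations, floc=0)
gamma_params = stats.gamma.fit(durations, floc=0)

# Both fits describe the bulk of the data reasonably well, but they can
# disagree substantially about the probability of an extreme duration.
threshold = 40  # an "extreme" duration, in days
print(f"P(duration > {threshold} days): "
      f"lognormal fit {stats.lognorm.sf(threshold, *ln_params):.4f}, "
      f"gamma fit {stats.gamma.sf(threshold, *gamma_params):.4f}")

The two fitted curves are close to indistinguishable over the range where most of the historical data lies; they part company precisely in the tail, which is where decisions about extreme outcomes are made.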

I’ll explain using an example: what does a project task duration depend on? There are several possibilities – developer productivity, technology used, working environment or even the quality of the coffee! Quite possibly it depends on all of the above and many more factors besides. Further still, the variables that affect task duration can depend on each other – i.e. they can be correlated. An example of correlation is the link between productivity and working environment. These dependencies mark a key difference between situations described by Normal distributions and those described by power laws. To quote from the paper:

The difference lies in assumptions about the correlations among events. In a Gaussian distribution the data points are assumed to be independent and additive. Independent events generate normal distributions, which sit at the heart of modern statistics. When causal elements are independent-multiplicative they produce a lognormal distribution (see this paper for several examples drawn from science), which turns into a Pareto distribution as the causal complexity increases. When events are interdependent, normality in distributions is not the norm. Instead Paretian distributions dominate because positive feedback processes leading to extreme events occur more frequently than ‘normal’, bell-shaped Gaussian-based statistics lead us to expect. Further, as tension imposed on the data points increases to the limit, they can shift from independent to interdependent.

So, variables that are made up of many independent causes will be normally distributed, whereas those that are made up of many interdependent (or correlated) causes will have a power law distribution, particularly if the variables display a positive feedback effect. See my posts entitled Monte Carlo simulation of multiple project tasks and The effect of task duration correlations on project schedules for illustrations of the effects of interdependence and correlations on variables.
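The contrast between additive and multiplicative combinations described in the quote above is easy to see in a few lines of simulation (Python/numpy; the factor ranges and counts are assumptions chosen purely for illustration). Summing many independent effects produces the familiar bell curve, while letting the same effects act multiplicatively, a crude stand-in for interdependence in which each factor scales the impact of the others, produces a skewed, long-tailed result:

import numpy as np

rng = np.random.default_rng(1)
n_samples, n_factors = 100_000, 20

# Many small independent effects combined additively -> roughly Normal
# (this is just the central limit theorem at work).
additive = rng.uniform(0.5, 1.5, size=(n_samples, n_factors)).sum(axis=1)

# The same effects combined multiplicatively, so each factor scales the
# others' contribution -> lognormal-like, with a long right tail.
multiplicative = rng.uniform(0.5, 1.5, size=(n_samples, n_factors)).prod(axis=1)

for name, x in (("additive", additive), ("multiplicative", multiplicative)):
    skew = ((x - x.mean()) ** 3).mean() / x.std() ** 3
    print(f"{name:>14}: median={np.median(x):7.2f}  "
          f"max/median={x.max() / np.median(x):8.1f}  skew={skew:6.2f}")

Strictly speaking, the multiplicative case gives a lognormal rather than a pure power law; as the quote notes, it shades into a Pareto distribution as the causal complexity increases.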

Wrapping up

We’ve looked at a couple of general mechanisms which can give rise to power laws in organisations.  In particular, we’ve seen that power laws may lurk in phenomena that are subject to positive feedback and correlation effects. It is important to note that these effects are quite general, so they can apply to diverse organizational phenomena.  For such phenomena, any analysis based on the assumption of Normal statistics will be flawed.

Most management theories assume simple cause-effect relationships between managerial actions and macro-level outcomes. This assumption is flawed because positive feedback effects can cause qualitative changes in the phenomena studied. Moreover, it is often difficult to know with certainty all the factors that affect a macro-level quantity because such quantities are typically composed of several interdependent factors. In view of this it's no surprise that managerial actions sometimes lead to unexpected extreme consequences.


Written by K

July 28, 2010 at 11:43 pm
