Cause and effect in management
Management schools and gurus tell us that specific managerial actions will lead to desirable consequences – witness the prescriptions for success in books such as Good to Great or In Search of Excellence. But can one really attribute success (or failure) to specific actions? A cause-effect relationship is often assumed, but in reality the causal connection between strategic management actions and organisational outcomes is tenuous. This post, based on a paper by Glenn Shafer entitled Causality and Responsibility, is an exploration of the causal connection between managerial actions and their (assumed) consequences.
Note that the discussion below applies to strategic – or “big picture” – management decisions, not operational ones. In the latter, cause and effect is generally quite clear cut. For example, the decision to initiate a project sets in motion several processes that have fairly predictable outcomes. However, taking a big picture view, initiating a project (or even the successful completion of one) does not imply that the strategic aims of the project will be met. It is the latter point that is of interest here – the causal connection between a strategic decision and its assumed outcome.
Shafer’s paper deals with causality and responsibility in legal deliberations: specifically, the process by which judges and juries reach their verdict as to whether the accused (person or entity) is actually responsible (in a causal sense) for the outcome they are charged with. In short, did the actions of the accused cause the outcome? The arguments Shafer makes are quite general, and have applicability to any discipline. In the following paragraphs I’ll look at a couple of the key points he makes and outline their implications for cause and effect in management actions.
Deterministic cause-effect relationships
The first point that Shafer makes is that we should infer that a particular action causes a particular outcome only if it is improbable that the outcome could have happened without the action preceding it. In Shafer’s words:
…we are on safe ground in attributing responsibility if we do so based on our knowledge of impossibilities. It is not surprising, therefore, that the classical legal concept of cause – necessary and sufficient cause – is defined in terms of impossibility. According to this concept, an action causes an event if the event must happen (it is impossible for it not to happen) when the action is taken and cannot happen (it is impossible for it to happen) if the action is not taken.
This is, in fact, what legal arguments attempt to do: prove, beyond reasonable doubt, that the crime occurred because of the defendant's actions.
The reason that impossibilities are a better way of "proving" causal relationships is that such relationships cannot be invalidated as our knowledge of the situation increases, provided the knowledge we already have is valid. In other words, once something is deemed impossible (using valid knowledge), it remains so even as we learn more about the situation. In contrast, if something is deemed possible in the light of existing knowledge, that judgement can be overturned by a single contradictory fact.
The implication of the above for cause and effect in management is clear: a manager can (should!) claim responsibility for a particular outcome only if:
- The outcome must (almost always) happen if the managerial action occurs.
- It is highly unlikely that the outcome could have occurred without the action occurring prior to it.
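The two conditions above can be sketched as a small check against observed records. This is a hypothetical illustration (the function name and data are mine, not Shafer's): a single counterexample of either kind undermines the deterministic causal claim.

```python
# Hypothetical check of the two conditions for a deterministic causal claim:
# (1) the action is never observed without the outcome following,
# (2) the outcome is never observed without the action preceding it.

def causal_claim_holds(records, max_exceptions=0):
    """records: iterable of (action_taken, outcome_occurred) boolean pairs."""
    action_without_outcome = sum(1 for a, o in records if a and not o)
    outcome_without_action = sum(1 for a, o in records if o and not a)
    return (action_without_outcome <= max_exceptions and
            outcome_without_action <= max_exceptions)

# The outcome always follows the action and never occurs without it:
good = [(True, True)] * 10 + [(False, False)] * 10
# The outcome sometimes occurs without the action, so causality is suspect:
bad = good + [(False, True)] * 3

print(causal_claim_holds(good))  # True
print(causal_claim_holds(bad))   # False
```

The `max_exceptions` parameter is a nod to the "almost always" and "highly unlikely" hedges in the conditions above: real data rarely supports strict impossibility.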
Seen in this light, many of the prescriptions laid out in management bestsellers are little better than herpetological oleum.
Probabilistic cause-effect relationships
Of course, deterministic cause-effect relationships aren’t the norm in management – only the supremely confident (foolhardy?) would claim that a specific managerial action will always lead to a specific organisational outcome. This raises the question: what about probabilistic relationships? That is, what can we say about claims that a particular action results in a particular effect, but only in a fraction of the instances in which the action occurs?
To address this question, Shafer makes the important point that probabilities not close to zero or one have no meaning in isolation. They have meaning only within a system, and their meaning derives from the impossibility of a successful gambling strategy: it is practically certain that no one can make a substantial amount of money betting at the odds given by the probabilities. This is a consequence of how probabilities are validated empirically. In Shafer’s words:
We validate a system of probabilities empirically by performing statistical tests. Each such test checks whether observations have some overall property that the system says they are practically certain to have. It checks, in other words, on whether observations diverge from the probabilistic model in a way that the model says is practically (approximately) impossible. In Probability and Finance: It’s Only a Game, Vovk and I argue that both the applications of probability and the classical limit theorems (the law of large numbers, the central limit theorem, etc.) can be most clearly understood and most elegantly explained if we treat these asserted practical impossibilities as the basic meaning of a probabilistic or statistical model, from which all other mathematical and practical conclusions are to be derived. I cannot go further into the argument of the book here, but I do want to emphasize one of its consequences: because the empirical validity of a system of probabilities involves only the approximate impossibilities it implies, it is only these approximate impossibilities that we should expect to see preserved in a deeper causal structure. Other probabilities, those not close to zero or one, may not be preserved and hence cannot claim the causal status.
An implication of the above is that probabilities not close to zero or one are not fundamental properties of the system/situation; they are subject to change as our knowledge of the situation/system improves. A simple example may serve to explain this point. Consider the following hypothetical claim from a software vendor:
“80% of our customers experience an increase in sales after implementing our software system.”
Presumably, the marketing department responsible for this statement has the data to back it up. Despite that, the increase in sales for a particular customer cannot (should not!) be attributed to the software. Why? Well, for the following reasons:
- The particular customer may differ in important ways from those used in estimating the probability. This is a manifestation of the reference class problem.
- Most statistical studies of the kind used in marketing or management studies are enumerative, not analytical – i.e. they can be used to classify data, but not to establish cause-effect relationships. See my post entitled Enumeration or Analysis for more on the differences between enumerative and analytical studies.
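The reference class problem can be made concrete with invented numbers. Suppose the vendor's 80% figure aggregates over customers of very different kinds (the segments and counts below are hypothetical):

```python
# Hypothetical customer records: (segment, sales_increased). The aggregate
# success rate is 80%, but it is dominated by one segment.
customers = (
    [("large_retailer", True)] * 75 + [("large_retailer", False)] * 5 +
    [("small_firm", True)] * 5 + [("small_firm", False)] * 15
)

def success_rate(records):
    return sum(1 for _, ok in records if ok) / len(records)

small = [r for r in customers if r[0] == "small_firm"]

print(round(success_rate(customers), 2))  # 0.8  -- the headline figure
print(round(success_rate(small), 2))      # 0.25 -- the rate for small firms
```

A small firm reading the headline "80%" is being offered a probability estimated on a reference class it may not belong to.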
There is an underlying reason for the above which I’ll discuss next.
The root of the problem – too many variables
The points made above – that outcomes cannot be attributed to actions unless the probabilities involved are close to zero or one – are a consequence of the fact that most organisational outcomes result from several factors. It is therefore incorrect to attribute an outcome to a single factor (such as farsighted managerial action). Nancy Cartwright makes this point in her paper entitled Causal Laws and Effective Strategies, where she states that a cause ought to increase the frequency of its purported effect, but that this increase can be masked by other causal factors that have not been taken into account. Her example – the relationship between smoking and heart disease – is somewhat dated, but it serves to illustrate the point, so I’ll quote it below:
…a cause ought to increase the frequency of its effect. But this fact may not show up in the probabilities if other causes are at work. Background correlations between the purported cause and other causal factors may conceal the increase in probability which would otherwise appear. A simple example will illustrate. It is generally supposed that smoking causes heart disease. Thus, we may expect that the probability of heart disease on smoking is greater than otherwise (K’s note: i.e. the conditional probability of heart disease given that the person is a smoker, P(H/S), is greater than the probability of heart disease in the general population, P(H)). This expectation is mistaken, however. Even if it is true that smoking causes heart disease, the expected increase in probability will not appear if smoking is correlated with a sufficiently strong preventative, say exercising. To see why this is so, imagine that exercising is more effective at preventing heart disease than smoking at causing it. Then in any population where smoking and exercising are highly enough correlated, it can be true that P(H/S) = P(H), or even P(H/S) < P(H). For the population of smokers also contains a good many exercisers, and when the two are in combination, the exercising tends to dominate….
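Cartwright's masking effect can be reproduced in a toy simulation. All the probabilities below are invented for illustration: smoking raises heart-disease risk, exercise lowers it more strongly, and smokers in this made-up population happen to exercise far more often.

```python
import random

random.seed(0)

# Toy population (all probabilities invented for illustration). Smoking is
# a genuine cause of disease here, but its correlation with exercise masks
# the increase in the unconditional probabilities.
population = []
for _ in range(100_000):
    smokes = random.random() < 0.5
    exercises = random.random() < (0.95 if smokes else 0.10)
    p = 0.10 + (0.06 if smokes else 0.0) - (0.09 if exercises else 0.0)
    population.append((smokes, random.random() < p))

p_h = sum(d for _, d in population) / len(population)
smokers = [d for s, d in population if s]
p_h_given_s = sum(smokers) / len(smokers)

print(f"P(H) = {p_h:.3f}, P(H|S) = {p_h_given_s:.3f}")
# Despite smoking being a (toy) cause, P(H|S) comes out below P(H).
```

Conditioning on exercise as well would recover the causal effect, but that presupposes we know exercise is the confounding factor – precisely what is unknowable in the organisational settings discussed next.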
In the case of strategic outcomes, it is impossible to know all the factors involved. Moreover, the factors are often interdependent and subject to positive feedback (see my previous post for more on this). So the problem is even worse than implied by Cartwright’s example.
The implications of the above can be summarised as follows: the efficacy of most strategic managerial actions is questionable because the probabilities involved in such claims are rarely close to zero or one. This shouldn’t be a surprise: most organisational outcomes are consequences of several factors acting in concert, many of which combine in unpredictable ways. Given this, it is unreasonable to expect that managerial actions will result in predictable organisational outcomes. Nevertheless, it is only natural to claim responsibility for desirable outcomes and shift the blame for undesirable ones, just as it is natural to seek simplistic solutions to difficult organisational problems. Hence the insatiable market for management snake oil.