Eight to Late

Sensemaking and Analytics for Organizations

Archive for July 2010

On the origin of power laws in organizational phenomena

with 4 comments

Introduction

Uncertainty is a fact of organizational life – managers often have to make decisions based on uncertain or incomplete information. Typically such decisions are based on a mix of intuition, experience and blind guesswork or “gut feel”. In recent years, probabilistic (or statistical) techniques have entered mainstream organizational practice. These have enabled managers to base their decisions and consequent actions on something more than mere subjective judgement – or so the theory goes.

Much of the statistical analysis in organisational theory and research is based on the assumption that the variables of interest have a Normal (aka Gaussian) distribution. That is, the probability of a variable taking on a particular value can be reckoned from the familiar bell-shaped curve. In a paper entitled Beyond Gaussian averages: redirecting organizational science towards extreme events and power laws, Bill McKelvey and Pierpaolo Andriani suggest that many (if not most) organizational variables aren’t normally distributed, but are better described by power law or fat-tailed (aka long-tailed or heavy-tailed) distributions. If correct, this has major consequences for quantitative analysis in many areas of organizational theory and practice. To quote from their paper:

Quantitative management researchers tend to presume Gaussian (normal) distributions with matching statistics – for evidence, study any random sample of their current research. Suppose this premise is mostly wrong. It follows that (1) publication decisions based on Gaussian statistics could be mistaken, and (2) advice to managers could be misguided.

Managers generally assume that their actions will not have extreme outcomes. However, if organisational phenomena exhibit power law behaviour, it is possible that seemingly minor actions could have disproportionate results. It is therefore important to understand how such extreme outcomes can come about. This post, based on the aforementioned paper and some of the references therein, discusses a couple of general mechanisms via which power laws can arise in organizational phenomena.

I’ll begin by outlining the main differences between normal and power law distributions, and then present a few social phenomena that display power law behaviour. Following that, I get to my main point – a discussion of general mechanisms that underlie power-law type behaviour in organisational phenomena. I conclude by outlining the implication of power-law phenomena for managerial actions and their (intended) outcomes.

Power laws vs. the Normal distribution

Probabilistic variables that are described by the normal distribution tend to take on values that cluster around the average, with the probability dropping off rapidly to zero on either side of the average. In contrast, for long-tailed distributions there is a small but significant probability that the variable will take on a value that is very far from the average (what is sometimes called a black swan event). Long-tailed distributions are often described by power laws. In such cases, the probability of a variable taking on a value x falls off as x^{-\alpha}, where \alpha is called the power law exponent. A well-known power law distribution in business and marketing theory is the Pareto distribution. An important characteristic of power law distributions is that (for small enough exponents) they have infinite variances and unstable means, implying that outliers cannot be ignored and that averages are meaningless.
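To see what “unstable means” and fat tails look like in practice, here’s a minimal sketch in Python (my own illustration, not from the paper; the parameters are arbitrary). It draws large samples from a Normal and a Pareto distribution and compares their sample means and maxima:

```python
# Contrast a Normal variable with a power law (Pareto) variable of exponent 1.5.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

normal_sample = rng.normal(loc=10, scale=2, size=n)
# numpy's pareto() draws from the Lomax distribution; adding 1 gives the
# classical Pareto with minimum value 1. For alpha = 1.5, variance is infinite.
alpha = 1.5
pareto_sample = rng.pareto(alpha, size=n) + 1

for name, sample in [("Normal", normal_sample), ("Pareto", pareto_sample)]:
    print(f"{name:7s} mean of first 1,000 = {sample[:1000].mean():7.2f}   "
          f"mean of all = {sample.mean():7.2f}   max = {sample.max():10.1f}")
# The Normal mean barely moves and the maximum stays near the average; the
# Pareto mean drifts with sample size and the maximum is a huge outlier.
```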

Power laws in social phenomena

In their paper McKelvey and Andriani mention a number of examples of power laws in natural and social phenomena. Examples of the latter include:

  1. The sizes of US firms: the probability that a firm is greater than size N (where N is the number of employees) is inversely proportional to N.
  2. The number of criminal acts committed by individuals: the frequency of conviction is a power law function of the ranked number of convictions.
  3. Information access on the Web: The access rate of new content on the web decays with time according to a power law.
  4. Frequency of family names: Frequency of family names has a power law dependence on family size (number of people with the same family name).

Given the ubiquity of power laws in social phenomena, McKelvey and Andriani suggest that they may be common in organizational phenomena as well. If this is so, managerial decisions based on the assumption of normality could be wildly incorrect. In effect, such an assumption treats extreme events as aberrations and ignores them. But extreme events have extreme business implications and hence must be factored into any sensible analysis.

If power laws are indeed as common as claimed, there must be some common underlying mechanism(s) that give rise to them.  We look at a couple of these in the following sections.

Positive feedback

In a classic paper entitled, The Second Cybernetics: Deviation-Amplifying Mutual Causal Processes, published in 1963, Magoroh Maruyama pointed out that small causes can have disproportionate effects if they are amplified through positive feedback.   Audio feedback is a well known example of this process.  What is, perhaps, less well appreciated is that mutually dependent deviation-amplifying processes can cause qualitative changes in the phenomenon of interest. A classic example is the phenomenon of a run on a bank : as people withdraw money in bulk, the likelihood of bank insolvency increases thus causing more people to make withdrawals. The qualitative change at the end of this positive feedback cycle is, of course, the bank going bust.
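A toy numerical sketch makes this concrete (my own construction, not Maruyama’s; the panic threshold and growth rates are invented). Two nearly identical starting conditions lead to qualitatively different outcomes:

```python
def bank_run(initial_withdrawal, panic_threshold=0.01, periods=50):
    w = initial_withdrawal  # fraction of deposits withdrawn so far
    for t in range(periods):
        if w >= 1.0:
            return f"bank bust at period {t}"
        # below the threshold, nervousness dies out; above it, each
        # withdrawal makes insolvency look likelier, triggering more
        w = w * (0.5 if w < panic_threshold else 2.0)
    return f"stable (fraction withdrawn = {w:.2e})"

print(bank_run(0.009))  # just below the threshold: withdrawals fizzle out
print(bank_run(0.011))  # just above: the feedback loop busts the bank
```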

Maruyama also draws attention to the fact that the law of causality – that similar causes lead to similar effects – needs to be revised in light of positive feedback effects. To quote from his paper:

A sacred law of causality in the classical philosophy stated that similar conditions produce similar effects. Consequently, dissimilar results were attributed to dissimilar conditions. Many scientific researches were dictated by this philosophy. For example, when a scientist tried to find out why two persons under study were different, he looked for a difference in their environment or in their heredity. It did not occur to him that neither environment nor heredity may be responsible for the difference – He overlooked the possibility that some deviation-amplifying interactional process in their personality and in their environment may have produced the difference.

In the light of the deviation-amplifying mutual causal process, the law of causality is now revised to state that similar conditions may result in dissimilar products. It is important to note that this revision is made without the introduction of indeterminism and probabilism. Deviation-amplifying mutual causal processes are possible even within the deterministic universe, and make the revision of the law of causality even within the determinism. Furthermore, when the deviation-amplifying mutual causal process is combined with indeterminism, here again a revision of a basic law becomes necessary. The revision states:

A small initial deviation, which is within the range of high probability, may develop into a deviation of very low probability or more precisely, into a deviation which is very improbable within the framework of probabilistic unidirectional causality.

The effect of positive feedback can be further amplified if the variable of interest is made up of several interdependent (rather than independent) effects. We’ll look at what this means next.

Interdependence, not independence

Typically we invoke probabilities when we are uncertain about outcomes. As an example from project management, the uncertainty in the duration of a project task can be modeled using a probability distribution. In this case the probability distribution is a characterization of our uncertainty regarding how long it is going to take to complete the task. Now, the accuracy of one’s predictions depends on whether the probability distribution is a good representation of (the yet to materialize) reality. Where does the distribution come from? Generally one fits historical data to an assumed distribution. This is an important point: the fit is an assumption – one can fit historical data to any reasonable distribution, but one can never be sure that it is the right one. To get the form of the distribution from first principles one has to understand the mechanism behind the quantity of interest, and to do that one has to first figure out what the quantity depends on. This is hard for organisational phenomena because they depend on several factors.
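As an aside, here’s what “fitting the data to an assumed distribution” can look like in practice – a minimal sketch with synthetic data (the sample size, candidate distributions and parameters are my own choices, not from the paper):

```python
# The same task duration data can be fit acceptably by several distributions,
# so the choice of distribution is an assumption, not a conclusion.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
durations = rng.lognormal(mean=2.0, sigma=0.4, size=60)  # fake task history

for dist in (stats.lognorm, stats.gamma, stats.weibull_min):
    params = dist.fit(durations)
    pvalue = stats.kstest(durations, dist.name, args=params).pvalue
    print(f"{dist.name:12s} KS test p-value = {pvalue:.2f}")
# With only 60 observations, all three fits typically pass the KS test:
# the data alone cannot tell us which distribution is "right".
```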

I’ll explain using an example: what does a  project task duration depend on?  There are several possibilities – developer productivity, technology used, working environment or even the quality of the coffee!  Quite possibly it depends on  all of the above and many more factors. Further still, the variables that affect task duration can depend on each other – i.e. they can be correlated.  An example of correlation is the link between productivity and working environment. Such dependencies are  a key difference between Normal and power law distributions. To quote from the paper:

The difference lies in assumptions about the correlations among events. In a Gaussian distribution the data points are assumed to be independent and additive. Independent events generate normal distributions, which sit at the heart of modern statistics. When causal elements are independent-multiplicative they produce a lognormal distribution (see this paper for several examples drawn from science), which turns into a Pareto distribution as the causal complexity increases. When events are interdependent, normality in distributions is not the norm. Instead Paretian distributions dominate because positive feedback processes leading to extreme events occur more frequently than ‘normal’, bell-shaped Gaussian-based statistics lead us to expect. Further, as tension imposed on the data points increases to the limit, they can shift from independent to interdependent.

So, variables that are made up of many independent causes will be normally distributed whereas those that are made up of many interdependent (or correlated) variables will have a power law distribution, particularly if the variables display a positive feedback effect.  See my posts entitled,  Monte Carlo simulation of multiple project tasks and the effect of task duration correlations on project schedules for illustrations of the effects of interdependence and correlations on variables.
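To illustrate the quoted mechanisms, here’s a rough numerical sketch (my own, with arbitrary parameters): summing independent effects gives a roughly Normal result, multiplying them gives a lognormal one, and adding interdependence – here, a crude common shock shared by all causes – fattens the tail dramatically:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_causes = 50_000, 20

factors = rng.uniform(0.5, 1.5, size=(n_trials, n_causes))
additive = factors.sum(axis=1)         # independent + additive -> roughly Normal
multiplicative = factors.prod(axis=1)  # independent x multiplicative -> lognormal

# crude interdependence: every cause in a trial is scaled by a common shock
shock = rng.uniform(0.5, 1.5, size=(n_trials, 1))
interdependent = (factors * shock).prod(axis=1)

for name, x in [("additive", additive),
                ("multiplicative", multiplicative),
                ("interdependent", interdependent)]:
    tail_ratio = np.percentile(x, 99.9) / np.median(x)
    print(f"{name:15s} 99.9th percentile / median = {tail_ratio:.3g}")
# The ratio stays close to 1 for the additive case and explodes as the
# causes become multiplicative and then interdependent.
```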

Wrapping up

We’ve looked at a couple of general mechanisms which can give rise to power laws in organisations.  In particular, we’ve seen that power laws may lurk in phenomena that are subject to positive feedback and correlation effects. It is important to note that these effects are quite general, so they can apply to diverse organizational phenomena.  For such phenomena, any analysis based on the assumption of Normal statistics will be flawed.

Most management theories assume simple cause-effect relationships between managerial actions and macro-level outcomes. This assumption is flawed because positive feedback effects can cause qualitative changes in the phenomena studied. Moreover, it is often difficult to know with certainty all the factors that affect a macro-level quantity because such quantities are typically composed of several interdependent factors. In view of this it’s no surprise that managerial actions sometimes lead to unexpected extreme consequences.

Written by K

July 28, 2010 at 11:43 pm

On the relationship between projects and project managers

leave a comment »

Introduction

Those who run projects often spend a large part of their waking hours working or thinking about work.  It is therefore no surprise that the self images and identities of such individuals are affected (if not defined) by their work roles.  In a sense, their identities are colonized (in the sense of “taken over” or “largely defined”) by the project and the larger, permanent organization which hosts the project. A paper entitled, Who is colonizing whom? Intertwined identities in product development projects,  by Thomas Andersson and Mikael Wickelgren explores this issue via a longitudinal case study that was carried out within new product development teams at Volvo. This post is a summary and review of the paper.

From the title of the paper it is evident that the identity issue is more than just a simple “the project leader’s (or team’s) identity is defined by project work” argument. Indeed, those who lead product development projects are themselves involved in creating at least three identities: those of the product, project and organization. Further, as I have pointed out in my post on project management in post-bureaucratic organizations, there are contradictions in the way in which project management operates. On the one hand it is seen as a means to direct innovative efforts (such as product development initiatives); on the other it is an essentially top-down, bureaucratic means of control. Project teams often operate in such contradictory, tension-filled environments. Although most project leaders believe they work on projects by choice, it could be that the not-so-subtle pressures of expectation and social/professional norms force the choice upon them.

However, as the authors point out, identity construction isn’t enough to explain why folks are willing to work insane hours; there’s something more going on. Indeed, there is research supporting the notion that employees’ identities are moulded by organizations to suit organizational ends – in other words, employees’ identities are colonized by organisations. Andersson and Wickelgren suggest that the process of colonization isn’t as straightforward as it seems because employees often actively seek demanding roles. So, who is colonizing whom? Both parties are complicit in the colonization process. The main aim of the authors is to describe identity construction processes in project work, focusing on how project leaders form their own self-identities through proving their competence in project work.

My review follows the format of the paper: I’ll begin  with an overview of the relevant conceptual and theoretical background and then get into the case study.

Identity construction in project management

Individual identities are defined by how people relate to (professional and social) situations. The process of identity construction is an ongoing process of making sense of situations from a personal perspective. An individual is typically subject to many different identities at an aggregate level – for example the professional identity of a project manager, the identity of a parent etc. These can be thought of as certain norms or ways in which people are expected to behave. Individuals identify with aspects of these roles and  act them out, thus  integrating them into  their own (self) identities. The point is that one’s professional identity is a part of one’s self identity.

The process of identity construction is a discursive one: i.e. it depends on how the situations in which the individual finds himself or herself unfold.  In this connection the authors mention two important concepts, identity work and identity regulation. The first is the ongoing process of identity formation (and reformation) through interaction with the situation at hand (which could be a project, say). The second refers to the norms and rules, or the ways in which people are expected to behave in situations (the explicit norms and rules of project management, for instance). It is clear that identity regulation – by laying down appropriate behaviours – can have a “colonizing effect” on people’s identities. The authors make the point that even identity work can have an implicit colonizing effect – an example would be where project leaders identify with their work to such an extent that they go above and beyond the call of duty.

The increasing projectisation of organisations means that many individuals spend a large part of their work hours in a project environment. As such, it is inevitable that this will affect their self-identity. In a fascinating paper, Lindgren and Packendorff pointed out that there are certain commonly accepted ways of relating to and making sense of project situations. For example, a project is seen as demanding higher levels of professionalism and loyalty than “normal” organizational work. Such norms constrain the choices of those involved in projects. Once signed up, one has to behave in certain accepted ways. The authors make the point that there are very few empirical studies that look into how people handle such a “projectified reality.”

Several researchers have recognized the paradoxical nature of project work. Project-based management is touted as a means to loosen the hold of bureaucratic management structures. However, project management in practice tends to be riddled with (pointless?) bureaucratic procedures. As another example, projects are seen as a means to accelerate organizational learning – by doing new things under controlled, time-bound conditions. Yet the reality is that when projects are in progress, organisations are loath to spend time on capturing knowledge. The focus is always on the immediate deliverables rather than longer-term learning. Given this, it is no surprise that individuals, too, would rather focus on their next project than reflect on what happened on the previous one. As the authors put it:

…project managers seem ambivalent about their ‘professional identity’; they both aspire to it and resist it. Because of the lack of opportunities for reflection and learning, project workers often seek higher positions in future projects as their reward, with the result that their careers become a series of endless projects requiring increased responsibility and commitment. Emergency situations and problems that arise owing to these time and resource constraints are resolved by heroic actions that gradually become taken-for granted solutions.

It’s a sad commentary on the profession, but I reckon that we, as practitioners, are at least partly to blame.

Project work-life balance

Projects usually operate under tight budgetary and time constraints. Even if one can find more money to throw at a project, it is often impossible to buy more time. As a result those who work on projects often end up working overtime – “doing whatever it takes” to finish the work. But as the authors point out:

‘Doing whatever it takes’ is a very abstract commitment that is hardly measurable since basically it is a social construction dependent on the project leaders’ sense of duty and the external pressures for heroic actions. The dark side of this commitment means long working hours with the inevitable risk of burnout, stress and work/life balance difficulties, all of which may lead to problems with health, general well-being, and family life. The potential damage is as real for the project workers as it is for their organizations.

It gets worse: this destructive type of “heroic behaviour” is generally seen as distinguishing committed workers from less committed ones. The authors claim that this goes beyond specific organisations; it is a consequence of the projectisation of society in general.

The case study

The qualitative data presented in the case study was gathered by the authors over two periods: 1998-2001 and 2007. In the intervening period, the authors stayed in touch with those in the organisation (Volvo Car Corporation) but did not actively gather data. Such a longitudinal study enables researchers to see how attitudes evolve over time. The methodology of the study is best described in their own words:

We studied project managers who were jointly running new car projects at Volvo Car Corporation (VCC). In addition to direct observations, we video-taped 100 hours of project management meetings and audio-taped individual interviews with all participating project leaders. Our combination of observations and interviews allowed us to observe the practice and everyday lives of the project leaders and to discuss the observations with the interviewees. Thus we were able to observe the project practice closely. In 2007, we interviewed some people from the initial round of interviews who still worked in new car projects. Using this wealth of empirical material, our focus here is on the multi-layered aspects of identity regulation and identity work, especially in terms of colonization, in the studied setting.

The project leaders enjoyed a high status within the company and were often seen as “company heroes” – but only as long as their projects succeeded. This created tremendous pressure to succeed, thus making the heroic approach to project management almost inevitable.

Long hours come with the territory

Working long hours is often seen as a hallmark of a dedicated project manager. Further, in many organisations (such as the one studied by the authors) there was the expectation that project managers would sacrifice their own time for the good of the project. As one project leader explained to the authors:

I work all the time! Weekends, evenings…Before Christmas I picked up my husband at a Christmas party in the evening, and then I went back to work and stayed there until midnight. Still, I met a colleague on Saturday morning and was able to do some more work before we left for Christmas. That is typical in my work. I haven’t time for the simplest things in my private life.

The authors make the point that even if a project leader could finish his/her work within a normal 40-hour work week, others would still question their commitment if they did not work overtime. Further, if a project did fail, the failure would almost certainly be attributed to a lack of commitment. Project leaders are thus under pressure to work overtime, whether it is needed or not. This is reflected in another comment by a project leader, about what happens between projects:

The period between projects is tough. It takes time to come down. In the beginning it is hard to accept that working 40 hours a week is not the same as having a half-time job, but that is the feeling. […] I have accepted that there are no interesting jobs where you work 40 hours a week, but I think it might be possible to stop at 60 hours a week…

We see that although unreasonable demands are being made on project leaders, they accede to the demands – or even welcome them. So, who is colonizing whom?

Parallel identity construction

The answer to the question in the previous paragraph is complicated by the fact that the project managers who participated in the study wielded considerable influence over the product they were creating. Indeed, this was the intent of the company, as illustrated by the remarks of an HR director who was involved in the selection of a project leader:

[the candidate] was, along with his competence regarding car development, chosen because he was an almost perfect customer targeted for the V70….Putting him on the management team of the new car project gave him an opportunity to develop the perfect car that satisfied his lifestyle, and the company got the intended car developed. We also used him as an example of our projected customer in a part of our marketing campaign for the V70.

So, selection for prized positions within the company depended on the technical competence and lifestyles of the candidates. In turn, the chosen ones had an opportunity to stamp their personality (identity?) on the product they created.  Identities of people, projects and products were indeed entwined.

Discussion

The authors note that, once selected, project leaders were given considerable autonomy in running their projects. This included the freedom to make choices that influenced the product. Seen in this light, project leaders could stamp their identity on the products they were involved in creating.  However, there were limits to their independence. After all, project leaders depended on the organisation for their livelihoods. Furthermore, their independence was constrained by organizational rules and norms.

Taking another view, one could see the behaviour of the project leaders as self-imposed, in the sense that the long hours they put in could be seen as a kind of addiction to work – workaholism is an apt term here, I suppose. At first, the desire to become project leaders spurred them to work hard; once the position was attained, they felt the desire to work harder still, possibly to prove that they were worthy of the trust reposed in them by the organisation.

Taking yet another view, one could say that the project leaders had limited choices in both their private and professional lives. On the one hand, there are professional expectations from the company and on the other, expectations from family and friends. Once the individual chose to become a project leader, the choices on offer in both spheres were limited by the choice they made and their personal priorities. All the project leaders interviewed gave their projects priority over all other aspects of their lives. This isn’t surprising because the selection process ensured it. Nevertheless, it does imply that the project leaders accepted that the organisation formed a major part of their self-identities.

It has to be acknowledged that because of their interest in cars, the project leaders were happy to work insane hours. However, equally, the company consciously exploited this interest to the extent that the project leaders believed the project to be the most important aspect of their lives.

Conclusions

Several researchers have suggested that organisations regulate individual identities in a manner that aligns them with organizational objectives (see this paper by Alvesson and Willmott, for example). At first sight, the foregoing discussion is a case in point. Yet, to some extent, the project leaders in the study believed they had free will – that they made the choices they did because they wanted to. But in the end, the project (and organisation) “wins”. As the authors put it:

To some extent, the project leaders knew they were subjects of control, colonization, and regulation, and yet they chose this career path with full recognition of the consequences for their work/life balance. Their choice meant accepting long workdays and potential emotional and psychological damage in exchange for professional status, job fulfilment, and high compensation. The colonization had consequently moved beyond organizational control and corporate influence. The project leaders were colonized by the projectified society, a situation which made them aspire to the core constructions of project management.

I suspect that many project managers – particularly those working on high profile projects within their organisations – will find themselves agreeing with the  authors.  Most of us choose to work on ever more challenging projects to further our professional experience, “make a difference” or even “influence the product”.  Regardless of our motives, we generally believe that we make the choice voluntarily. The question is: is this true?

Written by K

July 20, 2010 at 4:56 am

Beware the false analogy

with one comment

Reasoning by analogy refers to the process of drawing inferences based on similarities between different objects or concepts. For example, the electronic-hydraulic analogy is based on similarities between the flow of water in pipes and the flow of electricity in circuits. Closer to home, project teams often use analogies when they estimate tasks based on similar work that they did in the past. Analogical reasoning is powerful because it enables us to leverage existing knowledge in one area to solve problems in other, possibly unfamiliar, areas. However, such reasoning can also mislead. This post looks at the problem of false analogies in project estimation.

I’ll begin with a story…

Some years ago, I was in a discussion with a client, talking about costs and timelines for an application that the client needed. The application was a sales bonus calculator for front-line sales staff.  The client needed an app that would calculate bonuses for each salesperson (based on some reasonably involved calculations) and display them via a web front-end.  Nothing fancy really,  just a run-of-the-mill corporate web-database application. The discussion was proceeding quite nicely until a manager from the client’s side felt obliged to make a remark based on a false analogy. I can’t recall the conversation word-for-word, but it went something like this:

“It can’t be that hard,” he said. “You guys have built a similar application before; your promotional literature says so.” He knew from our brochure that we had built a bonus calculator before; the problem was he didn’t know the details.

There was a brief silence until my boss said, “Umm…yes we have done a superficially similar project before, but the details were very different from this one.”

“How different can it be?” retorted the manager, “bonuses are based on sales data. You process the data based on rules and display it. Yes, the rules are different, but the concept’s the same. You should be able to do it in half the time you’ve quoted.”

My boss countered the argument politely, but the manager would not let it go. They went back and forth a number of times until the sponsor stepped in and asked my boss to ensure that the manager’s concerns were addressed. The issue was resolved later, after my boss stepped the manager through the application, showing him just how different it was from the one they had requested.

The manager based his estimate on a superficial similarity between the app we were building for him and one that we had done earlier. Analogies almost always break down when examined in detail. For example, the electronic-hydraulic analogy mentioned in the first paragraph has several limitations. The same is true when comparing two projects or tasks.

An insidious (and, dare I say, more common) occurrence of such reasoning is when team members themselves draw false analogies. This happens when they make seemingly harmless (and often tacit) assumptions regarding similarities between tasks that are actually dissimilar in ways that are important. See my post on the reference class problem for a discussion of an estimation technique that is prone to incorrect analogical reasoning.

Estimates based on false analogies are a reflection of poorly understood requirements. This raises the question: why are requirements misunderstood when most projects involve countless meetings to discuss scope and requirements? In my opinion this happens because talking about requirements doesn’t mean that everyone understands them in the same way. In fact, in most cases different stakeholders walk away from such meetings with their own version of what needs to be done and how to do it. The first step towards curing the problem of false analogies is to ensure that all stakeholders have a shared understanding of the requirements. This applies particularly to those who will create the product and those who will use it. Dialogue mapping, which I’ve discussed at length in several posts on this blog, offers one way to achieve this shared understanding.

Of course, a deep understanding of the requirements does not by itself cure the problem of false analogies. However, it does make estimators aware of what makes a particular project different from all the others they’ve done before. This makes it unlikely that they’ll use a false analogy when making their estimates.

Written by K

July 9, 2010 at 5:56 am

On the interpretation of probabilities in project management

with 3 comments

Introduction

Managers have to make decisions based on an imperfect and incomplete knowledge of future events.  One approach to improving managerial decision-making is to quantify uncertainties using probability.  But what does it mean to assign a numerical probability to an event?  For example, what do we mean when we say that the probability of finishing a particular task in 5 days is 0.75?   How is this number to be interpreted? As it turns out there are several ways of interpreting probabilities.  In this post I’ll look at three of these via an example drawn from project estimation.

Although the question raised above may seem somewhat philosophical, it is actually of great practical importance because of the increasing use of probabilistic techniques (such as Monte Carlo methods) in decision making. Those who advocate the use of these methods generally assume that probabilities are magically “given” and that their interpretation is unambiguous. Of course, neither is true – and hence the importance of clarifying what a numerical probability really means.

The example

Assume there’s a task that needs doing – this may be a project task or some other job that a manager is overseeing. Let’s further assume that we know the task can take anywhere between 2 and 8 days to finish, and that we (magically!) have numerical probabilities associated with completion on each of those days (as shown in the table below). I’ll say a teeny bit more about how these probabilities might be estimated shortly.

Task finishes on    Probability
Day 2               0.05
Day 3               0.15
Day 4               0.30
Day 5               0.25
Day 6               0.15
Day 7               0.075
Day 8               0.025

This table is a simple example of what’s technically called a probability distribution. Distributions express probabilities as a function of some variable. In our case the variable is time.

How are these probabilities obtained? There is no set method to do this but commonly used techniques are:

  1. By using historical data for similar tasks.
  2. By asking experts in the field.

Estimating probabilities is a hard problem. However, my aim in this article is to discuss what probabilities mean, not how they are obtained. So I’ll take the probabilities mentioned above as given and move on.

The rules of probability

Before we discuss the possible interpretations of probability, it is necessary to mention some of the mathematical properties we expect probabilities to possess. Rather than present these in a formal way, I’ll discuss them in the context of our example.

Here they are:

  1. All probabilities listed are numbers that lie between 0 (impossible) and 1 (absolute certainty).
  2. It is absolutely certain that the task will finish on one of the listed days. That is, the sum of all probabilities equals 1.
  3. It is impossible for the task not to finish on one of the listed days. In other words, the probability of the task finishing on a day not listed in the table is 0.
  4. The probability of finishing on any one of several days is given by the sum of the probabilities for all those days. For example, the probability of finishing on day 2 or day 3 is 0.20 (i.e., 0.05 + 0.15). This holds because the two events are mutually exclusive – that is, the occurrence of one event precludes the occurrence of the other. Specifically, if we finish on day 2 we cannot finish on day 3 (or any other day), and vice-versa.

These statements illustrate the mathematical assumptions (or axioms) of probability. I won’t write them out in their full mathematical splendour; those interested should head off to the Wikipedia article on the axioms of probability.

Another useful concept is that of cumulative probability which, in our example, is the probability that the task will be completed by a particular day. For example, the probability that the task will be completed by day 5 is 0.75 (the sum of probabilities for days 2 through 5). In general, the cumulative probability of finishing on any particular day is the sum of probabilities of completion for all days up to and including that day.
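As a quick sanity check, here’s a small sketch (my own) that verifies the table obeys the rules listed above and computes the cumulative probabilities:

```python
# The day labels and probabilities are taken from the table above.
probabilities = {2: 0.05, 3: 0.15, 4: 0.30, 5: 0.25, 6: 0.15, 7: 0.075, 8: 0.025}

assert abs(sum(probabilities.values()) - 1.0) < 1e-9  # rule 2: must sum to 1
print("P(day 2 or day 3) =", probabilities[2] + probabilities[3])  # rule 4: 0.20

cumulative = 0.0
for day in sorted(probabilities):
    cumulative += probabilities[day]
    print(f"P(completed by day {day}) = {cumulative:.3f}")
# P(completed by day 5) comes out to 0.750, the figure used below.
```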

Interpretations of probability

With that background out of the way, we can get to the main point of this article which is:

What do these probabilities mean?

We’ll explore this question using the cumulative probability example mentioned above,  and by drawing on a paper by Glen Shafer entitled, What is Probability?

OK, so what is meant by the statement, “There is a 75% chance that the task will finish in 5 days”?

It could mean that:

  1. If this task is done many times over, it will be completed within 5 days in 75% of the cases. Following Shafer, we’ll call this the frequency interpretation.
  2. It is believed that there is a 75% chance of finishing this task in 5 days. Note that belief can be tested by seeing if the person who holds the belief is willing to place a bet on task completion with odds that are equivalent to the believed probability. Shafer calls this the belief interpretation.
  3. Based on a comparison to similar tasks this particular task has a 75% chance of finishing in 5 days.  Shafer refers to this as the support interpretation.

(Aside: The belief and support interpretations involve subjective and objective states of knowledge about the events of interest respectively. These are often referred to as subjective and objective Bayesian interpretations because knowledge about these events can be refined using Bayes Theorem, providing one has relevant data regarding the occurrence of events.)

The interesting thing is that all of the above interpretations can be shown to satisfy the axioms of probability discussed earlier (see Shafer’s paper for details). However, it is clear that each interpretation has a very different meaning. We’ll take a closer look at this next.

More about the interpretations and their limitations

The frequency interpretation appears to be the most rational one because it interprets probabilities in terms of the results of experiments – i.e. it interprets probabilities as experimental facts, not beliefs. In Shafer’s words:

According to the frequency interpretation, the probability of an event is the long-run frequency with which the event occurs in a certain experimental setup or in a certain population. This frequency is a fact about the experimental setup or the population, a fact independent of any person’s beliefs.

However, there is a big problem here: it assumes that such an experiment can actually be carried out. This definitely isn’t possible in our example: tasks cannot be repeated in exactly the same way – there will always be differences, however small.
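That said, we can simulate the idealized experiment the frequency interpretation imagines. Here’s a sketch (my own) that draws task durations from the table’s distribution and tracks the long-run frequency of finishing within 5 days; the repetition counts are arbitrary:

```python
import random

days = [2, 3, 4, 5, 6, 7, 8]
weights = [0.05, 0.15, 0.30, 0.25, 0.15, 0.075, 0.025]

random.seed(1)
for n in (10, 100, 10_000, 1_000_000):
    durations = random.choices(days, weights=weights, k=n)
    freq = sum(d <= 5 for d in durations) / n
    print(f"{n:>9,} repetitions: fraction finished within 5 days = {freq:.3f}")
# The fraction settles near 0.75 only for large n -- the long-run
# convergence that the frequency interpretation quietly assumes.
```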

There are other problems with the frequency interpretation. Some of these include:

  1. There are questions about whether a sequence of trials will converge to a well-defined probability.
  2. What if the event cannot be repeated?
  3. How does one decide on what makes up the population of all events? This is sometimes called the reference class problem.

See Shafer’s article for more on these.

The belief interpretation treats probabilities as betting odds. In this interpretation, a 75% probability of finishing in 5 days means that we’re willing to put up 75 cents to win a dollar if the task finishes in 5 days (or, equivalently, 25 cents to win a dollar if it doesn’t). Note that this says nothing about how the bettor arrives at his or her odds. These are subjective (personal) beliefs. However, they are experimentally determinable – one can determine people’s subjective odds by finding out how they actually place bets.
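To make the betting reading concrete, here’s a short expected-value check (my own arithmetic, not Shafer’s): staking 75 cents to win a dollar is a fair bet – zero expected profit – precisely when one’s probability of completion is 0.75.

```python
def expected_profit(p_finish, stake=0.75, payout=1.00):
    # win (payout - stake) if the task finishes in 5 days; lose the stake otherwise
    return p_finish * (payout - stake) + (1 - p_finish) * (-stake)

print(expected_profit(0.75))  # 0.0 -> fair bet at the believed probability
print(expected_profit(0.60))  # about -0.15 -> the stated odds overstate the chances
```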

There is a good deal of debate about whether the belief interpretation is normative or descriptive: that is, do the rules of probability tell us what people’s beliefs should be, or do they tell us what those beliefs actually are? Most people trained in statistics would claim the former – that the rules impose conditions that beliefs should satisfy. In contrast, in management and behavioural science, probabilities based on subjective beliefs are often assumed to describe how the world actually is. However, the wealth of literature on cognitive biases suggests that people’s actual beliefs, as reflected in their decisions, do not conform to the rules of probability. The latter observation seems to favour the normative option, but arguments can be made in support (or refutation) of either position.

The problem mentioned in the previous paragraph is a perfect segue into the support interpretation, according to which the probability of an event occurring is the degree to which we should believe that it will occur (based on the available evidence). This seems fine until we realize that evidence can come in many “shapes and sizes.” For example, compare the statements “the last time we did something similar we finished in 5 days, based on which we reckon there’s a 70-80% chance we’ll finish in 5 days” and “based on historical data gathered for 50 projects, we believe that we have a 75% chance of finishing in 5 days.” The two pieces of evidence offer very different levels of support. Therefore, although the support interpretation appears to be more objective than the belief interpretation, it isn’t actually so, because it is difficult to determine which evidence one should use. So, unlike the case of subjective beliefs (where one only has to ask people about their personal odds), it is not straightforward to determine these probabilities empirically.

So we’re left with a situation in which we have three interpretations, each of which addresses specific aspects of probability but also has major shortcomings.

Is there any way to break the impasse?

A resolution?

Shafer suggests that the three interpretations of probability are best viewed as highlighting different aspects of a single situation: that of an idealized case where we have a sequence of experiments with known probabilities.  Let’s see how this statement (which is essentially the frequency interpretation) can be related to the other two interpretations.

Consider my belief that the task has a 75% chance of finishing in 5 days. This is analogous to saying that if the task were done several times over, I believe it would finish in 5 days in 75% of the cases. My belief can be objectively confirmed by testing my willingness to put up 75 cents to win a dollar if the task finishes in five days. Now, when I place this bet I have my (personal) reasons for doing so. However, these reasons ought to relate to knowledge of the fair odds involved in the said bet. Such fair odds can only be derived from knowledge of what would happen in a (possibly hypothetical) sequence of experiments.

The key assumption in the above argument is that my personal odds aren’t arbitrary – I should be able to justify them to another (rational) person.

Let’s look at the support interpretation. In this case I have hard evidence for stating that there’s a 75% chance of finishing in 5 days. I can take this hard evidence as my personal degree of belief (remember, as stated in the previous paragraph, any personal degree of belief should have some such rationale behind it). Moreover, since it is based on hard evidence, it should be rationally justifiable and hence can be associated with a sequence of experiments.

So what?

The main point from the above is the following: probabilities may be interpreted in different ways, but they have an underlying unity. That is, when we state that there is a 75% probability of finishing a task in 5 days, we are implying all the following statements (with no preference for any particular one):

  1. If we were to do the task several times over, it would finish within five days in three-fourths of the cases. Of course, this will hold only if the task is done a sufficiently large number of times (which may not be practical in most cases).
  2. We are willing to place a bet given 3:1 odds of completion within five days.
  3. We have some hard evidence to back up statement (1) and our betting belief (2).

In reality, however, we tend to latch on to one particular interpretation depending on the situation. One is unlikely to think in terms of hard evidence when buying a lottery ticket, but hard evidence is a must when estimating a project. When tossing a coin one might instinctively use the frequency interpretation, but when estimating a task that hasn’t been done before one might use personal belief. Nevertheless, it is worth remembering that regardless of the interpretation we choose, all three are implied. So the next time someone gives you a probabilistic estimate, by all means ask if they have the evidence to back it up, but don’t forget to ask if they’d be willing to accept a bet based on their own stated odds. 🙂

Written by K

July 1, 2010 at 10:09 pm
