# Eight to Late

Sensemaking and Analytics for Organizations

## On the relationship between systems and the organisations that build them

### Introduction

System design is a creative activity, but one that is subject to a variety of constraints. Many of these constraints are obvious: for example, when tasked with designing a new software product, a team might be asked to work within a budget or use a particular technology. These constraints place boundaries on the design activity; they force designers to work within parameters specified by the constraints. But there are other, less obvious constraints too. In a paper entitled How Do Committees Invent?, published in 1968, Melvin Conway described a notion that is now called Conway’s Law: an organisation which designs a system will inevitably produce a design that mirrors the organisation’s communication structure. This post is a summary of the key points of the paper.

Conway begins the paper with the observation that system design is an activity that involves specifying how a system will be built from a number of diverse parts. Many elements of the act of design are similar, regardless of the nature of the system – be it software or a shopping mall. The objective of a design team or organisation is to produce a specification or blueprint based on which the system can be built.

Much of design work is about making choices. Conway points out that these choices may be more than design decisions:

Most design activity requires continually making choices. Many of these choices may be more than design decisions; they may also be personal decisions the designer makes about his own future. As we shall see later, the incentives which exist in a conventional management environment can motivate choices which subvert the intent of the sponsor.

The paper is essentially an elaboration and justification of this claim.

### Pre-design work

The preliminary stages of design work are more about organizing than design itself.  First, the boundaries have to be understood so that the solution space can be defined. Second, the high-level structure of the system has to be explored so that work can be subdivided in a sensible way within the organisation that’s doing the design. This latter point is the crux of Conway’s argument:

…the very act of organizing a design team means that certain design decisions have already been made, explicitly or otherwise. Given any design team organization, there is a class of design alternatives which cannot be effectively pursued by such an organization because the necessary communication paths do not exist. Therefore, there is no such thing as a design group which is both organized and unbiased.

There are a couple of important points here:

1. The act of delegating design tasks narrows the scope of design options that can be pursued.
2. Once tasks are delegated to groups, coordination (via communication) between these groups is the only way that the work can be integrated.

Further, once established, it is very hard to change a design idea (or a project team, for that matter).

### The system mirrors the organisation

Most systems of any significance are composed of several *subsystems* that communicate with each other via *interfaces*. According to Conway, these elements (italicised in the previous sentence) have a correspondence with the organisation that designs the system. How so? Well, every subsystem is designed by a group within the organisation (call it a design group). If two subsystems are to interact with each other, the two groups responsible for their design must communicate with each other (to negotiate the interface design). If subsystems don’t interact, no communication is necessary. What we see from this argument is that the communication between subsystems roughly mirrors the communication paths within the organisation.

As any system designer knows, given a set of requirements, there are a number of designs that can satisfy them. If the argument of the previous paragraph is true, then the structure of the design organisation (or project team) influences the choice that is made.

### Managing systems design

Conway points out that large system design efforts spin out of control more often than those for small systems. He surmises that this happens when the design becomes too complex for one person (or a small, tightly-knit group of people) to handle. A standard management reaction to such a situation is to delegate the design of components to sub-teams. Why? Well, here’s what Conway says:

A manager knows that he will be vulnerable to the charge of mismanagement if he misses his schedule without having applied all his resources. This knowledge creates a strong pressure on the initial designer who might prefer to wrestle with the design rather than fragment it by delegation, but he is made to feel that the cost of risk is too high to take the chance. Therefore, he is forced to delegate in order to bring more resources to bear.

A major fallacy in this line of thinking is the assumption that more resources mean that work gets done faster. It is well known that this isn’t so – at least as far as software systems development is concerned. Conway points out that politics also contributes to this effect. In most organisations, managerial status is tied to team size and project budgets. This gives managers an incentive to expand their organisations (i.e. project teams), making design delegation almost inevitable.

Large teams have a large number of communication paths between their members. Specifically, in a team consisting of N people, there are N(N-1)/2 possible communication paths: each person can communicate with N-1 others, giving N(N-1) paths, but this has to be halved because the path between any two individuals is counted twice. Organisations deal with this by restricting communication paths to hierarchical management structures. Because communication paths mirror organisational structures, it is almost inevitable that system designs will mirror them.
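The rapid growth in the number of paths is easy to check with a short calculation (an illustrative sketch; the function name is mine, not Conway’s):

```python
def communication_paths(n):
    """Number of possible pairwise communication paths in a team of n people."""
    # Each of the n people can talk to n - 1 others, giving n * (n - 1);
    # halve it because the path between any two people is counted twice.
    return n * (n - 1) // 2

for team_size in (5, 10, 50):
    print(team_size, communication_paths(team_size))
# 5 -> 10 paths, 10 -> 45 paths, 50 -> 1225 paths
```

The quadratic growth is the point: doubling a team roughly quadruples the number of paths that a hierarchy must suppress.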

### Conclusion

The main implication of Conway’s thesis is that a project team (or any organisation) charged with designing a system should be structured in a way that suits the communication needs of the system. For example, sub-teams involved in designing related subsystems should have many more communication channels than those that design independent components.  Further, system design is inherently complex, and the first design is almost never the final one.  A consequence that flows from this is that design organisations should be flexible because they’ll almost always need to be reorganized.

In the end it is less about the number of people on a team than the communication between them. As Conway mentions in the last two lines of his paper:

There is need for a philosophy of system design management which is not based on the assumption that adding manpower simply adds to productivity. The development of such a philosophy promises to unearth basic questions about value of resources and techniques of communication which will need to be answered before our system-building technology can proceed with confidence.

This is as true now as it was forty-odd years ago.

Written by K

March 23, 2010 at 10:24 pm

## Trumped by conditionality: why many posts on this blog are not interesting

### Introduction

A large number of the posts on this blog do not get much attention – not too many hits and few, if any, comments. There could be several reasons for this, but I need to consider the possibility that readers find many of the things I write about uninteresting. Now, this isn’t for want of effort on my side: I put a fair bit of work into research and writing, so it is a little disappointing. However, I take heart from the possibility that it might not be entirely my fault: there’s a statistical reason (excuse?) for the dearth of quality posts on this blog. This (possibly uninteresting) post discusses this probabilistic excuse.

The argument I present uses the concepts of conditional probability and Bayes Theorem. Those unfamiliar with these may want to have a look at my post on Bayes theorem before proceeding further.

### The argument

Grist for my blogging mill comes from a variety of sources: work, others’ stories, books, research papers and the Internet. Because of time constraints, I can write up only a fraction of the ideas that come to my attention. Let’s put a number to this fraction – say I can write up only 10% of the ideas I come across. Assuming that my intent is to write interesting stuff, this number corresponds to the best (or most interesting) ideas I encounter. Of course, the term “interesting” is subjective – an idea that fascinates me might not have the same effect on you. However, this is a problem for most qualitative judgements, so we’ll accept this and move on.

If we denote the event “I have an interesting idea” by $I$ and its probability by $P(I)$, we have:

$P(I) = 0.1$

Then, if we denote the event “I have an idea that is uninteresting” by $U$, we have:

$P(U) = 0.9$,

assuming that an idea must either be  interesting or uninteresting (no other possibilities allowed).

Now, for me to write up an idea, I have to find it interesting (i.e. judge it as being in the top 10%). Let’s be generous and assume that I correctly recognise an interesting idea (as being interesting) 70% of the time. From this, the conditional probability of my writing a post given that I encounter an  interesting idea, $P(W|I)$,  is:

$P(W|I) = 0.7$,

where $W$ is the event that I write up an idea.

On the flip side, let’s assume that I correctly recognise 80% of the uninteresting ideas that I encounter as being no good.  This implies that I incorrectly identify 20% of the uninteresting stuff as being interesting. That is, 20% of the uninteresting stuff  is wrongly identified as being blog-worthy. So, the conditional probability of my writing a post about an uninteresting idea, $P(W|U)$, is:

$P(W|U) = 0.2$

(If the above values for $P(W|I)$ and $P(W|U)$ are confusing remember that, by assumption, I write about all ideas that I find interesting – and this includes those ideas that I deem interesting but are actually uninteresting)

Now, we want to figure out the probability that a post that appears on my blog is interesting – i.e. that a post is interesting given that I have written it up. Using the notation of conditional probability, this can be written as $P(I|W)$.  Bayes Theorem tells us that:

$P(I|W) = \displaystyle\frac{P(W|I) * P(I)}{P(W)}$

$P(W)$,  which is the probability that I write a post,  can be expressed as follows:

$P(W)$ = probability that I write an interesting post + probability that I write an uninteresting post

This can be written as,

$P(W) = P(W|I) * P(I) + P(W|U) * P(U)$

Substituting this in the expression for Bayes Theorem, we get:

$P(I|W) = \displaystyle \frac{P(W|I) * P(I)}{P(W|I) * P(I) + P(W|U) * P(U)}$

Using the numbers quoted above

$P(I|W) =\displaystyle \frac{0.7*0.1}{0.7*0.1+0.2*0.9}=0.28$

So, only 28% of the ideas I write about are interesting. The main reason for this is my inability to filter out all the dross. These “false positives” – all the ideas that I identify as interesting but are actually not – are represented by the $P(W|U) * P(U)$ term in the denominator. Since there are way more bad ideas than good ones floating around (pretty much everywhere!), the chance of false positives is significant.
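The whole calculation can be reproduced in a few lines of Python (a sketch using the probabilities assumed above; the variable names are mine):

```python
# Assumed probabilities from the argument above
p_i = 0.1          # P(I): an idea I encounter is interesting
p_u = 1 - p_i      # P(U): an idea is uninteresting
p_w_given_i = 0.7  # P(W|I): I correctly recognise an interesting idea and write it up
p_w_given_u = 0.2  # P(W|U): I wrongly judge an uninteresting idea blog-worthy

# Total probability of writing a post: P(W) = P(W|I)P(I) + P(W|U)P(U)
p_w = p_w_given_i * p_i + p_w_given_u * p_u

# Bayes theorem: P(I|W) = P(W|I)P(I) / P(W)
p_i_given_w = p_w_given_i * p_i / p_w
print(round(p_i_given_w, 2))  # 0.28
```

Changing `p_i` to a less flattering value makes the result worse still, which is the “insensitivity” point made below: as long as the false-positive term dominates the denominator, $P(I|W)$ stays low.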

So, there you go: it isn’t my fault really. 🙂

I should point out that the percentage of interesting ideas written up will be small whenever the false positive term is significant compared to the numerator.  In this sense the result is insensitive to the values of the probabilities that I’ve used.

Of course, the argument presented above is based on a number of assumptions. I assume that:

1. Most readers of this blog share my interests.
2. The ideas that I encounter are either interesting or uninteresting.
3. There is an arbitrary cutoff point between interesting and uninteresting ideas (the 10% cutoff).
4. There is an objective criterion for what’s interesting and what’s not, and that I can tell one from the other 70% of the time.
5. The relevant probabilities  are known.

### …and so, to conclude

I have to accept that much of the stuff I write about will be uninteresting, but  can take consolation in the possibility that it is a  consequence of conditional probabilities. I’m  trumped by conditionality, once more.

### Acknowledgements

This post was inspired by Peter Rousseeuw’s brilliant and entertaining paper entitled, Why the Wrong Papers Get Published. Thanks also go out to Vlado Bokan for interesting conversations about conditional probabilities and Bayes theorem.

Written by K

March 17, 2010 at 10:43 pm

## Bayes Theorem for project managers

### Introduction

Projects are fraught with uncertainty, so it is no surprise that the language and tools of probability are making their way into project management practice. A good example of this is the use of Monte Carlo methods to estimate project variables.  Such tools enable the project manager to present estimates in terms of probabilities  (e.g. there’s a 90% chance that a project will finish on time) rather than illusory certainties. Now, it often happens that we want to find  the probability of an event occurring given that another event has occurred. For example, one might want to find the probability that a project will finish on time given that a major scope change has already  occurred.  Such conditional probabilities,  as they are referred to in statistics,  can be evaluated using Bayes Theorem. This post is a discussion of Bayes Theorem using an example from project management.

### Bayes theorem by example

All project managers want to know whether the projects they’re working on will finish on time. So, as our example, we’ll assume that a project manager asks the question: what’s the probability that my project will finish on time? There are only two possibilities here: either the project finishes on (or before) time or it doesn’t. Let’s express this formally. Denoting the event the project finishes on (or before) time by $T$, the event the project does not finish on (or before) time by $\tilde T$ and the probabilities of the two by $P(T)$ and $P(\tilde T)$ respectively, we have:

$P(T)+P(\tilde T) = 1$……(1),

Equation (1) is simply a statement of the fact that the sum of the probabilities of all possible outcomes must equal 1.

Fig 1.  is a pictorial representation of the two events and how they relate to the entire universe of projects done by the organisation our project manager works in. The rectangular areas  $A$ and $B$  represent the on time and not on time projects, and the sum of the two areas, $A+B$, represents all projects that have been carried out by the organisation.

Fig 1: On Time and Not on Time projects

In terms of areas, the probabilities quoted above can be expressed as:

$P(T) = \displaystyle \frac{A}{A+B}$……(2),

and

$P(\tilde T) = \displaystyle \frac{B}{A+B}$ ……(3).

This also makes explicit the fact that the sum of the two probabilities must add up to one.

Now, there are several variables that can affect project completion time. Let’s look at just one of them: scope change. Let’s denote the event “there is a major change of scope”  by $C$ and the complementary event (that there is no major change of scope) by $\tilde C$.

Again, since the two possibilities cover the entire spectrum of outcomes, we have:

$P(C)+P(\tilde C) = 1$ ……(4).

Fig 2. is a pictorial representation of $C$ and $\tilde C$.

Fig 2: "Major Change" and "No Major Change" projects

The rectangular areas $D$ and $E$ represent the projects that have undergone major scope changes and those that haven’t, respectively. In terms of areas, the corresponding probabilities are:

$P(C) = \displaystyle \frac{D}{D+E}$……(5),

and

$P(\tilde C) = \displaystyle \frac{E}{D+E}$……(6).

Clearly we also have $A+B=D+E$ since the number of projects completed is a fixed number, regardless of how it is arrived at.

Now things get interesting.  One could ask the question: What is the probability of finishing on time given that there has been a major scope change?  This is a conditional probability because it represents the likelihood that something will happen (on-time completion) on the condition that something else has already happened (scope change).

As a first step to answering the question posed in the previous paragraph, let’s combine the two events graphically.  Fig 3 is a combination of Figs 1 and 2. It shows four possible events:

1. On Time with Major Change ($T$, $C$) – denoted by the rectangular area $AD$ in Fig 3.
2. On Time with No Major Change ($T$, $\tilde C$) – denoted by the rectangular area $AE$ in Fig 3.
3. Not On Time with Major Change ($\tilde T$, $C$) – denoted by the rectangular area $BD$ in Fig 3.
4. Not On Time with No Major Change ($\tilde T$, $\tilde C$) – denoted by the rectangular area $BE$ in Fig 3.

Fig 3: Combination of events shown in Figs 1 and 2

We’re interested in the probability that the project finishes on time  given that it has suffered a major change in scope. In the notation of conditional probability, this is denoted by $P(T|C)$.   In terms of areas, this can be expressed as

$P(T|C) = \displaystyle \frac{AD}{AD+BD} = \frac{AD}{D}$……(7) ,

since $D$ (or equivalently $AD+BD$) represents all projects that have undergone a major scope change.

Similarly, the conditional probability that a project has undergone a major change given that it has come in on time, $P(C|T)$, can be written as:

$P(C|T) =\displaystyle \frac{AD}{AD+AE} = \frac{AD}{A}$……(8) ,

since $AD+AE=A$.

Now, what I’m about to do next may seem like pointless algebraic jugglery, but bear with me…

Consider the ratio of the area $AD$  to the big outer rectangle (whose area is $A+B$) . This ratio can be expressed as follows:

$\displaystyle\frac{AD}{A+B}=\frac{AD}{D}\times\frac{D}{A+B}=\frac{AD}{A}\times\frac{A}{A+B}$……(9).

This is simply multiplying and dividing by the same factor ($D$ in the second expression and $A$ in the third).

Written in the notation of conditional probabilities, the second and third expressions in (9) are:

$P(T|C)*P(C)=P(C|T)*P(T)$……(10),

which is Bayes theorem.

From the above discussion, it should be clear that Bayes theorem follows from the definition of conditional probability.

We can rewrite Bayes theorem in several equivalent ways:

$P(T|C)=\displaystyle\frac{P(C|T)*P(T)}{P(C)}$……(11),

or

$P(T|C)=\displaystyle\frac{P(C|T)P(T)}{ P(C|T)P(T)+P(C|\tilde T)P(\tilde T)}$……(12),

where the denominator in (12)  follows from the fact that a project that undergoes a major change will either be on time or will not be on time (there is no other possibility).

### A numerical example

To complete the discussion, let’s look at a numerical example.

Assume our project manager has historical data on projects that have been carried out within  the organisation.  On analyzing the data, the PM  finds that 60% of all projects finished on time. This implies:

$P(T) = 0.6$……(13),

and

$P(\tilde T) = 0.4$……(14).

Let us assume that our organisation also tracks major changes made to projects in progress.  Say 50% of all historical projects are found to have major changes. This implies:

$P(C) = 0.5$……(15).

Finally, let us assume that our project manager has access to detailed data on successful projects, and that an analysis of this data shows that 30% of on-time projects have undergone at least one major scope change. This gives:

$P(C|T) = 0.3$……(16).

Equations (13) through (16) give us the numbers we need to calculate $P(T|C)$ using Bayes Theorem. Plugging the numbers into equation (11), we get:

$P(T|C)=\displaystyle\frac{0.3*0.6}{0.5}=0.36$……(17)

So, in this organisation, if a project undergoes a major change then there’s a 36% probability that it will finish on time. Compare this to the 60% (unconditional) probability of finishing on time. Bayes theorem enables the project manager to quantify the impact of a change in scope on project completion time, providing the relevant historical data is available. The italicised bit in the previous sentence is important; I’ll have more to say about it in the concluding section.
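For completeness, here is the example calculation as a short Python function (a sketch; the function and variable names are mine, and the probabilities are the assumed historical figures from the example):

```python
def p_on_time_given_change(p_t, p_c, p_c_given_t):
    """P(T|C) via Bayes theorem: P(T|C) = P(C|T) * P(T) / P(C)."""
    return p_c_given_t * p_t / p_c

# Historical figures assumed in the example
p_t = 0.6          # P(T): a project finishes on time
p_c = 0.5          # P(C): a project undergoes a major scope change
p_c_given_t = 0.3  # P(C|T): an on-time project had a major scope change

print(round(p_on_time_given_change(p_t, p_c, p_c_given_t), 2))  # 0.36
```

Swapping in an organisation’s own historical ratios is all that is needed to apply the same calculation elsewhere.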

In closing this section I should emphasise that although my discussion of Bayes theorem is couched in terms of project completion times and scope changes,  the arguments used are general.  Bayes theorem holds for any pair of events.

### Concluding remarks

It should be clear that the probability calculated in the previous section is an extrapolation based on past experience. In this sense, Bayes Theorem is a formal statement of the belief that one can predict the future based on past events. This goes beyond probability theory;  it is  an assumption that underlies much of science.  It is important to emphasise that the prediction is based on enumeration, not analysis: it is solely based on ratios of the number of projects in one category versus the other; there is no attempt at finding a causal connection between the events of interest. In other words, Bayes theorem suggests there is a correlation between major changes in scope and delays, but it  does not tell us why. The latter question can be answered only via a detailed study which might culminate in a theory  that explains the causal connection between changes in scope and completion times.

It is also important to emphasise that the data used in the calculations should be based on events that are akin to the one at hand. In the case of the example, I have assumed that the historical data is for projects that resemble the one the project manager is working on. This assumption must be validated because there could be situations in which a major change in scope actually reduces completion time (when the project is “scoped down”, for instance). In such cases, one would need to ensure that the numbers that go into Bayes theorem are based on historical data for “scoped-down” projects only.

To sum up: Bayes theorem expresses a fundamental relationship between the conditional probabilities of two events. Its main utility is that it enables us to make probabilistic predictions based on past events; something that a project manager needs to do quite often. In this post I’ve attempted to provide a straightforward explanation of Bayes theorem – how it comes about and what it’s good for. I hope I’ve succeeded in doing so. But if you’ve found my explanation confusing, I can do no better than to direct you to a couple of excellent references.