Eight to Late

Sensemaking and Analytics for Organizations


Lost in translation: the gap between expectation and reality of information systems



Those involved in information systems projects will be well aware of the gap between user expectations and the actual capabilities of newly rolled-out systems.  Despite change management and training initiatives, users often complain that they can’t do this or that (which the old system did so well…) or that the new system has a terrible interface and is just too hard to use.  Although such projects may not be dubbed total failures, both sides of the IS/User divide are left feeling a bit cheated.  As Jim Underwood noted in a paper entitled, Negotiating the chasm in system development: some advice from Actor Network Theory:

While academics, managers and accountants believe that many IS projects, if not total failures, are at least very expensive mistakes, the developers themselves seem to believe that they are doing as well as could be expected and are producing many highly valuable systems. For managers the gap between idea and reality is caused by inappropriate culture or lack of commitment on the part of the developers; for theoreticians the cause is a failure to use or properly adhere to an appropriate methodology.

The gap between expectations and reality is often attributed to a misunderstanding of requirements that are articulated at the start of a project. One of the key selling points of iterative/incremental approaches to development, wherein the “system reality checks” are carried out at regular intervals (at the end of each iteration), is that they allegedly reduce or eliminate this gap altogether.   However, in reality, the gaps often remain, and none of the parties involved get their own way entirely.   Unfortunately these differences are often “swept under the carpet”, only to emerge later on.  In this post, I summarise Underwood’s paper, highlighting the insights he provides that can help project professionals  negotiate this gap.

The world according to ANT

Underwood’s work is based on Actor-Network Theory (ANT) which, for the purposes of the present discussion, can be viewed as an approach to studying systems that consist of interacting social and technical elements. According to Underwood, ANT can be considered a type of stakeholder analysis. A distinguishing feature of ANT is that it places non-human “stakeholders” (or actors, as they are referred to in ANT) on an equal footing with human ones.  For example, a system is an actor with its own “viewpoint” and “agenda.”  I know this sounds a bit “out there” but hopefully it will make more sense as we go on.

A key point in ANT is that all stakeholders are internal to the network. Moreover, they exert influence on each other, and this influence can change over time (exactly as in real life). Influence is exerted through scripts – descriptions of actions that the actors are expected to carry out.   Actors inscribe each other with scripts through communication – written, verbal or otherwise.  Others interpret (or de-scribe) scripts in order to understand the motives and actions of actors.   A plan to inscribe actors with scripts with the aim of achieving a particular goal (project objectives, for example) is called a program. Programs can be countered with anti-programs which, as their name suggests, are plans that work against the program.

It is important to note that scripts are interpreted within a particular set of norms and rules that are specific to a discipline.  For example, depending on the specific development methodology followed, the script “gather user requirements” may be interpreted in different ways.  However, the interpretation is rarely unambiguous – different stakeholders will interpret scripts in different ways. A manager may, for instance, assume that the end product of a requirements gathering process is a comprehensive requirements document, whereas a developer might be content with informal notes and mockups.

Espoused and interpreted scripts

In real life scripts change their form as they are passed on and interpreted by actors: directives, however explicit they may be, are always open to being interpreted in different ways, depending on the backgrounds of actors. So we have two levels of scripts: the original one as intended by the author and the interpreted ones that are put into action by other actors.  Following Argyris and Schoen’s distinction between espoused and in-use theories, Underwood calls these espoused scripts and scripts-in-use.  As I see it, the collection of espoused scripts forms the “official” project while the scripts-in-use make up the “actual” one.

Betrayal and ambiguity

The fact that there are two levels of scripts points to the fact that the planned project rarely coincides with the actual one. There are always deviations. The point is, a plan (an espoused script) is never unambiguous: it can be translated into many different scripts-in-use. At times these translations may be viewed as betrayals – i.e. unacceptable deviations from the original script. At other times, the content of a script may be translated covertly. This often happens when the content of a script is potentially contentious or vague. A good example of the latter is the script “the system must be scalable”, which may be translated to a specific interpretation of scalability that the system developer can achieve.

Implications for project managers

One of the key functions of a project manager is to ensure that the project is on track. In other words, it is to monitor and control the project. Indeed this is one of the process areas described in the PMBOK guide.  Underwood contends that there are essentially two ways in which project managers exert control: overt and covert. In his words:

They either take a Machiavellian view or promote superficial agreement and high sounding concepts while secretly working to their own goals, or they insist on all players subscribing to detailed design specifications expressed in the language of some dominant discourse.

Of course, they may do both, depending on what specific situations call for.  However, according to Underwood, the actor-network view suggests that project management is more about facilitation than control. With this in mind, he offers the following advice (or scripts!) to project managers.

  1. Don’t be afraid of politics: In the actor-network view, political stakeholders are actors who have their own scripts. These scripts must be incorporated into the project, interpreted in ways that keep all actors on side.  This may be a difficult process that requires skilful negotiation. Ignoring politically motivated scripts may end up jeopardizing the project.
  2. Keep project boundaries broad and less technical: system planners and developers tend to focus on the technical aspects of projects.  Despite rhetoric to the contrary, the technical aspects of systems are given pride of place in project plans and schedules.   ANT focuses our attention on the fact that the non-technical aspects of systems – humans, environment, (organizational) culture etc. – are just as important (if not more so).
  3. Track scripts: Underwood suggests keeping track of scripts through regular audits. Scripts that have been dropped (i.e. inadvertently forgotten) are likely to turn up again later when someone asks, “What have we done about X?”.  The function of the audit is to remind everyone about actions that are required and reaffirm actors’ intention to action scripts that they have committed to.
  4. Don’t cover up disagreements: Disagreements (differing interpretations of scripts) are often ignored because no one wants unpleasantness.  Underwood suggests airing differences with the aim of reaching shared understanding. This does not mean that all differences will be resolved, only that they are debated openly with the aim of achieving a consensual way forward.

Of course, this advice is somewhat idealistic and difficult to implement in practice: good intentions and the need for shared understanding are often forgotten in the heat of managing messy, real-life projects.


The main point made in the paper is that project managers should attempt to facilitate actions rather than direct or control them. The best way to do this is by enabling stakeholders to reach a shared understanding of what the project is all about and a shared commitment to actions that will make it happen.  Dialogue mapping, which I have described in many earlier posts, is an excellent way to achieve this. But in the end it isn’t about specific techniques or methodologies – it is about taking responsibility and genuinely caring about the outcome.   As Underwood states in his conclusion,

The simplest, most important and most difficult script recommended for all actors, but particularly project managers is “let go”. Managing a project can be compared to teaching students or bringing up children. We have some ideas about what we hope to achieve, some knowledge and experience, plenty of advice to give and a few techniques (of doubtful efficacy) for influencing behaviour. We feel responsible, we care deeply about the outcome, but we should be neither surprised nor disappointed when the reality turns out quite differently from anything we might have expected…

I don’t think I can put it any better.

Written by K

August 4, 2011 at 4:44 am

From here to serendipity: gaining insights from project management research



I’ve been blogging for over three years, during which I’ve written a number of reviews and summaries of research papers in project management and related areas.  Research papers aren’t exactly riveting reads; at times it seems they are written to impress academics rather than inform laypeople.  Nevertheless, reading them is worthwhile because they can yield insights that are directly applicable to professional practice.   The main difficulty lies in finding papers that are relevant to one’s professional interests.  In this post I discuss how one can go about finding the “right” papers to read.  Further, as an illustration of how such random reading can lead to unexpected insights, I list some of the things I’ve learnt in the course of my rambles through project management research.

Prospecting for papers

My primary source for research papers is Google Scholar, which I browse in a somewhat haphazard fashion.  I usually start a search with a couple of key phrases denoting a broad area – for example “risk analysis” and “project management”.  I then browse the resulting list, short-listing titles that catch my attention. I read the abstracts of these, and make an even shorter list of those that I feel I’d like to read in full.  Google Scholar has a “Related Articles” link below each result. I often follow this link for articles that I find particularly interesting.  This usually yields a few more papers of interest.

Once I’ve determined the articles I want to read, I attempt to locate and download copies of these. This isn’t always straightforward: many papers cannot be accessed in full because they are copyrighted by journal publishers and are available only through paid subscriptions. However, many are available on author and university websites.  Google Scholar helpfully highlights those that are available for download. Journal archives, such as JSTOR,  are also good sources for copies of papers, but full access is usually available only to subscribers (check with your local university or public library system for more).

I then browse through the articles I’ve downloaded, printing out only those that I think are worth a careful read. Yes, this choice is based on my subjective judgement, but then life’s too short to waste time reading what others find interesting.  That said, if one wants a more objective assessment of a paper’s worth, one can use the citation count (the number of times a paper has been cited).   Google Scholar displays citation counts for most of the articles it lists. However, it should be noted that citation counts aren’t necessarily an indicator of quality.

In essence I look for interesting papers using keywords that describe things I’m thinking about. Fortunately I’m not doing research for a living so I can afford to read what I want, when I want – providing, of course, it doesn’t get in the way of my regular day job!

When reading papers, I usually keep a highlighter and pencil handy for making notes in the margins (very useful for when I’m writing reviews).  If I’m reading a pdf document, the commenting and highlighting features of Acrobat are very useful.

Finally, it should be clear that what turns up depends very much on the keywords and phrases one uses. This choice is dictated by one’s interests. My professional interests tend towards foundational and philosophical aspects of project management, so my searches and many of my reviews reflect this.

Serendipity at work

I’ve used this technique for about as long as I have been blogging. Along the way I have come across some truly exceptional papers which have influenced the way I think about and do my job.  Below are some posts that were inspired by such papers:

Project management in the post-bureaucratic organisation: A critique on the use of project management as a means to direct creative and innovative work.  Based on: Hodgson, D.E., Project Work: The Legacy of Bureaucratic Control in the Post-Bureaucratic Organization, Organization, Volume 11, pages 81-100 (2004).

A memetic view of project management: Wherein project management is viewed as a collection of ideas that self-propagate. Based on: Whitty, S. J.,  A memetic paradigm of project management, International Journal of Project Management,  Volume 23, pages 575-583 (2005).

Cox’s risk matrix theorem and its implications for project risk management:  Describes some logical flaws in the way  risk matrices are commonly used. Based on:  Cox, L. A., What’s wrong with risk matrices?, Risk Analysis, Volume 28, pages 497-512 (2008)

The user who wasn’t there: on the role of the imagined user in project discourse: Highlights the use of imagined (as opposed to real) users to justify specific design views and/or decisions in projects. Based on:  Ivory, C. and Alderman, N., The imagined user in projects: Articulating competing discourses of space and knowledge work, Ephemera, Volume 9, pages 131-148 (2009).

The myth of the lonely project: Discusses why project managers need to be aware of the history, culture, strategic imperatives and social dynamics of the organisations within which they work. Based on: Engwall, M., No project is an island: linking projects to history and context, Research Policy, Volume 32, pages 789-808 (2003).

The papers referenced above are just a small selection of the interesting ones I have stumbled on in my random rambles through Google Scholar.

…and so, to conclude

Professional project managers rarely have the time (or inclination) to read research papers related to their field. If you don’t browse research papers often,  I hope this piece has  convinced you to give it a try.   Although the time you invest may not get you that new job or promotion, I guarantee that it will give you fresh insights into your profession by leading you from here to serendipity.

Written by K

July 5, 2011 at 6:23 am

Planned failure – a project management paradox


The other day a friend and I were talking about a failed process improvement initiative in his organisation.  The project had blown its budget and exceeded the allocated time by over 50%, which in itself was a problem. However, what I found more interesting was that the failure was in a sense planned – that is, given the way the initiative was structured, failure was almost inevitable. Consider the following:

  • Process owners had little or no input into the project plan. The “plan” was created by management and handed down to those at the coalface of processes. This made sense from management’s point of view – they had an audit deadline to meet. However, it alienated those involved from the outset.
  • The focus was on time, cost and scope; history was ignored. Legacy matters – as Paul Culmsee mentions in a recent post, “To me, considering time, cost and scope without legacy is delusional and plain dumb. Legacy informs time, cost and scope and challenges us to look beyond the visible symptoms of what we perceive as the problem to what’s really going on.” This is an insightful observation – indeed, ignoring legacy is guaranteed to cause problems down the line.

The conversation with my friend got me thinking about planned failures in general. A feature that is common to many failed projects is that planning decisions are based on dubious assumptions.  Consider, for example, the following (rather common) assumptions made in project work:

  • Stakeholders have a shared understanding of project goals. That is, all those who matter are on the same page regarding the expected outcome of the project.
  • Key personnel will be available when needed and – more importantly – will be able to dedicate 100% of their committed time to the project.

The first assumption rarely holds because stakeholders view a project in terms of their own priorities, and these may not coincide with those of other stakeholder groups. Hence the mismatch of expectations between, say, development and marketing groups in product development companies. The second assumption is problematic because key project personnel are often assigned more work than they can actually do.  Interestingly, this happens because of flawed organisational procedures rather than poor project planning or scheduling – see my post on the resource allocation syndrome for a detailed discussion of this issue.

Another factor that contributes to failure is that these and other such assumptions often come into play during the early stages of a project. Decisions that are based on these assumptions thus affect all subsequent stages of the project. To make matters worse, their effects can be amplified as the project progresses.    I have discussed these and other problems in my post on front-end decision making in projects.

What is relevant from the point of view of failure is that assumptions such as the ones above are rarely questioned, which raises the question of why they remain unchallenged.  There are many reasons for this; some of the more common ones are:

  1. Groupthink: This is the tendency of members of a group to think alike because of peer pressure and insulation from external opinions. Project groups are prone to falling into this trap, particularly when they are under pressure. See this post for more on groupthink in project environments and ways to address it.
  2. Cognitive bias: This term refers to a wide variety of errors in perception or judgement that humans often make (see this Wikipedia article for a comprehensive list of cognitive biases).  In contrast to groupthink, cognitive bias operates at the level of an individual.  A common example of cognitive bias at work in projects is when people underestimate the effort involved in a project task through a combination of anchoring and/or over-optimism (see this post for a detailed discussion of these biases at work in a project situation). Further examples can be found in my post on the role of cognitive biases in project failure, which discusses how many high profile project failures can be attributed to systematic errors in perception and judgement.
  3. Fear of challenging authority:  Those who manage and work on projects are often reluctant to challenge assumptions made by those in positions of authority. As a result, they play along until the inevitable train wreck occurs.

So there is no paradox:  planned failures occur for reasons that we know and understand. However, knowledge is one thing,  acting on it quite another.  The paradox will live on because in real life it is not so easy to bell the cat.

Written by K

June 16, 2011 at 10:17 pm

Six common pitfalls in project risk analysis


The discussion of risk presented in most textbooks and project management courses follows the well-trodden path of risk identification, analysis, response planning and monitoring (see the PMBOK guide, for example).  All good stuff, no doubt.  However, much of the guidance offered is at a very high level. Among other things, there is little practical advice on what not to do. In this post I address this issue by outlining some of the common pitfalls in project risk analysis.

1. Reliance on subjective judgement: People see things differently:  one person’s risk may even be another person’s opportunity. For example, using a new technology in a project can be seen as a risk (when focusing on the increased chance of failure) or an opportunity (when focusing on the advantages of being an early adopter). This is a somewhat extreme example, but the fact remains that individual perceptions influence the way risks are evaluated.  Subjective judgement is also prone to cognitive biases – errors in perception and judgement. Many high profile project failures can be attributed to such biases:  see my post on cognitive bias and project failure for more on this. Given these points, potential risks should be discussed from different perspectives with the aim of reaching a common understanding of what they are and how they should be dealt with.

2. Using inappropriate historical data: Purveyors of risk analysis tools and methodologies exhort project managers to determine probabilities using relevant historical data. The word relevant is important: it emphasises that the data used to calculate probabilities (or distributions) should be from situations that are similar to the one at hand.  Consider, for example, the probability of a particular risk – say,  that a particular developer will not be able to deliver a module by a specified date.  One might have historical data for the developer, but the question remains as to which data points should be used. Clearly, only those data points that are from projects that are similar to the one at hand should be used.  But how is similarity defined? Although this is not an easy question to answer, it is critical as far as the relevance of the estimate is concerned. See my post on the reference class problem for more on this point.
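To see why the choice of reference class matters, consider the following toy sketch in Python (the historical data and similarity criteria below are entirely invented for illustration – nothing here comes from a real project): the estimated probability changes depending on which past projects are deemed “similar”.

```python
# Toy illustration of the reference-class problem: the estimate depends
# heavily on which past records count as "similar". Data is invented.

past_modules = [
    # (team_size, technology, delivered_on_time)
    (3, "java",   True),
    (3, "java",   False),
    (5, "java",   True),
    (3, "python", True),
    (5, "python", True),
    (3, "java",   True),
]

def on_time_rate(records, technology=None, team_size=None):
    """Empirical on-time rate over records matching the given criteria."""
    matches = [ok for size, tech, ok in records
               if (technology is None or tech == technology)
               and (team_size is None or size == team_size)]
    return sum(matches) / len(matches) if matches else None

# Broad reference class: all past modules
print(on_time_rate(past_modules))                                  # 5/6

# Narrower class: same technology and team size as the task at hand
print(on_time_rate(past_modules, technology="java", team_size=3))  # 2/3
```

Both numbers are legitimate answers to “what is the probability of on-time delivery?” – which one is relevant depends entirely on how similarity is defined, and that is a judgement call, not a calculation.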

3. Focusing on numerical measures exclusively: There is a widespread perception that quantitative measures of risk are better than qualitative ones. However, even where reliable and relevant data is available, the measures still need to be based on sound methodologies. Unfortunately, ad-hoc techniques abound in risk analysis:  see my posts on Cox’s risk matrix theorem and limitations of risk scoring methods for more on these.  Risk metrics based on such techniques can be misleading.  As Glen Alleman points out in this comment, in many situations qualitative measures may be more appropriate and accurate than quantitative ones.

4. Ignoring known risks: It is surprising how often known risks are ignored.  The reasons for this have to do with politics and mismanagement. I won’t dwell on this as I have dealt with it at length in an earlier post.

5. Overlooking the fact that risks are distributions, not point values: Risks are inherently uncertain, and any uncertain quantity is represented by a range of values (each with an associated probability) rather than a single number (see this post for more on this point). Because of the scarcity or unreliability of historical data, distributions are often assumed a priori: that is, analysts will assume that the risk distribution has a particular form (say, normal or lognormal) and then evaluate distribution parameters using historical data.  Further, analysts often choose simple distributions that are easy to work with mathematically.  These distributions often do not reflect reality. For example, they may be vulnerable to “black swan” occurrences because they do not account for outliers.
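As an illustration of how much the assumed form matters, here is a small Python sketch (using NumPy; the data is simulated and the numbers are purely illustrative). It fits a normal and a lognormal distribution to the same “historical” overrun data and compares the probability each assigns to an extreme overrun:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical historical task overruns in days -- illustrative data only,
# drawn from a skewed (lognormal) process as real overruns often are
overruns = rng.lognormal(mean=1.0, sigma=0.8, size=200)

# Fit a normal distribution (matching mean and std) and a lognormal
# (matching log-mean and log-std) to the same data
mu, sigma = overruns.mean(), overruns.std()
log_mu, log_sigma = np.log(overruns).mean(), np.log(overruns).std()

# Probability of an extreme overrun (> 15 days) under each assumed
# distribution, estimated by Monte Carlo sampling
normal_samples = rng.normal(mu, sigma, size=100_000)
lognormal_samples = rng.lognormal(log_mu, log_sigma, size=100_000)

p_normal = (normal_samples > 15).mean()
p_lognormal = (lognormal_samples > 15).mean()

print(f"P(overrun > 15 days), normal fit:    {p_normal:.4f}")
print(f"P(overrun > 15 days), lognormal fit: {p_lognormal:.4f}")
```

The normal fit assigns a far smaller probability to the extreme outcome than the lognormal fit does – the same data, two very different views of the tail. An analyst who picks the mathematically convenient distribution may be unknowingly assuming the outliers away.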

6. Failing to update risks in real time: Risks are rarely static – they evolve in time, influenced by circumstances and events both in and outside the project. For example, the acquisition of a key vendor by a mega-corporation is likely to affect the delivery of that module you are waiting on – and quite likely in an adverse way. Such a change in risk is obvious; there may be many that aren’t. Consequently, project managers need to reevaluate and update risks periodically. To be fair, this is a point that most textbooks make – but it is advice that is not followed as often as it should be.
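For those who like to make “updating” concrete, one simple (though by no means the only) way to revise a risk estimate as evidence arrives is a Bayesian update. The sketch below, with invented numbers, uses a Beta prior for the probability of a vendor slipping a delivery:

```python
# A minimal Bayesian update of a risk estimate as new evidence arrives.
# The scenario and numbers are illustrative, not taken from any real project.

def update_risk(alpha: float, beta: float, occurred: int, trials: int):
    """Beta-Binomial update: given a prior Beta(alpha, beta), observe
    `occurred` risk events in `trials` comparable periods."""
    return alpha + occurred, beta + (trials - occurred)

# Prior belief: the vendor slips roughly 2 deliveries in 10 -- Beta(2, 8)
alpha, beta = 2.0, 8.0

# New evidence after the vendor is acquired: 3 slips in the last 4 deliveries
alpha, beta = update_risk(alpha, beta, occurred=3, trials=4)

# Posterior mean: (2 + 3) / (10 + 4) = 5/14, roughly 0.36 -- up from 0.2
mean_estimate = alpha / (alpha + beta)
print(f"Updated probability of a slipped delivery: {mean_estimate:.2f}")
```

The point is not the particular formula but the discipline: the risk number in the register should move when the world does, rather than sit frozen at its planning-stage value.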

This brings me to the end of my (subjective) list of risk analysis pitfalls. Regular readers of this blog will have noticed that some of the points made in this post are similar to the ones I made in my post on estimation errors. This is no surprise: risk analysis and project estimation are activities that deal with an uncertain future, so it is to be expected that they have common problems and pitfalls. One could generalize this point:  any activity that involves gazing into a murky crystal ball will be plagued by similar problems.

Written by K

June 2, 2011 at 10:21 pm

On the meaning and interpretation of project documents



Most projects generate reams of paperwork ranging from business cases to lessons learned documents. These are usually written with a specific audience in mind: business cases are intended for executive management whereas lessons learned docs are addressed to future project staff (or the portfolio police…). In view of this, such documents are intended to convey a specific message:  a business case aims to convince management that a project has strategic value while a lessons learned document offers future project teams experience-based advice.

Since the writer of a project document has a clear objective in mind, it is natural to expect that the result would be largely unambiguous. In this post, I look at the potential gap between the meaning of a project document (as intended by the author) and its interpretation (by a reader).  As we will see, it is far from clear that the two are the same  –  in fact, most often, they are not.    Note that the points I make apply to any kind of written or spoken communication, not just project documents. However, in keeping with the general theme of this blog, my  discussion will focus on the latter.

Meaning and truth

Let’s begin with an example. Consider the following statement taken from this sample business case:

ABC Company has an opportunity to save 260 hours of office labor annually by automating time-consuming and error-prone manual tasks.

Let’s ask ourselves: what is the meaning of this sentence?

On the face of it, knowing the meaning of a sentence such as the one above amounts to knowing the condition(s) under which the claim it makes is true. For example, the statement above implies that if the company undertakes the project (condition) then it will save the stated hours of labour (claim).  This interpretation of meaning is called the truth-conditional model. Among other things, it assumes that the truth of a sentence has an objective meaning.

Most people have something like the truth-conditional model in mind when they are writing documents: they (try to) write in a way that makes the truth of their claims plausible or, better yet, evident.

Buehler’s model of language

At this point, it is helpful to look at a model of language proposed by the German linguist Karl Buehler in the 1930s. According to Buehler, language has three functions, not just one as in the truth-conditional model. The three functions are:

  1. Cognitive: representing an (objective) truth about the world. This is the same “truth” as in the truth-conditional model.
  2. Expressive:  expressing a point of view of the writer (or speaker).
  3. Appeal: making a request of the reader – or “appealing to” the reader.

A graphical representation of the model –sometimes called the organon model – is shown in Figure 1 below.

Figure 1: Buehler's organon model

The basic point Buehler makes is that focusing on the cognitive function alone cannot lead to a complete picture of meaning. One has to factor in the desires and intent of the writer (or speaker) and the predispositions of those who make up the audience. Ultimately, the meaning resides not in some idealized objective truth, but in how readers interpret the document.

Meaning and interpretation

Let’s look at the statement made in the previous section in the light of Buehler’s model.

First, the statement (and indeed the document) makes some claims regarding the external, objective world. This is essentially the same as the truth-conditional view mentioned in the previous section.

Second, from the viewpoint of the expressive function, the statement (and the entire business case, for that matter) selects facts that the writer believes will convince the reader. So, among other things, the writer claims that the company will save 260 hours of manual labour by automating time-consuming and error-prone tasks. The adjectives used imply that some tasks are not carried out efficiently.  The author chose to make this point; he or she could have made it another way or even not made it at all.

Finally, executives who read the business case might interpret the claim in many different ways, depending on:

  1. Their knowledge of the office environment (things such as the workload of office staff, scope for automation etc.).  This corresponds to the cognitive function in Buehler’s model.
  2. Their own predispositions, intentions and desires and those that they impute to the author.  This corresponds to the appeal and expressive functions.

For instance, the statement might be viewed as irrelevant by an executive who believes that the existing office staff are perfectly capable of dealing with the workload (“They need to work smarter”, he might say).  On the other hand, if he knows that the business case has been written up by the IT department (who are currently looking to justify their budgets), he might well question the validity of the statement and ask for details of how the figure of 260 hours was arrived at. The point is: even a simple and seemingly unambiguous statement (from the point of view of the writer) might be interpreted in a host of unexpected ways.

More than just “sending and receiving”

The standard sender-receiver model of communication is simplistic. Among other things, it assumes that interpretation is “just” a matter of decoding a message correctly. The general assumption is that:

…If the requisite information has been properly packed in a message, only someone who is deficient could fail to get it out. This partitioning of responsibility between the sender and the recipient often results in reciprocal blaming for communication. (Quoted from Questions and Information: contrasting metaphors by Thomas Lauer)

Buehler’s model reminds us that any communication – however clear it may seem to the sender – is open to being interpreted in a variety of different ways by the receiver. Moreover, the two parties need to understand each other’s intent and motives, which are generally not open to view.

Wrapping up

The meaning of project documents isn’t as clear-cut as is usually assumed. This is so even for documents that are thought of as being unambiguous  (such as contracts or status reports).  Writers write from their point of view, which may differ considerably from that of their readers. Further, phrases and sentences which seem clear to a writer can be interpreted in a variety of ways by readers, depending on their situation and motivations. The bottom line is that the writer must not only strive for clarity of expression, but must also try to anticipate ways in which readers might interpret what’s written.

Written by K

May 19, 2011 at 9:47 pm

Planning and improvisation – complementary facets of organizational work

with 13 comments


Cause-effect relationships in the business world are never clear cut. Yet, those who run business organisations hanker after predictability. Consequently, a great deal of effort is expended on planning – thinking out and organizing actions aimed at directing the course of the future.  In this “planning view”, time is seen as a commodity that can be divided, allocated and used to achieve organizational aims.  In this scheme of things, the future is seen as unfolding linearly, traversing the axis of time according to plan. Although (good) plans factor in uncertainties and unforeseen events, the emphasis is on predictability and it is generally assumed that things will go as foreseen.

In reality things rarely go according to plan. Stuff happens, things that aren’t foreseen – and what’s not foreseen cannot be planned for. People deal with this by improvising, taking extemporaneous actions that feel right at the time. In retrospect such actions often turn out to be right.  However, such actions are essentially unplanned; one cannot predict or allocate a particular time at which they will occur. In this sense they lie outside of normal (or planned) organizational time.

In a paper entitled Notes on improvisation and time in organisation (abstract only), Claudio Ciborra considered the nature of improvisation in organisations. Although the paper was written a while ago, primarily as a critique of Business Process Reengineering (BPR) and its negative side effects, many of the points he made are of wider relevance.  This post, inspired by Ciborra’s paper, is the first of a two-part series of posts in which I discuss the nature of improvisation and planning in organisations. In the present post I discuss the differences between the two and how they complement each other in practice. In a subsequent post I will talk about how the two lead to different notions of time in organisations.

Contrasting planning and improvisation

The table below summarises some of the key contrasting characteristics between planning and improvisation:

| Planning | Improvisation |
|---|---|
| Follows procedures and processes; operates within clearly defined boundaries | Idiosyncratic; boundaries are not well defined, or sometimes not defined at all |
| Operates within organizational rules and decrees | Often operates outside of organizational rules and norms |
| Method of solution is assumed to be known | Method emerges via sensemaking and exploration |
| Slow, deliberate decision-making | Quick, almost instantaneous decision-making |
| Attempts to predict and control (how events unfold in) time | Extemporaneous; operates "outside of time" |

In essence improvisation cannot be planned;  it is always surprising, even to improvisers.

Planning and improvisation coexist

Following Alfred Schutz, Ciborra notes that in planned work (such as projects) every action is carried out according to a view of a future in which it is already accomplished.  In other words, in projects we do things according to a plan because we expect those actions to lead to certain consequences – that is, we expect our actions to achieve certain goals.  Schutz referred to such motives as in-order-to motives. These motives are embedded in the project and its rationale, and are often documented for all to see. However, in-order-to motives are only part of the story, and a small one at that. More important are the reasons for which the goals are thought to be worthwhile. Among other things, these involve factors relating to the history, environment and past experiences of the people who make up the organisation or project. Schutz referred to such motivations as because-of motives. These motives are usually tacit and remain so unless a conscious effort is made to surface them.

As Ciborra puts it:

The in-order-to project deals with the actor’s explicit and conscious meaning in solving a problematic situation while the because-of motives can explain why and how a situation has been perceived as problematic in the first place.

The because-of motives are tacit and lie in the background of the explicit project at hand. They fall outside the glance of rational, awake attention during the performance of the action. They could be inferred by an outsider, or made explicit by the actor, but only as a result of reflection after the fact.

(Note that although Ciborra uses the word project as referring to any future-directed action, it could just as well be applied to the kinds of projects you and I work on.)

Ciborra uses the metaphor of an iceberg to illustrate the coexistence of the two types of motives. The in-order-to motives are the tip of the iceberg, there for all to see. On the other hand, because-of motives, though more numerous, are hidden below the surface and can’t be seen unless one makes the effort to see them. Improvisation generally draws upon these tacit, because-of motives that are not visible.  Moreover, the very interpretation of formalized procedures and best practices involves these motives. Actions performed as a consequence of such interpretations are what bring procedures and practices to life in specific situations. As Ciborra puts it:

A formalized procedure embeds a set of explicit in-order-to’s, but the way these are actually interpreted and put to work strictly depends upon the actor’s in-order-to and because-of motives, his/her way of being in the world “next” to the procedure, the rule or the plan. In more radical terms what is at stake here is not “objects” or “artifacts” but human existence and experience. Procedure and method are just “dead objects”: they get situated in the flow of organizational life only thanks to a mélange of human motives and actions. One cannot cleanse human existence and experience from the ways of operating and use of artifacts.

In short, planning and improvisation are both necessary for the proper functioning of organizations.

Opposite, but complementary

Planning and improvisation are very different activities – the former is aimed at influencing the future through activities that are pre-organized whereas the latter involves actions that occur just-in-time.   Moreover, planning is a result of conscious thought and deliberation whereas improvisation is a result of tacit knowledge being brought to bear, in an instant, on specific situations encountered in project (or other organizational) work. Despite these differences, both activities are important in organizations. Efforts aimed at planning the future down to the last detail are misguided and bound to fail. Contraria sunt complementa: planning and improvisation are opposites, but they are complementary.1


1 The phrase contraria sunt complementa means opposites are complementary. It appears on the coat of arms that the physicist Niels Bohr designed when he was awarded the Danish Order of the Elephant in 1947 (he had won the Nobel Prize in Physics in 1922). Bohr formulated the complementarity principle, the best known manifestation of which is wave-particle duality – i.e. that in the atomic world, entities can display either wave-like or particle-like characteristics, depending on the experimental set-up.

Written by K

May 5, 2011 at 10:12 pm

The resource allocation syndrome in multi-project environments

with 18 comments


In many organisations, employee workloads consist of a mix of project and operational assignments.  Due to endemic shortfalls in staffing, such folks – particularly those who have key skills and knowledge – generally have little or no spare capacity to take on more work.   However, sooner or later another "important" project comes along in urgent need of staffing, and the rest, as they say, is tragedy:  folks who are up to their necks in work are assigned to work on the new project. This phenomenon is a manifestation of the resource allocation syndrome, discussed at length in a paper by Mats Engwall and Anna Jerbrant entitled,  The resource allocation syndrome: the prime challenge of multi-project management?. The present post is a summary of the paper.


Scheduling and resource allocation are critical parts of project planning in multi-project environments. Those who work in such settings know (often from bitter experience) that, despite the best-laid plans, it is easy for people to end up over-allocated across multiple projects. Engwall and Jerbrant's work delves into the factors behind resource over-allocation via a comparative case study involving two very different environments: the contracts department of a railway signalling equipment firm and an R&D division of a telecoms company.

Specifically, the work addresses the following questions:

  1.  Are there any (resource allocation) issues that are common to multi-project / portfolio environments?
  2. What are the mechanisms behind these issues?

As they point out, there are several articles and papers that deal with the issue of resource allocation on concurrent projects. However, there are relatively few that tackle the question of why problems arise. Their aim is to shed light on this question.

Methodology and the case studies

As mentioned above, the authors' aim was to surface factors that are common to multi-project environments. To this end, they gathered qualitative data from a variety of sources at both sites. This included interviews, studies of project and technical documentation, company procedures and direct observation of work practices.

The first study was carried out at the contract division of a mid-sized railway signalling equipment firm.  The division was well-established and had a long history of successful projects in this domain. As might be expected given the history of the organisation, there was a mature project management methodology in place. The organisation had a matrix management structure with 200 employees who were involved in executing various projects around the world. The work was managed by 20 project managers. Most of the projects were executed for external clients. Further, most projects involved little innovation: they were based on proven technologies that project teams were familiar with. However, although the projects were based on known technologies, they were complex and of a relatively long duration (1 to 5 years).

The second study was done in the R&D division of a telecom operator. The division, which had just been established, had 50 employees who worked within a matrix structure that was organised into five specialist departments. Since the division was new, the project management procedures used were quite unsophisticated. Projects were run by 7 project managers, and often involved employees from multiple departments.  Most of the projects run by the division were for internal customers – other divisions of the company. Also in contrast to the first study, most projects involved a high degree of innovation as they were aimed at developing cutting-edge technologies that would attract new subscribers. However, even though the projects involved new technologies, they were of relatively short duration (0.5 to 2 years).

Important, from the point of view of the study, was the fact that most employees in both organisations were engaged in more than one project at any given time.

For those interested, the paper contains more detail on the methodology and case studies.


Findings
As might be expected from a study of this nature, there were differences and similarities between the two organisations that were studied.  The differences were mainly in the client base (external for the contract division, internal for the other), project complexity (complex vs. simple) and organisational maturity (older and mature vs. newly instituted and immature).

Despite the differences, however, both organisations suffered from similar problems. Firstly, both organisations had portfolios with extensive project interdependencies. As a consequence, priority setting and resource (re)allocation was a major management issue. Another issue was that of internal competition between projects – for financial and human resources.  In fact, the latter was one of the most significant challenges faced by both organisations. Finally, in both organisations, problems were dealt with in an ad-hoc way, often resulting in solutions that caused more issues down the line.

From the common problems identified, it was clear that:

In both organizations, the primary management issue revolved around resources. The portfolio management was overwhelmed by issues concerning the prioritization of projects, the distribution of personnel from one project to another, and the search for slack resources. However, there were no resources available. Furthermore, when resources were redistributed it often produced negative effects on other projects of the portfolio. This forced management into continuous fire fighting, resulting in reactive behavior and short-term problem solving. However, the primary lever for portfolio management to affect an ongoing project in trouble was resource re-allocation.

There are a couple of points to note here. Firstly, resource re-allocation did not work. Secondly, despite major differences between the two organisations, both suffered from similar resource allocation issues. This suggests that the resource allocation syndrome is a common problem in multi-project environments.

Understanding the syndrome

Based on data gathered, the authors identify a number of factors that affect resource allocation.  These are:

  1. Failure in scheduling: this attributes the resource allocation syndrome to improper scheduling rather than problems of coordination and transition. The fact of the matter is that it is impossible for people to shift seamlessly from one project to another. There is – at the very least – the overhead of context switching. Further, projects rarely run on schedule, and delays caused by this are difficult to take into account before they occur.
  2. Over commitment of resources: This is another common problem in multi-project environments:  there are always more projects than can be handled by available personnel.  This problem arises because there is always pressure to win new business or respond to unexpected changes in the business environment.
  3. Effect of accounting methods: project organisations often bill based on hours spent  by personnel on projects. In contrast, time spent on internal activities such as meetings are viewed as costs. In such situations there is an in-built incentive for management to keep as many people as possible working on projects. A side-effect of this is the lack of availability of resources for new projects.
  4. Opportunistic management behaviour: In many matrix organisations, the allocation of resources is based on project priority. In such cases there is an incentive for project sponsors and senior managers to get a high priority assigned by any means possible. On the other hand, those who already have resources assigned to their projects would want to protect them from being poached to work on other projects.
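The first two factors lend themselves to a back-of-the-envelope illustration. The sketch below is my own, not from Engwall and Jerbrant's paper, and the 5%-per-extra-project context-switching overhead is an assumed figure chosen purely for illustration. It shows how spreading a person across concurrent projects erodes both their overall productive capacity and the attention each project receives:

```python
# Illustrative sketch (not from the paper): effective capacity of a person
# assigned to several concurrent projects, where each project beyond the
# first is assumed to cost a fixed fraction of total capacity in context
# switching. The 5% overhead figure is an assumption for illustration only.

def effective_capacity(num_projects, switch_overhead=0.05):
    """Fraction of a person's time that goes into real project work."""
    if num_projects < 1:
        return 0.0
    overhead = switch_overhead * (num_projects - 1)
    return max(0.0, 1.0 - overhead)

if __name__ == "__main__":
    for n in range(1, 6):
        cap = effective_capacity(n)
        # Attention available per project shrinks much faster than capacity,
        # since the (reduced) capacity is split n ways.
        print(f"{n} project(s): {cap:.0%} productive, "
              f"{cap / n:.0%} attention per project")
```

Even under these generous assumptions, a person on five projects loses a fifth of their capacity to switching and can give each project less than a sixth of their attention – which is why scheduling plans that assume seamless transitions between projects tend to unravel.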

The above factors were identified based on observations and from comments made by interviewees in both organisations.

Resource allocation (as taught in project management courses) focuses on the first two points noted above: scheduling and over-commitment. The problem is thus seen as a pure project management issue – one of assigning available resources to meet demand in the most efficient (i.e. optimal) way. In reality, however, the latter two factors (which have little to do with project management per se) play a bigger role.  As the authors state:

Instead of more scheduling, progress reports, or more time spent on review meetings, the whole system of managerial procedures has to be reconceptualized from its roots. As current findings indicate: the resource allocation syndrome of multi-project management is not an issue in itself; it is rather an expression of many other, more profound, organizational problems of the multi-project setting.

The syndrome is thus a symptom of flawed organisational procedures. Consequently,  dealing with it is beyond the scope of project management.


Conclusion
The key takeaway from the paper is that the resource allocation issues are a consequence of flawed organisational procedures rather than poor project management practices.  Project and portfolio managers responsible for resource allocation are only too aware of this. However, they are powerless to do anything about it because, as Engwall and Jerbrant suggest,  addressing the root cause of this syndrome is a task for executive management.

Written by K

April 27, 2011 at 5:20 am
