Eight to Late

Sensemaking and Analytics for Organizations

Measuring the unmeasurable: a note on the pitfalls of performance metrics

Many organisations measure performance – of people, projects, processes or whatever – using quantitative metrics, or KPIs as they are often called. Some examples include: calls answered per hour (for a person working in a contact centre); % complete (for a project task); and orders processed per hour (for an order handling process). The rationale for measuring performance quantitatively is rooted in Taylorism, or scientific management. The early successes of Taylorism in improving efficiencies on the shopfloor led to its adoption in other areas of management. The scientific approach to management underlies the assumption that metrics are a Good Thing, echoing the words of the 19th century master physicist, Lord Kelvin:

When you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge of it is of a meager and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced it to the stage of science.

This is a fine sentiment for science: precise measurement is a keystone of physics and the other natural sciences – so much so that scientists expend a great deal of effort in refining and perfecting certain measurements. However, it can be misleading, and sometimes downright counterproductive, to attempt such quantification in management. This post explains why I think so.

Firstly, there are basically two categories of things (indicators, characteristics or whatever) that management attempts to quantify when defining performance metrics – tangible (such as number of calls per unit time) and intangible (for example, employee performance on a five point scale). Although people attach numerical scores to both kinds of things, I’m sure most would agree that any quantification of employee performance is far more subjective than number of calls per unit time. Now, it is possible to reduce this subjectivity by associating the intangible characteristic with a tangible one – for example, employee performance can be tied to sales (for a sales rep), number of projects successfully completed (for a project manager) or customer satisfaction as measured by surveys (for a customer service representative). However, all such attempts yield a limited view of the characteristic being measured: the associated tangible metrics cannot capture every aspect of the intangible metric in question. In the case at hand – employee performance – factors such as enthusiasm, motivation and doing things beyond the call of duty, all of which are important aspects of employee performance, remain unmeasurable. So, as a first point, we have the following: attaching a numerical score to intangible quantities is fraught with subjectivity and ambiguity.

But even measures of tangible characteristics can have issues. An example that comes to mind is the infamous % complete metric for project tasks. Many project managers record progress by noting that a task – say data migration – is 70% complete. But what does this figure mean? Does it mean that 70% of the data has been migrated (and what does that mean anyway?), or that 70% of the total effort required (as measured against days allocated to the task) has been expended? Most often, the figure is quoted with no explanation of what it means – and everyone interprets it in the way that best suits their agenda. My point here is: a well designed metric should include an unambiguous statement of what is being measured, how it is to be measured and how it is to be interpreted. Many seemingly well defined metrics do not satisfy this criterion – the % complete metric being a sterling example. Such metrics give an illusion of precision, which can be more harmful than having no measurement at all. My second point can thus be summarised as follows: it is hard to design unambiguous metrics, even for tangible performance characteristics. Of course, speaking of the % complete metric, many project managers now understand its shortcomings and use an “all or nothing” approach – a task is either 0% complete (not started or in progress) or 100% complete (truly complete).
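
To make the criterion concrete, here’s a minimal sketch of what an unambiguous metric definition might record, together with the “all or nothing” completion rule. The sketch is in Python and purely illustrative – the structure and field names are my own invention, not drawn from any standard:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MetricDefinition:
    """A metric specification that states what is measured,
    how it is measured, and how the result is to be interpreted."""
    name: str
    what_is_measured: str
    how_measured: str
    interpretation: str

# An unambiguous definition of "% complete" for a data migration task.
data_migration_progress = MetricDefinition(
    name="% complete (data migration)",
    what_is_measured="Proportion of rows migrated and verified",
    how_measured="Verified rows in target divided by total rows in source",
    interpretation="100 means every row is migrated and verified; "
                   "anything less means the task is not done",
)

# The "all or nothing" alternative: no partial credit, hence no ambiguity.
def percent_complete(task_truly_complete: bool) -> int:
    """Report 100 only when the task is truly complete, else 0."""
    return 100 if task_truly_complete else 0

print(data_migration_progress.name, "->", percent_complete(False))  # -> 0
```

The point is not the code, of course, but the discipline it forces: a number unaccompanied by a statement of what is measured, how, and with what interpretation is just a number.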

Another danger of quantification of performance is highlighted by Eliyahu Goldratt in his book The Haystack Syndrome. To quote from the book:

…Tell me how you measure me and I will tell you how I will behave. If you measure me in an illogical way…do not complain about illogical behaviour…

A case in point is the customer contact centre employee who is measured by calls handled per hour. The employee knows he has to maximise the number of calls taken, so he ends up trying to keep conversations short – even if it means upsetting customers. In trying to improve call throughput, the company ends up reducing its quality of service. Fortunately, some service companies are beginning to understand this – read about Repco‘s experience in this article from MIS Australia, for example. The take-home point here is: performance measurements that focus on the wrong metric have the potential to distort employee behaviour to the detriment of the organisation.

Finally, metrics that rely on human judgements are subject to cognitive bias. Specifically, it is well known that biases such as anchoring and framing can play a big role in determining the response received to a question such as, “How would you rate X’s performance on a scale of 1 to 5 (best performance being 5)?” In earlier posts, I’ve written about the role of cognitive biases in project task estimation and project management research. The effect of these biases on performance metrics can be summarised as follows: since many performance metrics rely on subjective judgements made by humans, these metrics are subject to cognitive biases. It is difficult, if not impossible, to correct for these biases.

To conclude: it is difficult to design performance metrics that are unambiguous, unbiased and do not distort behaviour. Use them if you must – or are required to do so by your organisation – but design and interpret them with care because, if used unthinkingly, they can cause terminal damage to employee morale.

Written by K

March 20, 2009 at 7:48 pm

Fostering cross-project learning and continuous improvement in projectised environments

Introduction

There’s much angst and hand-wringing about how difficult it is to engender a learning environment in projectised organisations. In an earlier post I made a case for an emphasis on learning rather than efficiency in project execution. That discussion focussed on the challenges of learning within a project. These challenges are magnified in the case of learning across projects. In most organisations, the responsibility for cross-project learning typically rests with the project management office (PMO) or its equivalent – whether or not it exists as a formal entity. In a paper entitled How Project Management Office Leaders Facilitate Cross-Project Learning and Continuous Improvement, published in the Project Management Journal, Jerry Julian investigates how PMO leaders facilitate cross-project learning. Based on his findings, he also presents some recommendations for improving cross-project learning and fostering continuous improvement. I summarise and discuss the paper below.

The paper begins with the observation that, “…project teams often start solving problems anew rather than learning from previous projects within the same organisation. This often means that the end of a project is the end of collective learning…” As they are the focal point for all project work within the organisation, PMOs are best placed to foster cross-project learning. Further, since they serve as a repository for project documentation, they are also well placed to identify potential improvements and implement these after due consideration.  The paper explores how PMO leaders perceive their role in fostering learning and continuous improvement.  The specific questions addressed are:

  1. What do PMO leaders see as their responsibilities in fostering learning and continuous improvement?
  2. How do they foster cross-project learning?
  3. What are the enablers and barriers (as perceived by them) to cross-project learning and continuous improvement?

The author attempts to answer these questions based on data gathered from interviews and subsequently validated through focus groups. Regardless of the validity of the author’s methodology, the paper serves to inform PMO leaders about what their peers are doing to foster cross-project learning. It therefore merits attention from PMO and program/portfolio managers.

Background

One of the most common activities associated with cross-project learning is the practice of conducting project post-mortems aimed at finding out what went well and what didn’t.  From his review of the literature,  the author finds that although the value of such “lessons learned” sessions is widely acknowledged, many organisations fail to conduct them in practice (see this paper by Maximilian von Zedtwitz, for example).  Amongst those that do, there is general dissatisfaction with the process. As Anne Keegan and Rodney Turner state in this paper, “…Project team members frequently do not have the time for meetings, or for sessions to review lessons learned. Often project team members are immediately reassigned to new projects before they have had time for lessons learned sessions or after action reviews. In no single company did respondents express satisfaction with this process, and all claimed that time pressures exert enormous pressure, and reduce the effectiveness of these learning practices.”

However, the problem goes deeper than time pressures. As Jacky Swan and coworkers point out in this paper, the assumption that knowledge can be captured and transferred in textual form is itself questionable, basically because it ignores the fact that knowledge is often embedded in practice, and hence cannot be understood independently of that practice. If this is true – and most experienced project managers would recognise that it is – then the traditional lessons learned document is less useful than it is thought to be. What might be more useful is to transfer knowledge in other ways, such as narration and joint work. In essence, many lessons can only be learned by doing – and, furthermore, by doing in the right context. This isn’t new: it is central to the idea of reflective practice articulated by Donald Schon in his 1983 classic, The Reflective Practitioner. It is also central to Jean Lave and Etienne Wenger’s ideas on communities of practice. Accordingly, the author uses these concepts in developing a framework to study the role of PMO leaders in cross-project learning.

Framework and Methodology

Following Wenger, the author views (cross-functional) project teams as being made up of people from multiple communities of practice, and a PMO as being embedded in a “constellation of practices” through which knowledge about past projects must be negotiated and shared. In this framework, a PMO leader can be seen as a broker between various communities of practice – which may include senior management, project teams and the PMO itself. A PMO leader promotes communication between these communities through activities such as alignment (ensuring everyone is “on the same page”), translation (for instance, between IT-speak and business-speak) and coordination (ensuring people’s efforts are directed to the same end). Sitting at the boundary between multiple communities of practice, the PMO leader has a particular responsibility for managing the boundaries between them. A PMO leader manages these boundaries by:

  • Promoting boundary encounters (single or discrete encounters that provide connections across practices)
  • Developing effective boundary practices (practices that sustain connections across boundaries)
  • Creating / archiving useful boundary objects (artefacts – documents, stories, etc. – that organise interconnections across communities).

In addition to managing boundaries, the PMO leader should reflect on the knowledge gained in boundary encounters. Such reflection can occur at three levels:

  • Content: review how ideas have been applied to solving problems.
  • Process: review the problem solving process.
  • Premise: review the assumptions underlying the process.

Ideally, reflection should occur at all three levels. In this view, the PMO leader is not just the custodian of project-related processes but also a driver of process improvement. The author uses the aforementioned concepts of communities of practice (and the boundaries between them) and reflection (or reflective learning) to frame his research methodology and subsequent analysis. To gather data, the author interviewed several PMO leaders drawn from various industries. Prospective interviewees were identified through 20 initial contacts and then via a snowball sampling strategy, whereby initial contacts and others were asked to provide referrals to individuals who met the author’s selection criteria. Data collection was done through a pre-interview questionnaire followed by an in-depth interview. The paper has a fairly detailed description of the methodology, so I’ll leave it at that and proceed to the findings, which are likely to be of greater interest to my readers.

Findings

The author’s results are best discussed in the context of the research questions posed in the introduction:

Perceptions of PMO leader responsibilities relating to cross-project learning

  • 75% of participants said that their primary responsibility was to ensure that projects were delivered on time, within budget and in line with expectations.
  • 60% required project teams to identify lessons learned.
  • 45% said that continuous improvement of project performance was an important part of their job.
  • 45% felt that they were responsible for consistent adoption of project management practices across the organisation.
  • 20% said that their responsibilities included providing a learning and development environment for project managers.

How PMO leaders foster cross-project learning

Unsurprisingly, every interviewee claimed that they facilitated cross-project learning by brokering connections between senior management, project teams and other communities. They do this through a variety of boundary practices, objects and encounters. Specific boundary practices include:

  • Lessons learned practices (85%)
  • Status and project reports to senior management and other governance processes (85%)
  • Common project management practices (80%)
  • Knowledge sharing forums (40%)

Boundary objects used to foster connections across projects include:

  •  Tools and templates (85%)
  • Systems (65%)
  • Documents (40%)

Boundary encounters included meetings to

  • Intervene when projects were going off-track
  • Transfer project management knowledge to project teams
  • Improve processes used on projects.

In addition, it was found that about half of the interviewed PMO leaders engaged in content and process reflection to diagnose project-related problems and to help project staff and other stakeholders learn from prior projects.

Enablers of cross-project learning

The following were the major findings relating to enablers of cross-project learning:

  • 60% of those interviewed believed that a network of strong relationships is important to enabling cross-project learning.
  • 6% thought that senior management support is important.
  • 30% believed that organizational culture plays an important role.
  • 25% believed that it is important to have a neutral facilitator.
  • 25% emphasised the importance of professional development of project managers via training, apprenticeships etc.
  • 10% believed that reflection throughout the course of the project (rather than just at the end) is critical.

Barriers to cross-project learning

The following were the author’s findings relating to barriers to cross-project learning:

  • 55% identified the lack of direct authority over project managers as a major barrier.
  • 45% thought that time pressures were an important factor.
  • 45% reckoned that frequent staff rotation hindered cross-project learning.
  • 35% felt that fear of reprisals prevented staff from airing or owning up to mistakes.
  • 20% thought that lack of senior management support played a role.
  • 20% noted that deferring learning to the end of the project – i.e. a lack of continuous reflection and improvement – reduced the effectiveness of learning.

Analysis and discussion

On analysing his data, the author finds that PMO leaders broker learning through a variety of activities, including:

  • Project intervention – 35%
  • Status reporting and governance – 17%
  • Lessons learned practices – 16%
  • Process improvement – 12%
  • Transfer of standards and practices – 12%
  • Knowledge sharing fora – 7%

The author acknowledges that the large percentage associated with the first item might be due to a framing effect – the interview questions specifically solicited critical incidents in which intervention was required. He also suggests that the relative importance of the reporting/governance item is due to the fact that most PMO leaders consider project performance to be their primary responsibility. Since PMO leaders are often judged on project performance, it is common for them to use dashboards to keep track of project statuses. In these, statuses are monitored through traffic light reports, wherein projects that are going well are coded green; those in potential difficulty, yellow; and those in trouble, red. Most often, projects that are going well are ignored and only those in trouble are given attention. This leads to what the author calls “red-light learning”, wherein lessons are learnt only in the context of something going wrong. Potentially useful information pertaining to “what went right” is ignored.
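
As a rough illustration of how red-light learning arises, consider a review process driven off a traffic-light dashboard. The sketch below is hypothetical (the project names and statuses are invented; the paper itself contains nothing of the sort) – it simply shows how a red-only triage rule systematically discards the lessons of successful projects:

```python
from enum import Enum

class Status(Enum):
    GREEN = "on track"
    YELLOW = "potential difficulty"
    RED = "in trouble"

# Invented example data: a PMO dashboard of project statuses.
dashboard = {
    "CRM rollout": Status.GREEN,
    "Data migration": Status.RED,
    "Intranet upgrade": Status.YELLOW,
}

# Typical triage: only projects in (or near) trouble get attention...
reviewed = [p for p, s in dashboard.items() if s is not Status.GREEN]

# ...so lessons are harvested only from things going wrong ("red-light
# learning"), and what the green projects did right is never captured.
never_mined_for_lessons = [p for p, s in dashboard.items() if s is Status.GREEN]

print("Reviewed:", reviewed)                # the troubled projects
print("Ignored:", never_mined_for_lessons)  # the successes
```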

Based on the findings described in the previous section, the author posits that social capital is a key enabler of cross-project learning: it is important for PMO leaders to build social capital by developing connections to individuals and other social networks (project teams, expert groups etc.) both within and outside their organisations. On the other hand, the findings suggest that defensive routines are barriers to cross-project learning. The term defensive routine was coined by Chris Argyris, who defined it as “..actions or policies that prevent individuals or teams of the organization from experiencing embarrassment or threat…” An example of this, familiar to those working in projectised environments, is the unwillingness of project team members to talk about things that didn’t go so well. Most often, such defensive routines point to a deeper malaise – in the case of this example, the unwillingness to discuss what went wrong may be due to a “blame culture” within the organisation.

The findings indicate that PMO leaders broker learning through a variety of boundary practices. The author classifies these into two categories: retrospective and prospective. The former includes activities aimed at generating and reviewing knowledge from past projects (e.g. lessons learned, status reporting), whereas the latter is aimed at applying knowledge gained in past projects to future ones (e.g. process improvement, knowledge forums). The author refers to these practices as “collective brokering.” In his words, “PMO leaders can be viewed as knowledge brokers who, through the establishment of retrospective and prospective brokering practices, help their organizations learn from past project experiences by embedding process knowledge into organizational routines that can be transferred to new or existing projects.” In my opinion, the reference to organizational routines is unfortunate, as routines imply rigidity. The term “organizational practices” may be more appropriate here, implying a degree of flexibility and adaptability. See this post by Nikolai Foss for more on organizational routines versus organizational practices.

Concluding Remarks

The paper ends with some conclusions and recommendations. I list these below, along with some comments. Conclusions first:

  1.  PMO leaders are knowledge brokers who facilitate organizational learning and continuous improvement.
  2.  Organizational routines that incorporate knowledge gained from past projects provide a vehicle for improving an organisation’s everyday processes.
  3. Defensive routines can hinder organisational learning from projects.

The view of PMO leaders as knowledge brokers is a useful one: PMO leaders, by virtue of their involvement in diverse projects across the organisation, are uniquely placed to serve as catalysts for learning and improvement. The second conclusion merely states that knowledge gained from projects should, where possible, be incorporated into an organisation’s day-to-day work processes. I’m sure many project and program managers will concur with the third conclusion. I suspect it is the main reason why – so often – not much is learnt from lessons learned sessions. Finally, the author presents some recommendations for PMO leaders. These are:

  1. To focus on building up social capital across multiple communities (within and outside the organisation) through the development of relationships based on trust, professional development and mutual understanding.
  2. To focus on  learning from both successful and failed projects.
  3.  To reflect throughout the course of a project, not just at the end.
  4. To use neutral facilitators in order to get the most out of project retrospectives.

The recommendations range from the very general (the first one) to the very specific (the fourth one). It is all good advice,  no doubt. However, I can’t help but feel that it isn’t really new: most PMO managers who understand their role in fostering cross-project learning already know this from experience.

References:

Julian, J. (2008). How project management office leaders facilitate cross-project learning and continuous improvement. Project Management Journal, 39(3), 43–58.

Written by K

March 12, 2009 at 11:15 pm
