Measuring the unmeasurable: a note on the pitfalls of performance metrics
Many organisations measure performance – of people, projects, processes or whatever – using quantitative metrics, or KPIs as they are often called. Some examples include: calls answered / hour (for a person working in a contact centre); % complete (for a project task); and orders processed / hour (for an order handling process). The rationale for measuring performance quantitatively is rooted in Taylorism, or scientific management. The early successes of Taylorism in improving efficiencies on the shopfloor led to its adoption in other areas of management. This scientific approach to management underpins the assumption that metrics are a Good Thing, echoing the words of the 19th century master physicist, Lord Kelvin:
When you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge of it is of a meager and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced it to the stage of science.
This is a fine sentiment for science: precise measurement is a keystone of physics and the other natural sciences. So much so that scientists expend a great deal of effort in refining and perfecting certain measurements. However, it can be misleading, and sometimes downright counterproductive, to attempt such quantification in management. This post explains why I think so.
Firstly, there are basically two categories of things (indicators, characteristics or whatever) that management attempts to quantify when defining performance metrics – tangible (such as number of calls per unit time) and intangible (for example, employee performance on a five point scale). Although people attach numerical scores to both kinds of things, I’m sure most people would agree that any quantification of employee performance is way more subjective than number of calls per unit time. Now, it is possible to reduce this subjectivity by associating the intangible characteristic with a tangible one – for example, employee performance can be tied to sales (for a sales rep), number of projects successfully completed (for a project manager) or customer satisfaction as measured by surveys (for a customer service representative). However, all such attempts result in a limited view of the characteristic being measured: the associated tangible metrics cannot capture all aspects of the intangible one. In the case at hand – employee performance – factors such as enthusiasm, motivation and doing things beyond the call of duty, all of which are important aspects of employee performance, remain unmeasurable. So as a first point we have the following: attaching a numerical score to intangible quantities is fraught with subjectivity and ambiguity.
But even measures of tangible characteristics can have issues. An example that comes to mind is the infamous % complete metric for tasks in project management. Many project managers record progress by noting that a task – say, data migration – is 70% complete. But what does this figure mean? Does it mean that 70% of the data has been migrated (and what does that mean anyway?), or that 70% of the total effort required (as measured against the days allocated to the task) has been expended? Most often, the figure quoted comes with no explanation of what it means – and everyone interprets it in the way that best suits their agenda. My point here is: a well designed metric should include an unambiguous statement of what is being measured, how it is to be measured and how it is to be interpreted. Many seemingly well defined metrics do not satisfy this criterion – the % complete metric being a sterling example. Such metrics give the illusion of precision, which can be more harmful than having no measurement at all. My second point is thus: it is hard to design unambiguous metrics, even for tangible performance characteristics. Of course, speaking of the % complete metric, many project managers now understand its shortcomings and use an “all or nothing” approach – a task is either 0% complete (not started or in progress) or 100% complete (truly complete).
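To make the ambiguity concrete, here is a minimal sketch – with hypothetical numbers and function names of my own invention – showing how the two readings of % complete can yield very different figures for the same data migration task:

```python
def percent_records_migrated(migrated: int, total: int) -> float:
    """% complete read as: fraction of records actually migrated."""
    return 100.0 * migrated / total

def percent_effort_expended(days_spent: float, days_allocated: float) -> float:
    """% complete read as: fraction of the allocated effort used up."""
    return 100.0 * days_spent / days_allocated

# Hypothetical task: 350,000 of 1,000,000 records moved,
# 7 of the 10 allocated days consumed.
by_data = percent_records_migrated(migrated=350_000, total=1_000_000)  # 35.0
by_effort = percent_effort_expended(days_spent=7, days_allocated=10)   # 70.0
print(f"by data: {by_data:.0f}%, by effort: {by_effort:.0f}%")
```

The project manager who reports "70% complete" on effort grounds and the stakeholder who hears "70% of the data is across" are looking at the same number and seeing different realities – which is exactly why a metric needs an explicit statement of what is being measured.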
A third problem is that metrics shape behaviour. As Eliyahu Goldratt put it:

…Tell me how you measure me and I will tell you how I will behave. If you measure me in an illogical way…do not complain about illogical behaviour…
A case in point is the customer contact centre employee who is measured by calls handled per hour. The employee knows he has to maximise calls taken, so he ends up trying to keep conversations short – even if it means upsetting customers. By trying to improve call throughput, the company ends up reducing quality of service. Fortunately, some service companies are beginning to understand this – read about Repco‘s experience in this article from MIS Australia, for example. The take-home point here is: performance measurements that focus on the wrong metric have the potential to distort employee behaviour to the detriment of the organisation.
Finally, metrics that rely on human judgements are subject to cognitive bias. Specifically, it is well known that biases such as anchoring and framing can play a big role in determining the response received to a question such as, “How would you rate X’s performance on a scale of 1 to 5 (best performance being 5)?” In earlier posts, I’ve written about the role of cognitive biases in project task estimation and project management research. The effect of these biases on performance metrics can be summarised as follows: since many performance metrics rely on subjective judgements made by humans, these metrics are subject to cognitive biases. It is difficult, if not impossible, to correct for these biases.
To conclude: it is difficult to design performance metrics that are unambiguous, unbiased and do not distort behaviour. Use them if you must – or are required to do so by your organisation – but design and interpret them with care because, if used unthinkingly, they can cause terminal damage to employee morale.