Eight to Late

Sensemaking and Analytics for Organizations

Archive for the ‘Consulting’ Category

Measuring the unmeasurable: a note on the pitfalls of performance metrics


Many organisations measure performance – of people, projects, processes or whatever – using quantitative metrics, or KPIs as they are often called. Some examples include: calls answered per hour (for a person working in a contact centre); % complete (for a project task); and orders processed per hour (for an order handling process). The rationale for measuring performance quantitatively is rooted in Taylorism, or scientific management. The early successes of Taylorism in improving efficiencies on the shopfloor led to its adoption in other areas of management. The scientific approach to management underpins the assumption that metrics are a Good Thing, echoing the words of the 19th century master physicist, Lord Kelvin:

When you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge of it is of a meager and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced it to the stage of science.

This is a fine sentiment for science: precise measurement is a keystone of physics and other natural sciences. So much so, that scientists expend a great deal of effort in refining and perfecting certain measurements. However, it can be misleading and sometimes downright counterproductive to attempt such quantification in management.  This post explains  why I think so.

Firstly, there are basically two categories of things (indicators, characteristics or whatever) that management attempts to quantify when defining performance metrics – tangible (such as number of calls per unit time) and intangible (for example, employee performance on a five point scale). Although people attach numerical scores to both kinds of things, I’m sure most people would agree that any quantification of employee performance is far more subjective than number of calls per unit time. Now, it is possible to reduce this subjectivity by associating the intangible characteristic with a tangible one – for example, employee performance can be tied to sales (for a sales rep), number of projects successfully completed (for a project manager) or customer satisfaction as measured by surveys (for a customer service representative). However, all such attempts result in a limited view of the characteristic being measured. Such associated tangible metrics cannot capture all aspects of the intangible metric in question. In the case at hand – employee performance – factors such as enthusiasm, motivation and doing things beyond the call of duty, all of which are important aspects of employee performance, remain unmeasurable. So as a first point we have the following: attaching a numerical score to intangible quantities is fraught with subjectivity and ambiguity.

But even measures of tangible characteristics can have issues. An example that comes to mind is the infamous % complete metric for tasks in project management. Many project managers record progress by noting that a task – say data migration – is 70% complete. But what does this figure mean? Does it mean that 70% of the data has been migrated (and what does that mean anyway?), or that 70% of the total effort required (as measured against days allocated to the task) has been expended? Most often, the figure quoted comes with no explanation of what it means – and everyone interprets it in a way that best suits their agenda. My point here is: a well designed metric should include an unambiguous statement of what is being measured, how it is to be measured and how it is to be interpreted. Many seemingly well defined metrics do not satisfy this criterion – the % complete metric being a sterling example. Such metrics give the illusion of precision, which can be more harmful than having no measurement at all. My second point is thus: it is hard to design unambiguous metrics, even for tangible performance characteristics. Of course, speaking of the % complete metric, many project managers now understand its shortcomings and use an “all or nothing” approach – a task is either 0% complete (not started or in progress) or 100% complete (truly complete).

Another danger of quantification of performance is highlighted by Eliyahu Goldratt in his book The Haystack Syndrome. To quote from the book:

…Tell me how you measure me and I will tell you how I will behave. If you measure me in an illogical way…do not complain about illogical behaviour…

A case in point is the customer contact centre employee who is measured by calls handled per hour. The employee knows he has to maximise calls taken, so he ends up trying to keep conversations short – even if it means upsetting customers. By trying to improve call throughput, the company ends up reducing quality of service. Fortunately, some service companies are beginning to understand this – read about Repco‘s experience in this article from MIS Australia, for example. The take-home point here is: performance measurements that focus on the wrong metric have the potential to distort employee behaviour to the detriment of  the organisation.

Finally, metrics that rely on human judgements are subject to cognitive bias. Specifically, it is well known that biases such as anchoring and framing can play a big role in determining the response received to a question such as, “How would you rate X’s performance on a scale of 1 to 5 (best performance being 5)?” In earlier posts, I’ve written about the role of cognitive biases in project task estimation and project management research. The effect of these biases on performance metrics can be summarised as follows: since many performance metrics rely on subjective judgements made by humans, these metrics are subject to cognitive biases. It is difficult, if not impossible, to correct for these biases.

To conclude: it is difficult to design performance metrics that are unambiguous, unbiased and do not distort behaviour. Use them if you must – or are required to do so by your organisation – but design and interpret them with care because, if used unthinkingly, they can cause terminal damage to employee morale.

Written by K

March 20, 2009 at 7:48 pm

Project portfolio management for the rest of us


Introduction

In small organisations, projects are often handled on a case-by-case basis, with little or no regard to the wider ramifications of the effort. As such organisations grow, there comes a point where it becomes necessary to prioritise and manage the gamut of projects from a strategic viewpoint. Why? Well, because otherwise projects are undertaken on a first-come-first-served basis or, worse, based on who makes the most noise (also known as the squeakiest wheel). Obviously, neither of these approaches serves the best interests of the organisation. The issue of prioritising projects is addressed by Project Portfolio Management or PPM (which should be distinguished from IT Portfolio Management). This post presents a simple approach to PPM; one that can be put to immediate use in smaller organisations that have grown to a point where an ad-hoc approach to multiple projects is starting to hurt.

Let’s begin with a few definitions:

Portfolio: The prioritised set of all projects and programs in an organisation.

Program: A set of multiple, interdependent projects which (generally, but not always) contribute to a single (or small number of) strategic objectives.

Project: A unique effort with a defined beginning and end, aimed at creating specific deliverables using defined resources.

As per the definition, an organisation’s project portfolio spans all project and program effort within the organisation. In a nutshell: the basic aim of PPM is to ensure that the projects undertaken are aligned with the strategic objectives of the organisation. Clearly then, strategy precedes PPM – one can’t, by definition, have the latter without the former. This is a critical issue that is sometimes overlooked: the executive board is unlikely to be enthused by PPM unless there are demonstrable strategic benefits that flow from it.
 
It is worth pointing out that there are several program and portfolio management methodologies, each appropriate for a particular context. This post outlines a light-weight approach,  geared towards smaller organisations.

Project portfolio management in three minutes

The main aim of PPM is to ensure that the projects undertaken within the organisation are aligned with its strategy. Outlined below is an approach to PPM that is aimed at doing this.

The broad steps in managing a project portfolio are:

  1. Develop project evaluation criteria.
  2. Develop project balancing criteria. Note: Steps (1) and (2) are often combined into a single step.
  3. Compile a project inventory.
  4. Score projects in the inventory according to criteria developed in step (1).
  5. Balance the portfolio based on criteria developed in step (2). Note: Steps (4) and (5) are often combined into one step.
  6. Authorise projects based on steps (4) and (5), subject to resource constraints and interdependencies.
  7. Review the portfolio.

I elaborate on these briefly below.

1.  Develop project evaluation criteria: The criteria used to evaluate projects are obviously central to PPM, as they determine which projects are given priority. Suggested criteria include: 

  • Fit with strategic objectives of the company.
  • Improved operational efficiency.
  • Improved customer satisfaction.
  • Cost savings.

Typically, organisations use a numerical scale for each criterion (1-5 or 1-10), with a weighting assigned to each criterion (0 < weighting < 1). The weightings should add up to 1. Note that the above criteria are only examples; appropriate criteria would need to be drawn up in consultation with senior management.
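To make the weighted-scoring scheme concrete, here is a minimal sketch. The criteria mirror the examples above, but the project names, weights and raw scores are invented purely for illustration – real values would come out of the consultation with senior management:

```python
# Illustrative weighted scoring of candidate projects.
# Weights must sum to 1; raw scores use a 1-5 scale.
weights = {
    "strategic_fit": 0.4,
    "operational_efficiency": 0.3,
    "customer_satisfaction": 0.2,
    "cost_savings": 0.1,
}
assert abs(sum(weights.values()) - 1.0) < 1e-9  # sanity check on weightings

# Raw scores (1-5) for each candidate project, per criterion (made up).
raw_scores = {
    "CRM upgrade":     {"strategic_fit": 5, "operational_efficiency": 3,
                        "customer_satisfaction": 4, "cost_savings": 2},
    "Intranet revamp": {"strategic_fit": 2, "operational_efficiency": 4,
                        "customer_satisfaction": 3, "cost_savings": 3},
}

def weighted_score(scores):
    """Weighted sum of a project's raw scores over all criteria."""
    return sum(weights[c] * scores[c] for c in weights)

# Rank projects by descending weighted score.
ranked = sorted(raw_scores, key=lambda p: weighted_score(raw_scores[p]),
                reverse=True)
for project in ranked:
    print(f"{project}: {weighted_score(raw_scores[project]):.2f}")
```

The point of the sketch is only that the arithmetic is trivial once the criteria and weights are agreed – the hard, political work is in agreeing them.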

2. Develop balancing criteria: These criteria are used to ensure that the portfolio is balanced, very much like a balanced financial portfolio (on second thoughts, perhaps,  this analogy doesn’t inspire much confidence in these financially turbulent times). Possible criteria include:

  • Risk vs. reward.
  • Internal vs. external (market) focus.
  • External vs. internal development.

3. Compile a project inventory: At its simplest this is a list of projects. Ideally the inventory should also include a business case for each project, outlining the business rationale, high level overview of implementation alternatives, cost-benefit analysis etc. Further, some organisations also include a high-level plan (including resource requirements) in the inventory.

4. Score projects: Ideally this should be done collaboratively by all operational and support units within the organisation. If the scoring and balancing criteria are set collaboratively, scoring projects should be a straightforward, non-controversial process. The end result is a ranked list of projects.

5. Balance the portfolio: Adjust rankings arrived at in (4) based on the balancing criteria. The aim here is to ensure that the active portfolio contains the right mix of projects.

6. Authorise projects: Projects are authorised based on the rankings arrived at in the previous step, subject to constraints (financial, resource etc.) and interdependencies. Again, this process should be uncontroversial provided the previous steps are done using a consultative approach. Typically, a cut-off score is set, and all projects above the cut-off are authorised. Sounds easy enough, and it is. But it can be an exercise in managing disappointment, as executives whose projects don’t make the cut are prone to go into a sulk.
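The authorisation step can be sketched as a simple greedy pass over the ranked list. Everything here – project names, scores, costs, the cut-off and the budget – is made up for illustration, and a real portfolio would carry richer constraints (shared resources, interdependencies between projects, timing):

```python
# Illustrative authorisation: walk the ranked list, authorising projects
# above the cut-off score while they still fit within a single budget cap.
ranked_projects = [          # (name, weighted score, estimated cost) - invented
    ("CRM upgrade", 3.9, 250_000),
    ("Intranet revamp", 2.9, 80_000),
    ("Office move app", 2.1, 40_000),
]
CUTOFF = 2.5       # minimum weighted score for authorisation
BUDGET = 300_000   # total funds available for the portfolio

authorised, remaining = [], BUDGET
for name, score, cost in ranked_projects:
    if score >= CUTOFF and cost <= remaining:
        authorised.append(name)
        remaining -= cost

print(authorised)
```

Note that in this toy run the second project clears the cut-off but not the remaining budget – exactly the kind of outcome that produces the sulking executives mentioned above.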

7. Review the portfolio: The project portfolio should be reviewed at regular intervals, monitoring active project progress and looking at what’s in the project pipeline. The review should evaluate active projects with a view to determining whether they should be continued or not. Projects in the pipeline should be scored and added to the portfolio, and those above the cut-off score should be authorised subject to resource availability and interdependencies.

The steps outlined above provide an overview of a suggested first approach to PPM for organisations beginning down the portfolio management path. As mentioned earlier, this is one approach; there are many others.

Conclusion

Organisational strategy is generally implemented through initiatives that translate to a number of programs and projects. Often these initiatives have complex interdependencies and high risks (not to mention a host of other characteristics). Project portfolio management, as outlined in this note, offers a transparent way to ensure that the organisation gets the most bang for its project buck – i.e. that projects are implemented in order of strategic priority.

Written by K

February 23, 2009 at 9:01 pm

Why I didn’t do some of the things I had to do…


Why do people postpone important tasks? Research by Sean McCrea and his colleagues may provide a partial answer. They found that people tend to procrastinate when asked to perform tasks that are defined in abstract terms. What this means is best explained through one of their experiments: half of a group of students were asked to describe how they would carry out a mundane task such as opening a bank account, while the other half were asked to describe reasons why one might do that task – i.e. why one might want to open a bank account. The first task is straightforward, and needs little thought prior to execution. The second is more abstract; some deliberation is required before doing it. Even though all participants were offered a small (but interesting enough) sum of money if they completed the task within three weeks, most of those given the concrete task completed it on time whereas more than half of those assigned the abstract task failed to complete it. The researchers use the concept of psychological distance to describe this behaviour. Psychological distance in this context is a measure of the closeness (or remoteness) a person feels to a task, abstract tasks being more “distant” in this sense than concrete ones.

Reading about this reminded me of an incident that occurred many years ago, just after I’d made a career switch from academic research to business consulting. One of the partners in the firm I was working for had asked me to write a project proposal for a new client. He assumed I knew what was needed, and offered no guidance. I had a half-hearted try at it, but couldn’t make much headway. Like the stereotypical student, I then put it off for several days. The day before the deadline, fearing the consequences of inaction, I got down to it. I spoke to a few colleagues to make the task clearer, spent some time thinking it through and then, finally, wrote (and rewrote) the proposal well into the night.

Seen in the light of Dr. McCrea’s research, my procrastination was simply a normal human reaction to an abstract task. Once I was able to define the task better – with the help of my colleagues and some thought – my reasons for procrastination vanished, and with it my mental block.

I see this operate in my current job too. I work with a small group of developers who tackle a wide range of projects, ranging from enterprisey stuff (such as the implementation of CRM systems) to the development of niche applications used by a handful of people. The small size of our group means that everyone has to do a bit of everything – design, coding, testing, maintenance, support and (unfortunately) … documentation. Now, in keeping with the stereotypical developer, most of the mob detest doing documentation. “I’d rather do maintenance coding,” said one. When asked why, he replied that it took him a lot more effort to write than it did to do design or coding work. Of course, this is not to say that cutting code is easy, but that developers (or the ones I work with, at any rate) find it less remote psychologically – and hence easier – than writing. So, when required to do documentation, they typically put it off as much as possible.

The relationship between task abstraction and procrastination indicates how  managers can help reduce the  tendency to procrastinate.  The basic idea is to reduce task abstraction, and hence reduce the psychological remoteness an assignee feels in relation to a task. For example, when asking a coder to write documentation, it might help to provide a template with headings and sub-headings, or make suggestions on what should and should not be included in the documentation. Anything that makes the task less abstract will help counter procrastination.

Tasks can be made more concrete in a number of ways. Some suggestions:

  • Outline the steps required to perform the task.
  • Provide more detail about the task.
  • Narrow the task down to specifics.
  • Provide examples or templates of how the task might be done.

Of course, not all procrastination can be attributed to task abstraction. Folks put off tasks for all kinds of reasons – and sometimes even for no reason at all.  However, speaking from personal experience,  Dr. McCrea’s work does ring true:  I didn’t do some of the things I had to do simply because they weren’t clear enough to me – like that project plan I was supposed to have started on a week ago.  But advice is easier given than taken. With only a gentle pang of guilt, I put it off until tomorrow.

Written by K

February 7, 2009 at 3:41 pm