Hitting the target but missing the point

22 April 2024

Performance monitoring is an essential part of running any organisation but it’s easy to get it wrong. Joe Roberts outlines some potential pitfalls.

Monitoring performance statistics is an unavoidable element of overseeing all but the smallest organisations: it helps to ensure they achieve their objectives, remain solvent and, in some cases, stay on the right side of the law and regulations.

In the public sector, monitoring also promotes accountability. Information about service performance can be used to hold public bodies and their leaders to account for their use of scarce public resources. It can also shine a light on whether the rest of us are receiving the standard of service we have a right to expect as citizens and taxpayers.

Targets for timely access to services and clinical quality have been part of the healthcare landscape in England since the early 1990s, when the first waiting time targets were introduced. They have sometimes been controversial with the public and unpopular with staff, but there are numerous examples of where they have helped to make a difference, such as in the prevention and management of healthcare-acquired infections.

Getting measurement wrong

For all its benefits, the monitoring of performance data does need to be approached with caution. There is much that can go wrong: in selecting what to measure, in calculating the figures, in presenting the data, and in interpreting and acting upon it. Here, we will talk about the perverse incentives and unintended consequences that can arise when unrealistic, ambiguous or meaningless targets are set.

Challenging ‘stretch’ targets can spur on teams to reach new heights and achieve things that they never thought they could, although the evidence for this is mixed. For example, a study by the Institute for Government found that targets are useful—indeed essential—for setting basic minimum standards and bringing everyone up to the expected level. However, they are not the best way of encouraging further improvement above that baseline because they can reduce professionals’ independence and autonomy and thus discourage innovation.

Setting a target that is either unachievable, or not within the gift of the person or team held accountable for meeting it, frequently has dysfunctional outcomes. One of the most obvious is the temptation to ‘game the system’: for example, by diverting resources from other services where they might do more good, or by exploiting loopholes and technicalities in the guidance on how to calculate the figures in order to produce encouraging data that is technically correct but ultimately misleading. The result is an illusion of achievement, while the public does not receive the standard of service it deserves. Over the years, some NHS organisations have yielded to this temptation.

Crossing the line

In extreme cases, individuals might respond to the pressure of targets by falsifying data outright. Impossible targets can incentivise unethical behaviour and bring an organisation to the brink of destruction. In the first half of the 2010s, the American retail bank Wells Fargo set extremely aggressive sales targets for its staff, whose continued employment depended on achieving them. Employees went on to open around 1.5 million current accounts and more than 500,000 credit cards for customers without their consent. More than 5,000 staff were later fired, and the bank was fined a total of $185 million by regulatory agencies at state and federal level.

Nor should we forget the impact that unachievable targets may have on individuals tasked with delivering the impossible: stress, a sense of moral conflict, and ultimately burnout in some cases. Setting someone up to fail has been identified in employment tribunals as a form of workplace bullying.

Another form of dysfunctional performance management is the reporting of key performance indicators that are not meaningful or that do not tell the whole story.

One NHS trust’s integrated performance report showed the number of complaints received in the previous month and the number of responses sent to complainants. The two figures were roughly in balance most of the time and changed little from month to month, suggesting little cause for concern. It was impossible to tell from these figures alone that the trust had a significant backlog of long-overdue complaints, or that a significant proportion of the complaints received each month were in fact earlier complaints that had been re-opened because the trust’s response had not addressed the patient’s concerns satisfactorily.

The balancing act

Performance indicators need to balance competing priorities and different aspects of the organisation’s business. It is often said, correctly, that “what gets measured gets done”. The flip side is that what doesn’t get measured often doesn’t get done. If a healthcare organisation’s key performance indicators are solely about financial balance and waiting time targets, it should not be entirely surprising when clinical quality starts to slip, and staff wellbeing deteriorates.

It’s also essential that performance indicators can be measured accurately and easily. This is not always as straightforward as it sounds, especially if an organisation’s information systems are outdated or its employees have not been trained in how to enter data (garbage in, garbage out).

Sometimes, KPIs can be calculated accurately but only through a laborious process of analysis and data validation, which uses up time and effort that could be applied more productively elsewhere.

It is vital to develop a balanced portfolio of performance indicators that encompass statutory requirements, the organisation’s own strategic priorities, contractual commitments to the funders of its services and, last but definitely not least, what matters most to the people who use those services.

When developing KPIs, it is essential to understand the processes that they are measuring. What is the yardstick of success? What does the customer, patient or service user value the most? What drives demand for the service? What are the obstacles to success? Which of these factors are within the organisation’s control and which are not? Who are the key people in the process and what are they responsible for doing?

Many – although not all – of the above questions can best be answered by the people who work in the service being monitored and who will be responsible for achieving whatever target is set. They should be consulted on the target – not least because they are more likely to feel committed to it if they have been involved in setting it.

Performance reporting is a fundamental part of modern management but there are many ways to get it wrong. It’s essential to carefully consider which areas to look at and which questions to ask, and to develop a deep understanding of the performance issues behind the metrics.

An external perspective can be invaluable for busy boards, both to help home in on the right focus areas and to develop the skills needed to oversee the activities of any complex organisation effectively.

Meet the author: Joe Roberts

Consultant


Prepared by GGI Development and Research LLP for the Good Governance Institute.
