
Change wisely – why we must get evaluation right

What should we keep, and what should we lose? As the NHS resets, approaches to evaluation need to take a new turn too.
Sally Eason

17 December 2021

When it comes to evaluation, continual assessment is better than waiting until the end of a pilot or project, argues Sally Eason. But staff must be allowed to make mistakes, given the confidence to introduce changes, and encouraged to share their experiences.

Never has it been more important for us to use NHS resources wisely. COVID-19 continues to exert pressure throughout the health and care system, on top of an already overstretched service. Tackling these demands is a balancing act – knowing what to change, what should stay the same, what new services are needed, and what could and should be stopped to deliver a more efficient, patient-centred service.

Innovation, by its nature, carries some risk – we can’t be sure that something will work as intended until we put it into practice. But we can’t let that prevent us from developing new ways of working that will help us meet growing demand.

That’s where effective evaluation comes in. Monitoring and measuring the impact of what we do allows us to make informed decisions when delivering change, but how we do it matters, particularly when it comes to complex transformation.

Meaningful measurement

The core purpose of an evaluation is to understand the value or worth of the intervention you are making – essentially, to determine whether it works. Practically speaking, the focus is on understanding how well the activities have been implemented – what ‘process’ has been followed – and whether those activities have made a difference, i.e. what the outcome or impact has been.

The starting point is understanding what it is you want to achieve and setting up metrics that allow you to measure whether the change you are making is having the desired impact. This should tell you whether, and to what degree, the change is working, and which contributing factors are having the most impact – good or bad – on the outcomes. Evaluation investment should be commensurate with the scale of the project and should only measure meaningful data attributable to the interventions.

In developing your approach, it’s important to:

  • be clear about your intended outcomes and the steps you plan to take to deliver those outcomes, i.e. ‘If I do X, it will lead to Y’, taking into consideration the inputs – people, process, technology and money – that will be needed to get you there
  • ask yourself not just ‘does it work?’ but also for whom and in what circumstances – what works in one area may not work elsewhere, so what are the characteristics required for success?
  • bring in learning from others working on similar projects – how can you fast track decision-making based on lessons learned elsewhere?

‘How’ you will evaluate is as important as ‘what and why’. At Arden & GEM, we are increasingly advocating for formative evaluation of programmes, which is a much more agile approach to assessing, learning from and refining projects than traditional techniques. As part of the independent evaluation of the Global Digital Exemplar (GDE) programme, led by the University of Edinburgh, we were tasked with capturing data and emerging insights, which were fed back into the programme as it progressed. This gave NHS England and NHS Improvement the opportunity to learn, tweak and optimise how the GDE programme was delivered to benefit GDE sites, ‘fast followers’ and, latterly, ‘digital aspirants’. Adopting this approach allows organisations to be much more responsive to what the data is showing, rather than waiting until the end of a pilot or programme to assess its impact.

Continuous learning

Formative evaluation builds learning into your programme, encouraging regular reflection and tweaking. This means progressing with what works, understanding why the elements that aren’t working are falling short and identifying what needs to change. This may simply require a tweak here and there, or a targeted change in a key area, rather than going back to the drawing board or living with sub-optimal outcomes.

For example, if early signs show your pilot is working well in only four out of six areas, analysing the data will help pinpoint the characteristics that make the difference. Sometimes this will be about softer elements, such as the individuals involved and the mix of skills they bring to the programme, rather than processes and systems – and some elements are easier to replicate than others. But this knowledge is essential in determining whether you can make the necessary changes to achieve the outcomes you seek.

Taking action

If we want to make the best use of resources, we need to be prepared to fail fast – and to pivot where we can to achieve success. Instead of waiting until the end of a pilot to assess its impact, the formative approach gives us the flexibility to change tack where needed and put an early stop to aspects that aren’t delivering the expected outcomes. It also enables us to start cataloguing the key ingredients of success, to inform replication and scale.

But this approach is more of a culture than a discipline. For us to truly benefit from this agile, responsive approach to evaluation, staff need to be allowed to make mistakes, have the confidence to make changes and share their learning. That way we can all work together to meet the growing demands in health and social care.

Sally Eason is associate director of digital transformation and service redesign at NHS Arden & GEM Commissioning Support Unit