Long Read

NHS league tables: why thoughtful design is essential

Exploring how league tables can better serve the government, healthcare leaders and, importantly, the public.
Joe Kiely, Izzy Allen

21 August 2025

In this long read, Joe Kiely from the NHS Confederation and Izzy Allen from NHS Providers set out a joint view on the upcoming publication of league tables for NHS trusts. While many trust and integrated care board leaders recognise the potential of league tables to drive improvement and enhance public transparency, they also emphasise the importance of careful design and implementation to avoid the risk of unintended consequences. Based on feedback from healthcare leaders, we offer recommendations to help league tables better serve the government, healthcare leaders and, importantly, the public.

Introduction

In November 2024, following a major boost to NHS funding in the Autumn Statement, the health secretary announced plans to reintroduce NHS league tables. The move is part of a broader reform agenda aimed at tackling underperformance and ensuring there are ‘no more rewards for failure’. The government’s 10 Year Health Plan also positions league tables as a tool to empower patients by promoting ‘transparency, voice and choice’. 

Starting in early autumn, quarterly rankings will be published for acute, mental health, community and ambulance trusts as part of a new public-facing dashboard. These rankings will be linked to NHS England’s (NHSE’s) 2025/26 Oversight Framework (NOF), which assesses trusts using a set of primarily short-term, operational metrics and a financial override. Together, these produce an organisational delivery score ranging from 1 (high performing) to 4 (low performing). This score determines each trust’s NOF segment and therefore its position in a league table.

The importance of getting league tables right

Healthcare leaders share the government’s ambition to improve performance and ensure that NHS funding delivers value for taxpayers. They also support a robust oversight regime – one that acknowledges excellence, sets clear consequences for underperformance and provides tailored support to those who need it. 

History suggests the government should proceed with care

League tables can be an impactful part of that, driving improvement by encouraging healthy competition, increasing local accountability and sharpening the focus of trust leaders on national priorities. However, history suggests the government should proceed with care. Many healthcare leaders recall that while the last Labour government’s NHS ‘star ratings’ contributed to a reduction in waiting times, they were eventually scrapped due to concerns about their effectiveness. 

If not carefully designed and implemented, performance measurement systems can lead to unintended consequences – especially if used principally to ‘name and shame’. Much depends on which metrics are used, the quality and consistency of the data, how the information is presented to the public and the nature of the regulatory response.

NHS leaders and staff recognise that the first year of implementation will inevitably bring teething issues. That recognition, however, will not stop them having understandable anxieties, particularly those who feel the data is not robust enough and fear being unfairly penalised or subjected to undue public criticism. It is therefore essential that the government continues to respond quickly and transparently to any issues that arise, so that trusts feel supported and retain trust in the process. NHSE’s open approach to feedback so far has been appreciated by trust leaders, as has the decision to delay publication while efforts are made to improve data quality and comparability.

We are committed to working with the government to give league tables the best chance of success. As part of this, we propose four tests the government should apply to its league tables before publishing them later this year.

Preparing for league table publication: our four tests

Based on engagement with trust and integrated care board (ICB) leaders, we propose four tests the government should apply to league tables before publishing them. For each test, we offer recommendations to help ensure league tables have the best chance of success.

1.    Do league tables provide an accurate and objective account of organisational performance?

  • NHS England should establish a clear feedback mechanism to help strengthen the accuracy and objectivity of the metrics and methodology underpinning league tables. We recommend three key actions:  
    • Refine the financial override mechanism to more reliably identify organisations experiencing genuine financial challenges and to take account of arrangements to improve wider system finances.
    • Avoid unreliable metrics where data quality is compromised due to systemic constraints and provide targeted support to help trusts strengthen their data infrastructure and capabilities.
    • Publish the technical guidance detailing the scoring methodology for each metric to ensure transparency and help trusts understand their ratings.

2.    Do league tables reflect the issues that matter most to the public?

  • NHS England’s review of the 2026/27 Oversight Framework should assess which metrics best support meaningful transparency and local accountability, ensuring there is greater emphasis on patient priorities in the future.

3.    Is it clear to the public what league tables do and do not show?

  • When the public-facing dashboard is launched, NHS England should provide a clear and accessible explanation of what the data does and does not show to help users understand its purpose and limitations.

4.    Has the risk of perverse incentives been mitigated?

  • NHS England should actively monitor the impact of league tables for unintended consequences, including effects on system collaboration, staff wellbeing, quality and safety, and recruitment to ‘challenged’ organisations.
  • To support public transparency and accountability, NHS England should publish its findings on a regular basis – for example, by providing an update to the NHSE board every six months.  

Do league tables provide an accurate and objective account of organisational performance?

By design, NHS league tables present highly complex information in a simplified and digestible format. The unenviable challenge lies in ensuring that this simplification – translating the performance of a diverse organisation into a single score and rank – remains both meaningful and accurate.

This year, rankings are based on the NHSE Oversight Framework. As a result, their credibility depends heavily on the strength of the framework’s methodology. Feedback from trust and ICB leaders on interim segments – which have been shared with individual organisations privately – indicates the approach does not yet achieve the right balance. We have heard multiple examples of organisations that are locally regarded as high performing being rated poorly, and the opposite for trusts known to be experiencing major challenges.

These discrepancies underscore the importance of the government adopting a culture of thoughtful scrutiny when it announces league tables. Politicians, the media and the public should be discouraged from making snap judgments, as these can have serious and lasting consequences for organisations, staff and individual leaders.

To increase the accuracy and objectivity of league tables, NHS England must establish a robust and transparent process for gathering feedback. This process should focus on understanding the gap between the experiences of leaders locally and the outputs of the Oversight Framework. Insights from it should inform both immediate adjustments to the metrics and methodology and the development of the 2026/27 framework. While teething issues are worked through, trusts, ICBs and NHSE regional teams should also feel confident that concerns about the factual accuracy of ratings can be escalated in a robust and consistent manner.

Through our engagement with trust leaders, we have identified three issues that risk undermining trust in the government’s league tables:

  1. The financial override is too blunt an instrument.
  2. The availability of performance data is poor, with variation between and within sectors.
  3. Parts of the scoring methodology lack clarity, leading to confusion and a lack of transparency.

Financial override

The 2025/26 Oversight Framework includes a financial override that prevents providers in deficit – or receiving deficit support – from being rated above segment 3. While trust and ICB leaders support the short-term focus on financial recovery, many believe that the current binary approach – in deficit or not – is too blunt. It risks affecting a disproportionately large number of organisations, while also failing to account for legitimate, often strategic, reasons that a trust may report a deficit.

Trusts with previously strong financial performance have been downgraded from top to bottom segments overnight

For instance, system-wide deficits are sometimes informally and opaquely redistributed across higher-performing providers. This typically involves agreements between trusts and their ICBs to set more ambitious financial targets, helping to balance the system’s overall financial position. These arrangements place greater financial risk on the trusts involved. 

We have heard multiple examples where the financial override has penalised trusts for taking this collaborative approach. Consequently, trusts with previously strong financial performance have been downgraded from top to bottom segments overnight.

To address this, NHSE should urgently refine the financial override metric. A more nuanced approach is needed – one that can distinguish between organisations genuinely struggling to recover financially and those making strategic, system-focused decisions. The override should reward, not penalise, those displaying the collaborative leadership that the NHS needs more of to successfully recover and reform. 

Data quality

The reliability of any performance rating also depends on the quality of the data behind it. Concerns have already been raised about the accuracy and consistency of the data used to inform league tables and NOF segmentation, with a risk of variation within and between sectors. 

We expect that some of these challenges are short term and can be resolved as trusts adjust to the new performance regime, with support from their regional teams where needed. A degree of short-term discomfort is both expected and healthy if it motivates trusts to strengthen their data collection, analysis and reporting. 

However, in some instances, the variation in data quality stems from systemic issues beyond the control of individual providers. Many mental health and community trusts, for instance, lack the infrastructure to produce high-quality performance data. This is a structural issue rooted in long-term underinvestment and the use of block contracts rather than payment by results. As a result, these trusts often lack data analytics capacity and rely on poorly designed data management systems.

To mitigate these issues, NHS England should avoid using metrics where data quality is known to be unreliable due to systemic constraints, while providing trusts with additional support to make the required improvements to their data infrastructure and capabilities. 

Clarity on the scoring methodology

There is confusion among some trust leaders about how metrics and segments have been determined, which risks unnecessarily undermining trust in the ratings system.

One key issue raised is the use of a sequential scoring methodology for metrics that lack clearly defined standards or benchmarks, i.e. where there is no agreed definition of what good looks like. This approach involves evenly distributing organisations across scores of 1 to 4 based on their position in a ranked list. When performance against a metric is closely clustered across trusts, this approach can cause small variations to result in disproportionately large differences in scores and segment placements. It also means that a provider whose performance remains unchanged may appear to improve if the performance of other providers declines.
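To make the clustering effect concrete, the sketch below applies a simplified rank-then-quartile rule of the kind described above to four hypothetical trusts whose performance differs by fractions of a percentage point. It is an illustration only – the trust names, figures and code are our own assumptions, not NHSE’s published methodology.

```python
# Illustrative only: a simplified sequential (rank-then-quartile) scoring rule.
# Trust names and figures are hypothetical; this is not NHSE's actual method.

def sequential_scores(values):
    """Rank organisations on a metric (higher is better), then split the
    ranked list evenly into scores 1 (best quarter) to 4 (worst quarter)."""
    ranked = sorted(values, key=values.get, reverse=True)
    n = len(ranked)
    return {trust: 1 + (position * 4) // n for position, trust in enumerate(ranked)}

# Performance is closely clustered: the gap between best and worst is 0.3 points...
metric = {"Trust A": 91.2, "Trust B": 91.1, "Trust C": 91.0, "Trust D": 90.9}
print(sequential_scores(metric))
# {'Trust A': 1, 'Trust B': 2, 'Trust C': 3, 'Trust D': 4}
# ...yet the trusts are spread across every score. Trust D's score would also
# improve to 3 if a fifth trust scored below 90.9, even though Trust D's own
# performance had not changed.
```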

There are also concerns about the application of improvement-based metrics, such as percentage increase measures. These take into account only year-on-year changes, overlooking organisations that have delivered steady, consistent progress over several years while disproportionately rewarding those that have recently improved from a lower baseline.
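As a hypothetical illustration of that effect, the sketch below compares a long-standing high performer with a trust recovering from a low baseline under a simple year-on-year percentage increase measure; all figures are invented and the calculation is ours, not a published NHSE metric.

```python
# Hypothetical worked example of a year-on-year percentage increase measure.
# Figures are invented for illustration.

def year_on_year_increase(previous_year, current_year):
    """Percentage change from last year's figure to this year's."""
    return (current_year - previous_year) / previous_year * 100

steady_performer = year_on_year_increase(92.0, 92.5)  # consistently high for years
recent_improver = year_on_year_increase(60.0, 72.0)   # recovering from a low baseline

print(f"Steady performer: {steady_performer:.1f}% increase")  # 0.5%
print(f"Recent improver:  {recent_improver:.1f}% increase")   # 20.0%
# On this measure alone, the recovering trust scores far better, even though
# the steady performer's absolute performance remains substantially higher.
```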

We welcome the commitment to introduce statistical process control (SPC), which will help demonstrate statistically significant improvements or deteriorations across metrics and offer valuable context. However, we would welcome greater clarity on how individual improvement metrics will account for longer-term trajectories. 
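For readers less familiar with SPC, the sketch below shows one common form of it – process limits on an individuals (XmR) chart – used to flag whether a new data point represents statistically significant change rather than routine variation. The data and code are illustrative assumptions on our part, not NHSE’s published approach.

```python
# Illustrative sketch of statistical process control (SPC) using an
# individuals (XmR) chart. Data are invented; not NHSE's published method.

def xmr_limits(series):
    """Return (mean, lower limit, upper limit) for an individuals chart,
    using the standard constant of 2.66 times the average moving range."""
    mean = sum(series) / len(series)
    moving_ranges = [abs(b - a) for a, b in zip(series, series[1:])]
    avg_moving_range = sum(moving_ranges) / len(moving_ranges)
    return mean, mean - 2.66 * avg_moving_range, mean + 2.66 * avg_moving_range

monthly_metric = [71.8, 72.4, 71.9, 72.6, 72.1, 72.3, 71.7, 72.0]  # % performance
mean, lower, upper = xmr_limits(monthly_metric)

latest = 74.9  # the newest monthly figure
if lower <= latest <= upper:
    print("Within expected (common cause) variation")
else:
    print("Statistically significant change (special cause variation)")
```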

To ensure greater transparency and to help trusts understand their ratings and rankings, we urge NHS England to publish the technical guidance that details the specific scoring regime for each metric. This should be accompanied by a mechanism for healthcare leaders to provide feedback as the framework is implemented.

Clearer guidance on how to measure each metric would also help reassure trust leaders who are concerned about inconsistencies in how different organisations interpret and report data. While some variation is inevitable, it is essential that organisations feel they are being assessed consistently to maintain confidence in the process.

Do league tables reflect the issues that matter most to the public?

We fully support the ambition in the 10 Year Health Plan to improve public transparency, voice and choice. League tables aim to support this by helping to present complex information in a clear and accessible way. 

By linking league tables to the Oversight Framework, the government has introduced a potential tension between regulatory and patient priorities

However, for league tables to truly serve the public, the metrics that underpin them must reflect what the public and patients care about most. By linking league tables to the Oversight Framework, the government has introduced a potential tension between regulatory and patient priorities. 

Take the financial override, for example. While financial management is an important part of organisational oversight, it is unlikely to be a key concern for patients choosing where to receive care. Instead, they want to know which hospital has the best health outcomes, patient experience and the shortest waits. This raises a concern: will people understand that a hospital ranked lower in the league table might still provide high-quality, safe care?

Similarly, the league tables will present data at a trust level, rather than the specific services or care pathways that patients engage with. Assessments by the Care Quality Commission show performance can vary substantially within hospitals. 

Research by the King’s Fund shows that, in practice, most patients choose their local provider, even when given the option to travel elsewhere. When patients do exercise choice, they typically rely on their GP’s advice or on past experience. Notably, those from the most deprived socioeconomic areas are least likely to exercise choice when it is available, which may contribute to widening health inequalities. Importantly, this research also found that increasing patient awareness of choice did little to focus providers on improvement because so few patients exercise it.

The limitations outlined make league tables, on their own, ill-suited to guide patient choice. Yet, as the government rightly encourages people to take a more active role in their healthcare and wellbeing, there is a risk the league tables will be used in exactly this way.

Ahead of 2026/27, the government is reviewing the framework to incorporate work to implement the ICB operating model and to align with priorities set out in the 10 Year Health Plan. As part of this process, we recommend assessing which metrics would most effectively promote meaningful transparency and local accountability, including by working closely with patient representative groups. If league tables remain tied to the Oversight Framework in 2026/27, the metrics selected for public-facing league tables should place greater emphasis on patient priorities, with analysis of how the updated list of metrics will contribute to public transparency.

Is it clear to the public what league tables do and do not show?

To minimise the risk that the public misinterprets league tables, it is essential that communications relating to league tables clearly explain what they are designed to show and, just as importantly, what they are not. Without proper context and careful framing, league tables risk presenting a partial picture and, at worst, a misleading one.

The government can help to frame the data but cannot control how league table data is presented or interpreted by others. That is another reason why it is crucial that the ratings themselves are accurate and based on what matters to the public. 

In the short term, there are steps the government can take to reduce the risks in 2025/26. When ratings are published, they should be accompanied by a clear and accessible explanation on NHS England’s website. This commentary should outline what the data does and does not show, helping users understand its purpose and limitations.

Without this context, patients may make poorly informed decisions, and politicians or the media may direct misguided scrutiny towards trust leaders. Both outcomes could further undermine the public’s trust in the NHS.  

Has the risk of perverse incentives been mitigated?

As part of its plans to introduce league tables, the government has also signalled its intention to link organisational performance to various rewards and penalties. While the goal is to further motivate improvement, if these measures are not carefully designed, they could have unintended consequences. 

NHS England and the Department of Health and Social Care should actively monitor the impact of league tables and any associated incentives for unintended consequences, including effects on system collaboration, quality and safety, recruitment to ‘challenged’ organisations and staff wellbeing and morale. To support public transparency and accountability, NHSE should publish its findings on a regular basis. It could do so, for example, by providing an update to the NHSE board every six months. 

Tunnel vision

League tables can lead to ‘measurement fixation’, whereby organisations focus narrowly on what is measured, neglecting other important but unmeasured aspects of care. This risk is heightened this year, as the Oversight Framework focuses on a smaller set of short-term, organisational-level metrics. While this should support financial and operational recovery in the short term, in the longer term healthcare leaders should also be incentivised to focus on delivering the three shifts set out in the 10 Year Health Plan. 

The balance between finances and quality must be front of mind as the government develops its reinvigorated foundation trust regime

The Francis Inquiry serves as a reminder of how performance pressure, if misapplied, can compromise quality and safety. In the Mid-Staffordshire case, the board’s pursuit of foundation trust status – which was contingent on eliminating its financial deficit – led to deep staffing cuts and years of unsafe care. The balance between finances and quality must be front of mind as the government develops its reinvigorated foundation trust regime, especially with the eventual prize being the ability to hold outcomes-focused contracts for a defined population as an integrated health organisation.

Closed cultures and reduced transparency

When pressure to meet performance targets creates a high-stress environment for staff, it risks altering behaviours or reporting to improve the appearance of performance without actually improving patient outcomes or experience. For instance, in response to a five-minute emergency waiting time target in the 1990s, some hospitals employed ‘hello nurses’ whose role was to greet patients within the first five minutes, simply to tick this box.

Another example we have heard is that using ‘distance from plan’ metrics rather than absolute performance measures can unintentionally encourage trusts to submit less ambitious plans. By setting lower targets, trusts may increase their chances of appearing successful, even if overall performance is more modest.

Undermining system collaboration

There is also a risk that a league table approach, which ranks organisations against one another, may discourage collaborative working across trusts. When performance is judged competitively, organisations may be less willing to take on stretch targets or additional responsibilities that carry higher risk, even if doing so benefits the wider system.

This risk has been described in relation to the financial override (see section 1), but we have also heard examples from the past 18 months where trusts have supported neighbouring organisations by ‘load balancing’ for issues such as elective care pressures – essentially taking on extra patients to help others meet targets. While this reflects strong system collaboration, it has negatively impacted the supporting trust’s league table position. This creates a risk that such collaborative behaviour will be disincentivised in the future, undermining efforts to work as a unified system for the greater good. 

Risk aversion

If performance-linked penalties – such as tying executive pay to performance – are too severe, they can make organisations more risk averse. This can discourage innovation and openness about challenges, undermining the culture of continuous learning and improvement that is essential for patient safety.

There is also a risk that these measures will deter top leadership talent from joining the most challenged organisations. If executives know their pay could be capped or reduced due to systemic issues beyond their control, they may avoid roles in the trusts that need the strongest leadership the most. This would be perverse, as these organisations require experienced and capable leaders to drive improvement.  

Conclusion

NHS league tables have the potential to drive improvement, increase transparency and strengthen accountability across the health service. However, their effectiveness will depend on thoughtful implementation, robust data and a nuanced communication of what the metrics do – and do not – show. 

With the right safeguards in place, league tables can become a valuable tool for public awareness and promoting local accountability

To ensure league tables are fair, meaningful and can be trusted, NHS England must address known risks, support providers to improve data quality and remain responsive to feedback. With the right safeguards in place, league tables can become a valuable tool for public awareness and promoting local accountability. Without them, they risk causing confusion and unintended harm.
