Guide

Demystifying clinical AI in mental health

Key considerations for adopting artificial intelligence in mental health services.

26 March 2026

Introduction

Rapid policy and technological developments in artificial intelligence (AI) have demonstrated the role these technologies can play across the NHS. Deployed well, and in the appropriate contexts, AI has the potential to help the healthcare system, including mental health providers, address a range of healthcare challenges: reducing clinical and administrative pressures on health services and freeing up staff time to focus on transformation work. With significant pressures on NHS mental health services, AI tools can also improve access to services, support patients before and between appointments and supplement clinician-delivered therapy.

As the usage of AI evolves, there is also an opportunity to use it to address complex system problems, support evidence-based pathway redesign and unlock opportunities across the care pathway. 

However, the use of AI in mental health care and across the NHS is not without risk. With the environment moving at pace, there is a need to build understanding of how AI can be deployed responsibly to achieve the biggest gains for patients and the workforce. This needs to be done while mitigating potential risks through robust governance, regulation, clear evidence of clinical effectiveness and a focus on patient safety and clinician wellbeing.

The complexity of some mental health conditions, the requirement for multi-disciplinary care delivery and the sensitivity of clinical conversations all demand a focus on the specificities and role of AI in mental health care. We also know that more people are turning to digital and AI tools to support their mental health, outside of their interactions with the health system. More than one in three adults (37 per cent) say they have used an AI chatbot to support their mental health or wellbeing, and 66 per cent of those are using general-purpose chatbots such as ChatGPT, Claude or Meta AI, rather than platforms specifically designed to provide mental health support.

There are concerns about the safety and effectiveness of AI in personal use, which have rightly increased scrutiny of how AI is being used in the health and care system. This highlights the need for an important distinction between different types of AI: AI tools built for mental health care in the NHS that are subject to standards and regulation; consumer products for wellbeing, often with less regulatory oversight; and AI tools not intended for mental health care but used by people with mental health challenges.

While this guide focuses on clinical and administrative AI tools used as part of NHS mental health care that are subject to standards and regulation, the use of AI tools not commissioned by the NHS to support mental health and wellbeing cannot be ignored. This guide complements other activity in this space, including a new AI and mental health commission launched by Mindo. Further information on the potential benefits and risks of AI use, including the use of online chatbots for therapy purposes, can be found on the Mental Health UK website.

“The future opportunity presented by clinical AI is that someone with a mental health condition can stay supported 24/7, rather than just when they have an appointment.”
Chief Information Officer, NHS Mental Health Trust 

10 Year Health Plan for England - AI commitments 

The 10 Year Health Plan sets out a number of commitments towards an ambitious goal for AI, including multiple uses to support and improve diagnosis, increase productivity and streamline clinical pathways:

  • In 2026, the Medicines and Healthcare products Regulatory Agency (MHRA) will publish a new regulatory framework for AI in healthcare. This is being informed by the work of the National Commission into the Regulation of AI in Healthcare.
  • From 2025-2028, investment in AI infrastructure will include the development and implementation of an NHS AI strategic roadmap, enabling clearer ethical and governance frameworks for AI and roll out of new AI upskilling programmes for the workforce.
  • In 2027, validated AI diagnostic tools will be rolled out, alongside NHS-wide deployment of AI administrative tools including AI scribes.
  • By 2035, AI will be seamlessly integrated into most clinical pathways, generative AI tools will be widely adopted, and the NHS will be a global leader in deploying AI ethically.

What is clinical AI?

For the purposes of this guide, clinical AI is defined as artificial intelligence specifically designed to support core clinical functions such as diagnostics, therapeutic delivery, risk monitoring and treatment planning. Clinical AI tools may also take the form of chatbots or tools to support ongoing management and risk monitoring. Clinical AI directly contributes to patient care and clinical outcomes, meeting the rigorous evidence and safety standards required for medical interventions. Clinical AI is distinguished by four key elements:

  • Evidence-based validation: Supported by peer-reviewed studies with substantial sample sizes conducted in real clinical settings with actual patients, not wellness applications or uncontrolled environments.
  • Patient-focused design: Built for direct use by patients receiving medical care within healthcare systems.
  • Clinical-grade standards: For class IIa and above (see the medical device classification under key consideration 1 below), adherence to stringent medical device regulations and quality management standards (such as ISO 13485, ISO 14971, IEC 62304, ISO 27001, and GDPR compliance).
  • Clinical workflow integration: Designed to enhance healthcare delivery and support clinical decision-making, not just streamline administrative processes like notetaking or scheduling.
Key terms related to AI and examples of their use:

Term: Agentic AI
Meaning: Agentic AI consists of AI agents (machine learning models) that can mimic human decision-making, solving problems in real time and working towards specific goals.
Example of use: Agentic AI tools have the potential to speed up diagnosis by prioritising urgent requests, automate administrative tasks and predict service demand.

Term: Deterministic AI
Meaning: This follows fixed rules or logic, like a hard decision engine. If the same data goes in, the same output always comes out. For example, 'If score X is above 15, assign pathway A.' It does not express confidence, uncertainty or alternative possibilities. It is essentially rule execution, and it can force people into a single outcome. This works well for automation, but not for complex clinical judgement.
Example of use: A referral management tool built on deterministic logic might automatically direct any patient with a PHQ-9 score above 15 to a specialist service, with no ability to account for whether that score reflects a long-term condition already being managed, or an acute episode requiring a different response. (A sketch contrasting this with a probabilistic approach follows this table.)

Term: Foundation model
Meaning: A machine learning model trained on a vast amount of data so that it can be easily adapted for a wide range of applications.
Example of use: A common type of foundation model is the large language model (LLM), which powers chatbot tools.

Term: Generative AI
Meaning: A type of AI that can create new content, like text, images, music or code, by learning patterns from existing data. The most common form of generative AI for creating text is the large language model (LLM). LLMs are advanced AI systems trained on vast text data to understand, generate and interact with human language.
Example of use: Generative AI is used to simulate patient data, develop virtual models for training and generate synthetic biological data for research. It also powers tools like ambient voice technology (AVT), used within healthcare to synthesise information and generate summaries. Within mental health this can be beneficial for lengthy assessments, which take longer to document.

Term: Probabilistic AI
Meaning: An approach to AI that uses probability theory to model uncertainty and learn from data. Probabilistic AI models do not just generate predictions; they also estimate how confident those predictions are by calculating the probabilities of different possible outcomes. These models evaluate multiple possible explanations for data, weigh them according to likelihood and update their beliefs as new information becomes available. This enables more reliable decision-making in complex, uncertain or changing environments.
Example of use: An AI-enabled self-referral and triage assistant, such as Limbic Access, uses probabilistic AI to determine the correct Anxiety Disorder Specific Measures (ADSMs) questionnaire to administer at the point of self-referral. There are six ADSMs to administer according to the NHS Talking Therapies guidance, but a probabilistic AI can help determine which is most appropriate based on a service user's responses about their goal for therapy and key Minimum Dataset outcome measure questionnaire scores.
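To make the distinction between these two approaches concrete, here is a minimal sketch in Python contrasting a deterministic rule with a probabilistic alternative for the PHQ-9 triage example above. The threshold, pathway names and weightings are invented for illustration and do not reflect any specific NHS tool or validated clinical model.

```python
# Minimal sketch contrasting deterministic and probabilistic triage for the
# PHQ-9 example above. Thresholds, pathway names and weightings are
# illustrative assumptions, not a validated clinical model.

def deterministic_triage(phq9_score: int) -> str:
    """Fixed rule: the same input always yields the same single outcome,
    with no expression of confidence or alternatives."""
    return "pathway_A_specialist" if phq9_score > 15 else "pathway_B_standard"

def probabilistic_triage(phq9_score: int, long_term_condition: bool) -> dict:
    """Return a probability for each pathway rather than one fixed answer,
    so context (e.g. a managed long-term condition) shifts the weighting
    and a human reviewer can see how confident the suggestion is."""
    base = (phq9_score - 15) / 10   # toy score-to-weight mapping
    if long_term_condition:
        base -= 0.5                 # a managed condition makes escalation less likely
    p_specialist = max(0.0, min(1.0, 0.5 + base))
    return {"pathway_A_specialist": round(p_specialist, 2),
            "pathway_B_standard": round(1.0 - p_specialist, 2)}

print(deterministic_triage(17))                           # always 'pathway_A_specialist'
print(probabilistic_triage(17, long_term_condition=True)) # weighted options for review
```

The deterministic rule forces a single outcome, while the probabilistic version surfaces weighted options that a clinician can review in context.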

How are mental health providers currently using AI tools?

Use of AI is evolving rapidly as the technology develops and different uses are piloted. Through engagement with NHS Confederation members and a review of existing literature, the following use cases have been identified that may be in use across mental health care. Any deployment of AI tools for any purpose set out below should be subject to regulatory, governance and safeguarding processes.

Clinical use cases

Clinical notation and information management
• Information summarisation
• In-session note taking using Ambient Voice Technology (AVT)
• Post-contact/appointment notation and follow-up letter generation

Triage
• Chatbot to support self-referral to services, e.g. talking therapies
• Summarisation of triage information
• Prioritisation and stratification of referrals
• Signposting and referral eligibility

Monitoring and detection
• Text analysis
• Voice analysis
• Video analysis
• Using generative AI to enhance capabilities for early detection and monitoring
• Contactless patient monitoring

Clinical decision support
• Risk stratification
• Care and treatment planning
• Predictive analytics/diagnostic prediction (such as administering the correct Anxiety Disorder Specific Measures (ADSMs))
• Predicting service user response to interventions, including medications

AI-enabled patient support tools
• Responsive self-management support
• Ongoing patient support
• AI-powered virtual therapy

Non-clinical use cases

Care logistics
• Appointment management
• Rota development
• Activity scheduling
• Demand management

Corporate functions
• Finance
• HR processes
• Bringing together corporate services into one single interface

Population health analysis
• Trend analysis to support strategic commissioning

Key considerations for AI deployment

A key consideration when selecting an AI tool is whether it is the appropriate solution to the problem being faced by an organisation. Starting with a challenge is crucial to ensuring the selection of a solution that solves that issue and aligns to your organisation's strategic priorities, rather than retrofitting a pre-chosen solution to organisational need. This can also ensure that there is greater organisational support for new AI tools, which can support long-term implementation.

Where AI has been identified as an appropriate solution to the problem your organisation is facing, there are a number of important considerations and questions to ask during early discussions to minimise barriers to AI adoption in mental health. 

Our summary table below lists key questions to consider both internally and with suppliers, when exploring the use of AI in mental health care. You can also download the table as a PDF.

These considerations are covered in detail throughout the guide, providing information on how the consideration can be addressed, how other organisations have approached this and relevant information on national policy. 

Key considerations in summary

Key considerations and questions for suppliers and your organisation when selecting an AI tool.

Key consideration: What level of regulatory approval has been undertaken and what evidence exists on clinical safety?
Questions to ask suppliers:
• Is the product regulated as a medical device?
• What level of regulatory approval does the product have?
• Does the product have UKCA or CE accreditation?
• What uses does the AI have regulatory approval and certifications for? Are there any exclusions or limitations to be aware of?
• What evidence is there supporting the safety and effectiveness of the AI? Does evidence exist in real-world settings?
Organisational considerations:
• Have regulatory frameworks changed and, if so, do they impact your adoption of a new product?
• Do you have access to evidence on documented impact and safety assessments?
• How will post-market surveillance be managed?
• Are incident reporting processes in place?

Key consideration: What governance and board assurance processes do you have in place?
Questions to ask suppliers:
• What is the AI solution designed to do and who is it designed for?
Organisational considerations:
• Does the intended use of the AI solve issues you are currently facing?
• Do you have internal governance processes and structures in place? How do they feed into board decision-making?
• Do you have existing policies on AI, information governance and security, data protection and ethics? Do these need updating?
• How will the effectiveness and safety of an AI tool be monitored once it has been adopted?

Key consideration: How can you demonstrate cost-effectiveness and return on investment (ROI) of AI tools?
Questions to ask suppliers:
• What evidence does the supplier hold on the cost-effectiveness of the AI tool?
Organisational considerations:
• Does the intended use of the AI solve issues you are currently facing?
• Does the tool align to organisational strategy and procurement standards, and is it a logical next step?
• What skills, capabilities and infrastructure are needed that are not already in place? How will these be funded?
• How confident are you in the reliability of datasets?
• What data is needed to prove ROI?
• How will you demonstrate return on investment for the AI tool?
• Are there learnings from pilot projects that can inform business case development for new AI tools?
• How do the results compare to effectiveness without the use of AI? Is the AI adding value?

Key consideration: Can AI tools be integrated within clinical pathways and workflows?
Questions to ask suppliers:
• What support is available from the supplier to integrate tools into existing systems and pathways?
Organisational considerations:
• How will AI tools be integrated into existing clinical workflows?
• What level of effort will be required to integrate the solution into existing pathways and processes?
• Taking into account the differences between organisations and the populations they serve, are there any learnings on roll-out and implementation that can be shared between organisations?

Key consideration: Is the data and digital infrastructure in place?
Organisational considerations:
• Will changes be required in IT and digital systems and data management?
• Have you assessed your level of data readiness and interoperability to support AI rollout?
• Which cybersecurity measures need to be in place?
• Has a data protection impact assessment (DPIA) been completed?
• What implementation support within your infrastructure is required?

Key consideration: How will the workforce be supported and trained on new tools?
Questions to ask suppliers:
• What training support is available from the supplier?
• What implementation support is provided by the supplier?
Organisational considerations:
• Which staff will have access to AI tools?
• What skills and capabilities are needed to support the adoption of new tools?
• What training opportunities will be open to staff?
• What information will be shared with staff on the use of AI tools and the organisational position?
• How can information on the benefits of AI tools and learnings from any pilots be shared amongst staff?
• How will any training initiatives align with wider organisational work on digital maturity?
• Are there sufficient 'human in the loop' opportunities, and are staff clear on the level of checking of AI tools required?
• How will you ensure that workforce planning reflects the evolving and changing roles of staff impacted by AI?

Key consideration: How is lived and living experience being factored into development and testing of new tools? What steps will be taken to build patient understanding and compliance?
Questions to ask suppliers:
• How do you involve people with lived and living experience in the development of AI tools?
Organisational considerations:
• What processes do you have in place to gather patient and carer feedback and insights on AI tools?
• Is the lived and living experience voice represented in any of your AI or digital groups?
• What mechanisms are in place to ensure clear and transparent communication, and the provision of information for patients and service users, about AI?

Key consideration: How can we ensure AI is ethical and doesn't increase health inequalities?
Questions to ask suppliers:
• Does the supplier uphold industry environmental standards?
• How will post-market surveillance be managed?
Organisational considerations:
• How can you ensure that real-world evidence is reviewed to consider any impact on health inequalities?
• How are equity and ethical considerations factored in from the start and monitored throughout, so consideration can be given to which groups of the population will benefit and how any widening of health inequalities can be mitigated?
• How are biases being identified when considering a proof-of-concept AI project?
• How are you considering the environmental impact of the AI tool, and taking steps to mitigate this if implemented?

Key considerations in detail

  1. What level of regulatory approval has been undertaken and what evidence exists on clinical safety?
  2. What governance and board assurance processes do you have in place?
  3. How can you demonstrate cost-effectiveness and return on investment of AI tools?
  4. Can AI tools be integrated within clinical pathways and workflows?
  5. Is the data and digital infrastructure in place or are steps needed to build digital maturity?
  6. How will the workforce be supported and trained on new tools?
  7. How is lived experience being factored into the development and testing of new tools? What steps will be taken to build patient understanding and compliance?
  8. How can we ensure AI doesn’t increase health inequalities?

1. What level of regulatory approval has been undertaken and what evidence exists on clinical safety?

There is a pressing need to ensure AI is implemented responsibly: with robust governance; clear evidence of clinical effectiveness; adherence to stringent medical device regulations and quality management and information standards; compliance with the General Data Protection Regulation (GDPR) and NHS England clinical safety standards; and completion of data protection impact assessments.

There are a number of standards, regulations and pieces of guidance that apply to digital mental health tools, including AI tools, supplied to the NHS, from the Medicines and Healthcare products Regulatory Agency (MHRA), NICE and NHS England.

However, the proliferation of AI-enabled tools and the speed of their development mean that regulatory frameworks cannot keep pace. This can slow down adoption where the appropriate regulatory and governance safeguards are not in place, and create duplicative internal governance processes.

Mental health leaders have called for clarity from the MHRA on which products have received regulatory approval and for which uses. This would reduce confusion when considering the adoption of new tools and could also streamline governance approaches within organisations.

Work is currently underway by the MHRA and the National Commission into the Regulation of AI in Healthcare to develop a new national regulatory framework for AI in healthcare, which will be published in 2026. The new framework will apply to AI tools classified as medical devices. NHS leaders must therefore stay alert to changes and updates to regulations governing the use of AI.

The MHRA runs the Yellow Card Scheme, which collects and monitors information on suspected safety concerns involving healthcare products. If you have any safety concerns with medical devices these can be reported on the Yellow Card reporting site. 

  • Products are medical devices when they meet the definition of a medical device as set out in the UK Medical Devices Regulations 2002 (SI 2002 No. 618, as amended) (UKMDR). AI and other forms of software that have a medical purpose will fall within this definition and qualify as a medical device.

    Products classified as a medical device need to meet the requirements of this regulation, which include (but are not limited to):

    • a risk management system to ensure risks are reduced as far as possible (AFAP), as described in ISO 14971
    • clinical evidence to show benefits outweigh risks
    • a post-market surveillance system to ensure the device continues to work as intended.

    When accessing and using patient data, all parties need to ensure this is used in a secure way that does not compromise patient safety. Further information on data privacy and security legislation can be found on the Gov.uk website.

  • Medical devices are classified based on risk and intended use and are generally categorised into four main classes (Class I, IIa, IIb and III), with Class I being the lowest risk and Class III the highest, according to the UK MDR 2002.

    • Class I: Requires manufacturers to self-declare conformity with the essential requirements of the UK MDR 2002 and registration on the MHRA portal
    • Classes IIa, IIb and III: Require a conformity assessment completed by a UK Approved Body and registration on the MHRA portal

    Conformity assessment requires manufacturers to demonstrate compliance with the Essential Requirements set out in the UK Medical Devices Regulations (MDR) 2002 and must be completed by a UKAS-accredited UK Approved Body.

    A UKCA (or UKNI) or CE mark shows that a digital mental health device is compliant with relevant medical device regulations. MHRA registration shows the device can be placed on the GB market. This register is open to public view via the Public Access Registration Database.

    Read further information on the regulation and evaluation of digital mental health technologies from the MHRA.

2. What governance and board assurance processes do you have in place?

The growing complexity of AI tools – particularly their risk and safety profiles – as well as concerns around accountability and liability, can pose significant challenges for NHS organisations. Many are required to rapidly assess products through governance, data protection and digital clinical safety processes, while also developing new governance and organisational approaches to support safe and effective AI adoption. AI can also attract more scrutiny from provider boards than other digital areas, because of concerns over patient safety, data protection and the behaviour of AI tools, which can require additional layers of governance within organisations.

With varying levels of digital leadership on NHS trust and foundation trust boards, clear governance processes can support organisations considering the adoption of AI. These processes should involve teams from across an organisation, spanning areas including data and IT infrastructure, information governance, implementation and transformation, cyber security, clinical leadership, finance and safeguarding. Leaders have also highlighted the benefit of transparent AI policies and processes that are referenced across all other organisational policies, so that everyone in the organisation is clear on its position and AI processes.

This practical toolkit from NHS Providers is designed to support senior leaders in adoption, implementation and governance of AI solutions. 

  • In Central and North West London NHS Foundation Trust, a trust-level design authority has been set up to oversee and encourage the effective use of AI within the trust. The group meets bi-monthly and has representation from senior clinical and operational leaders from all the divisions and departments within the trust. 

3. How can you demonstrate cost-effectiveness and return on investment of AI tools?

The revenue and capital cost of AI tools is seen by mental health leaders as one of the biggest barriers to testing, adopting and scaling tools, particularly amid increasing financial challenges. In many cases, savings from AI tools are time savings rather than cash-releasing, making it hard to build a financial case up front.

Before the rollout of AI tools, leaders should consider current service demand, inputs and costs. Understanding this before reviewing the cost and productivity benefits of AI is crucial to making a clear case and establishing where the most value can be realised from AI tools.

Additionally, the upfront costs of licences and infrastructure that supports system interoperability can be high. Leaders have highlighted challenges in expanding the number of licences for AI tools, with a shortage of licences limiting the potential of AI tools and the number of people within an organisation who can access the technology. 

Selecting the right supplier is also a critical factor to success. Leaders should look beyond the product itself and consider whether a supplier can act as a genuine implementation partner that offers support with change management, integration and ongoing evaluation. A collaborative supplier relationship can make it significantly easier to gather the evidence needed to build an internal case for wider roll out.

Leaders have highlighted the benefits of testing and learning from new tools through pilot programmes before system-wide roll out. This can lead to more successful roll out and adoption in the long term, and provides an opportunity to consider any barriers to adoption and scaling. This approach can provide the data to show how AI tools support productivity and patient and staff satisfaction, which can build a case for further roll out, and can provide a controlled environment to test and govern AI projects.

Before beginning any pilot or procurement process, leaders should establish clear success criteria and metrics, defining upfront what 'good' looks like and what data will be needed to demonstrate ROI. Pilots should be designed with data collection in mind from the outset, ensuring the right metrics are captured to evidence impact against the success criteria agreed at the start. Gradual roll out can also ensure organisations don’t spread resources too thinly, allowing time to embed any new tools. 

  • In one trust, the finance director and chief clinical officer are both board sponsors for AI. This enables both a financial and technical view when considering the adoption of AI, and early financial involvement in conversations on AI roll out and testing. 

4. Can AI tools be integrated within clinical pathways and workflows?

To maximise the benefits of AI, it should enhance and extend existing clinical pathways, workflows and digital infrastructure, rather than introducing disconnected or duplicative processes. AI tools should align with the systems and workflows that staff already rely on day to day, including Electronic Health Records (EHRs), care coordination platforms and clinician-facing tools. The goal is to ensure that AI supports decision-making within the natural flow of care, rather than requiring staff to move between disconnected systems. In some cases, this may involve direct integration with existing infrastructure. In others, AI may introduce new, complementary steps within the workflow, surfacing clinically relevant insights in real time that clinicians can easily review and incorporate into the patient record. Both approaches can be effective when designed to fit seamlessly into clinical practice.

Interoperability, the ability of systems and platforms to exchange and make use of information, remains an important consideration. However, achieving impact does not always depend on deep technical integration. Solutions that are designed to complement and enhance existing workflows can deliver meaningful value without adding unnecessary complexity. Leaders should prioritise AI tools built to internationally recognised standards that can fit within existing infrastructure. It is essential to work closely with digital and IT teams early in the procurement process to assess compatibility.

Alongside this, staff need to understand how the tool fits into and enhances their existing practice and supports improved patient outcomes, and be able to explain the technology to patients to ensure patient consent and confidence. 

Ensuring alignment with clinical pathways can reduce the risk of a negative impact on workflows, whether through poor interoperability or by lengthening existing tasks. When implemented well, this approach can deliver benefits across the entire care pathway, from reducing administrative burden and duplication of data entry, to enhancing clinical decision-making, improving consistency of care, and enabling more personalised, responsive support for patients. Empowering staff with the right tools and training will help build confidence, improve adoption and unlock the full value of digital transformation.

  • Bradford has transformed its talking therapies services by introducing AI-powered tools that increase access, personalise care and optimise clinical capacity. The trust’s talking therapies service implemented Limbic Access, an AI-powered clinical assessment tool (chatbot) and Limbic Care, a clinical AI companion app, which streamlined triage processes, reduced wait times and improved engagement. 

    What was the challenge the organisation needed to address?

    Bradford District and Craven talking therapies wanted to increase the number of referrals to their services and improve access for groups traditionally underrepresented in NHS Talking Therapies services, including people from minority ethnic backgrounds, the LGBTQIA+ community and older adults. They already had an existing self-referral form on their website, but it led to numerous dropouts and incomplete submissions.

    Before implementing AI-powered clinical assessment tools, the service received approximately 6,900 referrals over six months. 

    Bradford also wanted to further integrate clinical AI tools into their workflows to improve patient engagement in one-to-one sessions and groups. However, there was staff scepticism about clinical AI adoption and concern about loss of control of therapeutic sessions and whether the technology would help or complicate established practice.

    “I started off quite hesitant …but actually over time it really does benefit the sessions… it guides the patient a lot easier and just gives that extra guidance outside of the sessions as well”. Psychological Wellbeing Practitioner, Bradford and Craven District Talking Therapies.

    How did they address it?

    Bradford District and Craven NHS Talking Therapies introduced two new AI tools. 

    Limbic Access made it easier for people to seek help through the right services. Bradford sought to reduce dropout rates and reach groups such as people from minority ethnic backgrounds, the LGBTQIA+ community and older adults.

    Limbic Care was also introduced to deliver personalised therapeutic materials to patients. The tool acts as a personal clinical assistant engaging patients between therapy sessions and on waiting lists and complements therapy sessions, by helping people better understand their mental health difficulties, access resources in their own time and complete therapeutic exercises between appointments. 

    What successes are they seeing so far?

    Implementing Limbic Access has increased referral rates and saved clinician time:

    • Since launch in 2023, more than 16,700 referrals have been processed. This includes rises in key demographics including people from minority ethnic backgrounds and LGBTQIA+ referrals.
    • 39 per cent of these were made out of office hours, supporting the working unwell
    • 88 per cent of referrals were signposted to appropriate support, with early crisis detection
    • There has been a 90 per cent conversion rate from referral to assessment
    • Approximately 1,000 clinical hours saved
    • 94 per cent of patients reported positive feedback

    Implementing Limbic Care has shown measurable improvements across key areas: 

    • Administrative efficiency with the support of the clinician dashboard, which provides AI-generated summaries of activities the patient has completed, releasing clinical admin time to allow for more targeted sessions.
    • Enhanced therapeutic alliance with full visibility of a client’s homework completion and risk events. This allows practitioners to identify specific implementation barriers and tailor techniques to individual clients.
    • Patient engagement beyond sessions with an AI companion exceeded expectations. Many patients report supportive and stigma-free conversations with the app, which means patients are supported 24/7. 77 per cent of clients return after 7 days and 46 per cent remain active by day 30. The tool enables long-term support, with access for 12 months post-discharge.
    • 86.6 per cent of patient users report positive satisfaction with the tool.
    • 70 per cent of clients using the app attend multiple sessions, compared to 55.7 per cent of those not using it.

    Learn more about Bradford District and Craven talking therapies' experience with AI tools.

  • Rotherham, Doncaster and South Humber NHS Foundation Trust operates services in more than 100 locations across Rotherham, Doncaster and North Lincolnshire. The trust employs over 3,700 staff. 

    What was the challenge the organisation needed to address?

    The trust identified that clinicians spend 40-60 per cent of their clinical time on administration. 

    How did they address it?

    The trust tested AVT products within existing clinical pathways to summarise consultations in community mental health, neurodiversity services and inpatient mental health. The products went through a governance process supported by data protection impact assessments, regulation and digital clinical safety. An evaluation plan was also developed to enable real-world and controlled testing.

    What successes are they seeing so far?

    Early results from the pilot showed that:

    • 82 per cent of colleagues feel they benefited from AVT
    • 76 per cent of colleagues reported that using AVT saved them time
    • participating clinicians strongly agreed that they would continue to use AVT for clinical notes
    • between 89 per cent and 96 per cent of patients gave consent for AVT use in their consultations (dependent on teams), though some patients declined use, citing concerns about the environmental impact of AI
    • clinicians and patients benefit from the greater presence of the clinician at the consultation.

    The pilot also identified some barriers to AVT roll out, including infrastructure issues related to internet connection, sound recording and laptop batteries, and issues related to the tool's understanding of accents. In some cases, the technology was seen to interrupt the therapeutic relationship, and some questions were raised over the accuracy of the output, with 61-80 per cent of pilot participants reporting the output was correct without editing. The pilot highlighted the need for awareness of automation bias, and the need for clinicians to take full responsibility for checking what is documented.

    Key learnings for deployment from the pilot included the need for close working with an engaged supplier, the benefit of clinical champions to support adoption, and the need for gradual roll out to ensure supplier and organisational colleagues don’t spread their resources too thinly and any barriers to adoption can be addressed.

  • Living Well Consortium (LWC) comprises more than 30 mental health care organisations, charities and social enterprises in the UK. They are the leading provider of NHS Talking Therapies in Birmingham and Solihull. Given the breadth of their services, by 2023 Living Well Consortium was processing more than 11,000 referrals a year: a 400 per cent increase over 2019. 

    What was the challenge the organisation needed to address? 

    Living Well Consortium needed to increase capacity without overburdening their workforce. In reviewing their intake pathways, they found four key focus areas:

    1. Increase capacity for demand: increasing patient volume was resulting in longer and longer waiting lists, while clinical staff reported feeling rushed during assessments.
    2. Optimise data collection: LWC wanted to improve the quality and scope of data collection, since only 35 per cent of new patients on average responded to key demographic questions at intake.
    3. Assess risk at self-referral: administrative staff did not have clinical training and were therefore unable to carry out risk assessments at the point of self-referral. As a result, there was a chance of at-risk patients sitting on waiting lists.
    4. Improve client experience: clients were frustrated with the long wait times to self-refer via the telephone, and the inconvenience of calling during work hours. LWC aimed to make the self-referral process easy and accessible, making clients feel supported from the start.

    How did they address it?

    To help manage growing demand alongside existing phone and professional referral systems, LWC introduced an AI-powered clinical assessment assistant (chatbot) in October 2023. 

    What successes are they seeing so far?

    • Reduced clinician workload: In their first year of activation, the clinical assessment assistant processed 42 per cent of Living Well Consortium’s self-referrals. This enabled LWC to increase capacity while reducing demand on frontline staff. Prior to adopting the AI tool, LWC couldn’t offer self-referrals outside of working hours. Now, 40 per cent of referrals happen after hours, indicating that LWC is lowering barriers to access.
    • Richer patient data: Where only 35 per cent of new patients responded to key demographic questions prior to the roll out of the clinical assessment tool, that figure now stands at 98 per cent. Clinicians have a more complete picture of each individual before their first session. And LWC now better understands the communities it serves.
    • Faster and more appropriate care: LWC developed a tailored screening and signposting process that flagged 12 per cent of incoming patients for risk, prioritising them in the system, while also directing ineligible patients to suitable services much earlier.
    • Improved patient experience: shorter waiting lists, earlier prioritisation of at-risk patients, better data capture, faster and more personal clinical assessments, and after-hours pathways into care. In the first six months, LWC saw a 14 per cent decrease in patient dropouts.

    Find out more about the work at Living Well Consortium.

  • A multi-site evaluation explored the use of contactless patient monitoring in reducing patient harm across 29 inpatient mental health wards in six NHS mental health trusts. Contactless patient monitoring employs computer vision, interpreting visual information with advanced algorithms to provide proactive information to staff so they can intervene earlier to prevent harm occurring. Results varied by ward, but most showed decreases in event rate following adoption of contactless monitoring, with meta-analyses indicating notable reductions, ranging from -21.1 per cent to -28.9 per cent, across all five outcome measures (events included self-harm on acute wards, falls on older adult wards, and assaults and restraints in PICUs).

    What is contactless patient monitoring? 

    Contactless patient monitoring supports clinicians to deliver better, safer inpatient care. Advanced signal processing algorithms can derive medical-grade vital signs and sleep insights using the same underlying techniques, photoplethysmography and actigraphy, as contact monitors. Machine learning and computer vision predictive AI techniques can also be used to translate patient situational information into proactive safety alerts and ambulatory trends. For example, identifying when a patient is leaving bed to help head off falls, or when a patient has spent an extended period of time in a higher risk area.
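    As an illustration of how such situational information might be converted into proactive alerts, here is a minimal hedged sketch. The event names, confidence thresholds and persistence rules are invented for the example; real products use validated computer-vision models and clinically agreed alerting rules.

```python
# Illustrative sketch of turning situational observations from contactless
# monitoring into proactive safety alerts, loosely following the bed-exit
# example above. Event names and thresholds are invented assumptions; real
# systems rely on validated computer-vision models and clinically agreed rules.
from dataclasses import dataclass

@dataclass
class Observation:
    patient_id: str
    event: str          # e.g. "leaving_bed" or "in_higher_risk_area"
    confidence: float   # model confidence in the detection, 0.0-1.0
    duration_min: int   # minutes the detected state has persisted

def should_alert(obs: Observation) -> bool:
    """Alert only when the model is confident and, for area-based events,
    the situation has persisted, prompting early intervention without
    flooding staff with false alarms."""
    if obs.event == "leaving_bed":
        return obs.confidence >= 0.8
    if obs.event == "in_higher_risk_area":
        return obs.confidence >= 0.8 and obs.duration_min >= 10
    return False

observations = [
    Observation("ward3-bed12", "leaving_bed", 0.92, 0),
    Observation("ward3-bed14", "in_higher_risk_area", 0.85, 15),
]
for obs in observations:
    if should_alert(obs):
        print(f"ALERT: {obs.patient_id} - {obs.event} (confidence {obs.confidence:.0%})")
```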

5. Is the data and digital infrastructure in place or are steps needed to build digital maturity?

The use of AI in the NHS is limited by the digital infrastructure in place in many organisations. Between 10 and 50 per cent of NHS technology systems need to be modernised, and some IT devices are unable to support AI. In mental health care, digital and data infrastructure is less developed than in other parts of the healthcare system, as a result of historic underinvestment and a limited focus on the use of data for payment and performance oversight.

In order to fully reap the benefits of digital breakthroughs, including AI, staff working in mental health services need the essential equipment and infrastructure, such as computers, laptops, mobile devices and internet access, that can run them at an appropriate capacity. Without this basic infrastructure, NHS organisations will struggle to maximise the benefits from technology. Digital maturity and a culture of digital transformation need to be in place to support the adoption and roll out of AI tools.

As set out under key consideration four, data infrastructure and interoperability is also a barrier to AI adoption. The extent to which AI can learn from health data is limited by the extent to which data is available across different systems. Effective infrastructure will benefit from access to national data infrastructure including Electronic Health Records (EHRs). NHS leaders have highlighted that in many cases there are multiple systems that are not communicating. Without interoperability, staff are unlikely to want to adopt new tools as they will need to use more systems. 

A further digital consideration is cybersecurity, which is vital to digital transformation in the NHS. An increasing focus on digital infrastructure, including AI, means that protecting sensitive health data and ensuring operational resilience are paramount, as is ensuring patients and staff are confident that data is secure and being handled ethically.

The recent proliferation of new AI tools has increased the cybersecurity risk facing the NHS, and earlier this year NHS England had to issue a warning against using non-compliant ambient voice technology (AVT) as organisations rushed to make use of non-regulated tools. Since then, a new self-certified registry for AVT has been announced to manage demand for AVT tools and support NHS leaders to navigate an evolving marketplace. NHS leaders should take a proactive and strategic approach, embedding cyber resilience into core AI governance processes. Cybersecurity should be regularly reviewed at board level, with clear executive accountability. 

South London and Maudsley has taken an organisation-wide approach to create a digital ecosystem, which includes the adoption of AI tools and focuses on data and workforce readiness. 

In one trust, an AVT pilot highlighted that mobile devices work better than laptops for colleagues working in the community, and are less obtrusive to the clinical interaction. But there is a need to ensure that any AI technology is compatible and that NHS staff are equipped with the technology needed to run it.

  • South London and Maudsley NHS Foundation Trust (SLaM) provides the widest range of NHS mental health services in the UK. The organisation serves a local population of 1.3 million people in South London, as well as specialist services for children and adults across the UK and beyond. The organisation employs over 7,000 people.

    SLaM is taking an organisation-wide approach to AI, exploring and testing the potential of AI for both organisational processes and clinical care. The initiative combines a focus on creating a connected digital ecosystem, to ensure organisational readiness for adoption, with the introduction of intelligent tools like Microsoft Copilot agents, automation and the exploration of innovative technologies such as ambient voice technology. The aim of this approach is to make everyday tasks easier, streamline processes and free up time for what matters most: delivering excellent services and outcomes for those who use the services.

    A number of use cases are being tested:

    To streamline internal processes, SLaM is developing an organisation-wide digital ecosystem: a single, seamless user interface that brings together internal systems and gives staff streamlined access to internal processes such as finance, HR, estates and facilities, and learning and development. This will also include AI agents and automation solutions to support use of the system and reduce pressure on staff.

    Work is also underway at SLaM to explore the role of agentic AI and pilot this ahead of a further roll out. This includes a pilot, in collaboration with Microsoft, focusing on HR and finance functions to build, test and deploy agents for tasks such as job evaluation, report writing, finance and risk assessments in live environments. Early work is also underway to consider the role of AVT within community, CAMHS and inpatient settings.

    SLaM has also been using Microsoft 365 Copilot for over two years, as an early adopter and as part of a national Microsoft pilot, to automate routine tasks such as summarising emails and meetings and sense-checking documents, while ensuring that a human always remains in the loop and responsible for outputs. In December, the trust launched its AI and automation policy, setting out its ethical approach to AI adoption. In March, SLaM launched an organisation-wide training programme on using Microsoft Copilot chat, delivered jointly by Microsoft, communications and digital services. Over 1,000 colleagues took part in one of 28 training sessions across foundational, advanced and drop-in formats, and staff feedback showed strong interest in practical applications of AI tools and enthusiasm for continued support. Alongside online delivery, the communications and Microsoft team engaged 116 ward-based staff across all four hospital sites, helping to reach colleagues who may not regularly access email-based communications. Ten training videos have been published on the intranet, with further content being reviewed for release.

    Overcoming barriers to adoption 

    To ensure responsible deployment of tools and alignment with patient and staff needs, SLaM is focused on creating an organisation-wide digital ecosystem through data readiness to ensure interoperability between data systems, staff training, governance and assurance processes and patient engagement. This approach includes: 

    • Building staff confidence: To manage varying levels of staff confidence around AI, SLaM recently held an organisation-wide digital takeover week focused on AI training and development, supporting staff to use Copilot chat.
    • A robust approach to governance: To ensure a robust approach to governance, SLaM has set up an AI governance group. This brings together colleagues from across the organisation, including clinical and non-clinical roles. The AI governance group feeds into the digital board, which reports into the finance, performance and investment committee and then on to the trust board.
    • Data readiness and interoperability have been identified as key enablers of effective AI roll out. The organisation is undertaking a programme of work on data readiness, to address existing challenges with unstructured data and to ensure interoperability between systems and data. This programme of work will allow other use cases to be explored. 

6. How will the workforce be supported and trained on new tools?

There is recognition in the 10 Year NHS Workforce Plan call for evidence that delivering a digital-first service, and the five 'big bets' for healthcare reform including AI, will require a workforce equipped to use new digital technologies, in order to free up clinical time for care and support staff to reach their full potential.

Mental health leaders have told us that there are varying levels of workforce readiness to adopt AI as well as varying staff and clinical confidence in AI tools. In some areas there is concern that the adoption of new tools could take additional time rather than freeing up time. Workforce exposure to the benefits of AI tools is needed as well as tailored training on AI tools and continuous workforce development, supported by change management approaches. 

There is also an important need to ensure those across the clinical workforce are engaged in AI. While decisions are made at board level and informed by digital leaders, ensuring buy-in from across the workforce is fundamental to effective implementation.

Staff must also be engaged on the importance of ‘human in the loop’ and checking the accuracy of the outputs from AI tools. Human in the loop is a system comprising a human and an AI component, in which the human can intervene in some way, to produce the most useful results. Ensuring that all users of AI are clear on the need to check for the accuracy of the outputs from their AI use is a crucial part of any training, to realise the maximum benefit from AI tools and build staff and patient trust. 
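To illustrate the 'human in the loop' pattern described above, the sketch below shows one way a workflow can enforce human review before AI output reaches the record. The function and field names are illustrative assumptions, not taken from any specific product.

```python
# Minimal sketch of a 'human in the loop' workflow for AI-generated clinical
# notes: the AI output is only ever a draft, and nothing enters the patient
# record until a named clinician has reviewed, optionally edited and approved it.
# All function and field names here are illustrative, not from any real product.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DraftNote:
    patient_id: str
    ai_summary: str                  # draft text produced by the AI tool
    status: str = "draft"            # draft -> approved / rejected
    reviewed_by: Optional[str] = None
    final_text: str = ""

def clinician_review(note: DraftNote, clinician: str,
                     approved: bool, edited_text: Optional[str] = None) -> DraftNote:
    """Record the human decision; the clinician remains responsible for the
    accuracy of anything committed to the record."""
    note.reviewed_by = clinician
    note.status = "approved" if approved else "rejected"
    if approved:
        note.final_text = edited_text or note.ai_summary
    return note

def commit_to_record(note: DraftNote) -> None:
    # Hard stop: unreviewed AI output can never reach the patient record.
    if note.status != "approved" or not note.reviewed_by:
        raise PermissionError("human review and approval required")
    print(f"Saved note for {note.patient_id}, approved by {note.reviewed_by}")

note = DraftNote("patient-042", "AI draft: low mood discussed; sleep improved.")
note = clinician_review(note, "Dr Patel", approved=True,
                        edited_text="Low mood discussed; sleep improving; plan reviewed.")
commit_to_record(note)
```

The design choice here is that approval is an explicit, attributable action: the record-keeping step refuses anything that has not passed human review, which is the behaviour training should reinforce.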

  • Trusts are taking different approaches to staff upskilling including weekly ‘copilot cafés’ to share experience of using Copilot, weekly training sessions and questionnaires at the end of pilots to gather feedback. 

    One trust advised that showcasing the benefits of AVT changed clinical perception over its use and addressed concerns about increased workloads in adopting new tools.

    At Bradford District Care NHS Foundation Trust, staff training on new tools helped overcome uncertainty on the benefit of AI tools.

7. How is lived experience being factored into development and testing of new tools? What steps will be taken to build patient understanding and compliance?

According to a survey conducted by the Health Foundation in 2025, over half the public (54 per cent) and more than three-quarters of staff (80 per cent) said they support the use of AI for patient care, and a greater proportion supported the use of AI for administrative purposes (66 per cent of the public and 86 per cent of staff). While there is broad support from the staff and public, a significant minority of the public remains unsupportive of AI, with concerns that it could make care quality worse and could impact the social and relational aspects of healthcare. The research found that public support for AI tools falls when AI is used without staff checking of outputs. 

In mental health specifically, polling commissioned by Mental Health UK found that nearly two in five UK adults (37 per cent) wouldn't consider using AI to support their mental health, with trust and safety remaining key barriers.

This variation in public perception means that AI must be embedded into mental health clinical pathways in a way that promotes patient understanding, ensures appropriate consent, gives confidence that the tools are there to support patients in their care, and ensures usability is properly considered.

Digital innovations and products, including AI, must be co-designed using the expertise of people with lived and living experience of mental ill health, across all stages of the AI development and adoption journey. This can ensure the needs of patients (and staff) are considered and their concerns heard, in turn supporting implementation and adoption of new tools. This can also ensure any information or guidance produced on an AI tool is tailored to the people using them and help people make informed decisions on use.

A consistent approach to transitioning patients to tools is also needed across organisations. System leaders have told us that this is currently down to individual clinician choice, which can mean access and patient experience vary within and across organisations. Factoring in patient experience of their engagement with AI tools can ensure consistency of experience for those who choose to use them.

  • At South London and Maudsley NHS Foundation Trust, ensuring the patient and carer view in AI decisions is a crucial part of building confidence in AI tools. Representatives from the patient and carer reference group have been invited to attend the AI reference group and then feed back to their wider group. This ensures patient and carer insights are reflected in decision-making and their concerns and expectations about AI technologies are considered. The trust will continue to work with patients and carers throughout the AI transformation journey. 

    Evaluation from an AVT pilot in another trust, which considered patient consent, found that whether consent is given is affected by the clinician's confidence in explaining the technology. It also found that, while there were variable reasons for declining the use of AVT which require further research, some patients are choosing to decline because of concerns about the environmental impact of AI.

8. Ethical and inequalities considerations - How can we ensure AI doesn’t increase health inequalities?

The way AI is trained, and the data it is trained on, is likely to shape its impact on health inequalities. As set out in a 2025 report on AI and health inequalities from Health Innovation Oxford and Thames Valley, there are a number of key considerations when it comes to AI and health inequalities, but currently a limited evidence base from which to get a full picture of AI's impact. Bias is a key area of concern for clinicians and patients, with concerns that coding of demographic information is inconsistent, leading to errors in training data.

Furthermore, there are some groups who might find AI services and tools hard to use, leading to inequalities in access and digital exclusion. AI tools and digital services must be designed in a way that recognises the challenges faced by those excluded from accessing the internet, or those who don’t have the skills, interest or capabilities to engage in digital services. 

On the other hand, clinical AI tools can also make it easier to access mental health services for people who may not have chosen to access them by other means. With lengthening waiting lists, clinical AI tools can improve access to support, including for those less likely to seek it.

When discussing an AI proof-of-concept project, it is important to consider ways to minimise any health inequalities resulting from the use of AI tools, as well as the role AI could play in connecting people to services. Views from safeguarding leads and lived and living experience perspectives should be considered as part of governance processes, to ensure focus is given to consent, transparency and safeguarding.

Consideration should also be given to the environmental impact of AI, resulting from the infrastructure, carbon intensity and water usage required to run it, and this must be explored as part of any engagement with AI suppliers. Mental health leaders should consider whether the health or system gains from AI outweigh the environmental impact of tools, and how that impact can be mitigated, for example by ensuring the digital and data infrastructure is in place for tools. This is to ensure alignment with the NHS's commitment to reaching net zero by 2040 for the emissions it controls directly.

  • Everyturn Talking Therapies is part of the NHS Talking Therapies service and provides specialist services on behalf of the NHS including talking therapies, crisis support, dementia care, services for children and young people, specialist nursing and hospital step-down, and community wellbeing support. 

    What was the challenge the organisation needed to address?

    In 2018, the Government Equalities Office issued a report exploring the experience of LGBTQ+ people in the UK. Over 108,000 LGBTQ+ individuals completed the survey. With a focus on mental health, the survey found that 24 per cent of respondents had accessed mental health services in the 12 months preceding the survey, and 28 per cent of those who had accessed or tried to access services said it had not been easy at all. The most commonly cited reason was waiting lists, given by 72 per cent of respondents, and around a fifth (22 per cent) said their GP was not supportive.

    How did they address it?

    Everyturn Talking Therapies used an AI-powered clinical assessment assistant as a digital front door for people self-referring into the service. It supports patients to be signposted or referred to services sooner in the process, reducing duplication of assessments and delays for more complex or high-risk patients.

    What successes are they seeing?

    ‘Charlie’ (pseudonym), a non-binary individual, embarked on their talking therapy journey using the clinical assessment assistant within the Everyturn Talking Therapies service. They first encountered the clinical assessment assistant on the Everyturn website, and it allowed them to self-refer by expressing their emotions and thoughts freely. 

    At the point of referral, Charlie indicated they were struggling with 'extreme stress and anxiety' and chose anxiety as their primary therapeutic focus. The unique insights and information gathered enable the service to review initial PHQ9 and GAD7 scores before assessment, help to monitor any potential risk concerns promptly, and allow patients to choose and book their assessment at the end of the referral. This has increased access to the services and created a streamlined pathway for patients.

    The clinical assessment assistant supports patients to be signposted or referred to other services sooner in the process, reducing duplication of assessments and repeating their stories. It reduces delays for more complex or high-risk patients accessing appropriate care via triage from the internal risk team within the service who liaise with local crisis services. This also means assessors can focus on talking therapy appropriate assessments, which can reduce burnout for staff. 

    Charlie’s primary presenting problem and other information gathered at referral were then confirmed in their assessment with a psychological wellbeing practitioner (PWP). The information allowed the practitioner to focus on Charlie’s specific issues from the start. 

    In collaboration with Charlie, a treatment plan was agreed, and they were offered low intensity computerised cognitive behavioural therapy (cCBT), supported by a PWP, through the Space from Generalised Anxiety Disorder programme on Silvercloud. 

    Charlie completed their cCBT with regular reviews with a PWP. Everyturn's cCBT review calls and email content are always personalised to the specific issues and content shared by the patient. All correspondence with Charlie was personalised to their specific challenges, with a focus on supporting change and acceptance. They were discharged from the service in recovery. Charlie's scores changed from a moderate PHQ9 score and severe GAD7 score at referral to non-clinical scores on both PHQ9 and GAD7 at the end of treatment. They reported that the 'therapy was starting to help to get down to the root of my problems, so I feel less anxious about situations than usual.' At the end of their treatment, Charlie was able to apply for jobs and attend job interviews, which they had not been able to do before. They also had ongoing access to their cCBT programme for 12 months following the completion of treatment.

    All of this was achieved with Charlie referring and completing their assessment within seven days. This allowed them to access appropriate therapy promptly, minimising the escalation of their symptoms and helping them to stay focused on their goals. They were discharged from the service within two months of referral in recovery.

    Further information on this case study can be found in NHS Talking Therapies for Anxiety and Depression: LGBTQ+ Positive Practice Guide (2024) (p54).

  • In a real-world study published in Nature Medicine involving 129,400 patients across 28 NHS Talking Therapies services in England, services implementing Limbic's clinical assessment assistant saw a significant 179 per cent increase in non-binary individuals seeking support (Habicht et al., 2024). Using qualitative feedback from individuals, the study found that non-binary individuals particularly value the human-free aspect of the tool, highlighting the importance of alternative and inclusive self-referral methods for LGBTQIA+ individuals.

Additional support

There are several resources that can support AI adopters in navigating this complex environment. Key resources are suggested below, but should be checked for developments as the landscape is fast moving. Information is also provided below on a number of engagement forums for best practice sharing and shared learning.

Contact us

If you are interested in sharing your experience of using AI within your organisation, or getting involved in future work the NHS Confederation is doing in this space, please contact:

If you’d like to find out more about how we can support your leadership teams on AI with free and bespoke development sessions and workshops, please contact:

Limbic

About Limbic

Limbic develops regulated clinical AI solutions that support mental health care providers to increase the capacity and effectiveness of clinicians across the care pathway. Currently used by over 650,000 patients across 66 per cent of NHS England's Integrated Care Boards, their tools help patients access services and provide support between therapy sessions. Limbic is working with the NHS Confederation to improve knowledge and understanding of how to adopt and implement safe, high-quality clinical AI solutions.  

For further information about Limbic please contact: