20 January 2026
The state of AI in UK healthcare
Artificial intelligence is already being used across UK healthcare and the NHS, from ambient clinical documentation and administrative automation to early decision-support tools. Yet for many clinicians, practice managers and healthcare leaders, the conversation about AI in healthcare remains polarised: excitement about time savings and burnout reduction sits alongside real concern about safety, governance and accountability.
The 10 Year Health Plan for England, which aims to “make the NHS the most AI-enabled care system in the world”, reflects growing political and institutional support for AI. That ambition is backed by recent UK trials and research showing that AI can deliver measurable benefits when deployed responsibly – particularly in reducing administrative burden and freeing up clinician time for patient care.
But the evidence also underlines a critical point: successful AI adoption in healthcare depends as much on training, governance and human oversight as on the technology itself.
This article looks at the current AI healthcare landscape, the risks and opportunities clinicians need to understand, and the practical steps organisations can take to adopt AI safely and effectively.
AI in healthcare: what the latest UK data shows
Before exploring the detail, it’s helpful to look at what current data tells us at a glance. Recent UK surveys and trials point to cautious but growing acceptance, especially for administrative use cases.
| Statistic | Value | What it tells us |
| --- | --- | --- |
| Public support for AI in patient care | 54% | Over half of the public support the use of AI in patient care when safeguards are in place |
| NHS staff support for AI in admin | 81% | Strong staff backing for AI to reduce administrative workload |
| Copilot (AI assistant) time saved per staff member | 43 minutes/day | Average daily time saving per staff member in a Copilot trial |
| Potential Copilot time saved at scale | 400,000 hours/month | Estimated monthly time saving if Copilot were deployed across the NHS |
| GP adoption (reported AI use) | 28% (598/2,108) | Proportion of GPs reporting current AI use in clinical practice |
Taken together, the data suggests a clear pattern: support is highest where AI reduces workload without replacing clinical judgement.
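To make the scale of the Copilot figures concrete, the sketch below shows how a per-person daily saving converts into a monthly total. The staff count and working days per month used here are hypothetical assumptions for illustration only, not figures from the NHS trial.

```python
# Rough illustration: converting a per-person daily time saving into a monthly total.
# The staff count and working days below are hypothetical assumptions, not figures
# from the NHS Copilot trial.

minutes_saved_per_person_per_day = 43   # reported average daily saving per staff member
staff_using_copilot = 25_000            # assumed number of staff with the tool (hypothetical)
working_days_per_month = 21             # assumed working days per month (hypothetical)

hours_saved_per_month = (
    minutes_saved_per_person_per_day / 60
    * staff_using_copilot
    * working_days_per_month
)

print(f"Estimated saving: {hours_saved_per_month:,.0f} hours/month")
# With these assumptions the estimate lands in the hundreds of thousands of hours
# per month; the published 400,000 hours/month figure depends on the actual scope
# of deployment.
```

The precise number matters less than the pattern it shows: modest per-person savings compound quickly at NHS scale.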
How AI is being used in healthcare today
Despite the NHS’s ambition to “make AI every nurse’s and doctor’s trusted assistant […] supporting them in decision making”, real-world adoption remains more cautious than that vision suggests. Most current use focuses on low-risk, high-impact applications, particularly administrative support. Ambient scribes, automated summarisation and workflow tools are already demonstrating time savings and improved clinician experience in pilot programmes.
One of the most cited examples is the GOSH-led trial of AI scribe technology, which reported “transformative” benefits for both patients and clinicians across London:
- More focus on patients: AI scribes led to 23.5% more direct patient interaction time and an 8.2% shorter appointment length
- Faster patient flow: In A&E, patient throughput increased by 13.4% per shift with AI scribe support
Crucially, these benefits are strongest when AI tools are implemented with clear boundaries, appropriate training and clinical oversight, rather than as “set and forget” solutions.
Why clinicians are cautious about AI – and why that matters
While growing interest in AI in healthcare is evident, clinicians’ use of AI varies sharply by task. Evidence shows strong uptake for administrative and documentation support, but far more limited use in clinical decision-making.
Data from the Nuffield Trust’s study of 2,108 GPs illustrates this clearly. Among the 597 respondents who reported which tasks they used AI tools for, over half (57%) were using AI for clinical documentation and note-taking. Around four in ten used AI tools for professional development (45%) and administrative tasks (44%), but fewer than a third (28%) reported using AI to support clinical decision-making.
This drop-off is not driven by instinctive scepticism about technology. Instead, the same research points to medico-legal liability as the dominant barrier to adoption. Among non-users of AI tools, 89% cited professional liability and medico-legal risk as a major concern. Importantly, these concerns were also widely shared by clinicians who do use AI – indicating that experience does not remove anxiety about accountability.
As one senior GP involved in the study explained:
“Currently the safeguards are not adequately in place from a patient safety and medico-legal perspective.” – Participant 22, Senior career, Portfolio GP, non-user
Alongside liability, GPs highlighted a lack of clear regulatory oversight of AI in healthcare. Participants raised concerns about misleading or incorrect outputs – often referred to as “hallucinations” – and uncertainty about how such errors should be identified, documented and managed within existing clinical governance frameworks.
The study also revealed inconsistent policy and guidance across the NHS. Focus group participants described wide variation in local approaches, with some Integrated Care Boards (ICBs) discouraging or prohibiting AI use altogether, while others actively encourage safe use and piloting.
This inconsistency leaves clinicians navigating AI adoption without shared standards. As the Nuffield Trust findings emphasise, GPs highlighted the need for clear national guidance on AI use in healthcare, supported by local policies and aligned training, to enable safe and confident adoption.
Taken together, these findings suggest that clinician caution is not resistance to innovation. It is a rational response to unresolved questions around accountability, regulation and risk. Addressing these foundations will be central to enabling AI to be used more confidently and consistently in clinical decision-making.
AI training and governance in healthcare
The evidence suggests that addressing clinician caution about AI in healthcare is less about persuasion and more about putting the right safeguards in place. Healthcare organisations that are seeing early success with AI adoption tend to share a small number of characteristics that directly respond to concerns around accountability, regulation and trust.
It is also important to recognise that caution around AI-supported clinical decision support is shaped by the current regulatory environment for AI in healthcare. Unlike administrative tools, using AI to support clinical decisions can trigger additional legal and safety requirements designed for regulated medical technologies. These frameworks – originally written for traditional medical devices and clinical IT systems – were not developed with generative AI in mind, and how they apply in practice is not always clear.
Even for organisations willing to engage with this complexity, there is limited legal precedent to clarify how existing healthcare AI regulations apply if something goes wrong. As a result, many healthcare providers are taking a deliberately cautious approach to clinical decision-support use cases while awaiting clearer national guidance. Although regulators have indicated that updated AI-specific frameworks are forthcoming, governance for AI-supported decision-making remains uncertain in the interim.
Against this backdrop, organisations that are making progress with safe and responsible AI adoption in healthcare typically focus on three foundations:
- Clear governance frameworks: defined clinical responsibility, audit trails and documented review processes for AI outputs, ensuring accountability remains clear when AI is used to support care (a minimal illustration of such an audit record appears at the end of this section).
- Practical AI training for clinicians: training focused on understanding limitations, recognising misleading or incorrect outputs, validating AI-generated content and knowing when not to rely on AI.
- Transparency with patients: clear, consistent communication about when and how AI tools are used, reinforcing that AI supports – rather than replaces – human judgement.
Regulators are explicit on this point. As the CQC’s GP Mythbuster 109 notes:
“As AI technologies are new, you do need to tell people that you are using them” and “demonstrate that AI is being used as a support tool – not a replacement for human oversight.”
Without these foundations, even technically strong AI tools struggle to gain clinician trust, address medico-legal concerns or scale safely across organisations.
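As an illustration of what “audit trails and documented review processes for AI outputs” might look like in practice, the minimal sketch below records a single AI-assisted documentation event for later review. The field names and structure are hypothetical, not drawn from any NHS, CQC or MHRA specification, and any real implementation would need local governance sign-off.

```python
# Minimal, hypothetical sketch of an audit record for an AI-assisted output.
# Field names and structure are illustrative only; they are not taken from any
# NHS, CQC or MHRA specification.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIOutputAuditRecord:
    tool_name: str                 # e.g. an ambient scribe or summarisation tool
    use_case: str                  # "clinical documentation", "admin letter", etc.
    clinician_id: str              # accountable clinician who reviewed the output
    patient_informed: bool         # transparency: was the patient told AI was used?
    output_reviewed: bool          # was the output checked before entering the record?
    corrections_made: str          # free-text note of any errors found and fixed
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Example: logging a reviewed ambient-scribe note (all values are placeholders).
record = AIOutputAuditRecord(
    tool_name="ambient-scribe",            # hypothetical tool name
    use_case="clinical documentation",
    clinician_id="GMC-0000000",            # placeholder identifier
    patient_informed=True,
    output_reviewed=True,
    corrections_made="Corrected a medication dose transcribed incorrectly.",
)
print(record)
```

However it is implemented locally, the point is that responsibility for the final record stays with a named clinician, which is precisely the accountability the Nuffield Trust respondents were asking for.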
Starting small: how to adopt and safely scale AI in healthcare
Concerns around professional liability and regulatory oversight are not unique to AI non-users. The Nuffield Trust’s 2025 survey found that almost nine in ten non-users and eight in ten users cited medico-legal risk as a key concern, followed closely by uncertainty about regulation.
What distinguishes AI users from non-users is not lower concern, but direct experience of time-saving and efficiency benefits within clearly bounded use cases.
As a result, the evidence increasingly points to a phased approach to adoption:
- Begin with administrative and documentation support
- Measure time savings and clinician experience
- Embed training and governance early
- Only then explore more advanced decision-support use cases
This approach allows organisations to realise benefits quickly while building confidence, consistency and shared standards, protecting patient safety and professional judgement as AI use expands.
A human-led future for AI in healthcare
AI will not replace clinicians, but clinicians who understand how to use AI safely will be better equipped to meet rising demand, administrative pressure and workforce constraints.
As Tim Horton, Assistant Director (Insight and Analysis) at the Health Foundation, puts it:
“It’s clear the public want a human to remain ‘in the loop’ for many uses of AI in health care.”
The next phase of AI adoption in healthcare is therefore less about new tools and more about capability building – including training, governance, clear standards and readiness for evolving regulatory frameworks – while ensuring that human expertise remains central.
Healthcare organisations that invest in these foundations now will be best placed to benefit from AI, without compromising trust, safety or care quality.
Next steps
If you work in healthcare and are exploring how to approach AI responsibly, we’re compiling practical guidance based on real-world evidence and frontline experience. Sign up to receive updates and upcoming news.