Introduction

The UK does not yet have a single, AI‑specific health statute; instead, AI in healthcare is governed by a patchwork of existing laws, regulators, and guidance spanning healthcare services, medical devices, data protection, and professional standards. For organisations deploying or supplying AI solutions into health and social care, understanding how these regimes intersect is now a strategic and compliance imperative, not a theoretical legal question.

At the core of this framework sit sector regulators such as the Care Quality Commission (CQC), Healthcare Improvement Scotland (HIS), and the Medicines and Healthcare products Regulatory Agency (MHRA), supported by cross‑cutting policy such as the UK government's “pro‑innovation” AI regulation white paper. These bodies increasingly coordinate through joint initiatives like the AI and Digital Regulations Service (AIDRS), aiming to give developers and adopters a clearer route through the regulatory maze.

Regulated activities and CQC registration

In England, the Health and Social Care Act 2008 (Regulated Activities) Regulations 2014 define activities such as triage, diagnosis and direct patient care as “regulated activities” that require provider registration with the CQC. If an AI system forms part of how a service undertakes triage or clinical assessment – for example, symptom checkers, decision‑support tools or automated risk stratification – organisations must assess whether they themselves are carrying out a regulated activity and therefore need CQC registration.

CQC guidance emphasises that introducing AI into GP or primary care services does not lessen regulatory obligations: providers remain accountable for ensuring systems are safe, effective, explainable in practice and used within their intended scope. CQC's State of Care reporting also highlights uneven digital maturity and challenges in data sharing, underlining that governance around AI must sit within a broader strategy for safe digital care and information use.

Different approaches across the UK

Regulatory approaches are not uniform across the UK, and each devolved nation has its own health service regulator and policy environment. In Scotland, for example, Healthcare Improvement Scotland (HIS) oversees independent healthcare services and takes a slightly different stance from England's on when remote or digital health services, including AI‑enabled offerings, trigger registration obligations.

Wales and Northern Ireland follow their own governance arrangements and regulators, adding further nuance for organisations offering cross‑border AI‑enabled services or digital clinics. For operational leaders, this means commercial models, service pathways and risk assessments should be designed from the outset with jurisdictional differences in mind rather than assuming a single UK‑wide rulebook.

MHRA: software and AI as a medical device

Alongside service regulation, the MHRA regulates software and AI that qualify as medical devices, including software as a medical device (SaMD) and AI as a medical device (AIaMD). Guidance updated most recently in 2024 and 2025 explains when software meets the definition of a medical device, how it should be classified, and what evidence is required across the lifecycle.

MHRA's Software and AI Change Programme and its AI regulatory strategy are moving towards more risk‑proportionate oversight that recognises the adaptive nature of AI, focusing on transparency, explainability, post‑market surveillance and Good Machine Learning Practice (GMLP). Organisations whose products qualify as AIaMD must align with this evolving framework, including robust clinical evaluation, human‑factors analysis and change‑control processes for model updates.

The wider UK AI policy context

The UK government's AI white paper, “A pro‑innovation approach to AI regulation,” sets out cross‑sector principles – such as safety, transparency, fairness and accountability – that regulators like MHRA, CQC and NICE are expected to embed into their domain‑specific rules. Rather than creating a single AI regulator, the UK relies on existing regulators to apply these principles in context, an approach that is particularly visible in life sciences and healthcare.

In life sciences, guidance from MHRA and international bodies (such as IMDRF GMLP principles) is increasingly shaping expectations around explainability, data quality, bias mitigation and lifecycle monitoring for AI used in diagnostics, digital therapeutics and decision support. For organisations, this policy environment rewards early investment in governance, documentation and assurance mechanisms that demonstrate not only technical performance but also ethical and societal considerations.

Practical implications for organisations

For developers, healthcare providers and investors, the practical impact of this regulatory ecosystem is that AI projects must be designed as regulated services and/or medical devices from day one, not retrofitted for compliance. Key implications include the need for clear intended‑use statements, early engagement with device classification, robust clinical and real‑world validation, and alignment with CQC or other regulators' expectations on safety, quality and patient involvement.

Tools such as the AI and Digital Regulations Service (AIDRS) have been created to help organisations understand which regulators apply, what evidence they need, and how to navigate approval and adoption pathways more efficiently. Nonetheless, regulatory expectations are tightening: CQC, MHRA and partners increasingly expect evidence of continuous monitoring, bias and performance tracking, and clear accountability lines when AI influences clinical decisions.
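
To make the monitoring expectation more concrete, the sketch below shows one simple way a deployment team might track performance and potential bias from logged cases. It is a minimal, hypothetical Python illustration only: the subgroup labels, the 0.85 performance floor and the function names are assumptions made for the example, not figures or terminology drawn from CQC or MHRA guidance.

    # Minimal illustration (hypothetical data and thresholds, not regulatory values).
    # Assumes the deployed tool logs each case with the model's prediction, the
    # confirmed clinical outcome and a patient subgroup used for fairness monitoring.
    from collections import defaultdict

    def sensitivity_by_subgroup(cases):
        """Return the true positive rate per subgroup from logged cases."""
        counts = defaultdict(lambda: {"tp": 0, "fn": 0})
        for case in cases:
            if case["outcome"]:  # condition actually present
                bucket = counts[case["subgroup"]]
                bucket["tp" if case["prediction"] else "fn"] += 1
        return {group: c["tp"] / (c["tp"] + c["fn"]) for group, c in counts.items()}

    def flag_underperforming(rates, floor=0.85):
        """Flag subgroups whose sensitivity falls below an agreed performance floor."""
        return {group: rate for group, rate in rates.items() if rate < floor}

    if __name__ == "__main__":
        logged_cases = [
            {"subgroup": "18-40", "prediction": True, "outcome": True},
            {"subgroup": "18-40", "prediction": True, "outcome": True},
            {"subgroup": "over-65", "prediction": True, "outcome": True},
            {"subgroup": "over-65", "prediction": False, "outcome": True},
        ]
        rates = sensitivity_by_subgroup(logged_cases)
        print("Sensitivity by subgroup:", rates)
        print("Subgroups needing review:", flag_underperforming(rates))

In a real deployment, checks of this kind would sit within a broader clinical safety and audit process, with sample sizes, metrics and escalation routes agreed with clinical and governance leads rather than set by the development team alone.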

Q&A: Key questions for AI in UK healthcare

Q1: Does every AI solution used in healthcare need CQC registration?
Not all AI solutions trigger CQC registration; the critical question is whether the organisation is carrying out a regulated activity such as triage, diagnosis or direct patient care under the 2014 Regulations. AI used solely for back‑office functions (for example, rostering or generic analytics) is unlikely to fall within CQC's scope, whereas tools that shape clinical assessment or treatment recommendations often will.

Q2: How do service regulation and medical device regulation interact?
Service regulators such as CQC oversee how care is delivered and whether providers use technology safely, whereas MHRA regulates the AI product itself as a medical device, including its design, performance and post‑market surveillance. In practice, a provider may need CQC registration for its service while the manufacturer must ensure the AI is correctly classified, approved and monitored as a medical device.

Q3: What are the main recent developments that organisations should be aware of?
Recent MHRA guidance consolidates expectations for SaMD and AIaMD, clarifying classification and lifecycle obligations and highlighting the move toward risk‑proportionate regulation and GMLP principles. At the same time, regulators have launched the AI and Digital Regulations Service and updated collaborative strategies to help developers and adopters navigate overlapping guidance more easily.

Q4: How are regulators addressing the “black box” nature of AI?
Regulators increasingly emphasise transparency, explainability and the ability for clinicians to understand and challenge AI‑driven outputs, especially where safety‑critical decisions are involved. Guidance around GMLP and human‑factors design underscores the need for clear user interfaces, comprehensible risk information and governance structures that ensure humans remain meaningfully in control.

Q5: What should organisations do now to future‑proof their AI strategies?
Organisations should map their AI portfolio against both service and device regulatory requirements, embedding regulatory and clinical input into product design, procurement and deployment decisions. Building capabilities in data governance, model monitoring, fairness assessment and clinical safety will position organisations to adapt as UK regulators refine their frameworks and as international standards converge.

Conclusion

AI offers powerful opportunities to improve access, quality and efficiency in UK healthcare, but those opportunities sit within a regulatory landscape that is intricate, multi‑layered and rapidly evolving. For organisations whose operations involve AI‑enabled care, success will depend on treating regulation not as a barrier but as a design parameter: aligning innovation with clear intended use, robust evidence, and the expectations of CQC, MHRA and their counterparts across the devolved nations.

By engaging early with guidance such as MHRA's software and AI roadmap, CQC's AI‑related expectations and services like AIDRS, organisations can build trustworthy AI that is safe, compliant and ready to scale in a health system moving quickly from analogue to digital.