techUK response to MHRA’s call for evidence on AI regulation
24 Feb 2026 12:35 PM
techUK has submitted its response to the Medicines and Healthcare products Regulatory Agency call for evidence on AI regulation in healthcare, reflecting input from across the sector. The response sets out industry recommendations on risk-based regulation, classification, liability, post-market surveillance and the need for clearer coordination across the regulatory and NHS landscape.
Context
The Medicines and Healthcare products Regulatory Agency (MHRA) launched a call for evidence in December 2025 on the regulation of artificial intelligence in healthcare. The call for evidence invited views on the adequacy of the current regulatory framework across several domains, including safety and performance standards, data governance, transparency, clinical evidence requirements, and post-market surveillance. It also sought input on how the framework affects innovation, how liability should be allocated when AI tools are involved in adverse patient outcomes, and how responsibility for safe use should be shared across manufacturers, healthcare providers, and clinicians.
How techUK engaged
techUK coordinated a response drawing on the expertise of our membership. Our approach combined dedicated member forums with an in-person roundtable with MHRA officials.
Please access techUK’s consultation response here. What follows is a synthesis of the themes and recommendations that emerged from industry.
techUK response to MHRA’s Call for Evidence on AI Regulation
Key industry recommendations
- Industry advocates a pragmatic calibration of regulation by risk and use. We suggest risk-based assessment pathways, potentially through a “dual-path” model offering a low-risk fast track alongside a full route for higher-risk tools.
- Clarify medical device classification for low-risk AI scenarios to reduce the unintended capture of general-purpose AI and the incentives for “shadow” AI adoption.
- Adopt a more centralised “once assessed, unlock access” reuse mechanism, such as the Innovator Passport, to tackle NHS duplication, which significantly hinders adoption and scaling post-compliance.
- Improve consistency and clarity of guidance, especially on liability.
- Define post-market surveillance expectations that are aligned with dynamic AI tools (drift monitoring, improper use, escalation triggers).
- Align parallel regulatory regimes and develop a simplified “front door”. There are complex overlaps across assurance regimes, and there is a need for clearer navigation and joint accountability.
Thematic discussions
- Over-classification of AI as a “medical device” and blurred boundaries with general-purpose AI. AI classification is the gateway to regulatory obligations, timelines, evidence burden, and market access; miscalibration drives unintended incentives and “shadow/grey AI” adoption.
- Regulatory timelines are misaligned with AI development cycles. Slow compliance cycles can delay beneficial adoption and push clinicians and patients toward unregulated alternatives.
- Duplicative assurance and procurement burdens across the NHS. Barriers to adoption go beyond MHRA compliance and include multiple NHS assurance and procurement steps across Trusts. Even a well-calibrated regulatory framework may not translate into real-world deployment without centralised mechanisms.
- Fragmented liability fosters a pervasive climate of fear from both a deployment and a development perspective. Unclear responsibility allocation also drives duplicative local assurance, procurement friction, and inconsistent monitoring and incident-response behaviour.
- Members cautioned that even well-designed reforms may not work unless MHRA has sufficient capacity to respond to the upcoming influx of AI tools and provide practical support.
Conclusion: what techUK is calling for
Our response to the MHRA's call for evidence reflects a clear industry position: the framework requires significant reform, centred on proportionate regulation, reusability of assurance, and clearer coordination across the regulatory and NHS landscape. Specifically, we are advocating for a dual-lane approach distinguishing lower- and higher-risk use cases, clearer classification guidance, faster pathways for low-risk updates within intended use, and regulatory clarity that gives developers confidence to invest in compliant, UK-focused innovation.
Get involved: techUK continues to engage with MHRA and wider government on the future of AI regulation in health. To learn more about this work, contribute to future submissions, or join our health programme's policy discussions, please contact the team at viola.pastorino@techUK.org