Knowledge

The UK stands at a pivotal moment in healthcare AI regulation. Here's why interpretation services must be part of the conversation – and what's at stake when they're not.
The Medicines and Healthcare products Regulatory Agency (MHRA) has launched a groundbreaking Call for Evidence on AI regulation in healthcare, inviting input from patients, clinicians, industry leaders, and the public until 2 February 2026.
This initiative, led by the newly formed National Commission into the Regulation of AI in Healthcare, represents a crucial moment to influence how AI technologies are deployed across the NHS – including interpretation services that connect patients and healthcare providers across language barriers.
The commission is seeking views on three critical areas:
While the MHRA's call addresses all AI medical devices, interpretation services represent a particularly high-stakes area where regulatory oversight is essential. Recent evidence from asylum and immigration contexts demonstrates the devastating consequences when AI translation operates without proper safeguards.
The cost of inadequate AI interpretation isn't theoretical – it's measured in lives disrupted and rights denied.
The Afghan Asylum Case: A single pronoun destroyed an asylum claim. An AI tool translated "I" as "we" in a woman's testimony about trauma she experienced alone. The discrepancy was significant enough for a judge to reject her entire application. Only when crisis translator Uma Mirkhail manually reviewed the documents was the error discovered – too late to reverse the decision.
Carlos from Brazil: After fleeing cartel violence, Carlos spent six months in immigration detention while struggling with AI-translated asylum papers that translator Samara Zuza described as "full of insane mistakes" – wrong city names, reversed sentences, and critical information misplaced. It took three years working with a human translator before he was released.
The "Mi Jefe" Translation: A woman fleeing domestic violence described her violent father as "mi jefe" – a colloquial Spanish term for "dad" or "head of household." The CBP One app translated it literally as "my boss." Authorities believed she was fleeing workplace problems, not family violence. Her application was denied.
These aren't isolated incidents. And while these examples come from asylum cases, there is every reason to be concerned about similar failures in healthcare applications.
AI has the power to break down language barriers efficiently, but as these examples show, it cannot always replace a human – especially where translation depends on cultural context. Using it to translate a menu is fine; in healthcare, the stakes can be life or death.
The fundamental problem with AI-only interpretation in high-stakes situations:
Even OpenAI recognised these limitations, prohibiting ChatGPT use for "high-risk government decision-making" including immigration and asylum cases.
When the MHRA finalises its recommendations in 2026, interpretation services operating in NHS settings must meet rigorous standards.
Here's what we believe should be non-negotiable:
Healthcare interpretation involves highly sensitive personal and medical information. Any AI system must demonstrate:
The most effective approach combines AI efficiency with human expertise:
Not all languages are equal in AI systems.
Healthcare providers need:
Healthcare doesn't wait for business hours. Interpretation services must be:
Our AI translator, PretPal, exemplifies what robust regulation should require:
Security-first architecture:
The best of both worlds:
Proven quality:
The MHRA's call for evidence represents a rare opportunity to shape how AI operates in healthcare before frameworks are finalised. When you respond, consider advocating for:
- Mandatory security standards including ISO 27001 and Cyber Essentials certification for any AI handling patient data
- Human oversight requirements ensuring certified interpreters are readily available
- Transparent quality metrics for AI performance across different languages and contexts
- Patient rights protections including data deletion, EU-only processing, and full informed consent
- Regular auditing and accountability with clear responsibility chains when AI fails
The MHRA's call for evidence runs until 11:59 PM on 2 February 2026. Anyone can participate – no technical knowledge required.
How to respond:
The stakes are too high to leave AI regulation to chance. Whether you're a patient, healthcare provider, or concerned citizen, your input can help ensure that when AI is deployed in healthcare settings, it operates with the security, oversight, and human judgment that patients deserve.
The lesson from these cases is clear: when AI interpretation operates without proper safeguards, the consequences can be devastating. Let's ensure the NHS learns from these failures rather than repeating them.
To talk more with us about PretPal for your own service, get in touch via the footer or our contact form.