Why your voice matters: The MHRA's Call for Evidence on AI in Healthcare

The UK stands at a pivotal moment in healthcare AI regulation. Here's why interpretation services must be part of the conversation – and what's at stake when they're not.

Shape healthcare AI standards

The Medicines and Healthcare products Regulatory Agency (MHRA) has launched a groundbreaking Call for Evidence on AI regulation in healthcare, inviting input from patients, clinicians, industry leaders, and the public until 2 February 2026.

This initiative, led by the newly formed National Commission into the Regulation of AI in Healthcare, represents a crucial moment to influence how AI technologies are deployed across the NHS – including interpretation services that connect patients and healthcare providers across language barriers.

The commission is seeking views on three critical areas:

  • Modernising regulation: Are current frameworks fit for purpose as AI evolves?
  • Patient safety: How can we identify and address problems with adaptive AI systems?
  • Responsibility distribution: Who should be accountable when AI is used in healthcare?

Why translation services need robust AI regulation

While the MHRA's call addresses all AI medical devices, interpretation services represent a particularly high-stakes area where regulatory oversight is essential. Recent evidence from asylum and immigration contexts demonstrates the devastating consequences when AI translation operates without proper safeguards.

When AI translation goes wrong: lessons from the field

The cost of inadequate AI interpretation isn't theoretical – it's measured in lives disrupted and rights denied.

The Afghan Asylum Case: A single pronoun destroyed an asylum claim. An AI tool translated "I" as "we" in a woman's testimony about trauma she experienced alone. The discrepancy was significant enough for a judge to reject her entire application. Only when crisis translator Uma Mirkhail manually reviewed the documents was the error discovered – too late to reverse the decision.

Carlos from Brazil: After fleeing cartel violence, Carlos spent six months in immigration detention while struggling with AI-translated asylum papers that translator Samara Zuza described as "full of insane mistakes" – wrong city names, reversed sentences, and critical information misplaced. It took three years working with a human translator before he was released.

The "mi jefe" translation: A woman fleeing domestic violence described her violent father as "mi jefe" – a common colloquial Spanish term for "dad" or "head of household". The CBP One app translated it literally as "my boss". Authorities believed she was fleeing workplace problems, not family violence. Her application was denied.

These aren't isolated incidents, and while they come from asylum cases, there is every reason to be concerned about similar failures when AI translation is applied in healthcare.

Why "Human-in-the-Loop" isn't optional

AI can break down language barriers with impressive efficiency, but as these examples show, it cannot always replace a human – especially where cultural context shapes meaning. Using it to translate a menu is one thing; in healthcare, the stakes can be life or death.

The fundamental problem with AI-only interpretation in high-stakes situations:

  1. No cultural awareness – AI cannot read between the lines or understand idioms
  2. No quality control – When someone speaks only one language, errors go undetected
  3. No trauma-informed approach – AI lacks the empathy to convey emotions appropriately
  4. Low-resource language failures – Many languages have limited AI training data, making output quality unpredictable

Even OpenAI recognised these limitations, prohibiting ChatGPT use for "high-risk government decision-making" including immigration and asylum cases.

The case for security-first AI interpretation

When the MHRA finalises its recommendations in 2026, interpretation services operating in NHS settings must meet rigorous standards.

Here's what we believe should be non-negotiable:

1. Data security and privacy

Healthcare interpretation involves highly sensitive personal and medical information. Any AI system must demonstrate:

  • ISO 27001 and Cyber Essentials certifications for information security management
  • GDPR compliance with data handled exclusively within the EU
  • End-to-end encryption for all communications

2. The hybrid model: AI when appropriate, human when essential

The most effective approach combines AI efficiency with human expertise:

  • AI interpretation for routine, lower-stakes conversations
  • Instant connection to certified human interpreters when situations require nuance, cultural understanding, or trauma-informed communication
  • Seamless switching between modes based on context

3. Quality-assured language coverage

Not all languages are equal in AI systems.

Healthcare providers need:

  • Multiple language pairs tested and validated for medical contexts
  • Recognition of which languages require human interpretation
  • Transparent quality metrics for AI performance across languages

4. 24/7 availability

Healthcare doesn't wait for business hours. Interpretation services must be:

  • Available around the clock
  • Accessible with minimal technical barriers
  • Reliable during emergencies

How we demonstrate best practice

Our AI translator, PretPal, exemplifies what robust regulation should require:

Security-first architecture:

  • ISO 27001 and Cyber Essentials certified for information security
  • All communication encrypted with data handled within the EU
  • GDPR-compliant with automatic data deletion after calls
  • No permanent storage of conversation data

The best of both worlds:

  • Advanced AI for seamless real-time speech-to-speech translation
  • Immediate connection to certified human interpreters when needed
  • Both transcribed and translated messages displayed as text for verification
  • One-click activation – no complex navigation

Proven quality:

  • 35+ quality-assured language pairs
  • Continuous fluent dialogue without interruption
  • Availability 24/7
  • Designed specifically for public sector security requirements

Your response matters

The MHRA's call for evidence represents a rare opportunity to shape how AI operates in healthcare before frameworks are finalised. When you respond, consider advocating for:

Mandatory security standards including ISO 27001 and Cyber Essentials certification for any AI handling patient data

Human oversight requirements ensuring certified interpreters are readily available

Transparent quality metrics for AI performance across different languages and contexts

Patient rights protections including data deletion, EU-only processing, and full informed consent

Regular auditing and accountability with clear responsibility chains when AI fails

Submit your evidence

The MHRA's call for evidence runs until 11:59 PM on 2 February 2026. Anyone can participate – no technical knowledge required.

How to respond:

The stakes are too high to leave AI regulation to chance. Whether you're a patient, healthcare provider, or concerned citizen, your input can help ensure that when AI is deployed in healthcare settings, it operates with the security, oversight, and human judgment that patients deserve.

The lessons we have already seen in critical cases are clear: when AI interpretation operates without proper safeguards, the consequences can be devastating. Let's ensure the NHS learns from these failures rather than repeating them.

To talk more with us about PretPal for your own service, get in touch via the footer or our contact form.