
Expert Insight

Q&A: Addressing limitations in AI language interpretation tools, with GLOBO CEO Dipak Patel

For people with limited English proficiency, accessing healthcare can be daunting. We talked with innovator Dipak Patel, CEO of GLOBO, about his research on the limitations of AI interpretation tools in a medical setting and how those limitations can be mitigated to best serve patients.
Dipak Patel

For people with limited English proficiency (LEP), accessing healthcare can be daunting.1 When medical interpreters are unavailable, patients with LEP must often use ad-hoc interpretation methods, such as friends or family, and even untested AI apps to communicate with providers.2 Uneven communication can lead to worse outcomes down the line3 due to misdiagnosis, difficulty managing health conditions, and a reduced ability to seek out referrals and other services.4 Artificial intelligence (AI) interpretation is poised to connect patients with LEP to providers, but more research is needed to better map AI’s possibilities and pitfalls.  

We talked to GLOBO’s CEO Dipak Patel, an innovator with more than 20 years of experience in healthcare technology, consulting, and private equity. Read on for a deep dive into his research on the limitations of AI interpretation tools in a medical setting and how those limitations can be mitigated to best serve patients.

 

You have been researching the specific challenges associated with a range of AI interpretation tools that are available right now. Why were you initially interested in digging into this topic?

We started researching the capabilities of AI interpretation tools to get a better sense of how well they perform and what role they can or should play in a healthcare setting. 

I wanted to focus on whether AI could fill the role of the interpreter, who does much more than convert one language to another. For example, medical interpreters are often tasked with providing clarification: If the physician or patient says something unclear, the interpreter can check the meaning with the speaker.

Medical interpreters are also cultural brokers. If the physician asks the patient to do something that doesn't align with that patient’s culture, background, or religion, the interpreter can advocate for the patient — especially if they feel the patient is not speaking up or is uncomfortable with something. 

Ultimately, the role of a medical interpreter is much broader than most think. Understanding whether AI can interpret in a medical setting involves a more nuanced look at how AI interpretation tools perform in different use cases.

Why is it so important to understand and address challenges with AI interpretation tools?

Rushing to use AI interpretation tools without truly understanding their challenges is risky. First, failing to address AI’s limitations could lead to medical errors, patient harm, and even data breaches.

There is also the danger of inequity. My vision is to enable patients to interact in the language they prefer throughout their journey, even for non-medical needs like reading a menu in the hospital cafeteria. Patients are going to be left behind if AI tools work well for certain populations but not others.

Any of these issues could erode trust, both in the technologies used to improve quality and efficiency and in healthcare itself. We conducted our research to identify the general limitations of AI interpretation tools and prevent negative outcomes.

Can you give me a broad overview of how you approached your research?

AI interpretation tools use a three-step process: transcription, translation, and speech. When someone speaks, transcription captures it as the written word. The words are then translated into a second language and converted back into speech. This three-step process has failure points at each stage, and we wanted to investigate the challenges that could arise with each step.
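
To make those failure points concrete, here is a minimal sketch of that pipeline in Python. The transcribe, translate, and synthesize_speech callables are hypothetical placeholders for whatever speech-to-text, machine translation, and text-to-speech services a particular tool is built on; the point is simply that an error at any stage carries forward to the next.

```python
from typing import Callable

def interpret(
    audio_in: bytes,
    source_lang: str,
    target_lang: str,
    transcribe: Callable[[bytes, str], str],
    translate: Callable[[str, str, str], str],
    synthesize_speech: Callable[[str, str], bytes],
) -> bytes:
    """Run one utterance through the three-step interpretation pipeline.

    The three callables are illustrative stand-ins, not any vendor's API.
    """
    # Step 1: transcription -- capture the spoken words as written text.
    # Failure mode: mishearing ("15 grams" vs. "50 grams", "si" vs. "she").
    text = transcribe(audio_in, source_lang)

    # Step 2: translation -- convert the text into the second language.
    # Failure mode: losing tone, register, or medical terminology.
    translated = translate(text, source_lang, target_lang)

    # Step 3: speech -- convert the translated text back into audio.
    # Failure mode: flat delivery that strips emotion and empathy.
    return synthesize_speech(translated, target_lang)
```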

With the help of other subject matter experts — including developers, medical interpreters, and people with PhDs who understand AI — we created custom metrics that paint a picture of what great transcription and text-to-speech look like.

To measure the effectiveness of AI interpretation tools, we focused on four metrics: accuracy, realism, latency, and cost. “Accuracy” simply means that the message is conveyed correctly, both in terms of using the right words and in conveying the right tone and register. “Realism” means that the message correctly interprets the care situation and setting so that emotion and empathy are conveyed appropriately. “Latency” refers to the time it takes to interpret speech, which includes transcribing and translating words and then speaking them back to the patient. Finally, “cost” includes the financial resources needed to make the AI interpretation tool run effectively.
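
As a rough illustration only, the four metrics could be captured per evaluated encounter in a record like the sketch below. The field names and 0-to-1 scales are illustrative assumptions, not GLOBO's actual scoring rubric.

```python
from dataclasses import dataclass

@dataclass
class InterpretationScore:
    """Illustrative record for one evaluated encounter (hypothetical scales)."""
    accuracy: float         # 0-1: right words, tone, and register conveyed
    realism: float          # 0-1: emotion and empathy fit the care setting
    latency_seconds: float  # time from speech to spoken-back interpretation
    cost_per_minute: float  # financial resources to run the tool effectively
```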

What are some of the most common limitations of the AI interpretation tools that are out there right now?

From our research, we’ve learned that many different interpreting scenarios can play out in medical settings. We’re seeing that AI tools perform well in mainstream use cases but perform disastrously in edge cases.

A good analogy is self-driving cars. A self-driving car can navigate from point A to point B with no obstacles or problems. However, what happens when a street is unexpectedly blocked off? Can the car still drive to point B when dealing with something unexpected?

Similar roadblocks pop up when you're evaluating AI interpretation tools in healthcare scenarios. Here’s an example: A physician speaks English, and a patient speaks Spanish. The patient says something in Spanish, but before the AI tool can interpret what they’ve said to the physician, the physician responds in English. In this case, the crosstalk between the physician and patient causes the AI to pause, stop interpreting the Spanish-speaking patient, and instead interpret what the English-speaking doctor said. This results in lost patient dialogue and can create confusion, which is exactly what we don’t want.

Also, when a human is interpreting, they will clarify when questions come up. For example, if the doctor recommends taking “15 grams” of something, it may sound very similar to “50 grams.” A human medical interpreter can clarify by asking, “Did you say ‘15’ or ‘50’? I really want to make sure I heard that correctly.”

Since AI lacks common sense, it often does nothing instead of seeking clarification when something is unclear. This is why you can't just turn on an AI interpretation tool and expect it to work without human intervention. When we test AI tools in real life, we have a medically certified interpreter there to navigate and identify pitfalls we haven’t identified yet.

Which issues do you feel will be the most difficult to address?

Bias makes using AI more complicated. AI is based on historical data and information, which is problematic for non-English-speaking populations that don’t have a lot of data to train the AI on. Since many AI technologies are developed by English speakers, they tend to default to the English word over similar-sounding words in other languages, such as Spanish.

Here’s an example that I heard recently: “Sí” in Spanish means “yes.” But if an AI interpretation tool is biased toward English, it may hear “she” or “see” instead of “sí.” The Mandarin word “xi” might also sound like “she” in English. This can create confusion about what language the person is actually speaking. Is it Spanish, English, or Mandarin? And that's just a simple example with one word. Imagine doing that with medical terminology for an entire encounter.

Medical interpreters also serve as advocates when talking to a patient who may seem confused or hesitant to speak. AI tools don’t have that ability yet. Other aspects of interpretation that are difficult for AI to address in the near term include nonverbal cues — like tone of voice, facial expressions, or gestures. That’s why medical interpreters prefer in-person or video interpretation, which provides other cues besides voice.

What are some of the strategies to overcome some of those challenges?

Issues with accuracy could potentially be addressed by developing mechanisms that help AI clarify what patients mean. For example, I foresee incorporating cross-checking and repetition into AI interpretation tools. The AI can be programmed to always clarify and repeat anytime the interpretation involves something critical, such as dosage information.

AI interpretation tools can also be programmed to produce summaries, either during the interaction or after it, giving the clinician a summary of the interpretation to ensure alignment between what the clinician heard and what the patient said. For example, the doctor can make sure that the AI interpretation tool told the patient to take a dosage of “15 grams” instead of “50 grams.”
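
As a rough sketch of how such a safeguard might be wired in, the snippet below flags dosage-related phrases for a mandatory read-back before the interpretation is delivered. The CRITICAL_PATTERNS list and the interpret and confirm_with_speaker callables are illustrative assumptions, not a description of any existing product.

```python
import re
from typing import Callable

# Hypothetical safeguard: phrases that must always be cross-checked and
# repeated back to the speaker before the interpretation is delivered.
CRITICAL_PATTERNS = [
    r"\b\d+\s*(mg|g|ml|grams?|milligrams?)\b",   # dosage amounts
    r"\b(allerg\w*|anesthesia|surgery)\b",       # other high-stakes terms
]

def needs_readback(utterance: str) -> bool:
    """Return True if the utterance contains critical information."""
    return any(re.search(p, utterance, re.IGNORECASE) for p in CRITICAL_PATTERNS)

def interpret_with_safeguards(
    utterance: str,
    interpret: Callable[[str], str],
    confirm_with_speaker: Callable[[str], str],
) -> str:
    """Interpret an utterance, forcing a clarification loop for critical info.

    `interpret` produces the translated text; `confirm_with_speaker` repeats
    the critical detail back to the original speaker (e.g., "Did you say 15
    or 50?") and returns a corrected utterance if the tool misheard them.
    """
    if needs_readback(utterance):
        utterance = confirm_with_speaker(utterance)
    return interpret(utterance)
```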

Training AI on data and information for a broader range of languages and cultures is a way to mitigate the interpretation bias that favors English speakers. Historically, non-English-speaking populations haven’t had as much data available to feed AI engines, but GLOBO and other language providers are building robust databases as we speak. Training AI on many different cultures and languages will gradually enable these tools to become better cultural brokers and advocates — while also helping them clarify any discrepancies.

By observing a range of use cases in our research, we are also able to create new, more effective solutions. This is also analogous to self-driving cars, which were initially tested in less populated, rural areas. It is now possible to hail an autonomous taxi in San Francisco because the cars understand more complex roadways than when they were first deployed. The same thing is going to happen with AI language solutions.

That said, I believe we will always need medically certified interpreters to turn to when things go awry.

How should healthcare organizations approach AI interpretation when they think about its limitations and solutions?

While AI is advancing rapidly, our research shows that AI interpretation tools need to be carefully evaluated and implemented with human oversight. No off-the-shelf models exist today that are specifically optimized for medical interpretation use. This is why we need to think hard about creating a healthcare-centric product and putting the right safety controls around it.

I recommend that health systems wanting to test AI language solutions start with small-scale pilots.

Prior to launching a pilot, I also encourage health systems to fully understand the strengths and limitations of the AI interpretation tool they plan to use and to evaluate how those will affect clinicians and patients. Talk to your clinicians to see if they are open to using AI tools. Think about how you are going to communicate with patients about these tools. How will you train providers and staff to use them? How will you involve human interpreters? How will you address potential errors and issues?

Also, it is important to understand that there are no medical interpretation standards right now to assess in-person, telephone, and video medical interpretation. You’ll need to set your own standards for both human and AI interpretation. Additionally, there is no shared understanding of what quality looks like for human interpreters — some are great, while others are ineffective, or somewhere in between. No human or tool is accurate 100% of the time, so what is acceptable?

Ultimately, it is important to note that these tools will continuously get better, but it will take time and involvement from many different people, including clinicians and staff. This means healthcare systems aren’t just users when they adopt these tools. They are helping to shape how good AI interpretation tools will get.

How can healthcare organizations encourage trust in AI interpretation tools among their patients?

Initially, it’s safe to say you’ll have a segment of patients and clinicians who fundamentally won’t trust AI interpretation tools, another segment that will, and others who have no idea.

But it’s more complex than the conclusion that most people won’t trust AI. When a patient or family sits down with a clinician, and they are unable to communicate with each other because they don’t have access to an interpreter, providing a simple translation tool — like a translator app — can be viewed as helpful even if it’s not 100% accurate.

If you ask the clinician or patient, “Would you rather just try and communicate with no tool whatsoever,” they would probably say “no.”

That said, it is important to be transparent about what AI interpretation tools can and can’t do. Also, expect conversations with patients as they become more aware that these technologies exist. For example, from a privacy standpoint, patients may have concerns about whether you save recordings and how you will protect their private health information.

Remember, at the end of the day, what really matters is that the tools are helping, rather than negatively impacting the provider's ability to care for that patient. 

1 Centers for Medicare and Medicaid. Improving care for people with limited English proficiency. January 4, 2024.

2 Schenker Y, et al. Patterns of interpreter use for hospitalized patients with limited English proficiency. J Gen Intern Med. February 19, 2011.

3 Gonzalez-Barrera A, et al. Language barriers in health care: findings from the KFF survey on racism, discrimination, and health. KFF. May 16, 2024.

4 Centers for Medicare and Medicaid. Improving care for people with limited English proficiency. August 2023. 


About the sponsor

GLOBO Language Solutions ("GLOBO") is ranked in Nimdzi’s 2024 top 10 U.S. healthcare interpreting companies facilitating effective patient communication between healthcare providers and LEP patients. The company manages an independent global network of more than 8,000 linguists who speak 430+ languages and dialects. GLOBO supports leading healthcare organizations across the country through on-demand audio, video, on-site, and sign language interpreting; actionable insights; and translation of documents, emails, texts, and chats in a single AI-powered platform. GLOBO has been listed on the Inc. 500|5000 eight times and is a 2024 Vendors Division Semi-Finalist in Healthcare Innovation Magazine's annual Innovator Awards Program. Become a fan of GLOBO on LinkedIn.

Learn more about GLOBO.

This article is sponsored by GLOBO Language Solutions ("GLOBO"), an Advisory Board member organization. Representatives of GLOBO helped select the topics and issues addressed. Advisory Board experts wrote the article, maintained final editorial approval, and conducted the underlying research independently and objectively. Advisory Board does not endorse any company, organization, product or brand mentioned herein.

To learn more, view our editorial guidelines.



AUTHORS

Jennifer Fierke

Senior writer and editor, Sponsorship
