With artificial intelligence (AI) chatbots growing in prominence, many people are testing their potential applications across a range of areas, including healthcare. However, experts say significant work will likely be needed before these chatbots become a useful resource for either patients or providers.
According to USA Today, AI chatbots, such as ChatGPT and others, "promise to upend medical care … providing patients with more data than a simple online search and explaining conditions and treatments nonexperts can understand."
So far, the potential uses for AI chatbots in healthcare are varied, including:
Medical education
Recently, two AI language models passed the U.S. Medical Licensing Exam, which medical students often spend hundreds of hours studying for. According to STAT, this development suggests that medical students may be able to use AI chatbots as a studying aid as they prepare for exams.
For example, students may prompt a chatbot to create unique memory devices or explain complex concepts in simpler language to help their understanding. Chatbots could also generate practice questions with detailed explanations for both correct and incorrect answers.
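As a rough illustration only, the sketch below shows how a student might script that kind of prompt against a chatbot API. It is a minimal sketch assuming OpenAI's Python SDK; the model name, prompt wording, and exam topic are illustrative assumptions, not details from the original reporting.

```python
# A minimal sketch, assuming the OpenAI Python SDK (v1.x) and an
# OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; any chat model would do
    messages=[
        {
            "role": "system",
            "content": "You are a tutor helping a medical student prepare "
                       "for the U.S. Medical Licensing Exam.",
        },
        {
            "role": "user",
            "content": "Write one multiple-choice practice question on "
                       "acid-base disorders, then explain why each answer "
                       "choice is correct or incorrect.",
        },
    ],
)

# Print the generated practice question and explanations.
print(response.choices[0].message.content)
```

As the article goes on to note, anything the model returns would still need to be checked against an authoritative source before a student relies on it.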
This technology could also potentially simulate a patient interaction to help a physician refine their diagnostic skills and clinical acumen — although caution would be required since chatbots should not be used as a primary source of information without verification.
Administrative tasks
In 2018, 70% of physicians said they spent at least 10 hours a week on paperwork and other administrative tasks, and almost a third said they spent 20 hours or more. These nonclinical tasks often take away physicians' time with patients and contribute to burnout.
Some potential tasks that chatbots could handle include identifying billing codes for medical procedures or services and writing letters to insurers and third-party contractors to advocate for patients' needs. Although there is still room for improvement, particularly when it comes to medical code accuracy, chatbots could help providers save time on these nonclinical tasks.
Clinical support
According to STAT, the use of chatbots in clinical medicine "should be approached with greater caution than its promise in educational and administrative work" since the risk of inaccurate information is "significant."
In clinical practice, chatbots could assist with documentation by generating medical charts, progress notes, and discharge instructions. For example, Jeremy Faust, an emergency medicine physician at Brigham and Women's Hospital, said the chart template ChatGPT provided for a fictional patient with a cough was "eerily good."
Chatbots could also help providers brainstorm potential diagnoses. In an informal study, Ateev Mehrotra, a professor of healthcare policy at Harvard Medical School and a hospitalist at Beth Israel Deaconess Medical Center, found that ChatGPT provided the correct diagnosis and appropriate care recommendations about as well as doctors, and much better than online symptom checkers, when given hypothetical vignettes.
"If you gave me those answers, I'd give you a good grade in terms of your knowledge and how thoughtful you were," Mehrotra said.
Although there are potential benefits to AI chatbots in healthcare, some language experts have argued that the technology is not an appropriate source of medical information.
"It isn't a machine that knows things," said Emily Bender, a linguistics professor at the University of Washington. "All it knows is the information about the distribution of words."
Although chatbots can predict which words are likely to come next, Bender said this is not the same as reasoning through a question. For example, if someone asks, "What's the best treatment for diabetes?" a chatbot might respond with "metformin," not based on any research but simply because the word often appears alongside "diabetes treatment."
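Bender's point can be made concrete with a deliberately oversimplified sketch. Real chatbots use far larger neural models, but the toy bigram counter below, trained on an invented three-sentence "corpus," shows how "metformin" can win purely on word-frequency statistics, with no medical reasoning involved:

```python
from collections import Counter, defaultdict

# Invented three-sentence "training corpus" standing in for web-scale text.
sentences = [
    "the best treatment for diabetes is metformin",
    "a common treatment for diabetes is metformin",
    "another treatment for diabetes is insulin",
]

# Build a bigram table: how often does each word follow each other word?
next_word_counts = defaultdict(Counter)
for sentence in sentences:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        next_word_counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent follower of `word`: pattern matching, not reasoning."""
    return next_word_counts[word].most_common(1)[0][0]

# Completing "...treatment for diabetes is" yields "metformin" only because
# that word co-occurred most often in the corpus, not because the model
# weighed any clinical evidence.
print(predict_next("is"))  # -> metformin
```

Scaling the same idea up to billions of sentences produces far more fluent output, but the underlying operation is still pattern completion rather than fact-checking.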
Bender also noted that chatbots can exhibit racism and bias, since the datasets they're trained on may contain both. "Language models are very sensitive to this kind of pattern and very good at reproducing them," she said.
Another issue with AI chatbots is that the sources they're trained on are often not disclosed. Although some chatbots are being trained specifically on peer-reviewed academic literature, others rely on text from internet sources that may not be vetted in the same way.
Without any identifiable sources, chatbots could use "flagrantly wrong information and medical scams" to generate responses, USA Today writes.
There is also the potential for chatbots to deceive users by generating false information. For example, when Wenda Gao, a pharmaceutical executive, asked ChatGPT for studies on a gene involved in the immune system, all three studies the chatbot provided looked real but did not exist.
"It looks so real," Gao said, adding that the answers given by chatbots "should be fact-based, not fabricated by the program."
According to Robert Pearl, former CEO of Kaiser Permanente, AI chatbots may be rough now, but they will continue to improve over time as they take in more feedback and "learn," just like a new medical intern improves with more experience.
"I am certain that five to 10 years from now, every physician will be using this technology," Pearl said. And if providers use chatbots to help empower their patients, "we can improve the health of this nation."
Although there is significant potential for chatbots and other AI technology, John Halamka, president of Mayo Clinic Platform, said there needs to be "guardrails and guidelines" to ensure that it is used safely and effectively.
Currently, the Coalition for Health AI, which includes 150 experts from academic institutions, government agencies, and tech companies, is working to develop guidelines for AI algorithms in healthcare.
Halamka, who is part of the Coalition, said his recommendations include requiring medical chatbots to disclose their training sources, as well as ongoing monitoring of how well AI technology performs so the public is aware of both positive and negative outcomes.
Overall, experts say that, despite all the potential concerns and problems associated with AI, the technology will likely continue to grow in prominence, including in healthcare, as more companies develop and introduce their own AI technologies.
"The idea that we would tell patients they shouldn't use these tools seems implausible. They're going to use these tools," Mehrota said. "The best thing we can do for patients and the general public is (say), 'hey, this may be a useful resource, it has a lot of useful information – but it often will make a mistake and don't act on this information only in your decision-making process.'" (Weintraub, USA Today, 2/26; Doshi/Bajaj, STAT, 2/1)