As interest in artificial intelligence (AI) chatbots such as ChatGPT continues to grow, many healthcare organizations are taking notice, weighing potential uses of the technology as well as areas where caution may be warranted as it develops.
ChatGPT is a "generative AI" program developed by OpenAI that produces text in response to prompts; related generative AI models can create images, audio, and video. Since its release, interest in ChatGPT's potential use in the healthcare industry has grown steadily, especially as the program continues to improve.
In January, ChatGPT successfully passed the U.S. Medical Licensing Exam (USMLE), which medical students often spend hundreds of hours studying for. According to researchers, the program surpassed the typical USMLE passing threshold of 60% in most of their analyses.
In the short time since then, ChatGPT has continued to improve. In March, the latest version of ChatGPT, GPT-4, scored more than 20 points above the USMLE passing threshold. In addition, Google's medically focused generative AI model, Med-PaLM 2, scored 85% on a USMLE practice test.
As AI models become more sophisticated, many healthcare organizations are working to incorporate ChatGPT and other models into their regular processes and tasks.
For example, the University of Kansas Health System (UKHS) is using generative AI technology to assist with clinical note-taking. The health system is currently working with Abridge, a medical AI company, to summarize clinical conversations during patient visits using recorded audio.
According to Greg Ator, UKHS' chief medical informatics officer, medical records are a logical starting point for AI because clinicians can quickly trace where the AI-generated information came from. Clinicians can also easily replay a recording of a visit if the AI misses important information.
UKHS plans to implement the technology over the next few months, and Ator said he is optimistic the rollout will be complete by the end of the year.
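Abridge's pipeline is proprietary, but the general pattern it illustrates, transcribing visit audio and then asking a large language model to draft a summary for clinician review, can be sketched with public tools. The minimal sketch below uses OpenAI's Python SDK; the file name, prompt, and model choices are illustrative assumptions, not details of UKHS' or Abridge's actual system, and real patient audio should never be sent to a third-party API without appropriate privacy safeguards.

```python
# Minimal sketch of a two-step "ambient documentation" pipeline:
# speech-to-text on a visit recording, then LLM summarization.
# Illustrative only -- not Abridge's or UKHS' actual implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Step 1: transcribe the recorded visit (file name is hypothetical).
with open("visit_recording.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# Step 2: ask a chat model to draft a clinical summary for human review.
completion = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "Summarize this patient-visit transcript as a draft "
                "clinical note. Flag anything ambiguous for clinician review."
            ),
        },
        {"role": "user", "content": transcript.text},
    ],
)

# The draft goes to the clinician, who remains the final editor and can
# check it against the original recording, per the oversight Ator describes.
print(completion.choices[0].message.content)
```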
So far, early adopters of AI technology in healthcare have focused largely on administrative rather than clinical or decision-making uses. Because the technology is still new and prone to mistakes, human oversight remains necessary in many areas, and AI models are unlikely to replace human workers in healthcare anytime soon.
"I think that for some time forward, we're going to continue to need to have humans in the loop because the AI is far from perfect," said Erik Brynjolfsson, director of the digital economy lab at Stanford University's Institute for Human Centered AI. "It can't do a lot of things."
With AI technology continuing to grow in prominence, healthcare leaders have expressed both excitement about its potential uses and concerns about the problems it could create for organizations and individuals.
At ViVE 2023, Craig Richardville, chief digital and information officer at Intermountain Health, said that "ChatGPT has bumped it [AI] up to the next level" and could provide an "opportunity to take advantage and drive value for our health system."
AI is "probably the most transformative technological shift in decades," said Justin Norden, a partner at GSR Ventures and an adjunct professor at Stanford Medicine. "We've just leapfrogged previous technologies and companies trying to build AI solutions to automate something, and in some cases, we're watching that kind of technology be solved overnight."
Michael Hasselberg, chief digital health officer at the University of Rochester Medical Center, said he had "never been more excited about technology advancements in healthcare than what I've seen over the last several months."
Instead of AI eventually replacing nurses and physicians, Hasselberg said he's "convinced" that the technology will actually keep providers from leaving the profession. He added that he's had clinicians "literally break down in tears when they see the possibilities of ChatGPT 4."
Although some leaders are enthusiastic, technology experts in the healthcare field have raised concerns about AI's privacy and security implications.
"I'm concerned my clinicians" will use AI to generate patient notes "with no recognition whatsoever about the security risk, where that data's being stored, where it's coming from, and what they're doing with it," said chief compliance at privacy officer at Erlanger Health System.
"There's definitely a privacy concern," said Jesse Fasolo, head of technology infrastructure and cybersecurity at St. Joseph's Health. "… It's in an infantile state where people want to try it, and I think [organizations should have] boundaries and borders around it and communication because we're seeing people wanting to try it, access it."
Overall, Micky Tripathi, national coordinator for health information technology at HHS, said it was "remarkable" how many ideas people have had about using AI in healthcare. However, even with the "tremendous excitement" around AI, he noted that people "ought to feel tremendous fear also."
"[I]nappropriately used, there are equity issues, there are safety issues, there are quality issues, there's all sorts of issues without appropriate transparency and appropriate governance over how these are used at a local level," Tripathi said. (Turner, Modern Healthcare, 3/28; Nori et al., Microsoft, 3/24; Gates, GatesNotes, 3/21; Kayser, Becker's Health IT, 4/5; Landi et al., Fierce Healthcare, 4/4)