Over the last year, interest in generative artificial intelligence (AI) has grown rapidly, especially in healthcare. However, lawmakers and healthcare leaders have expressed concerns about the potential risks of AI and called for more regulations on the technology to protect both providers and patients.
In late 2022, OpenAI's generative AI model ChatGPT was released publicly. Since then, interest in generative AI has surged. Many other companies, including Google, Amazon, Epic Systems, and Oracle Health, have also either released their own generative AI models or added generative AI capabilities to their existing products.
Healthcare leaders in particular have been interested in utilizing generative AI technology in their organizations. According to a survey from UPMC's Center for Connected Medicine and KLAS Research, AI is "dominating the thoughts of many executives at health systems." Almost 80% of respondents said that AI was the most exciting emerging technology.
"You can't go through a day without having a conversation about AI," said Kevin Vigilante, EVP at Booz Allen's healthcare business. "Clients are asking about it. Everyone wants to do something."
Eileen Tanghal, founder and general partner of Black Opal Ventures, noted that the current rise in generative AI is similar to the rise of the internet back in 1995. "Almost every company now has some aspect of AI in their pitches or in their value proposition," Tanghal said. "It's ubiquitous. It's becoming more accessible to people who are not necessarily data scientists or developers."
According to an October survey from GSR Ventures, 87% of digital health venture capital investors said they had altered their investment strategies due to generative AI models like ChatGPT and others on the market.
As generative AI becomes more widespread, many healthcare leaders have expressed concerns about its potential risks.
For example, researchers from Stanford University found that generative AI tools like ChatGPT and Google's Bard promoted race-based medicine. Because of the potential for harm, the researchers said the technology was not ready for clinical use or integration.
According to David Newman-Toker, director of the division of neurovisual and vestibular disorders in the department of neurology at Johns Hopkins University School of Medicine, AI models should be trained on "gold-standard data sets" to ensure healthcare providers aren't "converting human racial bias into hard and fast AI-determined rules."
Generative AI can also quickly spread disinformation on health topics, such as vaccines. "The risk, as we've seen in some of the early trials of ChatGPT, is that it can create its own answers and argue it's right," said Intermountain Health president and CEO Rob Allen.
Federal lawmakers have also expressed their own concerns about the use of AI in healthcare. During a recent House Energy and Commerce Health Subcommittee hearing, lawmakers questioned AI and healthcare experts on potential risks of the technology, including bias, medical liability, physician burnout, privacy, and more.
"It is critical that safeguards are put in place to protect the privacy and security of patient's data," said Rep. Frank Pallone Jr. (D-N.J.).
Last month, President Joe Biden signed an executive order establishing the first standards for AI in healthcare and other industries. Under the order, HHS has 90 days to establish an AI Task Force to develop policies and frameworks on how to responsibly use and deploy AI and AI-enabled technologies in health and human services. HHS, the Department of Veterans Affairs, and the Department of Defense will also develop a framework to help identify and capture clinical errors that occur with AI in healthcare settings.
The American Medical Association (AMA) has also released its own guidelines on how to limit the risks of AI technology. The guidelines cover seven key areas: oversight, transparency, disclosure and documentation, generative AI, privacy, bias mitigation, and liability.
"The AMA recognizes the immense potential of health care AI in enhancing diagnostic accuracy, treatment outcomes, and patient care," said AMA president Jesse Ehrenfeld. "However, this transformative power comes with ethical considerations and potential risks that demand a proactive and principled approach to the oversight and governance of health care AI." (Perna, Modern Healthcare, 11/30; Firth, MedPage Today, 11/30; McFarlane, The Hill, 11/29; Hollowell, Becker's Clinical Leadership, 11/30)