Daily Briefing

The promise (and peril) of hospital AI, according to the Wall Street Journal


EDs and ICUs around the United States are using artificial intelligence (AI) to identify and treat high-risk patients with notable success—but several factors can cause algorithms to fail, potentially resulting in missed diagnoses and harmful recommendations, Laura Landro reports for the Wall Street Journal.

AI in health care

According to Landro, AI algorithms process large sets of data in EHRs to identify patterns and predict patient outcomes and necessary treatments. They can create early-warning systems that help hospital staff identify subtle, sometimes serious, changes in a patient's condition that are often missed in a busy unit.

Currently, there is a wide range of AI projects in the health care industry aimed at helping staff identify and treat high-risk patients, including platforms that help detect cancer in radiology images and programs that determine which drugs to test on patients with various diseases.

AI prediction technology holds significant promise "to transform care and improve patient safety in ER and ICU cases—as long as the systems can be designed to avoid some of the medical, technological and ethical concerns that have emerged in mixing the science of machine learning with the art of medicine," Landro writes.

"Clinicians still have to be in the driver's seat, but [AI] and predictive models provide us with a way to put the most insights gleaned from voluminous amounts of data at their fingertips, so at the right moment of care it can improve patient outcomes," said Vincent Liu, a researcher and intensive-care specialist at Kaiser Permanente.

How hospitals are using AI

In one example, Duke University Hospital developed its own machine-learning model to predict sepsis, trained on data from its EHRs, after noticing that a commonly used sepsis-detection model was triggering false alarms. That is a fairly common issue, Landro writes, since algorithms from outside vendors and developers are typically based on data that does not reflect a given hospital's patients.

To create the model, called Sepsis Watch, Cara O'Brien, a hospital-medicine physician and assistant professor at Duke University School of Medicine, led a team of doctors and nurses who trained it on more than 32 million data points, including vital-sign measurements, lab reports, and medication administrations, drawn from more than 42,000 inpatient encounters over a 14-month period; 21.3% of those patients were diagnosed with sepsis.
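
The article doesn't describe Duke's pipeline in code, but the workflow it outlines (assembling labeled inpatient encounters and fitting a classifier on vital-sign, lab, and medication features) might look something like the sketch below. The file name, feature handling, and gradient-boosting model are illustrative assumptions, not details of Sepsis Watch.

```python
# Illustrative sketch of a sepsis-model training workflow like the one
# described above. The file name, feature set, and model choice (gradient
# boosting) are assumptions for illustration, not Duke's actual pipeline.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical flat extract: one row per encounter, with vital-sign, lab,
# and medication features plus a label (sepsis = 1 if later diagnosed).
df = pd.read_csv("inpatient_encounters.csv")
feature_cols = [c for c in df.columns if c not in ("encounter_id", "sepsis")]

X_train, X_test, y_train, y_test = train_test_split(
    df[feature_cols], df["sepsis"], test_size=0.2, stratify=df["sepsis"]
)

model = GradientBoostingClassifier()
model.fit(X_train, y_train)

# Discrimination on held-out encounters; a real clinical validation would
# also check calibration and how much lead time alerts provide.
print("AUROC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```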

Every five minutes, the model pulls data on a patient's vital signs, medications, and lab results, Landro writes. It then analyzes 86 different variables, sampling them multiple times, to identify patterns that could indicate the onset of sepsis.
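
As a rough picture of what scoring "every five minutes" involves, the sketch below polls for a patient's latest observations, computes a risk score, and flags the patient when the score crosses a threshold. The fetch helper, the alerting cutoff, and the notification hook are all hypothetical stand-ins; the article doesn't publish Sepsis Watch's internals.

```python
# Minimal sketch of a periodic early-warning scoring loop, assuming a
# trained model and a hypothetical EHR query helper. Not Sepsis Watch's
# actual logic; the names and the threshold are illustrative.
import time

RISK_THRESHOLD = 0.8          # assumed alerting cutoff
SCORING_INTERVAL_SEC = 300    # "every five minutes," per the article

def score_patient(model, patient_id, fetch_latest_observations):
    # fetch_latest_observations is a hypothetical EHR accessor returning
    # the current values of the model's input variables (the article says
    # Sepsis Watch analyzes 86 of them).
    features = fetch_latest_observations(patient_id)
    return model.predict_proba([features])[0][1]

def monitor(model, patient_ids, fetch_latest_observations, notify):
    while True:
        for pid in patient_ids:
            risk = score_patient(model, pid, fetch_latest_observations)
            if risk >= RISK_THRESHOLD:
                # notify is a stand-in for paging the response team
                notify(pid, risk)
        time.sleep(SCORING_INTERVAL_SEC)
```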

In the 15 months since it launched Sepsis Watch, Duke's sepsis treatment compliance has risen to 64%, up from 31% over the previous 18 months. According to Mark Sendak, a physician and clinical-data scientist at Duke who co-led the project, a final analysis is still underway, but mortality appears to be down.

Similarly, HCA Healthcare developed a predictive algorithm called Sepsis Prediction and Optimization of Therapy, or SPOT, that continuously monitors patient data for signs of impending sepsis. According to Landro, the algorithm can detect sepsis six hours earlier, and more accurately, than clinicians, which has helped the health system cut sepsis mortality rates across 160 hospitals by nearly 30%.

Separately, Kaiser Permanente recently developed Advance Alert Monitor, a predictive model that can identify roughly 50% of patients at risk of imminent decline that could trigger a Code Blue. According to Landro, the model continuously monitors patient data to generate scores predicting a patient's risk of ICU transfer or death, and those scores are shared early enough that staff can intervene while patients are still comparatively stable and may need only enhanced screening or additional monitoring.

Further, to decrease the risk of alert fatigue, the model's results are overseen remotely by specially trained nurses, ensuring that bedside nurses can focus on immediate patient care, Landro writes. If and when a patient's score crosses a key threshold, the remote staff gets in touch with the rapid-response nurse on the ward, who then initiates a formal assessment and loops in the patient's physician.
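
That workflow amounts to tiered, threshold-based escalation, with a remote reviewer sitting between the model and the bedside. A minimal sketch of the triage logic follows, with invented score cutoffs (the article doesn't give Kaiser Permanente's actual thresholds):

```python
# Tiered-escalation sketch for a deterioration-risk score, following the
# workflow described above. The cutoffs and names are invented for
# illustration; Kaiser Permanente's real thresholds aren't public here.
from dataclasses import dataclass

@dataclass
class Alert:
    patient_id: str
    score: float
    action: str

def triage(patient_id: str, score: float) -> Alert:
    if score >= 0.9:
        # Remote nurse contacts the ward's rapid-response nurse, who
        # starts a formal assessment and loops in the physician.
        action = "escalate_to_rapid_response"
    elif score >= 0.6:
        # Patient is still comparatively stable: enhanced screening or
        # additional monitoring, reviewed remotely to limit alert fatigue.
        action = "enhanced_monitoring"
    else:
        action = "routine_care"
    return Alert(patient_id, score, action)

print(triage("patient-123", 0.93))
```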

Overall, according to a study published in November 2021, hospitals that used the model had lower mortality rates, lower rates of ICU admission, and shorter lengths of stay than hospitals that did not. According to Landro, the model is now in use across 21 Kaiser Permanente hospitals.

Challenges with using AI

As AI systems take on a broader role in hospitals, researchers will continue to search for ways to better identify when and why they fail, Landro writes. As she explains, while algorithms use statistics to track patterns in clinical data and predict patient outcomes, several factors can result in a mismatch between the data the algorithm is based on and its real-world use. Those flaws, if undetected, "could cause an algorithm to fail to diagnose severely ill patients or recommend harmful treatments."

According to Karandeep Singh, an assistant professor of learning health sciences and internal medicine at the University of Michigan and chair of Michigan Medicine's clinical-intelligence committee, issues sometimes occur when developers take a model trained on data from one health system and deploy it in a different one with completely different patient demographics. Developers may also use a model in a hospital over an extended period without updating its data, or apply predictive models built on one ethnic or racial population to another without factoring in more inclusive data sets.
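
One common way to catch the mismatches Singh describes is to compare the distribution of each input variable in live data against the training data and flag drift before it degrades predictions. The sketch below uses the population stability index (PSI), a generic drift metric; it's an illustration of the problem, not a method attributed to Singh or Michigan Medicine, and the data and thresholds are made up.

```python
# Generic drift check comparing a live feature distribution against the
# training distribution using the population stability index (PSI).
# The sample data and the 0.2 threshold are illustrative conventions.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    # Bin both samples on the training data's quantiles.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    actual = np.clip(actual, edges[0], edges[-1])
    e_pct = np.histogram(expected, edges)[0] / len(expected) + 1e-6
    a_pct = np.histogram(actual, edges)[0] / len(actual) + 1e-6
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Example: heart-rate values from training vs. a new hospital's patients.
train_hr = np.random.normal(80, 12, 10_000)   # stand-in training sample
live_hr = np.random.normal(90, 15, 2_000)     # stand-in live sample

drift = psi(train_hr, live_hr)
# A common rule of thumb: PSI > 0.2 signals meaningful distribution shift.
print(f"PSI = {drift:.3f}", "-> retrain/recalibrate" if drift > 0.2 else "-> ok")
```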

"Right now, hospitals are overwhelmed by the number of AI models available to them," Singh said. To safely use the tools in future, hospitals must "understand when AI is not working as intended, and prioritize problems based on whether they are solvable rather than simply what AI tools are available." (Landro, Wall Street Journal, 4/10)

