
Daily Briefing

Charted: How patients really feel about AI in health care


Although most patients feel that artificial intelligence (AI) would improve health care, many still express concerns about potential consequences, including misdiagnoses and privacy breaches, according to a new study from Yale Cancer Center.

Patients' attitudes toward AI in health care

For the study, researchers surveyed 926 patients from Yale Cancer Center (50.9% women and 49.1% men) between Dec. 3 and Dec. 18, 2019.

Overall, most respondents (55.4%) said AI would make health care either somewhat or much better going forward. In comparison, only 6.2% of respondents said AI would make health care either somewhat or much worse.

When asked about AI's role in their own diagnoses or treatments, 66% of respondents said it was "very important" that AI was being used, and 29.8% said it was "somewhat important."

The researchers also found that respondents' comfort levels with AI varied depending on its clinical application. For example, 55% of respondents said they were comfortable with AI reading chest radiographs, but only 31.2% said they were comfortable with AI making cancer diagnoses.


In addition, between 58% and 71.5% of respondents said they would be uncomfortable if an AI algorithm diagnosed them with a condition without a clear rationale, even if the algorithm had a high accuracy rate.

"Our group was somewhat surprised that there were different levels of comfort regarding AI across clinical venues," said Sanjay Aneja, an assistant professor of therapeutic radiology at Yale Cancer Center and Smilow Cancer Hospital and a senior author on the study. "Much of the work in medical AI is focused on trying to identify the various arenas in which AI can successfully impact healthcare for patients, but rarely do we ask ourselves which areas patients really want AI to impact their health care."

Many respondents also voiced concerns about the potential consequences of AI, such as misdiagnoses (91.5%), privacy breaches (70.8%), less time with providers (69.6%), and increased health care costs (68.4%).

"In many ways, our work highlights a potential blind spot among AI researchers, which needs to be addressed as these technologies become more common in clinical practice," Aneja said. "Patient education, concerns, and comfort levels should be taken into consideration when planning for integration of AI." (Gonzalez, Becker's Hospital Review, 5/10; Gaudette, Yale School of Medicine, 5/9; Khullar et al., JAMA Network Open, 5/4)

