3 principles for the equitable use of AI in healthcare

Ensuring the equitable use of artificial intelligence (AI) is crucial for addressing existing healthcare disparities. Learn how healthcare organizations can implement three key principles to promote equity in the design, deployment, and financing of AI solutions.

The principles

1. Avoid bias in model design

Why does this matter? 

Poor design choices can both exacerbate existing inequities for patients and create new ones. AI can perpetuate human and societal biases that further marginalize minority communities in healthcare. Understanding the ways inequity shows up in AI can help your organization avoid design mistakes that compound it.

Ways bias manifests in AI design

The data and algorithms that shape AI models are influenced by humans with conscious and unconscious biases. Those designing AI solutions can overlook biases in the data, make unfounded assumptions about the people using the model, or fail to anticipate unintended consequences of an application. Moreover, AI models can inadvertently be trained on historical biases and stereotypes already ingrained in healthcare processes and data.

A study examining the relationship between race and receipt of pain medication in an emergency department (ED) found that the risk of receiving no pain medication was 66% greater for Black patients.1 A similar study of pain management for children with appendicitis in the ED found that Black patients with moderate or severe pain were less likely to be prescribed pain medication.2 An AI model trained on such data to predict the need for pain medication could therefore reinforce these inequities in pain treatment for Black patients.

Personal biases, historical biases, stereotypes, and poor data quality all contribute to algorithmic bias. Algorithmic bias is more likely to be perpetuated when there is a lack of diversity among the coders, programmers, and stakeholders who design and evaluate AI models. The absence of diverse demographics and perspectives increases the chances that biases go overlooked and tends to produce models that work best for people who share the designers' backgrounds.3

What can you do about it?

Organizations should take the following actions to avoid bias in model design. Each step builds on the previous one.

  • Ensure a demographically diverse team of programmers and coders. Individuals from different backgrounds (race, gender, sexual orientation, ability, etc.) are more likely to consider an algorithm's implications for people who share those backgrounds.3 If you're partnering with vendors, ask about their diversity, equity, and inclusion efforts and the diversity of their programming teams. Failing to do so can produce costly long-term consequences, such as lawsuits, alienated consumer segments, and lost credibility from discriminatory AI models.
  • Involve a diverse group of stakeholders in evaluating AI model design. Health systems and vendors should collaborate with health equity leaders, clinicians, and patients to learn how AI can best serve your diverse patient population. Health equity leaders possess extensive knowledge of your patients' needs and the most effective ways to distribute patient-facing AI solutions equitably. Clinicians can identify potential biases and ensure the solution aligns with clinical best practices. Patients can likewise flag biases and pain points that affect their use of the AI solution.
  • Design algorithms with diverse and representative training data. Algorithmic bias appears when a model's training data does not match the patient population it serves. Developers must integrate data that reflects patients' demographic, socioeconomic, and geographic backgrounds. Techniques such as oversampling, undersampling, or data augmentation can balance the representation of different groups in the training data (see the resampling sketch after this list). Including diverse, representative data helps reduce bias against underrepresented groups.
  • Conduct a thorough review of your training data. Racism, stereotypes, and historical biases can enter AI models through biased literature, journals, internet content, and human input. Train developers to critically examine literature and data so that biased sources do not end up in the training data. A large quantity of data is important, but not at the cost of introducing bias into the model.
  • Conduct regular bias audits, monitoring, and evaluation. A bias audit evaluates the AI system's outputs for potential biases across demographic groups (see the audit sketch after this list). Catching biases early lets you make the adjustments needed to improve the model's fairness and accuracy. Continuous monitoring and evaluation of the model's performance are also necessary to mitigate biases that emerge over time: analyze performance across demographic groups, watch for disparate impacts, and adjust as needed to ensure equity.
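
The rebalancing step mentioned above can start with something as simple as resampling each demographic group to comparable size before training. Below is a minimal sketch in Python using pandas; the DataFrame, the "group" column name, and the oversample-to-parity strategy are illustrative assumptions, not a method prescribed by this article.

```python
# Minimal sketch: oversample underrepresented groups so every group is
# represented equally in the training data. The DataFrame and the "group"
# column are hypothetical names used for illustration.
import pandas as pd


def oversample_to_parity(train_df: pd.DataFrame, group_col: str = "group",
                         random_state: int = 0) -> pd.DataFrame:
    """Resample each demographic group (with replacement) up to the size of
    the largest group, so no group is underrepresented during training."""
    target_size = train_df[group_col].value_counts().max()
    balanced_parts = [
        rows.sample(n=target_size, replace=True, random_state=random_state)
        for _, rows in train_df.groupby(group_col)
    ]
    # Shuffle so the model does not see examples ordered by group.
    return (
        pd.concat(balanced_parts)
        .sample(frac=1, random_state=random_state)
        .reset_index(drop=True)
    )
```

Undersampling the majority group or augmenting minority-group records are alternatives when duplicating records would risk overfitting.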
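
A recurring bias audit can likewise begin with a short script that compares model outputs across demographic groups and flags disparities. The sketch below assumes a results table with hypothetical "group", "y_true", and "y_pred" columns and uses the common four-fifths disparate-impact threshold as an example; none of these specifics come from the article.

```python
# Minimal sketch of a bias audit: compare per-group selection rates and
# accuracy, and flag groups whose selection rate falls well below the
# highest group's rate. Column names and the 0.8 threshold are assumptions.
import pandas as pd


def audit_by_group(results: pd.DataFrame, group_col: str = "group",
                   label_col: str = "y_true", pred_col: str = "y_pred",
                   threshold: float = 0.8) -> pd.DataFrame:
    """Return a per-group report with selection rate, accuracy, and a
    disparate-impact flag relative to the best-served group."""
    rows = []
    for group, g in results.groupby(group_col):
        rows.append({
            "group": group,
            "n": len(g),
            "selection_rate": (g[pred_col] == 1).mean(),
            "accuracy": (g[pred_col] == g[label_col]).mean(),
        })
    report = pd.DataFrame(rows).set_index("group")
    best_rate = report["selection_rate"].max()
    report["disparate_impact"] = report["selection_rate"] / best_rate
    report["flagged"] = report["disparate_impact"] < threshold
    return report
```

In practice, an audit like this would run on a schedule, cover additional metrics such as false-negative rates and calibration, and route flagged groups into the monitoring and adjustment process described above.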

INTENDED AUDIENCE
  • Hospitals and health systems

AFTER YOU READ THIS
  • You will understand the importance of promoting equity in the design, deployment, and financing of AI solutions.
  • You will know how to prevent AI from exacerbating existing inequities or creating new ones.
