Read Advisory Board's take: How providers can guard against bias in AI
While artificial intelligence "holds tremendous potential to improve medicine," if applied incorrectly, the technology could worsen existing health disparities, Dhruv Khullar, a physician and researcher, writes in a New York Times opinion piece.
"[AI] is beginning to meet (and sometimes exceed) assessments by doctors," Khullar writes. The technology "can now diagnose skin cancer like dermatologists, seizures like neurologists, and diabetic retinopathy like ophthalmologists," Khullar writes. And algorithms are continuing to advance.
But while many question the technical capabilities of AI (can it detect pneumonia or cancer?), few consider the potential "ethical" consequences, according to Khullar. "And in a health system riddled with inequity, we have to ask: Could the use of AI in medicine worsen health disparities?" Khullar writes.
He outlines "three reasons to believe" AI could exacerbate existing health disparities in medicine.
1) AI may be trained with narrow, unrepresentative data
"The first is a training problem," he writes. In order for diagnoses to be reliable, "AI must learn to diagnose disease on large data sets," Khullar writes. But he notes that "medicine has long struggled to include enough women and minorities in research," even though they experience different symptoms of and risk factors for disease, Khullar writes. The lack of diversity leads to "erroneous conclusions" and "delays in treatment" for women and minority patients.
For instance, one study found that certain facial recognition technology misclassified less than 1% of light-skinned men, but more than 33% of dark-skinned women. Khullar asks, "What happens when we rely on such algorithms to diagnose melanoma on light versus dark skin?" He adds, "Will using AI to tell us who might have a stroke, or which patients will benefit from a clinical trial, codify these concerns into algorithms that prove less effective for underrepresented groups?"
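To make the training problem concrete, here is a minimal Python sketch, using invented numbers rather than the study's actual data, of how a model's aggregate error rate can look tolerable while hiding a severe failure rate for an underrepresented subgroup:

```python
from collections import defaultdict

# Hypothetical records: (subgroup, true_label, predicted_label).
# The underrepresented group contributes only a few examples.
predictions = [
    ("lighter_skin", 1, 1), ("lighter_skin", 0, 0), ("lighter_skin", 1, 1),
    ("lighter_skin", 0, 0), ("lighter_skin", 1, 1), ("lighter_skin", 0, 0),
    ("lighter_skin", 1, 1), ("lighter_skin", 0, 0), ("lighter_skin", 1, 1),
    ("darker_skin", 1, 0), ("darker_skin", 1, 1), ("darker_skin", 1, 0),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, truth, pred in predictions:
    totals[group] += 1
    errors[group] += int(truth != pred)

overall = sum(errors.values()) / len(predictions)
print(f"overall error rate: {overall:.0%}")  # 17%: looks tolerable in aggregate
for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: {rate:.0%} error rate")  # 0% vs. 67%: the gap the average hides
```

The point of the sketch is simply that a single headline accuracy number is dominated by the majority group; the disparity only appears once performance is broken out by subgroup.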
2) AI may be trained on 'real-world' data that perpetuates real-world biases
Further, since AI uses "real-world data, it risks incorporating, entrenching, and perpetuating the economic and social biases that contribute to health disparities in the first place," Khullar writes.
In medicine, that means there is the potential to reinforce existing biases, particularly in circumstances when a patient has a condition with "complex trade-offs and high degrees of uncertainty," Khullar writes.
For example, if patients with lower incomes "do worse after organ transplantation ... machine learning algorithms may conclude such patients are less likely to benefit from further treatment—and recommend against it," Khullar writes.
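As a rough illustration of that dynamic (the data and decision rule below are invented for the example, not drawn from any real model), a system that naively learns who "benefits" from biased historical outcomes will recommend against the very group the upstream disadvantage harmed:

```python
# Invented historical records: (income_bracket, benefited_from_transplant).
# Worse outcomes for low-income patients here reflect social disadvantage,
# not how well treatment would work with adequate support.
historical = [
    ("low", False), ("low", False), ("low", True), ("low", False),
    ("high", True), ("high", True), ("high", False), ("high", True),
]

for bracket in ("low", "high"):
    outcomes = [benefited for inc, benefited in historical if inc == bracket]
    rate = sum(outcomes) / len(outcomes)
    # A naive "learned" rule recommends against whichever group fared worse
    # historically, baking the existing inequity into future decisions.
    decision = "recommend treatment" if rate >= 0.5 else "recommend against"
    print(f"{bracket} income: observed benefit {rate:.0%} -> {decision}")
```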
3) Even 'fair, neutral' AI may be implemented in ways that disproportionately hurt certain groups
Even if AI is "ostensibly fair, neutral," Khullar writes, the technology still "has the potential to worsen disparities if its implementation has disproportionate effects for certain groups."
As an example, he cites an AI program that could tell clinicians whether a patient should be sent home or to a rehab facility after being discharged following knee surgery. "If an algorithm incorporates residence in a low-income neighborhood as a marker for poor social support, it may recommend minority patients go to nursing facilities" over home-based physical therapy. And "a program designed to maximize efficiency or lower medical costs might discourage operating on those patients altogether," Khullar writes, because nursing facilities cost more than home-based physical therapy.
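A toy sketch makes the proxy problem visible. The feature names and weights below are hypothetical, not taken from any real discharge-planning model; the point is that two otherwise-identical patients can receive different recommendations solely because one lives in a low-income ZIP code:

```python
# Hypothetical sketch: a single proxy feature (a low-income ZIP code standing
# in for "poor social support") can flip a recommendation for an entire group
# of patients, even though the model never sees race or income directly.

def discharge_recommendation(patient: dict) -> str:
    """Toy scoring rule, not a real clinical model."""
    score = 0
    score += 2 if patient["lives_alone"] else 0
    score += 3 if patient["low_income_zip"] else 0  # proxy for social support
    score += 1 if patient["age"] > 75 else 0
    return "nursing facility" if score >= 3 else "home physical therapy"

# Two otherwise-identical patients diverge only on the proxy feature.
patient_a = {"lives_alone": False, "low_income_zip": False, "age": 60}
patient_b = {"lives_alone": False, "low_income_zip": True, "age": 60}
print(discharge_recommendation(patient_a))  # home physical therapy
print(discharge_recommendation(patient_b))  # nursing facility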
"The risk with AI is that these biases become automated and invisible," Khullar writes. But if the technology is properly implemented, AI could make health care "more efficient, more accurate, and ... more equitable," Khullar writes.
The key, Khullar writes, is "being aware of the potential for bias and guarding against it." He adds, "In some cases, this will necessitate counter-bias algorithms that hunt for and correct subtle, systematic discrimination. But most fundamentally, it means recognizing that humans, not machines, are still responsible for caring for patients. It is our duty to ensure that we're using AI as another tool at our disposal—not the other way around" (Khullar, New York Times, 1/31).
Greg Kuhnen, Senior Director, and Andrew Rebhan, Consultant, Health Care IT Advisor
The article brings up an issue that industry stakeholders are increasingly aware of: The US health care system is riddled with health disparities. Our medical research has historically failed to accurately reflect the variability in health outcomes stemming from differences in race, sex, economic status, and other social determinants of health. As we start to rely more on algorithms to automate certain aspects of patient care, we run the risk of creating AI that replicates much of our flawed thinking.
We've covered in our research the basics of how machines generally "learn," and the steps organizations need to take to train predictive and prescriptive models. In short, organizations assemble large sets of historical data, train a model to find patterns in that data, have clinicians and analysts evaluate the model's output against real-world results, and retrain the model as new data and feedback accumulate.
Without this cycle of periodic human evaluation, AI could produce conclusions that are not only wrong, but that magnify existing health disparities. An algorithm is only as good as the data used to train it, and if a model is trained using incomplete data from homogeneous populations or by humans who carry implicit biases, the quality of the output will inevitably suffer.
To combat this problem, you need constant vigilance. AI is not an out-of-the-box solution that you simply set up, press start on, and walk away from. Rather, algorithms require extensive tuning as they're being built, and consistent oversight once they're deployed, to see how they directly affect all patients (not just the "average" patient). We should use AI to augment clinical decision making, not grow complacent as these machines become smarter. AI may eventually reach a threshold of maturity where humans will step back and let machines handle various aspects of patient care, but for now humans are still—and need to remain—very much a part of the equation.
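In practice, that oversight can be as simple as routinely comparing each subgroup's error rate against the model's overall rate. Here is a minimal sketch of the idea, assuming a log of (subgroup, was-the-prediction-wrong) pairs and an alert threshold that is purely illustrative:

```python
from collections import defaultdict

ALERT_THRESHOLD = 0.10  # illustrative: max acceptable gap vs. overall error rate

def audit_subgroups(outcomes):
    """outcomes: iterable of (subgroup, prediction_was_wrong) pairs."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, wrong in outcomes:
        totals[group] += 1
        errors[group] += int(wrong)
    overall = sum(errors.values()) / sum(totals.values())
    for group in totals:
        gap = errors[group] / totals[group] - overall
        if gap > ALERT_THRESHOLD:
            print(f"ALERT: {group} error rate exceeds overall by {gap:.0%}")

# Invented post-deployment log for two patient subgroups.
audit_subgroups([
    ("group_a", False), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", True), ("group_b", False),
])
```

A real monitoring program would track clinically meaningful metrics (false negatives, readmissions) with statistically sound thresholds, but the principle is the same: measure performance by subgroup, not just in aggregate.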
To learn more about artificial intelligence, including its implications for providers, its applications, and considerations for the future, make sure to download our report on the Artificial Intelligence Ecosystem.
Whether AI is a short-term or long-term opportunity, executives need to build a strong foundation for an AI investment. What infrastructure, staff, policies, and other resources will be needed over time?