How A ‘Neutral’ Health Algorithm Ended Up Hurting Black Patients

A health care algorithm used in hospitals across the U.S. has been discriminating against black patients, according to new research. The study found that the algorithm consistently prioritised less-sick white patients and screened out black patients from a program meant to help people who need more intensive care.

Predictive algorithms have found their way into many areas of society, including health care. But plenty of research has shown these AIs can have the same sort of biases that their creators do, despite being designed to be “neutral.” These biases exist even in medicine, where systematic racial and gender discrimination toward patients remains commonplace.

According to the authors behind the new paper, though, researchers have rarely had the opportunity to study up close how and why bias can creep into these algorithms. Many algorithms are proprietary, meaning the exact details of how they were programmed—including the sources of data used to train them—are off-limits to independent scientists. That barrier wasn’t an issue for this study, published Thursday in Science.

The authors looked at data from an algorithm developed by the company Optum that’s widely used in hospitals and health care centres, including the hospital where some of the authors worked.

The AI was meant to weigh in on which patients would most benefit from access to a high-risk health care management program. Among other things, the program would give these patients dedicated health care staff when sick and extra appointment slots to visit their doctor as outpatients. But when the authors compared the risk scores generated by the AI with other measures of their real-life patients’ health, such as how many chronic illnesses a patient had, they found the algorithm consistently underestimated black patients’ needs. Under the AI’s scoring, for instance, only 18 per cent of the patients flagged for these programs would be black; the authors estimated the true figure should be closer to 47 per cent.


“This is an extremely important study that indicates why we should not blindly trust AI to solve our most pressing social and societal problems,” Desmond Patton, a data scientist at Columbia University’s School of Social Work who isn’t affiliated with the new research, told Gizmodo.

The AI’s decision-making process was designed to be race-neutral. As the authors found out, though, the other assumptions it was programmed with biased it against black patients. A key variable was how much money had been spent on a patient’s health care up to that point, with patients on whom the most had been spent considered most in need of the program. But black patients don’t see the doctor or get medical care as often as white patients, frequently because they’re poorer. This gap is compounded by the fact that black patients are typically sicker by the time they do visit a hospital, because their chronic health problems have gone untreated.

“The bias arises because the algorithm predicts health care costs rather than illness, but unequal access to care means that we spend less money caring for Black patients than for White patients,” the authors wrote.
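
The mechanism the authors describe can be sketched with a toy simulation. The code below is not the Optum algorithm, and none of its numbers come from the study; the group labels, the 0.7 “access” factor and the 3 per cent referral cut-off are all invented purely to illustrate how ranking patients by spending, rather than by illness, can under-refer a group that receives less care at the same level of sickness.

```python
# Toy simulation (illustrative only, not the actual Optum model):
# two groups are equally sick, but less money is spent on group B
# at any given level of illness, so a cost-based ranking under-refers them.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

group = rng.integers(0, 2, n)   # 0 = group A, 1 = group B
illness = rng.poisson(3, n)     # true health need, same distribution in both groups

# Unequal access to care: at the same illness level, spending on group B is lower.
access = np.where(group == 1, 0.7, 1.0)
cost = illness * access * 1000 + rng.normal(0, 500, n)

# "Algorithm": refer the top 3% of patients ranked by cost to the care program.
referred = cost >= np.quantile(cost, 0.97)
# Ground truth: the top 3% of patients ranked by actual illness.
sickest = illness >= np.quantile(illness, 0.97)

print("Group B share among cost-referred patients:", round(group[referred].mean(), 3))
print("Group B share among the truly sickest patients:", round(group[sickest].mean(), 3))
```

In this made-up example both groups are equally sick, so group B should account for roughly half of the referrals; because the ranking is based on spending, its actual share comes out far lower, the same shape of gap the authors found between the 18 per cent and 47 per cent figures.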

These disparities in medicine and elsewhere aren’t exactly a secret. But if an AI isn’t programmed to account for them, or trained on data from many different groups of people, those disparities go ignored, according to Atul Butte, a senior researcher in biomedical informatics at the University of California San Francisco.

“The analogy I have used in the past is that you or I probably would not be comfortable getting into a self-driving car trained only in Mountain View, California,” Butte, who was not involved in the new research, told Gizmodo. “So we really should be wary about medical algorithms trained with only a small population or in just one race or ethnicity.”

The findings, according to Jessie Tenenbaum, an assistant professor of biostatistics and bioinformatics at Duke University, also not involved in the new work, show why it’s important for outside scientists and companies to work together on improving algorithms once they enter the real world.

“I’m a fan of using AI where it can be helpful, but it’s going to be impossible to anticipate all of the ways these biases can creep in and affect results,” she said. “What’s important, then, is to think about how biased data could affect a given application, to check results for such bias, and as much as possible to use AI methods that enable explainability—understanding why an algorithm came to the conclusion it did.”

To that end, the authors of the current study told the Washington Post that they are already working with Optum to recalibrate the algorithm and that they hope other companies will have their AIs audited as well.

“It’s truly inconceivable to me that anyone else’s algorithm doesn’t suffer from this,” senior study author Sendhil Mullainathan, a professor of computation and behavioural science at the University of Chicago Booth School of Business, told the Washington Post. “I’m hopeful that this causes the entire industry to say, ‘Oh, my, we’ve got to fix this.’”

Regulatory agencies like the U.S. Food and Drug Administration should also proactively enforce better training of these algorithms and require transparent data-sharing from the companies that make them, Butte said.

