Data bias can make clinical risk models inaccurate, creating ethical problems

Disease and treatment · 26 May 2022 · 3 min · Assistant Professor Kristin M. Kostick-Quenet · Written by Kristian Sjøgren

Clinical risk models are increasingly used to assess patients' risk. These models are often based on artificial intelligence and machine-learning algorithms that make predictions from large data sets. However, the data sets are often not comprehensive, and the resulting models can be imprecise. According to a researcher, this creates both clinical and ethical problems.

Artificial intelligence is reshaping healthcare systems, with algorithms helping physicians and patients make vital decisions about treatment.

However, the algorithms are no better than the data on which they are based, and bias in the data can affect the algorithms’ recommendations.

According to a new study, this poses clinical, ethical and legal dilemmas.

“The main message is that even if a certain data set does not initially appear to be biased, there may well be racial bias further upstream in the factors that influence whether certain patients are included in the data set. This may result from clinical or social factors or explicit bias from healthcare professionals involved in treatment decisions. No legislation requires the developers of the algorithms or clinicians to ensure that the data sets are inclusive and bias-free, and this means that using the algorithms in practice can be unethical or unintentionally harmful in the worst-case scenario,” explains Dr. Kristin M. Kostick-Quenet, Assistant Professor, Baylor College of Medicine, Houston, Texas, United States.

The research has been published in the Journal of Law, Medicine & Ethics.

Algorithms play an increasing role in clinical decisions

The background for the study is the development of an algorithm that helps people with severe heart failure to decide whether to have a mechanical left ventricular assist device (LVAD) implanted to assist their heart in pumping blood.

The decision is difficult because the LVAD can save a person’s life but requires changing certain aspects of one’s lifestyle, which can affect some people’s perceived quality of life. For example, many people with an LVAD are dependent on a caregiver, which can affect feelings of independence and autonomy. Further, the person with an LVAD may view it as burdensome to carry a battery belt for the rest of their life and to ensure that the battery is connected at all times or face potentially fatal consequences.

The algorithm uses a wealth of available data on previous patients who have had an LVAD implanted to determine whether an individual is at high risk of dying after the LVAD is inserted.

According to Kostick-Quenet, knowing these individual estimates can help physicians and patients to make informed decisions about whether to get the LVAD inserted.
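To make the idea concrete, the following is a minimal sketch of how such a registry-based risk calculator could work in principle: a statistical model fitted to records of previous recipients and used to estimate a new candidate's probability of dying after implantation. The data, column names and model choice are illustrative assumptions, not the actual LVAD registry or the tool discussed in the article.

```python
# Hypothetical sketch of a registry-based risk calculator.
# Data, column names and model are illustrative, not the real LVAD registry.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Records of previous LVAD recipients (made-up values).
registry = pd.DataFrame({
    "age":           [58, 64, 71, 49, 66, 55, 73, 61],
    "ejection_frac": [18, 15, 12, 20, 14, 17, 10, 16],
    "creatinine":    [1.1, 1.8, 2.2, 0.9, 1.5, 1.2, 2.5, 1.4],
    "died_1yr":      [0, 1, 1, 0, 1, 0, 1, 0],   # outcome after implantation
})

features = ["age", "ejection_frac", "creatinine"]
model = LogisticRegression().fit(registry[features], registry["died_1yr"])

# Estimate the risk for a new candidate (hypothetical values).
candidate = pd.DataFrame([{"age": 35, "ejection_frac": 19, "creatinine": 1.0}])
risk = model.predict_proba(candidate[features])[0, 1]
print(f"Estimated one-year mortality risk after LVAD: {risk:.0%}")
```

An estimate like this is only as good as the records behind it, which is the article's central concern.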

“Considering potential bias in the data is important. For example, men or people of colour may have a higher average risk of heart failure than women or white people. We examined the United States LVAD database and found no differences in LVAD outcomes based on race, so race does not seem to be a factor in bias within those data,” says Kostick-Quenet.
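The kind of check described in the quote, comparing outcomes across racial groups within the data that already exist, could in simplified form look like the sketch below; the file and column names are assumptions for illustration.

```python
# Hypothetical check for outcome differences by race within existing data.
# The file and column names ("race", "died_1yr") are illustrative assumptions.
import pandas as pd

registry = pd.read_csv("lvad_registry.csv")  # hypothetical registry export

# Compare group size and one-year mortality across racial groups.
by_race = registry.groupby("race")["died_1yr"].agg(n="size", mortality_rate="mean")
print(by_race)

# Similar mortality rates across groups look like "no bias" -- but this says
# nothing about who was offered an LVAD in the first place.
```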

Bias in collecting data

However, this is not the whole story.

The article argues that studies of data bias can be misleading and cover up sources of bias deeper in the data, such as who receives the option of getting an LVAD.

For example, research has shown that people of colour have a greater risk of developing heart failure, so one might expect that a higher percentage should be getting an LVAD implanted.

Conversely, various factors may prevent people of colour from being considered “good” candidates for an LVAD implant. For example, one prerequisite for candidacy may be having someone to take care of the person at home. Racial differences in domestic conditions can therefore contribute to people of colour being considered less frequently for an LVAD implant than whites.

Clinical factors may also influence how often people of different races are offered an LVAD. Various comorbidities could be more common among people of colour than among white people because of structural or other upstream conditions, which in turn may result in them being offered an LVAD less often.

“When we look upstream in the data sets, there may well be racial differences in who is included, with more white people included than people of colour. But no legislation ensures that this is considered when algorithms are developed and used in practice,” explains Kostick-Quenet.
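Looking upstream in the sense Kostick-Quenet describes means comparing who ends up in the implant data with who was eligible for an implant in the first place. The sketch below illustrates such a representation check under assumed file and column names; in practice, assembling the upstream data (referrals, heart-failure prevalence) is the hard part.

```python
# Hypothetical upstream representation check: compare the racial composition
# of LVAD recipients with that of the wider heart-failure population they were
# drawn from. File and column names are illustrative assumptions.
import pandas as pd

implanted = pd.read_csv("lvad_registry.csv")        # patients who received an LVAD
eligible = pd.read_csv("heart_failure_cohort.csv")  # advanced heart-failure patients

comparison = pd.DataFrame({
    "share_of_implants": implanted["race"].value_counts(normalize=True),
    "share_of_eligible": eligible["race"].value_counts(normalize=True),
}).fillna(0.0)
comparison["gap"] = comparison["share_of_implants"] - comparison["share_of_eligible"]
print(comparison.sort_values("gap"))

# A group that is common among eligible patients but rare among recipients
# points to upstream selection that outcome statistics alone cannot reveal.
```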

Not representative of all patients

According to Kostick-Quenet, clinical decision-support algorithms become deficient when the data sets behind them are not inclusive, and this is a problem.

In the example of the mechanical LVAD, people need to be informed about what will happen if they get one implanted and what will happen if they do not.

For a Black woman in her thirties with severe heart failure, predictions from a risk calculator built on data that do not represent her demographic may not be meaningful.

“The risk assessment may not be able to account for people like her. However, this is impossible to know, because patients like her may never be offered an LVAD and thus never appear in the data sets that document LVAD outcomes. Predictions made from limited or unrepresentative data sets may therefore not be relevant to her. The fact that the algorithm may not apply equally to everyone is both a clinical and an ethical problem,” says Kostick-Quenet.
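Part of what makes the prediction questionable for a patient like her is how few comparable patients the training data contain. A simple way to probe this, sketched below with assumed column names and an arbitrary threshold, is to count how many registry records resemble her profile before trusting the model's estimate.

```python
# Hypothetical "coverage" check: how many registry records resemble this
# patient? Column names and the threshold are illustrative assumptions.
import pandas as pd

registry = pd.read_csv("lvad_registry.csv")

patient = {"race": "Black", "sex": "female", "age": 34}

similar = registry[
    (registry["race"] == patient["race"])
    & (registry["sex"] == patient["sex"])
    & registry["age"].between(patient["age"] - 10, patient["age"] + 10)
]

print(f"Comparable patients in the training data: {len(similar)}")
if len(similar) < 30:  # arbitrary illustrative cut-off
    print("Too few similar cases -- the model's estimate may not apply to her.")
```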

No easy solutions

Kostick-Quenet thinks that the study highlights a problem for which the researchers have no easy solutions.

Since no legislation requires anyone to eliminate, or even address, data bias by examining how inclusive the data sets are upstream, it is unclear, both ethically and practically, who should be held responsible for investigating these upstream factors or for making people aware of them.

“This requires understanding the broader societal factors that affect data quality, which affects the quality of the predictions the algorithms can make. Examining data quality alone can be misleading, since it can mask the sources of bias, which are harder to identify and rectify. The challenge is to determine how the algorithms we develop can enable us to identify these hidden sources of bias and then change this,” concludes Kostick-Quenet.

“Mitigating racial bias in machine learning” has been published in the Journal of Law, Medicine & Ethics. Kristin M. Kostick-Quenet collaborates with Timo Minssen, Professor, Faculty of Law, University of Copenhagen, who received a grant from the Novo Nordisk Foundation in 2017 to establish the Collaborative Research Programme in Biomedical Innovation Law (CeBIL).

Kristin Kostick-Quenet is a bioethicist and medical anthropologist whose research focuses on ethical, social and cultural factors related to emerging...
