Algorithms and artificial intelligence challenge healthcare systems

Disease and treatment | 2 April 2023 | 3 min read | Executive Director Carmel Shachar | Written by Kristian Sjøgren

Artificial intelligence and algorithms are increasingly being used to assist in diagnosing patients. However, a lawyer and researcher says that avoiding bias in the algorithms is a huge challenge, and the question of assigning responsibility when the algorithms make erroneous recommendations needs to be clarified.

The world is changing, especially in information technology, where ChatGPT and other artificial intelligence algorithms have achieved prominence in recent months.

The winds of change have also reached healthcare systems, with algorithms and artificial intelligence increasingly being used in diagnostics and resource allocation.

The purpose of artificial intelligence and algorithms is to improve healthcare. Nevertheless, computer-based decisions may not always achieve this.

This is the view of Carmel Shachar, Executive Director, Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School, Cambridge, MA, USA.

In a viewpoint article in JAMA, she describes how artificial intelligence and smart algorithms pose challenges and risks of bias in healthcare systems.

“Recently, there has been much discussion about artificial intelligence and ChatGPT and how they can be used. I conclude that although building an algorithm like ChatGPT is morally neutral, if implemented poorly, it may not necessarily be useful or a good thing. The concern is all the greater in implementing artificial intelligence and algorithms in healthcare systems, with decisions involving life and death,” explains Carmel Shachar.

Intense focus by the FDA and EMA

Carmel Shachar has spent much of her career studying the ethical, legal and regulatory aspects of implementing artificial intelligence, including in healthcare.

Artificial intelligence provides very interesting opportunities, since computers can facilitate the work of doctors and nurses, who are currently running ever faster to keep up.

However, artificial intelligence must be implemented so that it achieves precisely what is intended and does not introduce errors and bias that harm patients.

“Both the United States Food and Drug Administration and the European Medicines Agency are focusing intensely on this, because they have realised that something must be done to avoid flooding healthcare systems with algorithms that have the potential to cause harm. The classic saying about algorithms is garbage in, garbage out. The same applies to bias. If artificial intelligence uses biased data to create algorithms, the algorithms will also be biased, and this can harm patients,” says Carmel Shachar.

Bias favouring White patients

Carmel Shachar says that several studies show that bias in data sets and algorithms is a serious problem in healthcare systems.

A study published in Science examined an algorithm designed to assist in deciding which patients should receive extra care.

The algorithm was designed to create a risk score for patients to predict who would need extra care and who could manage without it.

The problem was that the algorithm used past healthcare expenditure as a proxy for need, and in the United States, more money is spent on treating White patients than on Black patients.

The algorithm therefore underestimated how often Black patients need extra care compared with White patients, falsely concluding that Black patients are healthier than equally sick White patients.

“Reformulating the algorithm so that it no longer uses costs as a proxy for needs helps to eliminate the racial bias in predicting who needs extra care,” explains Carmel Shachar.
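
To make the mechanism concrete, the following is a minimal, hypothetical Python sketch (not the study's data or code): two groups are simulated with identical illness burden, but less spending is recorded for one of them, so a score that ranks patients by spending flags far fewer of them for extra care.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical data (not the study's): two groups with identical illness
# burden, but systematically lower spending recorded for group B than for
# group A at the same level of illness.
group = rng.integers(0, 2, n)                  # 0 = group A, 1 = group B
illness = rng.normal(5.0, 1.0, n)              # true health need
spending = illness * np.where(group == 0, 1.0, 0.6) + rng.normal(0.0, 0.2, n)

# Cost-based "risk score": spending stands in for need.
cost_score = spending
flagged = cost_score >= np.quantile(cost_score, 0.9)   # flag top 10% for extra care

for g, name in [(0, "group A"), (1, "group B")]:
    print(f"{name}: mean illness {illness[group == g].mean():.2f}, "
          f"flagged for extra care {flagged[group == g].mean():.1%}")

# Both groups are equally sick, but the cost-based score flags far fewer
# patients in group B. Ranking on a direct measure of need (here, `illness`)
# would flag both groups at roughly the same rate.
```

This only illustrates the proxy problem in the abstract; the real algorithm and the reformulation described in the study are more involved.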

Another example is an algorithm designed to assess whether a woman has breast cancer. The algorithm was developed based on data from White women in the United Kingdom but often failed when applied to African-American women.

“In these cases, public authorities need to intervene, because we cannot have algorithms that discriminate in healthcare systems,” says Carmel Shachar.

Difficult to assign responsibility

According to Carmel Shachar, another challenge of using artificial intelligence and algorithms is that, until now, doctors have been responsible for decisions affecting patients and have therefore also had to take responsibility when mistakes were made.

But what happens when a doctor makes a decision based on a recommendation from an algorithm? Who is responsible?

Some would think that doctors are still responsible for making the right decision; others say that the developers of the algorithm are responsible.

In the United States, where lawsuits over medical errors can result in enormous compensation payouts, neither party wants to assume liability.

The developers can argue that they are just developing software and do not understand medicine. The doctors can argue that they understand medicine but are now also being asked to be data experts.

“One purpose of the JAMA article is to focus on the need for a regulatory system that ensures that there is no bias in algorithms in healthcare systems and clarifies the responsibility for any incorrect healthcare decisions based on recommendations from an algorithm,” explains Carmel Shachar.

Algorithms are self-altering

Carmel Shachar elaborates that this problem is still in its infancy and will only grow in the future.

A separate problem is that algorithms using artificial intelligence are continually updated based on the data presented to them.

For example, the FDA could scrutinise and approve an algorithm at one point in time, only for it to behave completely differently a few months later because it has continued to alter itself based on new data.
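
As a purely hypothetical sketch (not any specific regulated product), the Python snippet below shows how a rule that keeps recalibrating itself on incoming data can drift away from the version a regulator reviewed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "self-altering" triage rule: each month it recalibrates its
# alert threshold to flag the top 10% of incoming patient scores.
approved_threshold = None
for month, mean_score in enumerate([50, 50, 55, 60, 65], start=1):
    scores = rng.normal(mean_score, 10.0, 1_000)   # this month's patient scores
    threshold = np.quantile(scores, 0.9)           # the rule updates itself on new data
    if month == 1:
        approved_threshold = threshold             # the snapshot a regulator reviewed
    print(f"month {month}: alert threshold {threshold:.1f}")

print(f"reviewed at approval: {approved_threshold:.1f}, in use now: {threshold:.1f}")
# The rule scrutinised in month 1 is no longer the rule actually in use.
```
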

“These issues are becoming increasingly problematic and should be addressed early, not least because the use of artificial intelligence in healthcare systems is not yet on a set trajectory. However, remember that artificial intelligence is not inherently good or bad. It is a tool we must learn to use optimally so that it does no harm,” concludes Carmel Shachar.

“Prevention of bias and discrimination in clinical practice algorithms” has been published in JAMA. The article is part of the research under the Centre for Advanced Studies in Biomedical Innovation Law, University of Copenhagen, led by Timo Minssen, Head of Centre and Professor, Faculty of Law, to which the Novo Nordisk Foundation awarded a grant of DKK 35 million in 2017.

Carmel Shachar, JD, MPH, is the Executive Director of the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School...
