Europe strikes first: the global implications of the new EU Artificial Intelligence Act

Tech Science · 3 November 2024 · 5 min · Professor Timo Minssen · Written by Morten Busch

The European Union (EU) Artificial Intelligence (AI) Act, as the first extensive legislation to regulate AI, sets a global precedent. In a new analysis, researchers highlight the Act’s dual aim to foster innovation while ensuring safety in AI technologies, especially in healthcare. The analysis also discusses implementation challenges and the potential to influence international standards. “Being first is always nice, but sometimes this does not mean you do the best thing,” reflects the researcher behind the study.

As AI reshapes healthcare, the EU AI Act sets the stage for regulating these advances, especially for digital medical products. Although the Act aims to ensure safety and innovation, researchers emphasise the complexities of applying these rules to AI-powered medical devices, in which regulation must keep pace with rapid technological progress.

“The EU AI Act is the first robust legislative framework targeting the broad spectrum of AI applications, especially in healthcare. This analysis pinpoints significant challenges in implementation, especially in the medical device sector in which existing systems struggle to adapt. We need a balance – too much regulation stifles innovation, but too little exposes risks. It is crucial to refine these regulations to effectively harness AI’s potential without compromising safety,” explains Timo Minssen, Professor and Head of the Center for Advanced Studies in Bioscience Innovation Law at the University of Copenhagen in Denmark.

Pioneering progress

In a new article, Mateo Aboy, Director of Research in Technology & Law at the University of Cambridge, United Kingdom; Effy Vayena, Professor of Bioethics at ETH Zurich, Switzerland; and Timo Minssen, Head of the Center for Advanced Studies in Bioscience Innovation Law at the University of Copenhagen, Denmark, have closely analysed the potential consequences of the new EU AI Act for regulating digital medical products.

The integration of AI into regulated digital medical products promises transformative advances in healthcare, especially in diagnostics and device functionality. For instance, AI can improve diagnostic accuracy in imaging devices such as those used for magnetic resonance imaging and computed tomography by identifying patterns undetectable to the human eye.

However, as these innovations unfold, companies face significant hurdles.

“One of the greatest challenges is navigating the regulatory landscape, which can feel like a jungle for companies trying to implement AI technologies efficiently and feasibly,” explains Timo Minssen. “The lack of guidance, capacity and expertise in regulatory bodies leaves many stakeholders uncertain about how to comply within a complex network of regulations that interact and apply to many advanced applications.”

The EU was first

This is especially evident in emerging fields such as gene-editing therapies, in which AI could potentially predict the outcomes of genetic modifications, or in the medical device sector. But regulatory bodies struggle to evaluate such advanced applications.

“In such rapidly evolving fields, clear legal frameworks are crucial,” Minssen adds. “The EU has made significant progress with the AI Act, aiming to harmonise the regulatory environment across Europe, similar to how the General Data Protection Regulation (GDPR) affected data privacy.”

This regulation affects technologies such as AI-driven chatbots used in patient management, ensuring that they comply with safety standards.

“The EU was first. We were not necessarily best, but we were first. Being first is always nice, but sometimes this does not mean that you do the best thing. We think that we risk being overregulated in some areas and not having the capacity to implement it properly. This overregulation risks stifling innovation, especially affecting smaller entities that cannot bear these regulatory burdens.” Minssen adds: “As we mentioned in another recent article published in NEJM AI, the AI Act includes some safeguards that are supposed to support such smaller entities and opens the door to new guidance, more adaptive forms of regulation and updates by the European Commission. But these measures require considerable regulatory capacity, and how effectively they can be implemented throughout Europe remains to be seen.”

A new era of AI in healthcare

However, this does not diminish the EU's commitment to establishing robust regulatory frameworks.

“With the AI Act, the EU is setting a global precedent with comprehensive AI regulations that enforce the highest standards of safety and privacy, regardless of where the manufacturer is located. Every AI product deployed within EU borders must comply with these stringent requirements, ensuring consistent and robust protection across all Member States.”

A clear example is mobile health apps, for which small developers may find the extensive compliance requirements prohibitive, potentially limiting the introduction of innovative health solutions. For example, a health app that uses AI to monitor heart rate and to suggest health interventions must now ensure both that these suggestions are reliably safe and that users’ data remain secure across all EU countries.

“This regulation introduces substantial challenges for global manufacturers, applying universally to any entity whose products are used within the EU. This encompasses a wide range of products, including the more than 950 AI- or machine learning–enabled medical devices already approved by the United States Food and Drug Administration, underscoring the extensive reach and impact of the EU’s legislative framework.”

Balancing acts and high stakes

As these manufacturers navigate these challenges, they must also adapt to the rigorous new classifications established by the EU.

“The EU AI Act introduces stringent categories for AI systems, especially those used in medical products, marking some as high-risk. This classification requires thorough compliance and rigorous oversight, necessitating that developers implement robust risk management frameworks to ensure that these tools meet high safety and efficacy standards for clinical use. This is about balancing innovation with consumer safety across all EU markets.”

An example is an AI system used in oncology to predict how patients respond to various cancer treatments, which now requires a comprehensive validation process to ensure reliability and safety.

“Given that these AI systems are classified as high-risk, the requirement for robust compliance frameworks is crucial. This means deploying stringent risk management and data governance protocols to align with both the AI Act and GDPR. The goal is to ensure that these technologies are developed and applied under the highest standards of safety and privacy.”

Algorithms must be routinely updated

In this context, providers must also address the complexities of maintaining compliance and ensuring data integrity.

“Providers must not only navigate the complex jungle of AI regulation compliance but also ensure high standards of data integrity and transparency. This requires a robust framework for continual updates and monitoring, essential for adapting to both advancing technologies and the evolving regulatory landscape.”

With the new AI Act, developers of AI diagnostic tools must routinely update their algorithms and re-evaluate them against evolving standards, ensuring that they remain safe and effective for clinical use.

“The EU AI Act is pioneering comprehensive regulations for AI, requiring high standards of safety and privacy regardless of the manufacturer’s location. Any product used within the EU must meet these stringent requirements, ensuring uniform protection across all markets. This Act aims to be the first of its kind that is so comprehensive, setting a benchmark globally.”

Leading but not necessarily best

However, this ambitious approach presents both opportunities and challenges.

“This poses a great opportunity and a big risk. We risk overregulating, which could stifle the very innovation we want to foster,” notes Timo Minssen. “This is about finding that sweet spot in which the regulation and protection of fundamental values can coexist without dampening Europe’s competitive edge in digital healthcare and innovation.”

In practical terms, the Act introduces stringent compliance demands, especially for AI and machine learning medical devices. Developers must adhere to both the EU Medical Devices Regulation and the AI Act, as well as other regulations, which sometimes overlap but can also conflict, potentially leading to confusion and increased regulatory burden. For example, an AI application designed for predictive analytics in heart disease may require extensive validation for both AI-specific and general medical device standards, leading to prolonged development cycles.

“This is especially daunting for small and medium-sized enterprises, which may struggle with the resource allocation required to meet these standards,” adds Timo Minssen.

Navigating complexities in a regulatory jungle

According to Minssen, the greatest challenge with the EU AI Act is its implementation across various Member States, creating a complex regulatory landscape, especially for small and medium-sized enterprises.

“Small and medium-sized enterprises are particularly vulnerable to the compliance burdens because many lack the resources needed to navigate the complex requirements,” he explains.

For start-ups, especially those developing AI-based mental health platforms or other innovative solutions, the dual compliance landscape may become prohibitively costly. The success of the AI Act hinges on the readiness and cooperation of regulators, AI providers and manufacturers of digital products.

Minssen emphasises the difficulty of aligning AI regulation across countries.

“AI regulation is a moving target. Efforts like the European Health Data Space are a step in the right direction, but the complexity of these frameworks makes implementation challenging. Without careful management, there is a risk of delays or confusion, such as in cases in which new technologies, like AI-driven robotic surgery devices, are held up because of differing interpretations of compliance standards across Member States.”

“Navigating the EU AI Act: implications for regulated digital medical products” has been published in npj Digital Medicine. The research was supported by the European Union and a Novo Nordisk Foundation grant for a scientifically independent International Collaborative Bioscience Innovation & Law Programme at the University of Copenhagen.

CeBIL, the Centre for Advanced Studies in Bioscience Innovation Law at the University of Copenhagen, focuses on advancing health and life science innovation.
