The EU AI Act: How Will It Impact Medical Device Manufacturers?

A look at the European Union's endorsement of the pioneering AI Act, focusing on its implications for medical device manufacturers, challenges faced by notified bodies in conformity assessments, provisions to support small- and medium-sized businesses, and concerns regarding conflicting definitions and potential discrepancies with existing regulations.

Heather R. Johnson

February 27, 2024


At a Glance

  • European Union Member States have endorsed the world's first comprehensive legal framework regulating AI.
  • Notified bodies face increased burdens and the need for specialized expertise with the introduction of the AI Act.
  • The AI Act proposes measures to assist small- and medium-sized medical technology companies in compliance efforts.

On February 2, European Union Member States endorsed the world’s first comprehensive legal framework regulating artificial intelligence in the EU. The EU’s Artificial Intelligence Act (AI Act) applies across industries—from medical device manufacturers to consumer app developers—and governs both the AI systems themselves and the output those systems produce.

Once the European Parliament and the Council of the EU formally adopt the AI Act, which is expected this spring, developers of medical devices with AI-based products and components will have an additional set of requirements to meet. The requirements for AI systems will augment what’s already required under the EU’s Medical Device Regulation (MDR) and In Vitro Diagnostic Regulation (IVDR).

MDR has stressed the resources of medical device companies of all sizes due to notified body backlogs, more rigorous reporting requirements, and ambiguities in the regulations themselves. Requirements under the AI Act may create more of the same.

The Notified Body Burden

Notified bodies, which perform the conformity assessments required under MDR and IVDR, will soon have an additional set of regulations to monitor. Under the AI Act, that means reviewing data on sophisticated technology that’s changing by the minute.


“Currently, notified bodies are not experts in AI systems and the training, validation, and testing involved,” said Rory Carrillo, who leads a medical device product, quality, and regulatory consulting firm based in San Francisco. “With the AI Act, notified bodies will likely have to find highly skilled experts to help with the requirements added as part of the conformity assessment for medical devices. This will lead to more delays and higher costs.”

Team NB, the European Association for Medical Devices of Notified Bodies, agrees. In a June 2023 position paper, the organization acknowledged that “a high level of technical and regulatory expertise is necessary for the notified bodies to be able to assess the technical documentation content of a medical AI system.”

The organization does not appear to believe additional accreditation is the answer: requiring it “would not bring more expertise, but just increase the administrative burden, and by this reduce the already limited number of notified bodies and their capacity,” the position paper states. Team NB does, however, urge the European Commission to recruit additional AI experts to build up the expertise necessary for AI Act conformity assessments.

AI Requirements for Medical Devices


Providers of high-risk AI systems (a category that includes all regulated medical devices with AI-based components or systems) must comply with several obligations. Many of them will sound familiar to anyone pursuing CE Marking under MDR.

Requirements under the AI Act include:

  • Establish a risk management system that consists of a “continuous, iterative process that is planned and run throughout the entire lifecycle of a high-risk AI system.”

  • Conduct data governance, ensuring that training, validation, and testing datasets are relevant, sufficiently representative, and, to the best extent possible, complete and free of errors.

  • Draft technical documentation that demonstrates compliance and provides authorities with information to assess that compliance.

  • Design the system so it automatically records events relevant for identifying national-level risks and substantial modifications throughout the system’s lifecycle (see the sketch after this list).

  • Provide instructions for use to downstream deployers to facilitate compliance.

  • Design the system so that deployers can implement human oversight.

  • Design the system with appropriate levels of accuracy, robustness, and cybersecurity.

  • Establish a quality management system to ensure compliance.
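
To make the event-recording item concrete, here is a minimal sketch, in Python, of what automatic lifecycle logging might look like inside an AI-based device component. The event names, fields, and helpers (InferenceEvent, record_event, record_modification) are illustrative assumptions, not terms from the Act, which describes what must be traceable rather than any concrete schema.

    # Hypothetical sketch: automatic event recording for a high-risk AI component.
    import json
    import logging
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone

    logger = logging.getLogger("ai_device.audit")

    @dataclass
    class InferenceEvent:
        model_version: str        # ties each output to an exact model build
        input_hash: str           # fingerprint of the input, not the raw data
        output_summary: str       # e.g., predicted class and confidence
        flagged_for_review: bool  # True when the output falls outside expected bounds

    def record_event(event: InferenceEvent) -> None:
        """Append a timestamped, machine-readable record of one inference."""
        entry = {"timestamp": datetime.now(timezone.utc).isoformat(), **asdict(event)}
        logger.info(json.dumps(entry))

    def record_modification(old_version: str, new_version: str, reason: str) -> None:
        """Log a substantial modification (e.g., retraining) to the system."""
        logger.info(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event": "substantial_modification",
            "from_version": old_version,
            "to_version": new_version,
            "reason": reason,
        }))

The design point is that traceability is built in rather than bolted on: every inference and every substantial modification leaves a timestamped, machine-readable record that a notified body, or the manufacturer’s own post-market surveillance team, can reconstruct later.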

The intent is for the AI Act to “harmonize” with MDR and IVDR. According to the current legal text, the application of the regulations should be simultaneous and complementary, with some flexibility in how to meet the applicable requirements of both.


“We commend EU legislators’ progress in aligning the requirements and processes of the AI Act with MDR/IVDR and other sectoral legislation,” said Alexander Olbrechts, director of digital health for MedTech Europe, in a written statement. “We welcome the approach taken by legislators favoring a single conformity assessment and a single, integrated, technical documentation, which we believe is crucial to facilitate investment and innovation in AI in the EU's digital economy while at the same time ensuring legal certainty for all actors in the AI ecosystem.”

Concessions for Small and Midsize Businesses

Because the additional requirements will likely burden small- and medium-sized medical technology companies as much as, if not more than, MDR did, the European Commission has proposed several provisions intended to remove barriers for these companies. Those provisions include priority access to forthcoming “regulatory sandboxes”: physical, digital, or hybrid environments set up in member states where companies can develop and test technology under regulatory oversight.

The AI Act also asks member states to support smaller entities and start-ups with compliance-related education and information, and it states that European Digital Innovation Hubs should provide further technical and scientific support.

The AI Act further suggests that notified bodies reduce conformity assessment fees and translation costs for smaller entities and allow “microenterprises” to establish simplified quality management systems. It doesn’t specify whether the simplified versions must conform to ISO standards.

While all the considerations directed toward small- and medium-sized businesses are intended to promote innovation, they may not be enough given the time and cost required for certification. And the potential for QMS leniency may unnecessarily increase risk without helping innovation.

“It doesn’t make sense for smaller organizations to follow less-strict safety standards, but they should be supported in other ways,” said Janneke van Oirschot, research officer for Health Action International, an organization dedicated to global, equitable access to affordable medicines and healthcare, with a focus on creating a health-centric approach to AI. “We also see a potential risk of larger companies launching startups for new medical devices, which is a possibility to circumvent [the stricter requirements].”


Conflicting Terms and Definitions

Another concern raised by multiple organizations relates to conflicting definitions and duplicative requirements. An earlier iteration of the AI Act was criticized for its overly broad definition of “artificial intelligence system.” MedTech Europe states that the definition has since been updated to align with the one given by the Organisation for Economic Co-operation and Development (OECD). Van Oirschot said wording that captures the range of systems that pose risks to human health, safety, and fundamental rights would be of value.

“We don’t think AI systems used to evaluate candidates for a job should adhere to the same requirements as medical devices,” she said. “The standardization bodies will have to grapple with the question of how to develop broad horizontal standards which can apply to every field where AI is being used. For some standards, such as on accuracy and robustness, some sector-specific layers may need to be added.”

The AI Act’s definitions of terms such as “provider” and “manufacturer” differ from those in MDR. However, future guidance from the European Commission may clarify the discrepancy.

The Commission may also need to clarify some misalignment between the AI Act, MDR, and GDPR. Carrillo points out that the AI Act requires the use of demographic data (e.g., patient data) for training and validation of AI systems without factoring in GDPR requirements around healthcare data privacy and transparency. The AI Act does appear to include an exception for debiasing AI systems, however.

And given the massive datasets needed to adequately train and validate an AI-based tool, it’s difficult to know when training is “complete” in the eyes of the Act. This aspect of AI-enabled device development could impact medical technology more than other industries, given the sensitive nature of the data needed to train and validate AI-based medical technology.

“It’s already difficult and complicated to obtain the data they do get—and very costly,” Carrillo said. “The requirements are valid and generally appropriate, but it will be very difficult for startups and smaller companies to obtain such data. Some may not understand the appropriate methods for ensuring training and validation are complete; rather, many smaller companies are limited by time and cash and are always trying to get the least amount of data to train and validate. Hopefully, the AI Act will help companies take this more seriously, but it will come at a cost.”
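
Independent of the Act, one common heuristic for the “when is training complete” question is a learning-curve analysis: train on growing fractions of the available data and check whether validation performance is still improving. Below is a minimal sketch of that stopping heuristic, assuming a caller-supplied train_and_score helper that trains on a given fraction of the data and returns a validation score; none of this is a method the Act prescribes.

    # Hypothetical learning-curve heuristic: treat the dataset as "sufficient"
    # when adding more training data yields a negligible validation gain.
    from typing import Callable, Sequence

    def data_is_sufficient(
        train_and_score: Callable[[float], float],  # assumed helper: trains on a
                                                    # fraction of the data and
                                                    # returns a validation score
        fractions: Sequence[float] = (0.25, 0.5, 0.75, 1.0),
        min_gain: float = 0.005,  # smallest improvement worth chasing
    ) -> bool:
        """Return True if the last increase in training data barely helped."""
        scores = [train_and_score(f) for f in fractions]
        return (scores[-1] - scores[-2]) < min_gain

A plateauing curve only shows that more of the same data won’t help; it says nothing about whether the dataset is sufficiently representative, which still has to be argued separately under the Act’s data governance requirement.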

In Pursuit of Trustworthy AI

If medical device companies and start-ups do put more time, money, and data into training, validating, and retraining their algorithms, the AI Act will serve its intended purpose: to promote human-centric, trustworthy AI while protecting the health and safety of the individuals who use it. Time will tell whether those benefits to human health outweigh a compliance burden that could cause more medical devices to leave, or skip, the EU market.

“Specific requirements around data quality, monitoring, and human oversight, as well as the fundamental rights impact assessments required for high-risk AI, are improvements for patient safety,” said van Oirschot. “We do think the AI Act could have been stronger in some areas, but it is a step forward for protecting against the harmful effects of AI while reaping the benefits from trustworthy new technologies which improve healthcare.”

Olbrechts, on behalf of MedTech Europe, affirms that there is more work to be done to clarify AI Act requirements across sectors and technologies, but sees it as a step forward. “Should this alignment [across legislation] ultimately prove successful, the EU will make major strides forward in facilitating investment and innovation in AI in the EU's digital economy while at the same time ensuring legal certainty for actors in the AI ecosystem.”

About the Author(s)

Heather R. Johnson

Heather R. Johnson is a consultant and writer for the medical and clinical technology industries. She’s based in the San Francisco Bay Area.
