How to Meet Compliance Requirements for the AI Act

An expert weighs in on what you need to know now that the European Union's Artificial Intelligence Act is being enforced.

Omar Ford

August 13, 2024

9 Min Read

At a Glance

  • The Act defines AI as systems that demonstrate autonomy, adaptiveness, and the ability to infer outputs from inputs.
  • The Act entered into force on August 1, 2024, with its provisions taking effect in stages through August 2027.
  • Penalties for non-compliance with the EU AI Act can reach up to €35 million ($38.5 million) or 7% of global annual turnover.

On May 21, the Council of the EU gave its final approval to what is widely described as the first comprehensive law regulating artificial intelligence.

The law raises many questions, and Anne-Gabrielle Haie, a partner with Steptoe LLP, spoke with MD+DI about the most commonly asked ones and how companies can come into compliance.

In March the European Parliament approved the EU AI Act, the most comprehensive set of rules for artificial intelligence to date. Before we get into the implications of the act, let me ask a basic and broad question: How does the act define AI? What is considered artificial intelligence, as spelled out in the EU AI Act?

Haie: The European legislators opted for a broad and comprehensive definition of AI, aiming to keep it technology-neutral and future-proof. This is intended to ensure the regulation's longevity as the technology continues to develop. According to this definition, three criteria must be satisfied:

  • Autonomy: this implies that the system must possess some degree of independence in its actions and have the ability to function without human intervention;

  • Adaptiveness after deployment: this requires the system to demonstrate self-learning capabilities, allowing it to evolve while in use; and

  • Capability to infer, from the input the system receives, how to generate outputs: this is a key characteristic of AI systems. It refers to the process of obtaining outputs that can influence physical and virtual environments, and to the capability of deriving models or algorithms, or both, from inputs. It goes beyond basic data processing and can be enabled through techniques such as machine learning and logic- and knowledge-based approaches.


This definition is designed to differentiate AI systems from simpler traditional software systems or programming methods.
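To make that distinction concrete, here is a minimal, hypothetical Python sketch (our illustration, not language from the Act): traditional software follows fixed, human-authored rules, while an AI system in the Act's sense derives its input-to-output mapping from data. The triage scenario and all data below are invented.

    # Illustrative contrast only; the scenario and data are invented.
    from sklearn.linear_model import LogisticRegression

    # Traditional software: the output follows a fixed, human-authored rule.
    def rule_based_triage(age: int) -> str:
        return "refer" if age >= 65 else "monitor"

    # AI system in the Act's sense: the input-to-output mapping is
    # inferred from examples (machine learning) rather than hand-coded.
    X = [[34], [51], [67], [72], [45], [80]]  # hypothetical patient ages
    y = [0, 0, 1, 1, 0, 1]                    # hypothetical past referral decisions
    model = LogisticRegression().fit(X, y)

    print(rule_based_triage(70))   # always applies the hard-coded rule
    print(model.predict([[70]]))   # behavior derived from the data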

It should be noted that, although not initially encompassed in the scope of the draft regulation, the EU AI Act in its adopted form also regulates General-Purpose AI (GPAI) models. These are AI models that display significant generality and are capable of competently performing a wide range of distinct tasks, regardless of how they are placed on the market, and that can be integrated into a variety of downstream systems or applications.


Let’s talk about a timeline for enforcement. My understanding is that the act will start sometime in August. What does a complete timeline of the act look like?


Haie: The EU AI Act entered into force on August 1, 2024. However, it is not yet applicable and organizations still have some time to comply with it. It envisions a phased implementation, with different obligations and provisions taking effect at varying stages. Here are the key dates to keep in mind:

  • February 2, 2025: the provisions related to prohibited AI practices become applicable;

  • August 2, 2025: the obligations applicable to GPAI models become applicable;

  • August 2, 2026: the following become applicable:

      ◦ obligations applicable to high-risk AI systems referred to in Annex III; and

      ◦ obligations applicable to AI systems subject to specific transparency obligations;

  • August 2, 2027: the obligations become applicable for high-risk AI systems intended to be used as a safety component of a product, or which are themselves products, that are (i) covered by the EU legislation listed in Annex I and (ii) subject to a third-party conformity assessment procedure.

Exceptions apply to AI systems already placed on the market or put into service in the EU, as well as GPAI models already placed on the EU market. Essentially, the compliance deadlines will be extended for these AI systems and GPAI models.
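For teams that want to track these dates programmatically, the phased deadlines can be encoded as a simple lookup table. The following Python sketch is our illustration, not official guidance; it simply restates the dates above.

    from datetime import date

    # Key application dates of the EU AI Act, restated from the timeline above.
    AI_ACT_DEADLINES = {
        date(2025, 2, 2): "Prohibited AI practices",
        date(2025, 8, 2): "GPAI model obligations",
        date(2026, 8, 2): "High-risk AI systems (Annex III) and "
                          "specific transparency obligations",
        date(2027, 8, 2): "High-risk AI systems that are safety components of, "
                          "or are themselves, Annex I products subject to "
                          "third-party conformity assessment",
    }

    def upcoming(today: date) -> list[tuple[date, str]]:
        """Provisions not yet applicable as of `today`, soonest first."""
        return sorted(item for item in AI_ACT_DEADLINES.items() if item[0] > today)

    for deadline, provisions in upcoming(date(2024, 8, 13)):
        print(deadline.isoformat(), "-", provisions)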

Can you briefly discuss the four levels of risk the Act spells out: Unacceptable risk, high risk, limited risk, and minimal risk?

Haie: The EU AI Act classifies AI systems and GPAI models on the basis of the level of risk they pose and the purpose they have. In a nutshell: 

  • Some AI systems will be prohibited in the EU; 

  • Some AI systems will be classified as high-risk; 

  • Some AI systems will be subject to specific transparency obligations; 

  • Specific rules will apply to GPAI models; and

  • Some AI systems are considered to present minimal risk. 

Prohibited AI systems are essentially those whose intended purposes could significantly threaten EU fundamental rights (e.g., social scoring). These will be completely banned in the EU from February 2, 2025; from that date it will not be possible to place them on the EU market, put them into service, or use them.

High-risk AI systems are those that pose significant risks to health, safety, or fundamental rights. This is either because they are used as a safety component of a product, or are themselves a product, falling within the scope of the stringent EU product regulatory framework (e.g., the Medical Devices Regulation or the In Vitro Diagnostic Medical Devices Regulation), or because of their intended uses (e.g., access to essential private or public services and benefits, employment, law enforcement, etc.). AI systems classified as "high-risk" will be subject to rigorous pre-market and post-market obligations.

In addition, certain AI systems will be subject to enhanced transparency obligations. This is primarily to ensure that it can be clearly identified that the output they generate is AI-based. This notably includes chatbots, generative AI, etc. It is important to note that the categories of "AI systems subject to specific transparency obligations" and "high-risk AI systems" are not mutually exclusive. A high-risk AI system could also fall within the category of an AI system subject to specific transparency obligations, thus being subject to both the obligations applicable to high-risk AI systems and the enhanced transparency obligations concurrently.

As previously mentioned, GPAI models are also regulated by the EU AI Act and will be subject to stringent obligations, which are more or less equivalent - but not identical - to those applicable to high-risk AI systems.

Lastly, some AI systems are considered to pose a minimal risk to health, safety, and fundamental rights. While these AI systems are not excluded from the scope of the EU AI Act, they will be subject to a limited obligation pertaining to AI literacy.
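Because the high-risk and transparency categories can overlap, any classification check should return a set of applicable regimes rather than a single label. The Python sketch below is a deliberately simplified illustration; the boolean inputs are invented stand-ins for the Act's actual legal tests, which require case-by-case analysis.

    from enum import Flag, auto

    class Regime(Flag):
        PROHIBITED = auto()
        HIGH_RISK = auto()
        TRANSPARENCY = auto()
        MINIMAL = auto()

    def applicable_regimes(*, social_scoring: bool,
                           annex_iii_use: bool,
                           interacts_with_humans: bool) -> Regime:
        """Toy classification; the real assessment requires legal analysis."""
        if social_scoring:                   # e.g., a prohibited practice
            return Regime.PROHIBITED
        regimes = Regime(0)
        if annex_iii_use:                    # e.g., a high-risk use case
            regimes |= Regime.HIGH_RISK
        if interacts_with_humans:            # e.g., a chatbot
            regimes |= Regime.TRANSPARENCY
        return regimes or Regime.MINIMAL     # minimal risk if nothing else applies

    # A hypothetical diagnostic chatbot triggers both regimes at once:
    print(applicable_regimes(social_scoring=False,
                             annex_iii_use=True,
                             interacts_with_humans=True))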

Obviously, there is some time before the more significant parts of the AI EU Act go into effect. How can companies prepare? Also, how have companies been preparing? What have you been seeing or experiencing?

Haie: I would say that anticipation and planning are key. The regulation may seem daunting due to its complexity, technicality, and length. Therefore, the sooner and the more structured your approach to familiarizing yourself with it, the smoother your compliance efforts will be. Compliance with the EU AI Act is a journey that will require significant resources, so it is absolutely essential to start now.

 We typically advise clients to follow this 10-step process:

  1. Compile an inventory of your AI systems/models and assess whether they fall within the scope of the EU AI Act

  2. Classify your AI systems/models 

  3. Identify your role for each AI system/model

  4. Map your obligations for each AI system/model

  5. Identify any regulatory overlaps

  6. Prepare an inventory of existing documentation and processes, then conduct a gap analysis

  7. Identify your internal resources and needs

  8. Prepare a roadmap and assign responsibilities

  9. Monitor regulatory developments

  10. Get involved in regulatory sandboxes, codes of practice, codes of conduct & standardization. 
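As a concrete illustration of what the first steps of that process might produce, here is a minimal Python sketch of a per-system inventory record (our illustration, not Steptoe's template; the example system and its classifications are invented):

    from dataclasses import dataclass, field

    @dataclass
    class AISystemRecord:
        name: str
        in_scope: bool                # step 1: within the EU AI Act's scope?
        risk_class: str               # step 2: e.g., "high-risk (Annex III)"
        role: str                     # step 3: "provider", "deployer", etc.
        obligations: list[str] = field(default_factory=list)       # step 4
        overlapping_laws: list[str] = field(default_factory=list)  # step 5

    # A hypothetical entry for illustration:
    inventory = [
        AISystemRecord(
            name="triage-assistant",
            in_scope=True,
            risk_class="high-risk (Annex III)",
            role="provider",
            obligations=["risk management", "technical documentation",
                         "post-market monitoring"],
            overlapping_laws=["Medical Devices Regulation"],
        ),
    ]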

 While the adoption of the EU AI Act has generated significant interest worldwide and across sectors, I believe that there is still a lack of awareness about its far-reaching impacts. This is not a regulation that only targets tech companies. It will affect all industry sectors and all entities across the AI value chain. Companies outside the EU must pay attention to it and determine whether they need to comply if they are active in the EU market.


Are the penalties steep for any violations of the Act?

Haie: Taking stock of the success story of the General Data Protection Regulation (GDPR), EU legislators have indeed decided to impose severe penalties for non-compliance with the EU AI Act to ensure its effectiveness. In summary, fines of up to €35 million ($38.5 million) or 7% of worldwide annual turnover, whichever is higher, could be levied for non-compliance with the provisions related to prohibited AI systems. Additionally, fines of up to €15 million ($16.5 million) or 3% of worldwide annual turnover could be imposed for non-compliance with the provisions concerning high-risk AI systems, AI subject to specific transparency obligations, and GPAI models.
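Since the headline cap is the higher of the fixed amount and the turnover percentage, the percentage prong dominates for large companies. A quick worked example in Python (the turnover figures are invented):

    def max_fine(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
        """The cap is the higher of a fixed amount and a share of turnover."""
        return max(fixed_cap_eur, turnover_eur * pct)

    # Prohibited-practice tier for a company with EUR 2B annual turnover:
    print(max_fine(2_000_000_000, 35_000_000, 0.07))  # 140,000,000.0
    # For a company with EUR 100M turnover, the fixed cap is the binding limit:
    print(max_fine(100_000_000, 35_000_000, 0.07))    # 35,000,000.0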

EU regulators are already preparing, which likely indicates that we can expect rigorous enforcement.

This act sweeps broadly across industries, but I'm wondering how it might change the life sciences and medical device industries. Are these industries better suited to weather the provisions of the Act than others (e.g., automotive, aviation)?

Haie: The Life Sciences and Medical Device industries are certainly significantly affected by this new regulation. It is highly likely that AI systems developed or used in these industries will be classified as high-risk AI systems, thereby subjecting them to the most onerous obligations.

One of the main challenges in these industries will be navigating the regulatory overlaps between EU sector-specific laws and the EU AI Act. This creates legal uncertainty, and it will not be a straightforward task to reconcile the regulatory obligations arising from these different legal frameworks.

However, since the Life Sciences and Medical Devices industries are already heavily regulated, I believe that the compliance journey will likely be smoother for companies operating in these sectors. Indeed, they are accustomed to dealing with complex compliance requirements and will be able to build upon existing processes and procedures.

Finally, what is your advice to companies that might have questions on compliance?

Haie: In my view, one of the keys to successful compliance with the EU AI Act is assembling the right compliance team. It should be a cross-departmental effort, incorporating both legal and technical expertise. Forming such a diverse compliance team will help resolve many of the questions that arise. In the 10-step approach I mentioned earlier, we advise clients to identify their internal resources and needs. This is crucial, as it helps organizations assess whether they need external support and budget accordingly. If external support is necessary, it is advisable to connect with advisors who are well-acquainted with the EU legal landscape and its intricacies, and who have close ties with EU regulators.

Contrary to what one might expect, it is possible to engage in dialogue with EU regulators and share the difficulties and challenges encountered. Lastly, it is essential to continually monitor regulatory developments. EU regulators are expected to issue guidelines and templates clarifying aspects of the EU AI Act. These should not be overlooked.

About the Author

Omar Ford

Omar Ford is a veteran reporter in the field of medical technology and healthcare journalism. As Editor-in-Chief of MD+DI (Medical Device and Diagnostics Industry), a leading publication in the industry, Ford has established himself as an authoritative voice and a trusted source of information.

Ford, who has a bachelor's degree in print journalism from the University of South Carolina, has dedicated his career to reporting on the latest advancements and trends in the medical device and diagnostic sector.

During his tenure at MD+DI, Ford has covered a wide range of topics, including emerging medical technologies, regulatory developments, market trends, and the rise of artificial intelligence. He has interviewed influential leaders and key opinion leaders in the field, providing readers with valuable perspectives and expert analysis.

 
