AI-Enabled Medical Devices: Regulatory Trends in the United States & Beyond
Attrayee Chakraborty, quality system engineer at Analog Devices, discusses how to prepare for the rigorous regulatory requirements surrounding artificial intelligence.
September 12, 2024
At a Glance
- AI medical devices face strict regulatory requirements on data, model development, and post-market oversight.
- Early regulatory planning is crucial due to diverse international standards for AI devices.
- FDA stresses transparency and bias management in AI, with new standards guiding ethical practices.
Medical devices with artificial intelligence (AI), machine learning (ML), and similar “intelligent” systems come with additional layers of quality and regulatory scrutiny. In addition to proof of safety and effectiveness, medical device manufacturers/developers must show that the statistical models and algorithms, the data used to train and validate the software, and anticipated device changes all align with regulatory requirements.
To meet those requirements, most experts advise medical device developers to dedicate time as early in development as possible to regulatory strategy. That strategy may cover not only the U.S. and the EU but also the UK, Canada, Australia, Asia, and South America, regions that define and regulate AI-enabled devices somewhat or substantially differently than the two largest markets.
To get a better handle on how FDA considers AI, MD+DI spoke with Attrayee Chakraborty, quality system engineer at Analog Devices, a global semiconductor company with roots in healthcare. Here, she breaks down the various guidance documents, standards, and strategies that relate to AI while also providing some guidance of her own.
To date, FDA has authorized 950 AI/ML-enabled medical devices. For medical device manufacturers that want to join this list, what additional data would the manufacturer/developer need to present to FDA for a product with AI-based components?
Chakraborty: The data presented to FDA differs for every medical device. Most of the additional data for an AI-based component would be around data management (collection, data quality, data storage, bias monitoring, data annotation, pre-processing, and version control), model development (model preparation, training, evaluation, documentation, and validation), clinical validation, post-market surveillance, and measures surrounding continuous learning systems.
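None of these expectations are prescribed in code form, but several of the data-management items above translate naturally into engineering controls. As a minimal, hypothetical sketch in Python (the class, field, and method names are illustrative, not drawn from any guidance), a versioned dataset record can tie training data, its provenance, and its preprocessing history together for traceability:

```python
import hashlib
import json
from dataclasses import asdict, dataclass, field
from datetime import date


@dataclass
class DatasetRecord:
    """One versioned, auditable record of a training/validation dataset."""
    name: str
    version: str
    collection_protocol: str    # how and where the data were collected
    annotation_procedure: str   # who labeled the data, against what criteria
    preprocessing_steps: list = field(default_factory=list)
    snapshot_date: str = field(default_factory=lambda: date.today().isoformat())
    content_hash: str = ""      # fingerprint tying the record to the exact files

    def fingerprint(self, file_paths):
        """Hash the raw data files so any silent change is detectable later."""
        digest = hashlib.sha256()
        for path in sorted(file_paths):
            with open(path, "rb") as f:
                digest.update(f.read())
        self.content_hash = digest.hexdigest()

    def to_json(self):
        """Serialize the record for inclusion in design-history documentation."""
        return json.dumps(asdict(self), indent=2)
```

Calling `fingerprint()` on the dataset files before each training run and archiving the resulting JSON alongside the model version gives reviewers a concrete link between a model and the exact data it was trained on.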
Continuous learning systems also carry more change-management considerations than locked algorithms. The core philosophy behind the type of data required specifically for AI systems stems from ethical principles around AI: transparency and explainability, robustness, real-world performance, autonomy, accountability, and privacy.
FDA has actively communicated its expectations through the Good Machine Learning Practice (GMLP) guiding principles and its discussion papers.
Many communities are also publishing their interpretations of FDA’s expectations. Though there is no set checklist for medical devices incorporating AI, one of the most comprehensive checklists I have come across is a technical report published by the Focus Group on Artificial Intelligence for Health (FG-AI4H), which provides a step-by-step checklist for ensuring the safety and effectiveness of AI/ML-based medical devices.
How would a manufacturer/developer determine when it’s time for a regulatory re-review of its device?
Chakraborty: FDA notes in its discussion paper that regulatory re-review is necessary if changes impact the performance (analytical and clinical), the input data type, or the intended use of the medical device. Software modifications may impact any of these three, so analysis and justification should be provided on the extent to which the changes affect these three aspects.
The principles of change management, risk management, post-market real-world data, and documentation all apply when assessing changes to the product throughout the total product lifecycle (TPLC). If risk evaluation shows that the changes do not impact the three parameters, FDA expects that decision to be confirmed through successful, routine verification and validation activities.
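As a hypothetical illustration of that triage logic (the factor names paraphrase the three aspects above; the prompts and dispositions are illustrative placeholders, not FDA language or criteria), the documented decision can be framed as a simple check:

```python
# Hypothetical triage of a proposed software modification against the three
# re-review factors discussed above. Not FDA criteria; illustration only.

RE_REVIEW_FACTORS = {
    "performance": "Does the change alter analytical or clinical performance?",
    "input_data_type": "Does the change add, remove, or reinterpret a device input?",
    "intended_use": "Does the change expand or shift the intended use?",
}


def triage_modification(impacts):
    """Return an illustrative disposition for a documented risk evaluation.

    `impacts` maps each factor in RE_REVIEW_FACTORS to True if the risk
    evaluation found the modification affects it.
    """
    affected = [factor for factor, hit in impacts.items() if hit]
    if affected:
        return "Assess for regulatory re-review; impacted: " + ", ".join(affected)
    # Per the answer above, a no-impact decision should be confirmed through
    # successful, routine verification and validation activities.
    return "No re-review triggered; confirm via routine V&V"


print(triage_modification(
    {"performance": False, "input_data_type": True, "intended_use": False}
))
```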
To obtain CE Marking in the EU, a medical device manufacturer/developer would need to assure compliance with EU MDR and the requirements for high-risk technology under the AI Act. In the U.S., what do regulators want to see with the initial 510(k) or PMA submission for an AI/ML-enabled device, above and beyond what’s required for a device without AI/ML?
Chakraborty: Lately, the predetermined change control plan (PCCP) has been getting a lot of attention as a submission strategy for AI/ML-enabled medical devices. It is a proactive way to pre-specify, and seek premarket authorization for, intended modifications (and their method of implementation) to machine learning-enabled device software functions without necessitating an additional marketing submission for each modification.
Including a PCCP with the initial submission can save companies the time and money of filing the additional marketing submissions that such modifications would otherwise require before implementation. The PCCP is described in the 510(k) summary, De Novo decision summary, or PMA summary of safety and effectiveness document (SSED) and approval order.
The PCCP should be included as a standalone section within the marketing submission. So far, the acceptable modification types are related to quantitative measures, performance specifications, and changes to device inputs for machine learning-enabled device software functions. The PCCP can also apply to products that are not exclusively AI: for example, a manufacturer that anticipates changing suppliers in the near future.
What proof does FDA want to see around data bias?
Chakraborty: Data sets that do not represent the population using a device can lead to model bias, where the algorithm reflects or even amplifies biases. The third guiding principle of Good Machine Learning Practice (GMLP) states: “data collection protocols should ensure that the relevant characteristics of the intended patient population (for example, in terms of age, gender, sex, race, and ethnicity), use, and measurement inputs are sufficiently represented in a sample of adequate size in the clinical study and training and test datasets so that results can be reasonably generalized to the population of interest. This is important to manage any bias, promote appropriate and generalizable performance across the intended patient population, assess usability, and identify circumstances where the model may underperform.”
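As a minimal sketch of what such a check might look like in practice (assuming a binary classifier and demographic metadata attached to each record; the function and field names here are hypothetical, not from any guidance), one can report each subgroup’s share of the dataset alongside its sensitivity, flagging groups where the model may underperform:

```python
from collections import Counter


def subgroup_report(records, predictions, labels, attribute):
    """Summarize representation and sensitivity for each subgroup.

    records:     list of dicts holding demographic metadata, e.g. {"sex": "F"}
    predictions: binary model outputs aligned with records
    labels:      binary ground truth aligned with records
    attribute:   demographic field to stratify on, e.g. "sex" or "age_band"
    """
    counts = Counter(r[attribute] for r in records)
    report = {}
    for group, n in counts.items():
        members = [i for i, r in enumerate(records) if r[attribute] == group]
        positives = [i for i in members if labels[i] == 1]
        true_pos = sum(1 for i in positives if predictions[i] == 1)
        report[group] = {
            "share_of_dataset": round(n / len(records), 3),
            # Sensitivity is undefined for a group with no positive cases.
            "sensitivity": round(true_pos / len(positives), 3) if positives else None,
        }
    return report
```

Run over both the training set and a held-out test set, a report of this shape (e.g., {'F': {'share_of_dataset': 0.31, 'sensitivity': 0.72}}) surfaces under-representation and subgroup performance gaps before they reach a submission.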
FDA has recognized the lack of metrics and methods for analyzing training and test datasets to understand, measure, and minimize bias in AI-enabled devices, and it is actively working to address this through its Artificial Intelligence Program. FDA has also pointed to ISO/IEC TR 24027:2021 for defining bias.
For manufacturers looking for more specific direction on assessing and mitigating bias, I would recommend ITU’s data management requirements.
Should a manufacturer develop a QMS tailored to AI? If so, how and why?
Chakraborty: I highly recommend that manufacturers incorporate AI-specific requirements into their quality management system (QMS). Incorporating regulatory expectations throughout the total product development lifecycle is something FDA recommends as well.
To keep the answer short, the hierarchy for building a QMS is similar whether or not the medical device incorporates AI: policy on top, with procedures, work instructions, and records following it.
Before creating a QMS, it is important to align on quality policies. ISO/IEC 42001:2023 advocates developing an AI management system, which includes creating an AI policy. Several templates are available for aligning cross-functionally on AI policy, which I will be sharing in my presentation.
The principles of risk management, change management, and design review still apply, and it is important to keep track of standards specific to AI systems that need to be incorporated in the product development lifecycle. The big questions are: how do we integrate all these standards and best practices for all segments of a product? And when do we do it? I will be elaborating on all these aspects during my presentation.
When teams are moving fast, as is often the case with AI-enabled medical devices, adequate assessment and documented justification are necessary as evidence of maintaining compliance. As the popular saying goes: if you strive for quality, you will be compliant. The other way around may not be true.
What standards must manufacturers follow to ensure they develop safe, effective, and ethical AI/ML-based products (e.g., AAMI, NAM code of conduct)? And how do they do so when technology changes so fast?
Chakraborty: Currently, several standards have either been published recently or are still in development. Off the top of my head, I can list BS/AAMI 34971:2023 (Application of ISO 14971 to machine learning in artificial intelligence—Guide) as an FDA-recognized standard that organizations can use to guide risk management, along with ISO 23918.
ISO/IEC 24029:2021 addresses assessing the robustness of neural networks, and IEC 82304-1 can be mapped onto IMDRF’s guidelines for the clinical evaluation of algorithms. Standards for machine learning performance evaluation (IEC 63521) are still under development. IEC TC 62 is developing IEC 63450, a new standard specifically addressing the technical verification and validation processes for AI-enabled medical devices. The popular software development standard for medical devices, IEC 62304, is also undergoing a second-edition revision to incorporate considerations around AI, and IEC 60601-1 (4th edition) is expected to incorporate AI considerations as well.
The principles of ethical AI in medical devices can be articulated in the quality/AI policy of an organization, which then drives the QMS. All of these domain-specific standards contribute to establishing ethical AI in medical devices.
I agree the technological landscape is evolving fast! To keep up, organizations need to incorporate regulatory intelligence reviews, at minimum once a year, as a standard practice. Regulatory intelligence monitoring systems (RIMS) are an option as well.
How can a medical device software developer decide whether to adopt advanced AI/ML techniques that would require FDA oversight or develop a more straightforward product that would not require regulation?
Chakraborty: I understand the hype around using AI for every system; however, it is important to evaluate whether a particular use case actually needs AI. The ultimate question to ask is whether incorporating AI will protect patient safety and improve efficacy.
Regulatory strategists are responsible for finding the ideal path forward for the product, while quality management ensures that the product complies with applicable regulations. Involving both early in the product development lifecycle can help the business navigate the best path forward.
For example, defining the scope and intended use of a product is necessary for classifying a device, which often drives the regulatory pathway as well. It is not uncommon to see devices toeing the line between a wellness product and a diagnostic product; positioning as a wellness product may be the path of lesser resistance. However, it all depends on the population the product eventually wants to cater to, what it intends to do, and how critical the algorithms are to the final use case.
Anything else?
Chakraborty: While there is a lot of discussion on AI regulation for healthcare in the US and EU, several other countries are also updating/have updated their regulatory documents to incorporate considerations relating to AI. My recent paper talks about these in detail. In summary: there is a lot to watch out for in other nations as well, and it is an exciting time to work in the medical device space!
Chakraborty will be giving a presentation titled Navigating Global AI Healthcare Regulations & Quality Requirements: Current Trends and Future Perspectives at MD&M Minneapolis on Wednesday, October 16, 2024.
NOTE: The views, opinions, and statements expressed herein are solely those of the author and do not reflect the views, opinions, or official positions of any other agency, organization, employer or company.