AI: Considering the Regulatory and Legal Implications

Understanding data privacy and protection compliance considerations for development or deployment of AI in clinical research.

The use of artificial intelligence (AI) technologies[1] in clinical research for human drug development can improve the efficiency and effectiveness of this type of research. For example, AI algorithms can analyze electronic health records and other databases to identify potential participants who meet trial criteria, streamlining patient recruitment and shortening recruitment timelines. Similarly, AI and predictive analytics can be used to discern which potential clinical trial participants are more likely to experience positive outcomes or adverse events from proposed therapies, allowing researchers to focus on the subpopulations that might benefit most from a new treatment.
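As a toy illustration of the criteria-based screening described above, the following sketch checks structured patient records against simple inclusion and exclusion rules. The field names and criteria are hypothetical, for illustration only; real eligibility screening involves far richer clinical logic.

```python
# Toy sketch of rule-based eligibility screening over structured records.
# Field names and criteria are hypothetical, for illustration only.

def meets_criteria(patient, criteria):
    """Check one patient record against simple inclusion/exclusion criteria."""
    return (criteria["min_age"] <= patient["age"] <= criteria["max_age"]
            and patient["diagnosis"] in criteria["diagnoses"]
            and not set(patient["medications"]) & set(criteria["excluded_meds"]))

criteria = {"min_age": 18, "max_age": 65,
            "diagnoses": {"type_2_diabetes"},
            "excluded_meds": {"warfarin"}}

patients = [
    {"id": "P-10", "age": 54, "diagnosis": "type_2_diabetes", "medications": ["metformin"]},
    {"id": "P-11", "age": 71, "diagnosis": "type_2_diabetes", "medications": []},
]
candidates = [p["id"] for p in patients if meets_criteria(p, criteria)]
print(candidates)  # → ['P-10']
```

In practice the "rules" would be learned or far more elaborate, but the shape of the task, filtering records against trial criteria, is the same.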

When it comes to data collection and monitoring, AI-powered devices and mobile applications can collect real-time data, which, in turn, provides a more accurate picture of a clinical trial participant’s health throughout a study. AI can also help ensure the quality and integrity of study data by identifying inconsistencies, errors, and missing information in real time, reducing the risk of data discrepancies. The benefits are vast, and industry stakeholders continue to evaluate use cases for AI in clinical trials. It is undoubtedly an area of tremendous potential and growth.
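The kind of automated data-quality check described above can be sketched as follows. This is a minimal illustration with hypothetical field names and plausibility ranges, not a production validation system:

```python
# Minimal sketch of automated data-quality checks on incoming trial records.
# Field names and plausibility ranges are hypothetical, for illustration only.

def check_record(record):
    """Return a list of data-quality issues found in one participant record."""
    issues = []
    required = ["participant_id", "visit_date", "systolic_bp", "heart_rate"]
    for field in required:
        if record.get(field) in (None, ""):
            issues.append(f"missing value: {field}")

    # Plausibility ranges catch likely entry errors as data arrives.
    ranges = {"systolic_bp": (60, 250), "heart_rate": (30, 220)}
    for field, (lo, hi) in ranges.items():
        value = record.get(field)
        if isinstance(value, (int, float)) and not lo <= value <= hi:
            issues.append(f"out of range: {field}={value}")
    return issues

record = {"participant_id": "P-001", "visit_date": "2023-09-14",
          "systolic_bp": 420, "heart_rate": None}
print(check_record(record))
# → ['missing value: heart_rate', 'out of range: systolic_bp=420']
```

Flagging such issues at collection time, rather than at database lock, is what reduces the downstream risk of data discrepancies.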

As AI technology and use cases evolve across various industries, AI governance and regulation have become the subject of ongoing and robust discussion among regulators and governing bodies globally. The European Union (EU) has introduced the EU AI Act. In the United States, several federal legislative efforts have been proposed, in addition to non-binding frameworks and initiatives such as the White House’s Blueprint for an AI Bill of Rights, the National Institute of Standards and Technology’s AI Risk Management Framework, and the Voluntary Commitments from Leading AI Companies to Manage the Risks Posed by AI. Several US states have proposed or passed legislation governing the use of AI in certain areas or for certain functions, including California, Maryland, Maine, Massachusetts, New Jersey, New York, Idaho, Vermont, Washington, and Florida, among others. According to the National Conference of State Legislatures, in the 2023 legislative session at least 25 US states and territories introduced AI governance bills, and 15 states and Puerto Rico adopted resolutions or enacted legislation governing AI.

In May of this year, FDA released a discussion paper, “Using Artificial Intelligence and Machine Learning in the Development of Drug and Biological Products,” which provides an overview of how AI technologies are currently being used in drug development and seeks stakeholder feedback on the relevant considerations for the use of AI in the development of human drugs and biological products. As of May 16, 2023, the agency has stated that it intends to develop and adopt “a risk-based regulatory framework that promotes innovation and protects patient safety.”

One example of a risk-based framework regulating the use of AI technologies is the EU AI Act. It provides a classification system with four risk tiers: unacceptable, high, limited, and minimal. The proposed regulation prohibits AI systems that pose an unacceptable risk, such as biometric identification systems in public spaces. High-risk AI systems, including autonomous vehicles and medical devices, are permitted but must comply with strict requirements for rigorous testing, proper documentation of data quality, and an accountability framework that details human oversight. Because the EU’s General Data Protection Regulation (GDPR) has become a cornerstone of data privacy regulation even outside the European Economic Area (EEA), and given that AI depends on data to develop and learn, the EU AI Act may become a guidepost for AI legislation and governance transnationally.

As the legislative and regulatory efforts governing the use of AI continue to take shape, certain core ethical and legal considerations are significant for stakeholders deploying this technology, regardless of industry. Those core tenets are transparency, fairness, accountability, accuracy, privacy, and bias mitigation. They form the foundation of most of the legislative and regulatory frameworks being enacted or considered. Consequently, private actors developing and/or deploying AI in their businesses should evaluate these core principles and incorporate them into internal frameworks.

Among the most important of these core principles, industry-wide, is data privacy and security. Safeguarding sensitive information through technical and organizational measures is not a new concept in risk management; it has long been recognized as a fundamental practice. However, securing the data is not the only privacy concern when it comes to legal and regulatory compliance. The right to collect data and use it in an AI model is of paramount importance in clinical research for the development of human drugs and biological products because that research involves the processing of personal data and health information. Not only does clinical research generate a significant amount of sensitive data, it also relies on large amounts of existing personal data and health information of study subjects. Thus, compliance with all applicable data privacy and protection regulations, such as the Health Insurance Portability and Accountability Act (HIPAA) and/or GDPR, among others, is critical to protecting trial participants’ privacy rights and avoiding potential liability in the form of costly fines and penalties.

Development and deployment of AI models to, among other things, identify trial candidates for recruitment, optimize the design of clinical trials, and collect and analyze trial data requires knowledge of the data being used to train the AI model and of whether use of, or access to, that data is restricted. Determining what data use is permissible, and its scope, begins with the notices, consents, and/or authorizations provided to and obtained from the data subjects, whether trial participants or other individuals.

To incorporate the core tenets of transparency, accuracy, fairness, and accountability when developing or implementing AI in clinical research, individuals should be notified of the purposes for which their data may be used and should authorize or consent to the use of their personal data, whether for AI training, algorithm development, or otherwise. Depending on various factors, including the nature of the information collected and the locations of the actors collecting and using it, this notification and consent may take the form of an informed consent form, a HIPAA authorization, and/or a data privacy notice. Additionally, for the authorizations, consents, and disclosures to comply with applicable data privacy and protection regulations, they should communicate certain information in lay terms: what data will be collected from the individual, how it will be collected, how it will be used, who will use it, how it will be stored, and how long it will be maintained. There may be other requirements depending on where the company is located, where the research is being conducted, and where the individuals whose data is being collected and used reside.
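One practical way to operationalize these notice-and-consent requirements, sketched here under assumed metadata fields (“consented_uses”, “consent_expires”) and offering no compliance guarantee, is to record the scope of each data subject’s authorization alongside the data and gate any AI use on it:

```python
# Sketch: gate AI training on each subject's recorded consent scope.
# The metadata fields ("consented_uses", "consent_expires") are hypothetical.

from datetime import date

def may_use_for_ai_training(subject_meta, on_date=None):
    """True only if the subject authorized AI/model-development use
    and that authorization has not lapsed."""
    on_date = on_date or date.today()
    if "ai_model_training" not in subject_meta.get("consented_uses", []):
        return False
    expires = subject_meta.get("consent_expires")  # ISO date string or None
    if expires is not None and date.fromisoformat(expires) < on_date:
        return False
    return True

cohort = [
    {"id": "S-01", "consented_uses": ["trial_analysis", "ai_model_training"],
     "consent_expires": "2030-01-01"},
    {"id": "S-02", "consented_uses": ["trial_analysis"], "consent_expires": None},
]
eligible = [s["id"] for s in cohort if may_use_for_ai_training(s, date(2024, 1, 1))]
print(eligible)  # → ['S-01']
```

The design point is simply that authorization scope travels with the data, so a model-training pipeline can exclude records whose consent never covered that use.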

When using AI models in clinical research, maintaining the documentation needed to support the use of certain data, while complying with applicable statutes and regulations, is challenging. In the US, for example, the collection of personal data and protected health information for use in AI systems must comply with state and federal laws. At a minimum, that likely includes explicitly disclosing to the individual how his or her data may be used in an AI model or system, as well as obtaining the individual’s informed consent and authorization permitting the collection of such data.

In the EU, GDPR requires that the party processing the data for use in an AI system have a legal basis for doing so. Explicit consent is a valid legal basis for processing sensitive personal data such as health information pursuant to GDPR. Nonetheless, there are complex legal and regulatory questions regarding whether a data subject, who is also a clinical trial participant, can provide valid explicit consent for the processing of their personal data. There may be other provisions of GDPR that provide a legal basis for collecting and processing data, but the inquiry is often fact-specific.

AI can make the data collection and analysis involved in clinical research more efficient; it can help the research team identify patterns and correlations and evaluate efficacy and safety. From a legal and regulatory perspective, however, compliance with data privacy and protection regulations is paramount, because AI systems and models rely on data inputs. That can be challenging given the evolving legal and regulatory landscape, and it requires input from various teams, including legal, clinical, and compliance, among others.

Michael J. Halaiko, Esq., CIPP/E, is a partner in Nelson Mullins’ Baltimore office, leads the firm’s clinical trial team, and is a member of the firm’s healthcare team.

Alexandra Moylan, Esq., CIPP/US, is a partner in Nelson Mullins’ Baltimore office and a member of the firm’s healthcare and clinical trial teams.
[1] There are various definitions of AI in proposed legislative and regulatory efforts, frameworks and sector-specific standards and recommendations. For purposes of this article, Artificial Intelligence is used broadly to encompass the ability of machines to perform tasks that normally require human input, human intelligence, and human thought, and includes Machine Learning, Deep Learning and Generative AI.

About the Author(s)

Michael J. Halaiko

Partner, Nelson Mullins

Alexandra Moylan

Partner, Nelson Mullins
