Risk-based validation is critical for ensuring software performs correctly.

Erin Wright, director, product management

July 27, 2021

7 Min Read
Image by Gorodenkoff/Adobe Stock

Software in the healthcare industry has come a long way over the past 20 years. Medical device companies, like most life sciences companies, were initially hesitant to incorporate software and automation into their processes, but software is now ubiquitous across the industry. For years, insulin pumps and pacemakers relied on physical devices with embedded software to run properly; now users can control them via Bluetooth, and even cochlear implants can connect directly to smartphones. We have reached the point where software is the medical device. While that seemed like science fiction 20 years ago, advances in artificial intelligence (AI) have paved the way for software as a medical device (SaMD).

Properly validating software is an important component of ensuring compliance, patient safety, and product quality. SaMD is no exception. It’s new enough that government agencies are still determining how it should be regulated. At the beginning of the year, the U.S. Food and Drug Administration (FDA) released its “Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan,” but it’s light on validation. By looking at the agency’s attitude toward validation in general, SaMD companies can effectively validate their software and ensure continuous improvement.

The Risk-Based Approach to Software as a Medical Device

FDA issued its most recent final guidance on computer software validation (CSV) in 2002. For the past few years, FDA's Center for Devices and Radiological Health (CDRH) has signaled that an update is in order and shifted the conversation to computer software assurance (CSA). CSA puts more emphasis on critical thinking and proper risk assessment and is meant to lessen the burden that traditional CSV places on life sciences companies. While the final guidance isn't out yet, FDA has spoken enough about the concepts involved to provide a clear picture of what it will include. And since CDRH regulates medical devices, its attitude toward validation will likely carry over into SaMD.

Instead of validating all of a software product's functionality, medical device companies should focus on the areas that present the biggest risk. From FDA's viewpoint, the riskiest components are those that could affect patient safety or product quality. These have always been concerns with software, but with SaMD it's even more important to know the software works as intended. One way to ensure that is to use a continuous integration/continuous deployment (CI/CD) approach to software development, which lets you fix brittle areas of your code faster and works even better when enhanced by AI.

CI/CD Development

The point of validation isn't to prove that software works perfectly. No software works perfectly. The point is to ensure that the defects in the software won't drastically change overall quality or compromise safety. Using AI, you can lay the groundwork for CI/CD-embedded validation by identifying usage paths and automating them into your validation cycle. These usage paths are based on how your end users, whether patients or doctors, actually use the SaMD, not on how you think it should be used. You can determine them by tracking customer use of the software. Those usage patterns can then be tuned to catch edge-case defects before a software update is ever released. Knowing how the SaMD is used is critical to fixing defects before they ever reach the hands of patients.
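Mining usage paths from session logs can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation; the function name `extract_usage_paths`, the log format (one list of screen/action names per session), and the example events are all assumptions made for the sketch.

```python
from collections import Counter

def extract_usage_paths(event_logs, top_n=5):
    """Collapse per-session event logs into the most frequent
    end-to-end usage paths, ranked by how often real users take them."""
    counts = Counter(tuple(session) for session in event_logs)
    return [list(path) for path, _ in counts.most_common(top_n)]

# Hypothetical sessions captured from real patient/clinician use.
logs = [
    ["login", "dashboard", "dose_calc", "confirm"],
    ["login", "dashboard", "dose_calc", "confirm"],
    ["login", "dashboard", "history"],
]

for path in extract_usage_paths(logs):
    print(" -> ".join(path))
```

Each path returned here is a candidate for an automated regression test in the CI/CD pipeline, so the validation suite grows out of observed behavior rather than assumed behavior.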

Once you know about a defect, you can write code to fix it. In the interest of ensuring the cure isn't worse than the disease, the fix needs to be tested before it's deployed. And it needs to be tested against all usage flows, not just the ones directly related to the defect. After testing it in your pipeline, if it doesn't break anything else and does fix the defect as expected, it can be deployed. That might seem like an oversimplification, but the most complicated part of the process is capturing every use case and factoring in all patients involved. If this isn't done correctly, your data can introduce bias into the process, which invalidates the results. To avoid bias, you need to look beyond the specific use case of the defect and consider multiple scenarios that involve it. Embedding testing in the deployment pipeline helps prevent bias from being introduced into the validated data.
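That release decision can be expressed as a simple gate: the fix must pass the flow that exposed the defect and every other recorded flow. This is a hedged sketch under assumed names (`deployment_gate`, `run_candidate`, the flow dictionary), not a prescribed pipeline design.

```python
def deployment_gate(run_flow, usage_flows, defect_flow):
    """Release only if the candidate build fixes the defect's flow
    AND passes every other recorded usage flow (no regressions)."""
    results = {name: run_flow(steps) for name, steps in usage_flows.items()}
    regressions = [name for name, passed in results.items() if not passed]
    return results.get(defect_flow, False) and not regressions

# Hypothetical stand-in for executing a flow against the candidate build.
def run_candidate(steps):
    return "crash" not in steps

flows = {
    "dose_entry":   ["login", "dose_calc", "confirm"],
    "history_view": ["login", "history"],
}
print(deployment_gate(run_candidate, flows, defect_flow="dose_entry"))  # prints True
```

In a real pipeline `run_flow` would drive the actual build, but the shape of the decision, fix confirmed and zero regressions across all flows, is the same.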

Bringing AI into CI/CD means the validation test algorithm can automatically change the validation plan or suggest areas that need more attention based on regression and forecasted data. But bias is a well-known problem in AI and healthcare. So, if SaMD validation testing is trained on biased data, it’s going to make incorrect assumptions and perform poorly. That’s why it’s so important to ensure your usage paths are based on a large, representative sample of patients and end-user scenarios. When the product is released, the real-world data from patients will train the AI to be more accurate and improve over time. This quickly increases the maturity of your validation process. Patient use highlights brittle areas of code that need more testing from a validation standpoint, but there are other sources of data for common SaMD problems.
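The "suggest areas that need more attention" idea can be illustrated with a trivially simple scoring rule: weight each test area by how heavily it is used and how often it has failed. Real systems would use regression models on far richer data; the field names and weights below are assumptions for the sketch.

```python
def prioritize_validation(tests):
    """Rank validation test areas so heavily used features with a
    history of failures get attention first."""
    return sorted(tests,
                  key=lambda t: t["usage_freq"] * t["failure_rate"],
                  reverse=True)

# Hypothetical test areas with observed usage and failure statistics.
tests = [
    {"name": "dose_calc",  "usage_freq": 0.9, "failure_rate": 0.10},
    {"name": "report_pdf", "usage_freq": 0.2, "failure_rate": 0.02},
    {"name": "pairing",    "usage_freq": 0.5, "failure_rate": 0.30},
]
```

Here `pairing` outranks `dose_calc` despite lower usage because it fails far more often, which is exactly the kind of reprioritization the paragraph describes.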

FDA Form 483

The term “Form 483” is enough to make anyone in the medical device industry shudder. FDA inspections are stressful, but receiving a Form 483 after the fact just makes things worse. Looking at the data from Form 483s as a whole does give the industry an idea of what FDA’s priorities are and where the biggest problems have been for other companies historically. Two of those problems in the medical device industry have been change management and risk assessments, both of which are vital when it comes to software and validation.

Risk management is almost always subjective. It’s a question of what you or your company consider to be the biggest risks based on your own risk tolerance. Different companies will have different views on the criticality of different software functionality based on their professional and industry experience. These subjective experiences color the risk level attributed to different uses, even when attempting to remove the subjective bias by using formalized risk matrices. Machine learning (ML), a subset of AI, can remove some of that subjectivity (although, again, your bias will be present in the data it’s trained on). If data is incorporated from multiple sources, the risk assessment moves toward objectivity, with evidence backing up the risk score instead of instinct alone.
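A formalized risk matrix of the kind mentioned above is easy to make concrete. The sketch below uses the common 5x5 severity-times-probability scheme with assumed class thresholds; in the ML-assisted version the paragraph describes, the probability input would be estimated from field data rather than picked by hand.

```python
def risk_score(severity, probability):
    """Classic 5x5 risk matrix: both inputs on a 1-5 scale."""
    assert 1 <= severity <= 5 and 1 <= probability <= 5
    return severity * probability

def risk_class(score):
    """Assumed thresholds; real programs set their own cut points."""
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"
```

Replacing the hand-picked `probability` with an evidence-based estimate is what moves the score from instinct toward objectivity.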

Change control is vital in software because software is meant to be changed and updated over time, and that’s also the case with SaMD. SaMD that involves AI/ML naturally changes in a different way: the whole point of using AI/ML is that the more data the program is exposed to, the more it learns and the more accurate it becomes. FDA’s action plan describes a “Predetermined Change Control Plan,” which would specify which aspects of the SaMD will be changed through AI/ML and how that will be done while maintaining the safety and effectiveness of the device.
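One way to think about a predetermined change control plan is as machine-checkable bounds that every proposed model update must satisfy. The plan fields, threshold values, and function below are hypothetical illustrations, not FDA's required structure.

```python
# Hypothetical predetermined bounds for AI/ML model updates.
CHANGE_PLAN = {
    "allowed_change_types": {"retrain_on_new_data", "threshold_tuning"},
    "min_sensitivity": 0.95,
}

def within_change_plan(update, plan):
    """Return True only if a proposed model update stays inside the
    predetermined change control plan's boundaries."""
    return (update["change_type"] in plan["allowed_change_types"]
            and update["new_sensitivity"] >= plan["min_sensitivity"])
```

An update that retrains on new data and keeps sensitivity above the floor passes; an architecture change, or any update that drops below the floor, falls outside the plan and would need a fresh regulatory look.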

FDA’s Action Plan

FDA’s action plan is just that. It isn’t regulation and it hasn’t advanced to guidance. It’s a plan based on feedback from the industry. Validation is barely mentioned in the plan, but there are some indications of FDA’s priorities and factors that SaMD companies should take into consideration when validating. One of these is the above-mentioned Predetermined Change Control Plan. The agency is hoping to publish related guidance sometime this year.

Another major factor in the plan is bias. Since bias is such a big risk when it comes to data, validation plans need to focus on mitigating it, with extra testing devoted to checking the device for bias. FDA’s action plan specifically mentions race, ethnicity, and socioeconomic status, so focus on ensuring those factors do not affect the performance of the SaMD.
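A basic form of that extra testing is a subgroup performance check: compute accuracy per demographic group and flag any group that trails the best one by more than a tolerance. This is a minimal sketch; the record format, the `"group"` field, and the 5-point tolerance are assumptions, and real bias audits use more sophisticated fairness metrics.

```python
from collections import defaultdict

def accuracy_by_group(records, group_key):
    """Per-subgroup accuracy from labeled evaluation records."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        g = r[group_key]
        totals[g] += 1
        hits[g] += int(r["predicted"] == r["actual"])
    return {g: hits[g] / totals[g] for g in totals}

def flag_disparity(group_acc, tolerance=0.05):
    """Flag subgroups whose accuracy trails the best subgroup by
    more than the tolerance."""
    best = max(group_acc.values())
    return [g for g, acc in group_acc.items() if best - acc > tolerance]

# Hypothetical evaluation records tagged with a demographic field.
records = [
    {"group": "A", "predicted": 1, "actual": 1},
    {"group": "A", "predicted": 0, "actual": 0},
    {"group": "B", "predicted": 1, "actual": 0},
    {"group": "B", "predicted": 1, "actual": 1},
]
```

Any group the check flags becomes a target for more representative training data or additional validation testing before release.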

The last point mentioned by FDA is the concept of real-world performance, or real-world data. This ties back to the usage paths that inform validation testing. Using real-world data ensures accuracy in determining where the greatest risks in the software are. While the plan stops short of saying that real-world data collection and monitoring will be a regulatory requirement, it implies that it will be expected to some extent.

Conclusion

AI/ML are re-shaping the healthcare industry, and SaMD is just one example of how that is happening. These technologies have great potential, but that will only be realized if they deliver on their promises. As with any other software, risk-based validation is the key to ensuring the product performs correctly without overwhelming the manufacturer. With the right software development approach and by using AI/ML to find and fix defects, SaMD companies can ensure their products perform accurately and continually improve themselves as they learn from more data.

About the Author(s)

Erin Wright

director, product management, MasterControl

As MasterControl’s director, product management, Erin Wright leads development of the company’s Validation Excellence Tool (VxT) and its next-generation analytics product, MasterControl Insights. She holds two patents related to streamlining the validation process by using a risk-assessment approach to greatly reduce validation time.

She joined MasterControl in 2013 as a professional services consultant and worked closely with hundreds of regulated companies in implementing MasterControl. Her extensive experience in quality, validation, and regulatory compliance includes working for an automated testing software company and several clinical trial software providers.
