Originally Published MDDI July 2006

Software validation requires critical thinking and knowledge of the tools available to manufacturers. A new TIR can be used to get validation just right.

The Goldilocks Principle: Approaches to Software Validation



Software is a significant part of any modern manufacturing operation, and FDA requires validation of such software used for regulated processes. Companies are often confused, however, about when and to what degree they should validate software systems that are used to support manufacturing or quality operations.1 To validate too much is unnecessary and can waste company resources. Validate too little and FDA will surely be dissatisfied. Identifying the right level of validation effort is the topic of a new technical information report (TIR) soon to be released by the Association for the Advancement of Medical Instrumentation (AAMI) Software Task Force.

The document, “TIR for the Validation of Software for Regulated Processes,” describes how to apply principles of critical thinking and risk management to produce value-added validation evidence. Such evidence must be comprehensive enough for auditing purposes and simultaneously build a company's confidence that its software will actually work for the intended use.

Although the TIR is not exactly a how-to document, it does give insight into alternative methods and key deliverables based on, among other things, the intended use, risk, and type of software.

Validation Issues

For the past 40–50 years, few industries have undergone more explosive growth than the computer and software industries. New technologies, new methodologies, and the emergence of the commercial software market have put many powerful tools in the hands of users. However, the body of knowledge around validation has not kept pace with the computer industry as a whole.

For those involved in software quality assurance (QA), one of the more difficult and time-consuming aspects of the job is figuring out how best to address validation. Most QA personnel describe painstaking testing processes for validating nondevice software in-house. The process generates reams of paper but very little useful information.

Because the permutations of software applications, sources, and associated risks are infinite, it makes sense that a so-called standard approach to validation is difficult to achieve. However, the lack of standardization means that companies are more likely to revert to tried-and-true validation methodology that is applied to all software. And usually that includes practices such as monumental testing and documentation efforts.

It is important to recognize that software is not going to be perfect. It is also critical to understand the risks involved in using imperfect software. It's a bit of a cliché to say that you cannot test quality into a product. Still, it often seems that QA professionals try to do just that during software validation.

Software is a huge part of the device-manufacturing environment. For almost every major automation process, there is a vendor that supplies the software. Internal development projects tend to focus on specific controllers or subprocesses, and even those projects increasingly draw on commercial or open-source providers. In many cases, access to the design specifications of commercial (or off-the-shelf) software is simply not available unless you are the one building the software. System or user requirements are limited if they exist at all.

Documents that provide constructive guidance on software validation as a practice are scarce—especially in the device industry—even though validation is a common practice. FDA's guidance does a good job of providing a scope and definition foundation, but doesn't go much beyond that.2 In the last five years, a number of industry groups and committees have been working to develop meaningful guidance to assist the working public. When dealing with automated processes, the two rules that come into play most often are the quality system regulation (QSR) in 21 CFR 820.70(i) and the electronic records rules in 21 CFR 11.10(a).3,4

Many activities related to the validation of systems and software fall under the purview of auditors from FDA, ISO, and other stakeholders. Providing guidance on how best to demonstrate that a computer system consistently conforms to specification is becoming an important goal for a variety of industry groups, including AAMI and AdvaMed. The International Society for Pharmaceutical Engineering, for example, publishes the GAMP 4 guide, which is commonly used in the industry.5 It is based on solid principles; however, the document is somewhat limited in terms of its approach. Its basic tenet is that the scope of a software validation is driven by the source of that software (e.g., custom developed or commercial off-the-shelf). Although the source is one important element, other factors, such as software risk and intended use, play heavily in validation decisions. It is for those elements that the TIR hopes to provide additional guidance.

The AAMI TIR Working Group

AAMI has an established software committee chaired by Sherman Eagles of Medtronic and John Murray of FDA. The committee has chartered a separate working group to investigate, document, and provide guidance for validating software used to support manufacturing and quality processes in the medical products industry. The output of that group is the TIR for validating software that performs regulated processes.

The working group comprises representatives from the following segments:

• Medical device manufacturers (quality, manufacturing, and R&D).

• FDA.

• Vendors (consulting, engineering, and commercial software).

The diverse group determined that a how-to methodology for software validation could never address the breadth of software used across the medical industry. Instead of creating a checklist-type guidance document, the group chose to cover how to approach the process and to identify what tools can best be applied to deliver value-added software validation, regardless of its intended use.

Validation in a Regulatory Environment

The TIR's main objective was to expand upon works already published, not to redirect those efforts, and to provide a more comprehensive view and a more practical approach than the earlier documents offer. The TIR emphasizes two important and valuable concepts in the validation process: critical thinking and the validation toolbox.

Critical Thinking. To get the most value out of a validation effort, a certain investment in thought and planning is required. The working group concluded that the items that significantly influence the scope, effort, and documentation required in software validation are as follows:

• Intended use (including process definition).

• Source.

• Risk analysis.

Before validating a software process, there should be a fundamental understanding of its intended use. The basics of who, what, when, where, and how the software is to be used must be defined. Software should always be validated with careful consideration of its context and purpose. The intended use could be defined in a summary statement of all the functional requirements that are driving the need for the software. Recognizing that in some cases (in a real-world setting) the requirements may not be well articulated or documented, the intended-use statement can serve to capture those requirements at the highest level. In addition, because not all processes come within the scope of the QSR, manufacturers must also establish whether the software falls under the authority of one or more of the regulations cited earlier.

Understanding the origins of the software drives a number of validation decisions.5 Software that is purpose built (or custom built) comes with key requirements and design documentation. However, purpose-built software does not have previous-use history upon which confidence can be established. By contrast, software that is commercially available may have a long history in the context of intended use. If so, it can be well characterized. Its behavior and performance can be measured against the history. When the software falls somewhere in between the two extremes, manufacturers may need additional tools for validation.

Risk analysis, as it relates to software, is complicated. There are many types of risks that can and should be considered. With respect to the QSR, risk is usually associated with the physical risk to the patient, consumer, or operator of the product. These safety risks are typically well understood by companies as they relate to the products manufactured.

When software is used to automate processes or when it is part of the quality system, characterizing the risks becomes more challenging. The risk of software failure should be thought of as part of the overall risk of the manufacturing process. The goal, as it relates to software validation, is to understand software risks and to develop mechanisms for mitigation and control that bring those risks to an operationally acceptable level.

Risk is generally defined as the combination of the probability of occurrence of harm and the severity of that harm. In the software world, it can be very difficult to calculate the probability of a certain error or defect occurring. Consequently, such a determination may be more subjective and less quantitative; it is often referred to as a likelihood. Determining the likelihood of a failure or defect might be the combination of any number of subjective qualifications, including the estimation of a certain error occurring based on the complexity of code or variations found in manually entered data.

For the purposes of the TIR, the working group used risk as defined in ISO 14971: a combination of severity and likelihood. The ISO definition of risk assessment is a “systematic use of available information to identify hazards and to estimate the risk.”6 It is important to note that ISO 14971 considers software errors to be systemic rather than probabilistic. Therefore likelihood, not probability, is combined with severity to determine risk.

The working group identified three categorical levels of risk, which roughly correspond to the levels of required validation rigor. Severity in the TIR is usually represented as follows:

• Critical—causing death or permanent or serious injury.

• Moderate—causing nonpermanent injury.

• Minor—not resulting in harm.

When severity is combined with likelihood, the result is the risk of harm (direct or indirect) that the system presents to the patient, the operator, manufacturing personnel, or the environment. Although quality system software would typically pose only the potential for indirect harm to individuals or the environment, there may also be potential for direct harm.

For example, if there were a software-related failure of manufacturing equipment (part of the quality system), an operator could be harmed or there could be a release of toxic chemicals into the environment. In this example, the software has an inherently higher risk associated with its use. It should be noted that the severity of harm could be just as severe for indirect harm as for direct harm.
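The TIR does not prescribe a formula for combining severity and likelihood. As an illustration only, the short Python sketch below shows one way such a qualitative combination might be expressed; the likelihood labels and the matrix values are assumptions made for this example, not categories taken from the TIR.

# Illustrative only: a qualitative matrix that combines severity and
# likelihood into a risk level. The severity labels follow the TIR's
# wording; the likelihood labels and the mapping itself are assumed.

RISK_MATRIX = {
    # severity:   {likelihood: risk level}
    "minor":    {"remote": "low",    "occasional": "low",    "frequent": "medium"},
    "moderate": {"remote": "low",    "occasional": "medium", "frequent": "high"},
    "critical": {"remote": "medium", "occasional": "high",   "frequent": "high"},
}

def risk_level(severity, likelihood):
    """Return a qualitative risk level for a severity/likelihood pair."""
    return RISK_MATRIX[severity][likelihood]

print(risk_level("critical", "occasional"))  # -> "high"

The point of the sketch is only that likelihood, rather than a calculated probability, is paired with severity; the actual categories and thresholds are something each manufacturer must define and justify.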

Toolbox. A considerable number of tools have been developed over the years that are invaluable during software validation. Enumeration of these tools may be found through a variety of sources, including IEEE, the Software Engineering Institute at Carnegie Mellon University, the U.S. Department of Defense, and NASA. As is always the case, selecting the right tool for the job is key.

Take for example the task of nailing boards together when the only tool available is a sledgehammer. The job can be done, of course, but not without consequences. Quite a few nails are wasted, the boards would likely incur damage, and the worker is exhausted by the time the task is finished. A carpenter who is equipped with a number of hammers, however, can find just the right tool and accomplish the job with minimal waste. Likewise, a software-QA person should also have many tools available to avoid waste, damage, and exhaustion. For example, the risk and quality team at a device manufacturer may be faced with the validation of two different computer systems: a spreadsheet used to track vendors and their approval status, and sterilizer controller software that adjusts sterilization cycles for finished products based on internal and external conditions. It would be unlikely and unproductive to use or apply the same tools in the same way to conduct each validation.

In this regard, the TIR defers to other works, both technical and nontechnical, for tools that may apply. It also offers a listing of tools commonly used. Following a typical software development life cycle methodology, there are various tools and techniques at each stage in the software development and selection process that could be used. The optimal choice of tools is driven by the intended use, source, and risk analysis of the software.
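As a rough illustration of how those three drivers might steer tool selection, the hypothetical sketch below maps a software source and risk level to candidate validation tools. The tool names and selection rules are assumptions made for illustration; the TIR itself defers to other works for the tools.

# Hypothetical sketch: choosing candidate validation tools from the drivers
# the TIR identifies (intended use, source, and risk). The rules and tool
# names below are illustrative assumptions, not recommendations from the TIR.

def candidate_tools(source, risk):
    """Suggest validation tools for a given software source and risk level."""
    tools = ["intended-use statement"]  # intended use is documented in every case
    if source == "custom":
        tools += ["requirements and design reviews", "code review",
                  "unit and integration testing"]
    elif source == "commercial":
        tools += ["vendor assessment", "configuration verification"]
    if risk in ("medium", "high"):
        tools.append("risk-based functional testing")
    if risk == "high":
        tools += ["vendor audit", "traceability analysis"]
    return tools

print(candidate_tools("commercial", "low"))
# -> ['intended-use statement', 'vendor assessment', 'configuration verification']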

Putting Validation into Practice

Critical thinking and toolbox selection may be useful when considering the vendor selection process. For example, say a manufacturer decides to purchase commercially available software. In that case, the validation tools chosen must address vendor assessment.

Manufacturers should consider two factors: the risk of the function the software performs, and the software's history. Say the software for a particular function has been determined to be low risk, and both the vendor and the software product have a long history of use in the industry. Because these criteria are met, a phone survey may be an adequate validation tool to establish the vendor as a suitable supplier.

However, if the piece of software in question is determined to be high risk, or the vendor does not have significant history in this industry, the manufacturer may need to do more work. A site audit to review the vendor's quality system may be more appropriate to determine the software's ability to consistently meet requirements. Two different tools (a survey and an audit) are applied depending on the particulars of a scenario to gain confidence.
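The decision between those two tools can be reduced to a simple rule of thumb. The sketch below captures the logic of this example; the specific conditions are assumptions drawn from the scenario above, not criteria mandated by the TIR.

# Illustrative decision logic for the vendor-assessment example above:
# low risk plus an established history of use points to a phone survey,
# while higher risk or limited history points to a site audit.

def vendor_assessment(risk, vendor_history, product_history):
    """Pick an assessment tool based on risk and prior-use history."""
    if risk == "low" and vendor_history and product_history:
        return "phone survey"
    return "site audit of the vendor's quality system"

print(vendor_assessment("low", True, True))    # -> phone survey
print(vendor_assessment("high", True, False))  # -> site audit of the vendor's quality system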

Requirements are often the cornerstone of a validation effort, yet manufacturers might need to rethink their use. When purchasing an electric toothbrush, consumers do not typically write down a bunch of requirements first. They might check consumer ratings, but then they would go to the store, read the packaging, and choose one that fits their budget and needs. For the most part, consumers would be satisfied with the selection.

Obviously, it's a little different when selecting software that costs thousands, or maybe millions of dollars, and the device in question is for medical applications. But on the surface, the process is very much the same. For a well-characterized business process for which suitable vendors exist (e.g., document management), it is not necessary to spend a lot of time compiling a detailed requirements document.

Figure 1. Software validation is best accomplished through team efforts, with members taking the lead in activities suited to their skill set.

Although an individual can certainly apply critical-thinking techniques with regard to validation strategies, critical thinking is most effective when applied by a team in a collaborative effort. Figure 1 provides a simplified illustration of how a team might work together to fulfill various elements of a validation. Although all team members work together to accomplish the validation, certain members should take the lead in activities best suited to their skill set.

Say the company has a software project that it has rated as a medium risk. Although the software is configurable, the company decides that it should assemble some detailed requirements to help ensure that the software is set up correctly. In a different project, the risk is minor and off-the-shelf software is used with minimal configuration needed. Here it is acceptable to use the validation process provided with the software. In this case, there are only a few simple high-level (macro) function requirements that need to be documented as part of the validation. The software's user manual can help to develop some test cases that confirm proper installation.

There are many different levels of requirements. A company must think critically about how much validation is enough to ensure that the software will conform to the company's needs.

It is fairly common for manufacturers to customize commercial software or to custom build systems for a specific business process. For customized systems, a traditional validation approach certainly still applies. Such validation entails a heavy emphasis on requirements, design specifications, and verification of other software development life cycle phases. FDA's document on general principles of software validation defines and distinguishes between verification and validation.2 For the purposes of the TIR, verification is objective evidence that specific phases of a life cycle or validation master plan have been successfully completed.

There is an opportunity to focus the validation effort on specific features and functions that pose a higher risk. For example, parts of the system that involve automated decision making, routing, or other processes rely on algorithms and other logical schemes. Failures in these routines generally have more-significant implications than failures in data-entry or data-collection operations.

Such logic failures may be harder for users to detect, and therefore might present a greater risk. For object-oriented programs, validation can be prioritized by categorizing the program objects rather than validating solely against requirements. The objects could first be categorized as either new (to the software) or existing. New objects could then be categorized, for example, based on their complexity, prevalence, and number of object interfaces. Existing objects could be categorized based on pedigree (i.e., past successful history) and the changes they have undergone (e.g., changes to the object interface, changes to methods and function calls, or changes to internal object logic that affect other system elements).
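One way to make that categorization concrete is to score each object and order the validation effort accordingly. The sketch below is a hypothetical scoring scheme based on the factors just listed; the weights are assumptions chosen only to illustrate the idea.

# Hypothetical scoring of program objects to prioritize validation effort.
# New objects are scored on complexity and interfaces; existing objects on
# pedigree and whether they have changed. The weights are illustrative only.

from dataclasses import dataclass

@dataclass
class ProgramObject:
    name: str
    is_new: bool
    complexity: int      # e.g., 1 (simple) to 5 (complex)
    interfaces: int      # number of object interfaces
    pedigree_years: int  # years of successful prior use (existing objects)
    changed: bool        # interface, method, or logic changes since last release

def validation_priority(obj):
    """Higher score = validate earlier and more thoroughly."""
    if obj.is_new:
        return 2 * obj.complexity + obj.interfaces
    score = obj.complexity + (3 if obj.changed else 0)
    return max(score - obj.pedigree_years, 0)

objects = [
    ProgramObject("routing_engine", True, 5, 4, 0, True),
    ProgramObject("audit_log", False, 2, 1, 6, False),
]
for obj in sorted(objects, key=validation_priority, reverse=True):
    print(obj.name, validation_priority(obj))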

The critical-thinking approach does not necessarily alter the level of effort needed to validate a custom system, but it can deliver a more usable document downstream for those responsible for the maintenance and support of the software. A series of documents that describe the software (including its use, risks, and testing strategies) is likely to be more useful than a long validation protocol with associated test evidence.

The TIR uses more than 20 industry-specific, real-life examples that a medical products manufacturer might encounter to demonstrate the practicality of this approach. Readers can begin to understand how to apply critical thinking and tool selection to successfully validate software in their environment.

The approach fundamentally diminishes (but does not eliminate) the need for testing. Directed testing will always have a role in validation. However, merely rerunning a set of tests supplied by a vendor or consultant, one that has been run a hundred times before, is unlikely to provide any meaningful information about a specific installation.

Conclusion

To fulfill its mission of protecting the health and safety of the public, FDA has developed a number of rules for good manufacturing practices and quality systems. These rules direct that processes, equipment, and in some cases, software, should be validated. There are several legitimate methodologies for validating software and software systems. As the device industry evolves, it should look toward ways to perform value-added validation activities that can effectively increase the confidence that software will perform the job as intended. Furthermore, these activities must ensure that if the software were to fail in some way, adequate precautions have been taken to minimize foreseeable risk.

Manufacturers that use software need to find sound guidance to produce meaningful, realistic, and effective software validations.

Mark Allen is director of regulatory affairs and quality assurance at NetRegulus (Centennial, CO). He can be reached at [email protected] . Steve Gitelis is CEO of Lumina Engineering (St. Paul, MN) and can be contacted at [email protected].

References

1. Dave Vogel, “Validating Software for Manufacturing Processes,” Medical Device & Diagnostic Industry 28, no. 5 (2006): 122–134.

2. Food and Drug Administration, Center for Devices and Radiological Health, “General Principles of Software Validation; Final Guidance for Industry and FDA Staff” (Rockville, MD: FDA, CDRH, January 2002).

3. Code of Federal Regulations, 21 CFR 820.70(i).

4. Code of Federal Regulations, 21 CFR 11.10(a).

5. “Good Automated Manufacturing Practices Guide for Validation of Automated Systems,” GAMP 4 (ISPE Forum, December 2001).

6. ISO 14971:2000(E), “Medical Devices—Application of Risk Management to Medical Devices” (Geneva: International Organization for Standardization, December 2000).

Copyright ©2006 Medical Device & Diagnostic Industry
