Building Quality into Medical Device Software

PRODUCT DEVELOPMENT INSIGHT - ONLINE EXPANDED VERSION

David Bergeson

February 1, 2009


Medical device software validation is drawing increased attention from regulators, manufacturers, and the healthcare industry. Recent media articles have reported that FDA's software forensics laboratory now uses automated source code analysis tools to evaluate software quality.1,2 In addition, FDA's enforcement reports regularly include medical device recalls due to software issues.3 This greater scrutiny of software quality and validation, combined with questions about what constitutes compliant and validated software, suggests a need for greater understanding of both the regulations and the current good practices associated with achieving validated medical device software.

As software systems increase in complexity, achieving and maintaining quality becomes more challenging. Ensuring software validation today requires a comprehensive life cycle approach that builds validation tasks into every step of the software development and validation process and that identifies the necessary resources so they are available when needed to achieve and maintain quality and compliance. The earlier rigorous software quality assurance is applied in the development life cycle, the better the chances of minimizing software problems and avoiding errors that could lead to regulatory delays, costly recalls, or enforcement actions.

Although FDA has long considered adequate specification to be the key to achieving software validation, the agency's use of code verification tools indicates a belief that increased control of the coding phase of the software development cycle would help improve the quality of medical device software. Source code inspection—whether conducted by programmers or automated tools—has long been recognized as an effective method of detecting software “bugs” that can produce unexpected or incorrect software application behavior. The examination of source code for conformance with established coding standards is an essential element in developing safety-critical code, a robust software validation process, and a validated software product.4

Current Regulatory Guidance

FDA's 2002 guidance, General Principles of Software Validation, describes software validation as a life cycle process—that is, each software development phase should include at least one validation activity. The document contains guidance for FDA-regulated industries on the management and control of software validation processes but does not recommend any particular software validation methodology. The guidance, which notes that the basic principles of software validation have been in use for more than 20 years, gives device manufacturers the flexibility to choose an appropriate software validation process. It also recommends that software verification and validation be conducted throughout the entire development process.

According to the guidance, the conclusion that software is validated depends heavily on comprehensive software testing, inspections, analyses, and other verification tasks performed during each stage of development. The guidance also notes the following:

  • Accurate and complete documentation of the software's functional requirements is essential for successful software validation.

  • Quality-related measurements, such as defect metrics and coverage metrics, should be used to gain an acceptable level of confidence in the quality of software prior to its release.

  • Conformance with all software specifications, and confirmation that all software requirements are traceable to system requirements, are vital to ensuring that a device meets the user's needs and intended uses.

In addition, the guidance states that control of the software development process is essential to build quality into the software, measure the quality of the software product, determine whether predetermined acceptance criteria have been met, and demonstrate that the software is suitable for its intended application. These are the basic elements of medical device software validation.

Meeting the Expectations

FDA's expectations (see the sidebar, “Regulatory Expectations”) may appear daunting, but most of the principles have been standard in the software industry for many years. In a regulated environment, carrying out the expected tasks is only part of the process. A manufacturer must also be able to demonstrate that the tasks and validation approach were both appropriate and sufficient for the product, so that the reviewer—and the agency—can agree with the conclusion that the software is properly validated.

A well-prepared and robust software validation summary report is a key document for the independent reviewer to gain an understanding of the software validation process that was employed, the relevant validation documentation, and the pertinent results of the validation activities and tasks. The ideal software validation summary report should accomplish the following objectives:

  • Provide a brief but logical narrative about the validation life cycle and the tasks employed to build quality into the software product.

  • Identify the defect prevention and detection activities and tasks that were employed.

  • Describe the specifications documents and how and why they are suitable as a basis for validation.

  • Explain how the coding process was controlled.

  • Demonstrate that testing and verification were thorough.

  • Explain how the testing process was appropriately challenging.

  • Present the essential measurements that were used to assess software quality and suitability for release.

  • Identify the activities and tasks undertaken to assure that changes did not introduce new defects.

  • Describe the defect management process.

  • Discuss the remaining known defects and their effect on validation.

Creating a Robust Validation Summary Report

The challenge in developing a robust software validation report is to provide the right details to support each part of the validation effort. The specifics of each company's software validation summary report depend on individual circumstances and the type of device being developed. Some developers view the software validation summary report as a living document, consisting of multiple versions that are created at each stage of development and validation. Another approach is to use a master validation plan and a corresponding summary report that identify the validation tasks planned and performed. The report is accompanied by specific documents that demonstrate the successful completion of each task and support the final conclusion that the software is validated.

Regardless of the method, it is essential to create a summary document that clearly spells out the validation effort and demonstrates why that effort was both suitable and adequate to validate the specific software product. The discussions in the sections below are not intended to be prescriptive, but do illustrate some key tasks and activities in the major software development phases of planning, specifications, coding, and testing; and the associated life cycle activities of measurement, change control, and documentation.

Planning. The software validation plan should be identified and outlined, and any modifications made during the project should be described. The key elements of a software validation plan should include the following:

  • Design inputs identified as necessary for validation (e.g., conformance to voluntary standards such as ANSI/AAMI/IEC 62304:2006, Medical Device Software—Software Life Cycle Processes).

  • Description of the development and validation environments, including the tools to be used, and any tools that require validation.

  • Life cycle phases and the associated software validation deliverables.

  • Predetermined software validation or release criteria.

The validation task applied to the software development and validation plan could be a formal technical review that is documented with approved minutes.

Specifications. The specifications documents that were created for the software system being developed should be listed and described. Both the external (e.g., requirements specifications, functional specifications) and the internal (e.g., design) specifications should be listed. The validation tasks applied to the specifications documents should also be listed and described. The formal technical review is the primary validation task used to ensure that the specifications are of high quality and are validatable. It verifies that the specifications define both the positive (what the software should do) and the negative (what the software should not do) requirements; that the specifications have the quality attributes of accuracy, completeness, consistency, testability, correctness, and clarity; and that the specifications quantify the intended use of the software.

Another useful validation task that can be applied to software specifications is a requirement to create the corresponding test protocols concurrently and to review them with both the specifications developers and the test protocol developers to ensure consistency. This activity ensures not only that the primary (main) functional path is specified, but also that possible secondary paths (alternate valid paths) and exceptional paths (invalid conditions and inputs) are identified, and that the intended behavior is defined for those inputs and conditions.
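To make this concrete, the minimal sketch below (in Python, with a calculate_dose routine invented purely for this example) shows how a concurrently developed test protocol can pair one test case with each path type:

    import unittest

    def calculate_dose(weight_kg, renal_impairment=False):
        """Hypothetical dosing routine, invented to illustrate path coverage."""
        if weight_kg <= 0:
            raise ValueError("weight must be positive")  # exceptional path
        dose = weight_kg * 2.0                           # primary path
        if renal_impairment:
            dose *= 0.5                                  # secondary (alternate) path
        return dose

    class DoseSpecificationTests(unittest.TestCase):
        def test_primary_path(self):
            # Main functional path: standard patient, nominal dose.
            self.assertEqual(calculate_dose(70), 140.0)

        def test_secondary_path(self):
            # Alternate valid path: dose is halved for renal impairment.
            self.assertEqual(calculate_dose(70, renal_impairment=True), 70.0)

        def test_exceptional_path(self):
            # Invalid input: the specified behavior is to reject it.
            with self.assertRaises(ValueError):
                calculate_dose(0)

    if __name__ == "__main__":
        unittest.main()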

Additional validation tasks that can be incorporated into the validation of software specifications include the integration of risk management into the software specification process, creation and customer approval of a draft user or operation manual, traceability analysis from higher-level or precedent specification or requirements documents to assure completeness, and measurements made to assess defect removal effectiveness.

Coding. This section of the report should identify and describe the controls applied to the coding process, such as:

  • Established coding procedures and their key elements, including programming language usage rules, complexity management considerations, and source code understandability elements.

  • Conformance with coding procedures, which can be verified with human code inspections or with automated code verification tools (a minimal sketch of one such automated check appears after this list). Code checking or verification serves as both a defect prevention and a defect detection task, and code inspections can also mitigate indirect causes of hazardous software behavior.

  • Code inspections that include evaluation of conformance to the specified functional operations. Using a code verification tool to identify potential structural issues in the source code can facilitate focusing on functional implementation during human code inspections.

  • Translation (compilation) requirements, including error checking, and the verification of conformance.

  • Build testing.

  • Unit or module testing, which is typically applied during the coding phase but can also be included in the testing phase.

  • Measurements made to assess defect removal effectiveness.
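Automated conformance checking need not require an elaborate toolchain. As a minimal sketch (not any particular commercial tool), the Python script below checks one hypothetical coding-standard rule, a limit on function length, in the spirit of published rules for safety-critical code.4

    import ast
    import sys

    MAX_FUNCTION_LINES = 60  # hypothetical limit from an in-house coding standard

    def check_function_length(path):
        """Flag functions whose source span exceeds the standard's line limit."""
        with open(path) as handle:
            tree = ast.parse(handle.read(), filename=path)
        violations = []
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                length = node.end_lineno - node.lineno + 1
                if length > MAX_FUNCTION_LINES:
                    violations.append((node.lineno, node.name, length))
        return violations

    if __name__ == "__main__":
        for filename in sys.argv[1:]:
            for line, name, length in check_function_length(filename):
                print(f"{filename}:{line}: function '{name}' is {length} lines "
                      f"(limit is {MAX_FUNCTION_LINES})")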

Testing. The testing section of the validation summary should include three major elements. The first is a description of the level of testing effort. This includes elements such as unit testing, integration and interface testing, system testing, and acceptance testing. Alternatively, the levels may be described as low-level (e.g., unit and integration) versus high-level (e.g., functional) testing. The description of each testing level could include a list of the test case identification methodologies (e.g., basis path analysis for structural testing and equivalence class analysis for functional testing) as well as entry and exit criteria for each level of testing. Measures of the software's quality and reliability should also be included, as should measures of the software testing effort (e.g., the number of test cycles executed, the number of test cases in each category, etc.).
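As a brief illustration of equivalence class analysis, the sketch below partitions a hypothetical flow-rate input (the 1–100 mL/h range is invented for this example) and draws one representative test value from each class:

    # Equivalence class analysis for a hypothetical flow-rate input
    # specified as valid from 1 to 100 mL/h (illustrative values only).
    def is_valid_flow_rate(value):
        """Accept only numeric rates within the specified range."""
        return isinstance(value, (int, float)) and 1 <= value <= 100

    equivalence_classes = {
        "below valid range": -5,     # invalid: under the minimum
        "within valid range": 50,    # valid: nominal in-range value
        "above valid range": 150,    # invalid: over the maximum
        "non-numeric input": "abc",  # invalid: wrong data type
    }

    for label, representative in equivalence_classes.items():
        result = is_valid_flow_rate(representative)
        print(f"{label}: {representative!r} -> accepted={result}")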

A discussion of testing thoroughness should answer questions such as: How complete was the testing relative to the source code and the specifications? Were both structural and functional testing methodologies used? Did the system or higher-order testing include key areas (e.g., function, operation, usability, volume, configuration, etc.)? What measures of testing thoroughness were made? Were traceability methodologies used to ensure completeness?
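Structural coverage is one widely used measure of testing thoroughness. As a sketch, a team working in Python could collect statement and branch coverage with the coverage.py library (the tests directory name is a placeholder):

    import unittest
    import coverage

    # Measure statement and branch coverage while the test suite runs.
    cov = coverage.Coverage(branch=True)
    cov.start()

    # Discover and run the project's tests (directory name is a placeholder).
    suite = unittest.defaultTestLoader.discover("tests")
    unittest.TextTestRunner().run(suite)

    cov.stop()
    cov.save()
    cov.report(show_missing=True)  # percent covered, plus uncovered lines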

An explanation of how the testing was challenging should be included. In addition to showing that the software system behaved as expected, validation testing must also show that it did not do anything unexpected, so the summary should explain how the testing approach focused on detecting defects. Functional testing that is challenging and finds defects exercises combinations of inputs, boundary values and conditions (both functional boundaries and internal technological boundaries), possible invalid inputs and conditions, and combinations of inputs likely to cause problems.5 Pair-wise testing is currently the minimum threshold for a combinatorial testing methodology.6 Other combinatorial test case identification methodologies include the use of orthogonal arrays to identify tests covering two, three, or more input values.
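As a minimal illustration (a greedy sketch, not an optimized pairwise algorithm such as those in dedicated combinatorial testing tools, and with invented device settings), the following Python code selects test cases until every pair of parameter values is covered:

    from itertools import combinations, product

    def pairwise_suite(parameters):
        """Greedily pick test cases until every pair of values across
        every pair of parameters is covered at least once."""
        names = list(parameters)
        uncovered = {
            ((a, va), (b, vb))
            for a, b in combinations(names, 2)
            for va in parameters[a]
            for vb in parameters[b]
        }
        suite = []
        for values in product(*(parameters[n] for n in names)):
            case = dict(zip(names, values))
            newly_covered = {
                ((a, case[a]), (b, case[b])) for a, b in combinations(names, 2)
            } & uncovered
            if newly_covered:
                suite.append(case)
                uncovered -= newly_covered
            if not uncovered:
                break
        return suite

    # Hypothetical infusion-pump settings, invented for illustration.
    settings = {
        "rate": ["min", "nominal", "max"],
        "bolus": ["off", "on"],
        "alarm_volume": ["low", "high"],
    }
    for case in pairwise_suite(settings):
        print(case)

For these three parameters the full cartesian product contains 12 combinations, while the greedy pairwise suite covers all 16 value pairs with roughly half as many test cases.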

Boundary value analysis and testing is another recognized and effective method for finding software errors.5 The description of boundary value analysis and testing should clarify how this testing was challenging. For example, did it simply consider maximum and minimum values of functional numeric ranges or was it comprehensive?7 Did the analysis extend to identifying program variables (numbers and calculations, systems of equations, text, dates, times), singularities (e.g., division by zero), discontinuities, points at which behavior changes, limits of authority, limits of patience, and extreme values? Was the boundary value analysis and testing applied to both internal (data structure limits) and external boundaries (functional limits)?
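As a brief sketch, boundary value test inputs for a hypothetical dose-volume field specified as valid from 0.1 to 10.0 mL (values invented for illustration) would sit on, just inside, and just outside each boundary:

    # Boundary value analysis for a hypothetical dose-volume input
    # specified as valid from 0.1 to 10.0 mL, inclusive (illustrative only).
    LOW, HIGH = 0.1, 10.0

    def accepts_dose(volume_ml):
        """Accept only volumes within the specified range, inclusive."""
        return LOW <= volume_ml <= HIGH

    boundary_cases = [
        (0.09, False),   # just below the minimum
        (0.1, True),     # exactly the minimum
        (0.11, True),    # just above the minimum
        (9.99, True),    # just below the maximum
        (10.0, True),    # exactly the maximum
        (10.01, False),  # just above the maximum
    ]

    for value, expected in boundary_cases:
        assert accepts_dose(value) == expected, f"unexpected result at {value}"
    print("all boundary cases behave as specified")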

Other important elements of challenging testing include the examination of error-handling behavior and the use of negative, failure mode, or stress testing techniques. For software systems in which safety is an issue, the verification of error-handling functionality is an essential element of challenging testing.
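For example, a negative test verifies that a fault is reported through a defined error path rather than silently mishandled. The sketch below assumes a hypothetical sensor-reading parser invented for illustration:

    import unittest

    def parse_sensor_reading(raw):
        """Hypothetical parser: convert a raw string to a float, rejecting garbage."""
        try:
            return float(raw)
        except (TypeError, ValueError):
            raise ValueError(f"unreadable sensor value: {raw!r}")

    class ErrorHandlingTests(unittest.TestCase):
        def test_corrupted_input_is_rejected(self):
            # Negative test: corrupted input must raise a defined error,
            # not return a plausible-looking number.
            with self.assertRaises(ValueError):
                parse_sensor_reading("1O.5")  # letter O, a plausible corruption

        def test_missing_input_is_rejected(self):
            with self.assertRaises(ValueError):
                parse_sensor_reading(None)

    if __name__ == "__main__":
        unittest.main()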

The summary should discuss the software reliability measures made during dynamic testing and the measures made to assess defect removal effectiveness. In addition, the summary should also address how the testing considered and verified the user's needs.

Measurements. The summary should describe the measures of application program quality and reliability that were made and how they were assessed. It should also describe the measures of the validation process effectiveness in removing defects. In particular, the measures used in making the “is validated” decision should be identified, along with the data values or trends that justify the decision. Current good practice calls for more than one measurement to be made for each item being measured.8 In addition, the summary report should explain how the data are normalized to make them meaningful.

Measures should be made during each software development and validation phase. Some measures can be made throughout the software life cycle, while others are made only during certain phases of development. Measures of testing effectiveness are useful, as are measures of the quality of the software product.
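As a worked illustration with invented counts, defect removal effectiveness can be computed as the fraction of all known defects found before release, and defect counts are often normalized per thousand lines of code (KLOC) so that products of different sizes can be compared:

    # Invented counts for one release, used only to illustrate the arithmetic.
    defects_found_before_release = 92
    defects_found_after_release = 8
    lines_of_code = 40_000

    # Defect removal effectiveness: share of all known defects caught pre-release.
    total_defects = defects_found_before_release + defects_found_after_release
    dre = defects_found_before_release / total_defects
    print(f"Defect removal effectiveness: {dre:.0%}")      # 92%

    # Normalizing by size makes defect counts comparable across releases.
    density = total_defects / (lines_of_code / 1000)
    print(f"Defect density: {density:.1f} defects/KLOC")   # 2.5 defects/KLOC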

Change Control. The summary should describe when source code change control was implemented and how source code changes were controlled. Adequate control of source code changes includes ensuring that all associated software documents remain accurate, current, and consistent. Also, change documentation records should show when, why, and by whom changes were made. Such control should also prevent any unauthorized changes and ensure that all authorized changes are implemented and tested. It is essential to include regression analyses and testing for all changes to show that other areas of the software code were not adversely affected.

Documentation. FDA generally takes the position that if a task is not documented, it was not performed. It follows that if the documentation is incomplete or inadequate, then the validation is incomplete or inadequate. Therefore, all documentation must be complete, accurate, and suitable for third-party review. It must also provide sufficient information for the reviewer to adequately evaluate the validation effort.

Conclusion

Although software validation is labor-intensive and costly, it is essential for the production of safe and reliable medical device software. Software validation also reduces business risk and can generate a positive return on investment.

With the pressure to shorten medical device development cycles, the application of a software development process with rigorous validation requirements—including risk management, the use of tools to automate software quality assurance tasks, and the use of software process and product metrics—is vital to delivering validated, high-quality software. Validation also reduces regulatory risk and enables manufacturers of software-driven medical devices to remain competitive.

David Bergeson is principal consultant for Parexel Consulting (Lowell, MA). Contact him at [email protected].

 

References

1. S Conroy, “Are You In Control of Your Software Analysis?,” Medical Device & Diagnostic Industry 30, no. 2 (2008): 16.

2. JD Rockoff, “Flaws in Medical Coding Can Kill,” Baltimore Sun (June 30, 2008).

3. FDA Enforcement Report Index, www.fda.gov/opacom/Enforce.html.

4. GJ Holzmann, “The Power of 10: Rules for Developing Safety-Critical Code,” Computer (June 2006).

5. GJ Myers, The Art of Software Testing (New York: Wiley, 1979).

6. DR Wallace and DR Kuhn, “Failure Modes in Medical Device Software: An Analysis of 15 Years of Recall Data,” International Journal of Reliability, Quality, and Safety Engineering 8, no. 4 (2001).

7. R Sabourin, “On the Field of Finite Boundaries,” Software Performance and Test 4, no. 2 (2007).

8. R Craig, “Measurement and Metrics for Test Managers,” Preconference Tutorial, STAREAST Software Testing Analysis & Review, May 5–9, 2008, Orlando, FL.

Copyright ©2009 Medical Device & Diagnostic Industry
