
SOFTWARE

Implementing an Automated Traceability Mechanism for Medical Device Software

Medical Device & Diagnostic Industry Magazine

Originally published February 1996

Carlos F. Vicens

To comply with FDA requirements and various U.S. and international standards, medical device manufacturers must be able to trace device software capabilities from a requirements specification through test and release. In sections on testing and documentation, FDA's "Reviewer Guidance for Computer-Controlled Medical Devices Undergoing 510(k) Review" explicitly states, "Testing requirements should be traceable to the system/software requirements and design" and "reviewers may expect that system and software requirements can be traced through the documents applicable to each of these software development tasks."1 The Institute of Electrical and Electronics Engineers (IEEE) gives similar guidelines when defining a functional audit in its standard for software quality assurance (QA) plans: "[An] audit is held prior to software delivery to verify that all requirements specified in the Software Requirement Specification have been met."2 On the international level, the design control section in the ISO 9001 standard states, "The supplier shall establish and maintain documented procedures to control and verify the design of the product in order to ensure that the specified requirements are met."3 In other words, it is not enough for a company to have documents showing its design and other documents covering its test procedures and results. It must also provide a way to trace each item from requirement to test. Such traceability is especially critical for Class III devices and Class II devices presenting significant risks.

There are other advantages to traceability beyond meeting regulatory needs. Having a comprehensive traceability mechanism makes it easier to complete related testing and documentation tasks:

* Whenever a design or test phase is completed, a verification exercise should be conducted to ensure that all requirements have been considered; traceability to specification and design documents enables this task to be done efficiently.

* When software changes occur, traceability makes it relatively easy to evaluate the impact the changes may have on other parts of the development process. The traceability mechanism not only highlights tests that might have to be updated or repeated, it also points out documents--hazard analyses, specifications, and user manuals, for example--that may have to be reviewed.

* Traceability can also facilitate the compilation of data for completeness and coverage metrics. These data often go unused because the product development process is so complex and dynamic.

Unfortunately, with a few exceptions, not much has been published showing how to implement a practical traceability mechanism.4,5 Most systems are based on the meticulous maintenance of yet another manually compiled document called a traceability matrix. Although straightforward in concept and useful in theory, the matrix itself can become a source of error as it grows complex and must be continually revised. Because keeping track of software changes manually is difficult and time-consuming, it is not uncommon for a company to delay updating the matrix until after the work is done or until just before auditors arrive. An automated traceability mechanism could overcome such problems.

The implementation method proposed in this article works in conjunction with automated software test tools to eliminate the need for maintaining a traceability matrix manually. The technique is based on defining standard reporting functions and embedding them in the automated test cases. The embedded functions then send traceability data to a machine-readable file that can be accessed to generate a variety of useful reports.

WHAT IS TRACEABILITY?

As defined in the IEEE standard for software QA plans, there are two types of traceability.2 The first, forward traceability, enables users to trace how a requirement is (or will be) implemented or tested. Another IEEE standard, which covers software requirement specifications, states that each requirement must have a unique identifier.6 Forward traceability exists if this identifier can be used to trace a feature through all related documents from concept to completion.

In contrast, backward traceability is the ability to tell where a requirement originated. In other words, all requirements documents, design documents, code, and test scripts must point to their source. Backward traceability is important because no development project is static--requirements, specifications, code, and test scripts may change at any time. Backward traceability exists if previous levels of documentation can be easily found and updated.

The kind of traceability mechanism needed by a development group or team depends on many factors, including product complexity and criticality, the documentation structure, the company's organizational culture, the product development environment and test tools, and regulatory requirements.

TRACEABILITY MATRICES

A traceability matrix is a table showing the relationships between items to be traced. In the example of a simple matrix shown as Table I, a naming convention was used to indicate the one-to-one relationship between each requirement and its test procedure. The test for the requirement in paragraph 3.1.1.1 of the software requirement specification (SRS) is TPR_3111, the test for the requirement in SRS 3.1.1.2 is TPR_3112, and so forth. In the real world, the situation is usually more complex, so that a simple naming convention is inadequate. For the matrix shown as Table II, the tester has decided that feature 3.1.1.1, the tachy detect algorithm alpha, requires two test modules, one for nominal conditions (TPR_024) and another for testing responses to error conditions (TPR_027), but that a single test module (TPR_052) can be used to perform load stress testing on two different features (tachy detect algorithms beta and gamma). This level of complexity is not too difficult to maintain manually, assuming the documentation process is initiated at the start of the project and becomes part of standard operating procedures. However, things are never quite that simple. A third level of complexity is almost always introduced by the large number of design and test documents that must be accommodated.

The goal of maintaining a matrix is to be able to trace requirements, but those requirements may be spread across many different documents. The SRS and the hazard analysis are the starting point. The tester may find additional requirements in related specifications, user documentation, or other peripheral documents. Not all of these requirements have to be targeted for testing, but all should be identified and reviewed. Of course, when these new requirements are identified, the SRS must be updated, and, to maintain traceability, each of the new testable requirements must be given a unique identifier. One way to simplify this naming procedure is to create a three-letter identifier for each source document based on its contents, as illustrated in Table III.

If a document has been carefully written and contains one requirement per paragraph, the paragraph number can be used along with the document ID as the identifier for the requirement. If paragraphs contain multiple requirements, which is more likely, each requirement can be assigned a "handle," such as those in Table IV, which lists requirements from several documents along with their identifiers for use in the traceability scheme. Requirement DET:Tachy_Det_Alg_Gamma, for example, is the unique identifier for the tachycardia detection algorithm requirement described in SRS 3100-3204. Once a traceability matrix reaches the complexity level shown in Table IV (which doesn't even include the test procedures), it becomes difficult to maintain manually; an automated way of keeping it updated becomes very useful.

IMPLEMENTING AN AUTOMATED TRACEABILITY MECHANISM

The military and aerospace industries have dealt with the challenges of implementing a traceability mechanism for a long time, and most of the available tools reflect this heritage.7 Many support only the Ada programming language or run only on UNIX workstations, whereas medical devices are more likely to be built around PCs or embedded microprocessors.

Fortunately, a way to automate a traceability matrix is a hidden, or unrealized, feature of many automated software test tools, ranging from simple "shareware" debugging aids to systems that offer complete unit, integration, and system-level testing.8 The method described here can be implemented on any automated test tool with a test script language that can be programmed to create and update an output file.9 The critical feature is a good scripting capability. Scripts implement test cases and record test results; if the tool has a versatile script-writing facility, it can be programmed to generate and maintain a traceability matrix. The following programming tips will make the process work smoothly:

* Write modular scripts. A library of small test script modules is almost always easier to use and maintain than a complicated, monolithic, multifunction script.

* Include only one requirement, or a very limited number of related requirements, in each test script module.

* Use a batch file or top-level function to call and run a test set composed of a series of discrete modules.
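The tips above can be sketched in a few lines. In this illustrative Python fragment the module names follow the Table II convention, but the test bodies, the PASS result convention, and the runner itself are invented for illustration--a real test tool's batch facility would play the top-level role:

```python
# Each discrete test module covers one requirement (or a few related ones)
# and returns its result; real modules would exercise the device software.

def tpr_024():
    """Nominal-condition test for the tachy detect algorithm (stub)."""
    return "PASS"

def tpr_027():
    """Error-condition test for the tachy detect algorithm (stub)."""
    return "PASS"

# Top-level function that calls and runs the test set, per the third tip.
TEST_SET = [tpr_024, tpr_027]

def run_test_set():
    """Call each module in turn and collect its result by module name."""
    return {module.__name__: module() for module in TEST_SET}

results = run_test_set()
```

Keeping each module small is what later allows a traceability reporting function to be dropped into every module with minimal effort.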

Figure 1 presents a sample function and lists some of the data it could report. Depending on the test tool used, some of the items--such as the test script and log file names--can be queried from the system itself, eliminating the possibility of human error. Once the function is created, it can be called from each test module to generate a traceability table whenever the test set is run. The file it creates becomes the automatically generated and maintained raw data for subsequent traceability reports. Figure 2 shows a sample traceability report generated by the test procedures listed in Table II.
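In the same spirit as the function described for Figure 1, a reporting function might look like the sketch below. The function name, field layout, and file name are assumptions, not the article's actual figure; a real implementation would be written in the test tool's own script language, which could also supply the script and log file names automatically:

```python
import csv
from datetime import datetime

TRACE_FILE = "trace.csv"  # hypothetical machine-readable trace file

def report_trace(requirement_id, test_script, log_file, result):
    """Append one traceability record to the trace file.

    Called from each test module after its checks complete, so the
    traceability table is regenerated every time the test set runs.
    """
    with open(TRACE_FILE, "a", newline="") as f:
        csv.writer(f).writerow(
            [requirement_id, test_script, log_file, result,
             datetime.now().isoformat(timespec="seconds")]
        )

# Example call from a test module, using the Table IV naming scheme:
report_trace("DET:Tachy_Det_Alg_Alpha", "TPR_024", "tpr_024.log", "PASS")
```

Because the function appends one record per module run, the resulting file is the automatically maintained raw data the article describes.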

Once test set traceability data are captured, they can be used in conjunction with other sources of data to generate a variety of reports. The data in the traceability report file can be stored in various formats, depending on the capabilities of the test system used. At a bare minimum, the file can be formatted as a comma-delimited text file, which can be imported easily into any spreadsheet or database program. An example of a possible tabular report is shown as Table V. In this report the data from Table IV and Figure 2 have been combined to show the percentage of requirements already tested.
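A percentage-tested figure like the one in Table V can be derived from the comma-delimited file with a short script. In this sketch the requirement handles, file name, and file contents are invented for illustration (two of four hypothetical requirements have recorded results):

```python
import csv

# Hypothetical requirement handles in the style of Table IV.
ALL_REQS = ["DET:Tachy_Det_Alg_Alpha", "DET:Tachy_Det_Alg_Beta",
            "DET:Tachy_Det_Alg_Gamma", "DET:Brady_Det_Alg"]

# Write a small sample trace file so the example is self-contained.
with open("trace.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["DET:Tachy_Det_Alg_Alpha", "TPR_024", "PASS"])
    w.writerow(["DET:Tachy_Det_Alg_Beta", "TPR_052", "PASS"])

def coverage(trace_file, all_requirements):
    """Percentage of requirements with at least one recorded test result."""
    with open(trace_file, newline="") as f:
        tested = {row[0] for row in csv.reader(f) if row}
    hits = sum(req in tested for req in all_requirements)
    return 100.0 * hits / len(all_requirements)

print(f"{coverage('trace.csv', ALL_REQS):.0f}% of requirements tested")
# prints "50% of requirements tested"
```

The same file could instead be imported into a spreadsheet or database program, as the article notes; the script simply shows that the coverage arithmetic is trivial once the raw data exist.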

CONCLUSION

While manual methods of maintaining a traceability matrix are error-prone and time-consuming, automated methods can generate accurate, useful output with minimal effort. For example, when test scripts are modified to include a traceability function for use with each test batch, a regression report can be used to indicate the test coverage provided by the batch. Maintenance of traceability is transferred to the test-script level, eliminating stand-alone traceability documents.

REFERENCES

1. "Reviewer Guidance for Computer-Controlled Medical Devices Undergoing 510(k) Review," Rockville, MD, FDA, Center for Devices and Radiological Health, Office of Device Evaluation, sects 3.1.2 and 3.1.3, 1991.

2. "IEEE Standard for Software Quality Assurance Plans," ANSI/IEEE Std 730-1981, New York, The Institute of Electrical and Electronics Engineers, sect 3.6.2.5, 1981.

3. "Quality Systems--Model for Quality Assurance in Design/Development, Production, Installation, and Servicing," ISO 9001:1994E, Geneva, Switzerland, International Organization for Standardization, sect 4.4.1, 1994.

4. Cardie ML and Tucker NG, "How to Validate a Computer System," Drug Information Journal, 29:187–199, 1995.

5. Wiegers KE, "Effective Quality Practices in a Small Software Group," Software QA Quarterly, 1(2):7–13, 1994.

6. "IEEE Guide to Software Requirement Specifications," ANSI/IEEE Std 830-1984, New York, The Institute of Electrical and Electronics Engineers, sect 4.3.6, p 13, 1984.

7. Kay RL, "The Systems Engineering Approach to Quality Software," Med Dev Diag Indust, 16(6):34–36, 1994.

8. Johnson M, "Dr. Boris Beizer on Software Testing: An Interview. Part I," Software QA Quarterly, 1(2):7–13, 1994.

9. Pressman R, Software Engineering: A Practitioner's Approach, New York, McGraw-Hill, pp 728–738, 1992.

Carlos F. Vicens is director of testing and support at B-Tree Verification Systems (Minnetonka, MN).
