Automated Testing for Real-Time Embedded Medical Systems

GUIDE TO OUTSOURCING: TESTING

Amit Shah, Tim Bosch

August 1, 2008

Medical device companies, especially those with real-time embedded-system products, are often burdened with lengthy verification cycles. Even small development efforts can result in months of verification. There are numerous reasons for such extended verification efforts, but chief among these are

  • Complexity due to permutations of input or system configuration.

  • Complexity due to the system environment or system setup required for testing.

  • The number of external systems with which the device must connect and interoperate.

  • A reliance on manual execution of tests for all aspects of verification (e.g., functional testing and system behavior testing).

  • A lack of support for test automation in the product itself (e.g., the absence of hooks or callable application programming interfaces (APIs) necessary for the scripts or code that drive automation).

Manual testing is time-consuming and error prone, and functional and system testing typically start late in the software development cycle. Relying on manual testing alone can increase the risk of late defect discovery and often leads to delays in product release. A lack of test automation can also cause delays in manufacturing and support.

Although there are proven automated test tools and techniques for unit and integration testing, most medical device companies have yet to embrace an overall automated testing strategy for embedded real-time devices. These firms must understand the challenges of testing embedded real-time medical systems and examine strategies to improve quality and reduce verification cycle time and costs.

Embedded systems come in many variations, but in general they share the following characteristics:

  • Special-purpose control logic, designed to support a few dedicated functions.

  • Part of a complete device including electrical and mechanical parts.

  • Varying complexity, from a single microcontroller to multiple processors, units, and peripherals.

  • An optional user interface and connectivity with external systems.

These characteristics distinguish embedded systems from general-purpose computing platforms, such as desktop or enterprise systems, which can perform many different functions depending only on their programming. Nevertheless, many test automation approaches for general computing can be applied to real-time embedded systems. These approaches include determining which testing to automate, demonstrating a return on investment (ROI) to gain management buy-in, formulating the automation strategy, and identifying the test tools needed. They also include constructing the test framework and addressing regulatory requirements.

Embedded systems do present some unique testing challenges, such as parallel hardware development and restricted availability of the user interface. Because hardware is developed in parallel with the software, emulators and simulators must stand in for it until the actual hardware becomes available for later-stage system testing. For systems with a user interface, the interface is often developed in parallel as well, so it too is available only during later-stage system testing.

Because of such challenges, test hooks need to be built into the software to support the testability identified in the overall test automation strategy. This embedded test code simulates actions normally initiated through a user interface or other command-control trigger (pressing a button on the device, confirming an action, selecting a value, etc.). A test case can be constructed by invoking a sequence of these actions via the callable test hooks. Furthermore, if target hardware is absent, these test cases can be executed against a suite of emulators and simulators as part of the automated test environment. This approach enables software to be tested early in the development cycle, ahead of both user-interface and hardware dependencies.
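Purely as an illustrative sketch (the hook names, button identifiers, and parameter names below are hypothetical, not drawn from any particular product), such callable test hooks, and a test case built as a sequence of them, might look like the following C code:

/* test_hooks.h -- hypothetical callable test hooks (illustrative only) */
#ifndef TEST_HOOKS_H
#define TEST_HOOKS_H

#include <stdbool.h>

typedef enum { BTN_MENU, BTN_UP, BTN_DOWN, BTN_CONFIRM } button_id_t;

/* Each hook simulates an action normally initiated through the user
   interface and returns true if the control logic accepted it. */
bool th_press_button(button_id_t button);
bool th_select_value(const char *parameter, int value);
bool th_confirm_action(void);

#endif /* TEST_HOOKS_H */

/* A test case is simply a sequence of hook invocations; this hypothetical
   case raises an alarm limit without touching the physical controls. */
bool tc_set_alarm_limit(void)
{
    return th_press_button(BTN_MENU)
        && th_select_value("hr_alarm_high", 120)
        && th_confirm_action();
}

Because the hooks drive the same control logic as the real user interface, a test case written this way can first run against emulators and simulators and later be rerun unchanged on the target hardware.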

Benefits of Automated Testing

Automated testing can reduce the amount of manual testing, shorten the process schedule, and reduce cost. Testing is often viewed as a bottleneck in the software and system development process. By automating testing, a test team can help ensure that the software development, as well as the overall development, meets its schedule and budget goals.

Automated testing can increase efficiency in the software development process. Because automated tests consistently repeat and document a specific sequence of steps, the development team can reproduce defects found by an automated test case. This leads to quicker defect resolution early in the software development cycle. Furthermore, automated tests can be executed before source code check-in, which prevents defects from making it into the code base.

Test coverage is also increased. Comprehensive automated tests can be executed, unattended, 24 hours a day. This allows the test team to focus on safety-critical or complex parts of the product.

Automated testing can improve consistency and repeatability of testing over manual testing. Manual testing can be time-consuming and monotonous, resulting in mistakes by even the most conscientious testers. Automated tests repeat the same sequence of steps and record results with each execution, thereby ensuring overall accuracy of regression, integration, and system testing.

The process also facilitates performance and stress testing, which can be difficult or impractical to perform manually.

Because hardware is often unavailable during early-stage testing, early test automation helps isolate problems to the software itself. Early identification of software problems can ease later integration with the hardware. In addition, the embedded test code can be reused to support manufacturing and field-service system testing.

Limitations of Automated Testing

If the automated test results are to be used for formal verification and validation evidence, the automated tests need to be executed on the actual target hardware against the software that is a candidate for final release. The additional code may increase the complexity of the code base and become a burden for development and maintenance. Also, because it will be included in the software released as part of the embedded real-time system, the test code needs to be validated to ensure that it has no undesired side effects on the intended use of the product.

Another drawback is that emulators and simulators, used as part of the automated test environment, may not accurately represent the behavior of the actual device and can give a false sense of application usability and system performance. In such cases, the emulators and simulators should be evaluated for accuracy relative to the actual device.

Embedded software development is complex. Changes to software interfaces and hardware are common, which makes testing especially challenging. Modular and extensible automated test environments can significantly reduce the influence of these changes. However, such a development effort requires careful design and continual maintenance.

Medical device OEMs often use independent test teams to develop and execute manual tests. Automated testing, however, demands specialized programming skills that often require additional training. Depending on the automated test framework, the test team may use a programming language such as Perl to design and develop test cases.

Strategies for Effective Automated Testing

To articulate the strategies for effective test automation, this article uses a patient monitor as an example of a real-time embedded medical device. A patient monitor is a portable, externally applied device that measures standard physiological parameters such as ECG, heart rate, SpO2, noninvasive blood pressure (NIBP), and temperature. It automatically analyzes the patient's waveforms and warns clinicians with a visual or audible alarm if a parameter goes out of range.

First, designers need to decide what to automate, considering the performance implications for the system. A monitor displays waveforms on an LCD, and embedded test code that supports verifying the integrity of this real-time signal could add artificial load that affects the system's schedulability and resource usage. In this case, the designer would likely decide not to automate testing of this functionality. Instead, it is better to focus on automating functionality for which the embedded test code has no effect on device performance, such as device configuration.

Figure 1. The test automation environment. Application programming interfaces (APIs) define the set of actions used to construct the test case.

The next step is to design the test automation environment. The environment consists of APIs; a framework used to invoke the APIs and execute a defined set of actions that constitute a test case; and, potentially, emulators and simulators (see Figure 1). Working with the development team, test engineers add APIs to the code to simulate user actions such as button presses on the device. These APIs create a layer between the user interface and the control-logic code, effectively providing an alternate, externally callable interface that the framework can invoke. In addition to providing the callable interface, the APIs add logging capabilities that record input values and results, capture start and stop times, and log any other data that may be needed.
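A minimal sketch of one such API follows (ctrl_handle_button and test_api_press_button are hypothetical names, and a real device would write to its own logging facility rather than to stdout). The test API drives the same control-logic entry point that the user interface uses and records the input value, the result, and start and stop times:

#include <stdbool.h>
#include <stdio.h>
#include <time.h>

/* Existing control-logic entry point, normally invoked by the UI layer. */
extern bool ctrl_handle_button(int button_id);

/* Externally callable test API: exercises the same control path as the
   user interface, but adds logging for the automated test record. */
bool test_api_press_button(int button_id)
{
    time_t start = time(NULL);
    bool accepted = ctrl_handle_button(button_id);   /* bypasses the UI layer */
    time_t stop = time(NULL);

    /* Record input value, result, and start/stop times. */
    printf("press_button id=%d result=%s start=%ld stop=%ld\n",
           button_id, accepted ? "accepted" : "rejected",
           (long)start, (long)stop);
    return accepted;
}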

The framework, in this case, consists of a set of scripts used to call the APIs and execute the sequence of actions defined in the test cases. These test cases are defined by test engineers and were previously executed manually. Once established, the test automation environment can be integrated with a build system to provide regular verification of implemented functionality. This integration facilitates regression testing and continuous feedback during the software development cycle.
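The framework itself could be a set of scripts in a language such as Perl, as noted earlier; purely for consistency with the preceding sketches, the same idea is shown here in C as a hedged example of a driver that runs the defined sequences and reports results. The test-case names and functions are hypothetical, and the nonzero exit code is what lets a build system flag a failed regression run:

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

typedef bool (*test_case_fn)(void);

typedef struct {
    const char   *name;
    test_case_fn  run;
} test_case_t;

/* Hypothetical test cases built from the callable test hooks. */
extern bool tc_set_alarm_limit(void);
extern bool tc_restore_default_config(void);

static const test_case_t suite[] = {
    { "set_alarm_limit",        tc_set_alarm_limit },
    { "restore_default_config", tc_restore_default_config },
};

int main(void)
{
    int failures = 0;
    for (size_t i = 0; i < sizeof suite / sizeof suite[0]; ++i) {
        bool passed = suite[i].run();
        printf("%-28s %s\n", suite[i].name, passed ? "PASS" : "FAIL");
        if (!passed)
            failures++;
    }
    /* A nonzero exit status tells the build system the regression run failed. */
    return failures;
}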

Unlike general-purpose computing platforms, embedded real-time systems have few off-the-shelf tools available for system-level and functional testing, because such tools must be compatible with the underlying technology (e.g., the real-time operating system) of the application under test. Therefore, there is usually a need to create custom automated tools. The cost of developing, testing, and qualifying these tools is part of the investment in test automation.

There are some third-party products that can be used as part of a test environment or within an overall test strategy, such as QualiSystems TestShell and National Instruments LabVIEW, among others. Designers still need to develop test scripts, though, and license costs for such tools should be factored into any investment considerations. Remember that these tools will likely still need to be qualified to comply with regulatory guidelines.

Return on Investment

Automation requires an investment with a justifiable, and potentially significant, payback. In the patient monitor example, the costs may include the following:

  • Purchasing or developing automated test tools.

  • Training staff on automated test tools and scripting.

  • Qualifying automated test tools, simulators, and emulators for their intended use, as required by FDA.

  • Developing modular test frameworks to support automated test case creation, execution, and results reporting.

  • Building and maintaining testability in the application (i.e., embedded test code).

  • Developing, debugging, peer reviewing, and maintaining automated test cases.

The savings are determined by the amount of manual test execution time saved in functional, regression, and system testing. The savings also come from early discovery of defects through unattended and repeated execution of automated test cases. Industry data suggest that the cost of finding and fixing a defect during the system-testing phase is seven times the cost of finding and fixing a defect during the coding phase.1 For subsequent releases of the product, there can be additional savings in early stages of development if standardized tests are made part of the development effort.

The initial investment is usually recouped by the second release, after which savings are seen for subsequent releases. Upfront ROI analysis is essential to get management support.
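As a hedged illustration only (the figures below are invented for the sake of the arithmetic; the article provides no actual numbers), a break-even estimate using the sevenfold defect-cost factor cited above might look like:

\[
\text{savings per release} \approx \underbrace{500\,\text{h} \times \$100/\text{h}}_{\text{manual execution avoided}}
+ \underbrace{20 \times (7 - 1) \times \$500}_{\text{defects caught during coding instead of system test}}
= \$110{,}000
\]
\[
\text{payback} \approx \frac{\$180{,}000 \text{ initial investment}}{\$110{,}000 \text{ per release}} \approx 1.6 \text{ releases}
\]

Under these assumed numbers, the investment is recovered partway through the second release, consistent with the payback pattern described above.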

Complying With Regulatory Guidelines

FDA's General Principles of Software Validation: Final Guidance for Industry and FDA Staff provides guidance for medical device manufacturers on software verification and validation.2

Automated Test Tool Validation. Section 6.3 (Validation of Off-the-Shelf Software and Automated Equipment) of the guidance notes that medical device manufacturers are responsible for validating off-the-shelf software for its intended use. This requirement applies to any software used to automate device testing. Automated test tools purchased from third-party vendors generate automated test cases and test results that may be used as formal verification evidence for the product; in that case, the tools need to function as expected and should be validated for their intended use. Artifacts that should be reviewed, approved, and retained as a record of validation include requirements documentation, qualification procedures, and qualification results for the automated test tools.

Note that third-party vendors may be able to supply automated test tool qualification documentation if they have qualified the tool. In this case, the medical device manufacturer may be able to use this qualification documentation to satisfy FDA's validation requirement for off-the-shelf software.

Software Verification and Validation Plan. Section 5.2.1 (Quality Planning) of the guidance suggests that verification and validation activities should be planned and documented. The automated test strategy and related activities must be documented, reviewed and approved as part of the overall software verification and validation plan for the product.

Automated Test Cases and Test Results. Section 5.2.5 (Testing by the Software Developer) of the guidance says that “test procedures, test data, and test results should be documented in a manner permitting objective pass-fail decisions to be reached.” Creating automated test cases with a scripting language is a development effort and therefore requires processes and standards similar to those used for application code. Automated test code must be source-controlled, appropriately commented, peer-reviewed (for both quality and adherence to predefined coding standards), and approved. Test results generated by executing automated test cases must be reviewed and approved if they are to be used as formal verification evidence for the product.

Conclusion

Automated software testing is not right for everyone. It requires upfront investment and proper due diligence to ensure that it is right for the organization; otherwise, adopting automation can be expensive and frustrating. Once the benefits of automation are understood, it is important to set expectations and present management with enough data to support the program and plan accordingly.

Implementing an automated software-testing program requires a structured approach. It requires a test strategy specifically tailored to a product and its regulatory requirements. It also requires that designers build or select the right automated test tools and that the environment use automated test frameworks and appropriate emulators and simulators.



By following the approach presented here, device firms can realize a significant ROI for automated software testing, gain a competitive advantage in the industry by reducing time to market, and increase the quality level of an embedded real-time medical device product.

Amit Shah is test engineering manager for Foliage, and he can be reached at [email protected]. Tim Bosch is chief architect in the medical division of Foliage. He is available at [email protected].

 

References

1. “Case Study: Finding Defects Earlier Yields Enormous Savings” [online] (Dulles, VA: Cigital, cited 8 July 2008); available from Internet: www.cigital.com/solutions/roi-cs2.php.

2. “General Principles of Software Validation; Final Guidance for Industry and FDA Staff” (Rockville, MD: FDA, 11 January 2002).

Copyright ©2008 Medical Device & Diagnostic Industry
