Systems Thinking: Testing for Software-Based Medical Devices

Using systems thinking throughout the life cycle of a device is essential as medical devices become more sophisticated.

Timothy Bowe

January 1, 2008



The effect of incomplete or insufficient testing of products is directly felt in the areas of product quality, support costs, and, in the worst cases, patient safety. However, the increasing software and systems complexity of next-generation products makes testing a major undertaking for existing product-testing capabilities of medical device companies. Sophisticated systems contain complex hardware using programmable gate arrays, control firmware, application and database software, and sometimes third-party components. The evolution from simple embedded software control to these sophisticated systems has forced an increased investment in product testing.

When testing was simple, the inefficiencies involved in having independent testing efforts distributed throughout a company were acceptable, although not optimal. Today, these costs are less acceptable. The product-testing needs of various parts of a medical device company have always differed, and the increased sophistication of today's products amplifies those differences. Understanding such differences, along with the commonalities underlying all testing needs, is an important first step toward a different approach to product testing.

Developing an understanding of the drivers of product testing during R&D, manufacturing, and service phases opens the opportunity to extract lessons from the systems-thinking approach taken in the development of complex products. Testing relatively low-volume, high-complexity products from a top-down, system-level view is a paradigm shift for most companies. However, viewing overall product testing this way is a more complete, consistent, and cost-effective approach.

This article examines the fundamentals of a systems-based approach to the testing process. It identifies the benefits of a system-level testing model in medical products that have moved up the technical sophistication curve. The examples outline real-world challenges, demonstrating both the complexity of the problems and the benefits that may be derived from adopting a systematic view of the test process.

Systems Thinking Applied to Medical Device Testing

Systems thinking entails taking a system-level view of the project under study, or more specifically, a top-down deconstruction of the product or process. In our experience, medical device companies typically allocate 30 to 50% of their R&D spending to testing. Manufacturing has its own subassembly and system-level testing. Service group testing in the field or in the factory is used to verify product operation in the event of a field failure or to preemptively verify that no service is required. By any measure, that is a lot of testing. Central to much of this testing is the development of test software: board tests, component diagnostics, and functional and system tests. It is important to address whether the testing is comprehensive or overlapping and whether isolated parts of the company are developing the same tests for different users.

Traditionally, each department develops its own test capabilities, focusing on specific areas of responsibility. The enormous investment in engineering test specifications, procedures, and automation is often not leveraged by other parts of the company. The test applications may be manual or too complex for manufacturing staff to use. The tests most likely have been developed to look for specific design failures that are not deemed meaningful at the production stage. Or they may be too slow to be economical in the volume testing of manufacturing.

By the same token, manufacturing test capabilities do not address the challenges of product testing in the field. Manufacturing tests are optimized to quickly cull out improperly operating product. They are not designed to identify the source of the failure. In addition, the size of the manufacturing test equipment, or the trade-off between test speed versus flexibility, may also render the investment in subassembly testing useless to the rest of the organization.

Service test facilities are typically the last—and most sporadically—developed. It can take years to develop service test capabilities for high-end products such as imaging systems and clinical chemistry systems, which contain a large number of complex subsystems. In contrast, low-complexity products may have little or no support from a diagnostic perspective.

A system-level approach to testing in this environment would start with a composite view of what types of testing are required to fully evaluate a design, verify an integrated product, and diagnose component failure. If the test process is part of an integrated system that contains different users (use contexts) and different organizational needs (drivers), it may be useful to think of it as analogous to a complex system design. These different use contexts (e.g., R&D's need for depth and flexibility focused on verifying design and completeness compared with manufacturing's need for coverage and speed with a focus on pass-fail) drive the tests needed to support different parts of the organization. And there may be other use contexts, such as integrated diagnostics that the end-user may access, that require simplicity of operation and communication of steps to diagnose a problem. Built-in tests can verify product performance integrity upon start-up.

Although at first these use contexts may seem mutually incompatible, after synthesis into an integrated system, it is possible to find areas of commonality across these different contexts. An overall test approach allows definition and creation of test modules that are usable in different test contexts. Tests may include board tests from R&D that provide the basic functionality for subassembly test or software unit tests that provide the basic diagnostics required by service. Once the concept of a testing system is constituted, any core testing capability may be defined with modularity and interoperability. This allows interdepartmental leverage of test components in much the same way that it is possible to drop an Excel spreadsheet into a Word document.
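The modularity and interoperability described above can be illustrated with a minimal Python sketch. All names here (`TestResult`, `BoardVoltageTest`, the injected `read_voltage` callable, the voltage limits) are hypothetical, not part of any actual platform; the point is that a test primitive written against an injected measurement source carries no assumptions about which department invokes it.

```python
from dataclasses import dataclass

@dataclass
class TestResult:
    name: str
    passed: bool
    detail: str = ""

class BoardVoltageTest:
    """A reusable test primitive: one low-level check, written with no
    assumptions about which department invokes it."""
    name = "board_voltage"

    def __init__(self, read_voltage):
        # read_voltage is injected so R&D can point the primitive at bench
        # instruments while manufacturing points it at a production fixture.
        self.read_voltage = read_voltage

    def run(self, lo=4.75, hi=5.25):
        v = self.read_voltage()
        ok = lo <= v <= hi
        return TestResult(self.name, ok, f"measured {v:.2f} V")

# The same primitive dropped into two contexts (simulated readings):
rd_result = BoardVoltageTest(lambda: 5.02).run()    # engineer's bench
mfg_result = BoardVoltageTest(lambda: 4.40).run()   # production fixture
```

Because the primitive returns a structured result rather than printing or logging, each department can wrap it in whatever reporting its context demands.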

There is a limit to how integrated the test system can be. However, there can be tremendous leverage across development, manufacturing, and service test systems. To see this in practice, look at a simplified example of a moderately technically complex product. The product comprises several control boards, each with control logic and a programmable gate array (a semiconductor device whose hardware functions are defined by programmable logic rather than by conventional software). The boards are integrated into a product that has a control application running on a conventional PC platform.

Figure 1. A simplified view of a typical product-testing schematic.

In a traditional test environment (see Figure 1), the electrical group of R&D would develop test protocols to verify the board logic. A separate effort would be focused on verifying the gate-array design to ensure that the board behaved in the proper predictable manner. The software group would develop its control and application code, hopefully developing unit tests to verify various control modules, with special effort focused on verifying software-hardware interaction. The software quality engineering group would look at the system requirements and develop tests to verify the software design. In many cases, this test represents the overall system test of how the software and hardware behave when integrated.

Assuming that the product passes these tests, the product would be released to manufacturing. The manufacturing test group would have been developing its own tests, focused initially on individual boards as they come out of subassembly test (or incoming inspection if assembly is outsourced). The product would then be integrated and the manufacturing test group would run a test suite that verifies product operation as a final test before shipping. Depending on product volume and complexity, final testing may take only a few minutes or may last several hours or days. The service group is the last to develop its test capabilities. Upon release, there are often only rudimentary test capabilities—typically manual tests requiring technical skills. Over time and continuous product releases, a suite of tests is accumulated, sometimes with correlated data to aid in the diagnosis of a problem.

Figure 2. A simplified view of a systems-oriented product-testing program.

A systems approach to product testing would be fundamentally different, requiring a strong understanding of the interrelated nature of product tests. As illustrated in Figure 2, a system-level view of all significant test needs is assembled. Core elements of testing revolve around hardware testing and diagnosis and system functional testing. As shown in Figure 2, both R&D and service may need low-level, flexible access to hardware diagnostics. The software organization and subassembly test groups may need to verify the operation of embedded software and board electronics.

As product configuration changes over time, an encapsulated-board test approach isolates both the automated regression testing required by R&D and the automated final test in manufacturing. It may also be beneficial to tie service diagnostics to data collected in subassembly or final test in manufacturing. A systems approach to this problem allows similar test needs to be addressed by reusable components. A software module that is developed to implement a full hardware design test can easily be used as part of a service application tasked with looking for failed components. Modules developed as part of an automated regression test may provide the basis of a final product test. A module developed to verify a gate-array design could be applied to a subassembly test to verify board operation.
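The reuse pattern above can be sketched in Python: the same hypothetical test modules are assembled into an R&D regression suite and a manufacturing final test, differing only in suite composition and stop-on-fail policy. The module names and the `run_suite` helper are illustrative, not drawn from any real platform.

```python
def run_suite(modules, stop_on_fail=False):
    """Run a list of test modules; each returns (name, passed).
    Manufacturing stops at the first failure for speed; R&D runs
    everything to collect all failures in one pass."""
    results = []
    for module in modules:
        result = module()
        results.append(result)
        if stop_on_fail and not result[1]:
            break
    return results

# Hypothetical reusable modules (simulated pass/fail outcomes):
def gate_array_test():
    return ("gate_array", True)

def board_logic_test():
    return ("board_logic", True)

def comms_test():
    return ("comms", True)

# R&D regression: all modules, keep going after failures.
regression = run_suite([gate_array_test, board_logic_test, comms_test])

# Manufacturing final test: a subset of the same modules, bail early.
final_test = run_suite([board_logic_test, comms_test], stop_on_fail=True)
```

The modules themselves are untouched between contexts; only their assembly and the policy around them change.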

There are environments in which reuse across departments will be limited. For example, in high-volume production environments that use automated test equipment (ATE), reuse may be limited by differences between the development run-time environment and that of the ATE. Even with this limitation, reuse of test documentation and designs, standardized failure-mode terminology, and common test data formats still yields benefits, even if modest ones. And, in nearly all cases, there can be direct reuse of R&D test modules within a service diagnostic application, accelerating the process of developing sophisticated failure analytics.

Approaching Testing from the Systems Perspective

Table I. Typical commonalities in test drivers for different organizations.

After identifying the potential contexts of testing, the overall system must be evaluated for areas of commonality and variability. Tables I and II show a typical, but not exhaustive, list of these characteristics.

Table II. Typical variations in test drivers for different organizations.

Looking at these tables, it is clear that there is significant commonality in the testing needs of the various parts of the organization. In most cases, manufacturing needs faster, easier-to-use implementations of R&D tests. These manufacturing tests are similar to the black-box tests developed by R&D, in which all functional subsystems and interconnects are verified. In fact, the low-level test primitives often can be identical, but their assembly and the interaction between them must be different.

The reason for the different interaction is that despite the similarity in test functions, the various groups have widely varying goals. R&D wants detailed system access for verifying module designs. Service needs identification to a field-replaceable unit. Manufacturing needs to rapidly identify failed components and identify either a vendor problem or one in the production or manufacturing process. These differences can be explained using the example of a board-test application. In R&D, board-test applications are highly interactive, allowing the engineering staff to access specific elements on the board individually, activating signals and reading outputs. Each low-level component of the board must be individually accessible in order to verify board functions.

In service, the interactions may be similar but the application typically guides the technician to perform specific tests in a specific sequence as the diagnostic process progresses. The goal is to systematically evaluate board operation looking for a board-level failure. The manufacturing test application, while performing all the low-level functions available in R&D, must perform the test in an automated fashion with a rapid test result. All of the complexity of the component test is hidden below the application interface.
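The three interaction styles described above can be sketched as three thin front-ends over one shared set of primitives. Everything here is hypothetical (the primitive names, the simulated results, the fault in `adc_loopback`); the sketch shows only how the same low-level functions serve interactive R&D access, a directed service sequence, and an automated manufacturing pass/fail.

```python
# One shared table of low-level primitives; each returns (passed, detail).
# The simulated adc_loopback fault is hypothetical, for illustration.
PRIMITIVES = {
    "power_rail": lambda: (True, "5.01 V"),
    "clock": lambda: (True, "24.0 MHz"),
    "adc_loopback": lambda: (False, "code stuck at 0x3FF"),
}

def rd_run(name):
    """R&D front-end: interactive access to any single primitive."""
    ok, detail = PRIMITIVES[name]()
    return f"{name}: {'PASS' if ok else 'FAIL'} ({detail})"

def service_sequence():
    """Service front-end: a directed sequence that stops at the first
    failed primitive and names it as the likely fault."""
    for name in ["power_rail", "clock", "adc_loopback"]:
        ok, _ = PRIMITIVES[name]()
        if not ok:
            return f"suspect: {name}"
    return "no fault found"

def mfg_final_test():
    """Manufacturing front-end: run everything, report only pass/fail."""
    return all(fn()[0] for fn in PRIMITIVES.values())
```

The complexity lives in the primitives; each front-end is only policy and presentation, which is what makes the reuse cheap.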

The Benefits of a Systems Approach to Device Test Tools

Identifying the opportunity to use a systems approach to testing is the first step. Designing a test system and process that accommodate reuse of common test assets requires the use of system analysis methodologies. It also requires a strong understanding of how to design system architecture. Although a detailed discussion of system architecture is beyond the scope of this article, key characteristics of the approach are worth mentioning.

Table III. Typical system drivers for test systems.

Tables I and II provide simplified views of the areas of overlap and the differences between the various use contexts. In addition to this analysis, a more detailed understanding of the system drivers and complexity drivers of these different contexts must also be developed. In essence, system drivers are characteristics required to make the system useful to the user, and complexity drivers represent characteristics of the system that make it difficult to create. Tables III and IV list examples of these drivers.

Table IV. Typical complexity drivers for test systems.

The similarities among the requirements of the various contexts and the differences between their system and complexity drivers are strong indicators of a product-line approach to the system. Over the past decade, there have been advances in how to design product-line architectures and the development of a strong academic underpinning for the objective evaluation of various architectures for specific business and system drivers.1,2 Using these formalized methodologies leads to an architecture that allows different parts of the organization to design test components that maximize reuse in support of different user goals.

The identified complexity drivers of the various user contexts directly translate to the architectural design of the test platform. These user contexts include flexibility and deep access for R&D, speed and simplicity for production, and interactive directed workflow for field service. Ensuring that early decisions in the development of the test platform support, not preclude, usage modes by other groups guarantees that additional functionality required by those groups can easily be integrated into, or layered onto, the previous test platform framework. Real benefits, in terms of the robustness of the test platform, the cost of development, and the time line for test platform availability, can accrue directly from this approach.

Such platforms can be designed to allow for the integration of new capabilities over time, with these new capabilities designed to be built on existing infrastructure and functionality. Using this approach, the manufacturing test group can quickly understand the preexisting test support developed by the R&D test group.

The definition of testing primitives that provide the lowest level of system testing (such as the verification of hardware control functions), along with complete technical specifications of how to exercise this functionality, can be provided to the manufacturing test group as a starting point for its test development. The test platform system architecture should specifically address how to augment the test system with additional tests or testing capabilities such as life-cycle tests, automated system tests, advanced diagnostics, or creation of a test database.

The manufacturing group need only add the missing test functionality required to support its specific needs. It is also possible to design the system in a distributed manner so that new capabilities integrated by other groups use the core functionality but do not interfere with or modify the testing environments of other parts of the organization. Additional functionality, such as support for setup of product localization, device serialization, and integration into the enterprise resource planning system, can be plugged into the existing test platform backbone.
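One way to realize the plug-in extension described above is a registration backbone: each group registers its own capabilities against the core without modifying anyone else's environment. This is a minimal sketch under stated assumptions; the class name, the capability names (`serialize_device`, `localize`), and their behaviors are all hypothetical.

```python
class TestPlatform:
    """Sketch of a plug-in backbone: groups register capabilities
    against the core without touching other groups' additions."""

    def __init__(self):
        self._capabilities = {}

    def register(self, name, fn):
        self._capabilities[name] = fn

    def run(self, name, **kwargs):
        return self._capabilities[name](**kwargs)

platform = TestPlatform()

# Core functionality shipped with the platform:
platform.register("board_test", lambda: "PASS")

# Manufacturing plugs in its own additions (hypothetical names):
platform.register("serialize_device", lambda serial: f"assigned {serial}")
platform.register("localize", lambda region: f"configured for {region}")
```

Because each addition is keyed by name, a service group could later register diagnostic capabilities on the same backbone without either group's code knowing about the other.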

Functionality to support product servicing can also be added with the major advantage of much simpler integration to the manufacturing product database. This connection enables more fluid interaction to determine as-built configuration and initial manufacturing test data and results.

Ultimately, it is feasible to design a test system that supports all stages of the product life cycle. It should not exist as a single monolithic application, but as a coordinated, interacting system, with defined interfaces and shared functionality.

Systems Thinking in Test Suites

An example of this integrated testing approach can be seen in a manufacturer of a remote surgical device. The product exhibits the characteristics of a low-volume, high-complexity medical device. As the capabilities of the device increased with product evolution, the control board design techniques also evolved from discrete integrated circuit devices to field-programmable gate arrays (FPGAs). These arrays are programmed to provide a wide variety of hardware functions: a boon to board designers and a bane to board and system testers. The flexibility of an FPGA supports dramatically increased capability in very small packages, but effective testing requires a completely different approach to the development of board test functions.

To thoroughly test the control board, a complete understanding of the systems architecture, the interface between the software and hardware, and the various use modes of the control hardware are prerequisites. In essence, much of the software functionality developed in the product must be duplicated in order to test the functionality of the control board.

In order to verify the control board design during the development program, the R&D test engineers developed a full suite of low-level test functions. As expected, these tests verified each board requirement in each operational mode—a task that entailed years of effort to complete.

When nearing completion of product development, the manufacturing test department initiated its preparation for production tests. The group started by developing subassembly tests to verify the new control boards as they came out of board assembly. However, the group quickly realized that developing tests to verify these boards was a nontrivial task. It was impossible to develop board tests without a complete understanding of how the boards were to be used in the complete control system.

Once this was recognized, an effort was made to leverage the test tools utilized by the development organization. However, those tests had been developed solely from the perspective of requirements-based testing for product development. There were a large number of test tools and manual procedures that met the quality system regulation requirements for R&D testing but were of little value outside of that environment.

The manufacturer decided to develop a test platform that met the requirements of production and field service. Additionally, it decided to support the R&D requirements and ultimately to provide faster development turnaround for product enhancements that were anticipated throughout the product life cycle.

In another example, a manufacturer of a blood glucose meter incorporated production testing into the planning process from the outset. The development of a next-generation device was undertaken several years ago, with the expectation of building a product that would support five to seven years of enhancements. Although the device itself is much simpler than the remote surgical device, the extensive configurability and globalization support considerably increased production-testing complexity.

At the outset of the development program, the company decided to develop the test platform so that it could follow the device into the manufacturing environment and beyond. The platform was conceptualized to be a test, configure, and verify suite, responsible for verifying hardware and analytical elements of the meter.

To support production, the test platform was also extended to allow for automatic configuration of the device for shipment to various geographical regions. Additionally, the platform could store the initial test results in a centralized database. The integrated test capabilities developed to support R&D were encapsulated to support the production functionality in an automated mode. They were also used in the development of a directed-workflow diagnostic system. This functionality supported problem identification during manufacturing, and was used to service returned product.
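The test-configure-record flow described above can be sketched as follows. Every detail is hypothetical: the in-memory dictionary standing in for the centralized database, the serial and region values, the reference reading, and the assumed tolerance. The sketch shows only the shape of the flow, with service diagnostics starting from the stored manufacturing record.

```python
# In-memory stand-in for the centralized results database (hypothetical).
test_db = {}

def production_pass(serial, region, measure_glucose_ref):
    """Test, configure, and record one meter. The +/-5 mg/dL tolerance
    around a 100 mg/dL reference is an assumed value for illustration."""
    reading = measure_glucose_ref()          # analytical check vs. reference
    passed = abs(reading - 100.0) <= 5.0
    test_db[serial] = {
        "region": region,                    # regional configuration applied
        "ref_reading": reading,
        "passed": passed,
    }
    return passed

def service_lookup(serial):
    """Service diagnostics start from the as-built manufacturing record."""
    return test_db.get(serial, "no manufacturing record")

production_pass("SN001", "EU", lambda: 101.7)  # simulated reference reading
```

The value of the shared record is exactly the "fluid interaction" the article describes: service can compare a returned unit's behavior against its own initial test data rather than against a generic specification.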

The effort to use a systems approach resulted in a single testing system with the ability to perform complete R&D tests, as well as supporting both manufacturing and service.

Conclusion

Properly testing high-complexity medical devices is an increasingly difficult task. The development of manufacturing test tools and platforms is expensive in terms of time, money, and skilled labor resources. With competitive pressures mounting, a streamlined process to facilitate the rapid and cost-effective development of manufacturing and service test tools can make the difference between a successful product launch and one that does not achieve anticipated market penetration or revenue expectations. In addition, costs associated with the potential liability from the launch of an insufficiently tested product into the marketplace make the case for a systematic approach to the testing of safety-critical, software-based products even more compelling.

Changing to that approach takes effort because it requires pulling together siloed organizations. But the benefits are real, and can be very large. Properly conceived and developed, such test platforms can provide exceptional support throughout the life cycle of many products in an integrated product line.

Timothy Bowe is co-CEO of Foliage, a consulting firm in Burlington, MA. He can be reached at 781/993-5500.

References

1. Timothy Bowe and Charlie Alfred, “Formulating Product Family Architectures: Techniques for Analysis & Strategy” [online], white paper; available from Internet: www.foliage.com/thought-leadership/whitepapers.php.

2. Paul Clements, Rick Kazman, and Mark Klein, Evaluating Software Architectures: Methods and Case Studies (Boston: Addison-Wesley, 2002).

Copyright ©2008 Medical Device & Diagnostic Industry
