The push for cost-effective medical device development may get some assistance from adopting rigorous software simulation practices.

Karl Aeder, Tim Bosch, and Wayne Lobb

October 1, 2009

The Development Edge: Software Design-For-Test






The pace and cost of medical device development are ever-increasing, forcing the next generation of medical devices to be built using more efficient design and development approaches while still maintaining the rigor required by regulatory guidelines.
Software is key. Software is not only a continually growing component of devices themselves, but it can also influence the overall speed of validation, clinical studies, and ultimately production. For example, the advantages of software simulation of hardware interactions are widely known: software development schedules can be mostly decoupled from hardware development schedules; integration with hardware takes far less time; problems can be tracked down more rapidly; and more bugs are discovered earlier in the development cycle, which reduces costs. Manufacturing, field support, and postinstallation troubleshooting can also leverage these techniques through more-rapid and less-costly resolution of problems. These virtues are collectively called software development agility.
Methodical Design and Test of Software
The single most important factor promoting software development agility is the ability to functionally test new or changed software. The tests must be on demand, immediate, thorough, and rapid—as well as separate from the hardware.
Untested software will always have defects or issues when first brought up on target hardware. This is true regardless of how skilled the coders are and how rigorous the rest of the development process may be. Manual testing of software on the hardware is feasible during early development, but it takes longer (and costs more) as the product approaches full function. Schedule and budget pressures frequently cut into the time required for thorough manual software testing without consideration of the consequences. But shortcutting manual testing only leads to more overlooked bugs that disrupt smooth product introductions or, worse, result in field actions and possible recalls.
The only cost-effective and viable way to secure and retain software development agility and integrity is to design and build fast, comprehensive automated tests—starting from day one. That is, carry out software design-for-test.
None of this is controversial in the wider software industry. Every successful real-world software development process includes testing as an absolutely necessary step. Everyone would agree that having comprehensive automated tests is a good thing. But it seems that relatively few software development projects actually take this approach. The usual objection is that it costs too much to build tests, especially early in the development stage when a project is at its most dynamic and the pressure is highest to get software working. But once something is working, if it was not designed for automated tests and has no automated tests, a new barrier arises that rarely gets breached. Those involved in the project are often told that, for time and cost reasons, the company can't afford to stop now to redesign and rebuild for automated testing. This same pattern occurs everywhere, not just in medical device development.
Competitors that develop and maintain automated software tests methodically and carefully from the start gain an advantage over those that don't. They will see faster time to market as well as greater quality and reliability for their products. In a sense, a company can't afford not to take this approach.
Proven Techniques
There are several proven techniques that are inexpensive to incorporate into the early part of the process. Even better, these techniques pay back increasingly handsomely, in time and money, as a medical device progresses through its life cycle.
In sum, these techniques add up to software design-for-test (DFT), a concept analogous to hardware DFT.
Use Application Programming Interfaces. Architect and design for simulation of all external entities and systems that interact with software through extensible application programming interfaces (APIs). APIs enable automated functional testing separate from the hardware. Build simulation to be minimal at first and to evolve only as needed with the overall system. Ensure that each simulated external entity and system can be brought rapidly to a fully known state, for purposes of fast, effective testing and debugging.
Isolate Test Code. Strictly isolate test code from code that will execute in production. If test code is mixed with production code inside methods or functions, then the tested code will not be exactly the same as what runs in normal, intended use. Bugs will be missed. Test coverage metrics will always be inaccurate.
Ensure Predictable Execution. If the software uses an event-driven software architecture—which it very likely does—then ensure that the software executes in the intended order especially during functional tests. As a rule, events are processed in no particular order at run time, which may lead to subtly different results each time the device runs. Left unaddressed, such processing opens the door to nonreproducible bugs that will surface during use.
Use Configurable Tracing and Logging Levels. Design software tracing and logging facilities around significant events and state/status transitions, with configurable “verbosity” levels that allow selection of how much detail the application records during execution. This speeds up debugging during development and problem diagnosis in the field.
Software DFT Techniques
Hardware DFT has been a standard approach for several decades. It's well understood. Two of its key concepts involve understanding internal state observability and state controllability. That is, how can we know exactly what is happening inside this complex system, and how can we repeatably and efficiently bring this system to a particular state?
The same concepts apply to software that controls a medical device, and some hardware DFT techniques have direct parallels in software DFT. For example, building special test circuitry (analogously, test code) into the system, reducing the size of the state space by decomposing logic into self-contained units (modularizing), and building in exactly the right kinds of diagnostics (trace and log outputs) are approaches used in both domains.
Enable Simulation of Everything Through Interfaces. From day one of software development it is crucial to architect, design, and implement simulation of every aspect of the system that is not the software itself.
In particular, plan to simulate the interactions of human users such as the biomedical technician, the nurse, the physician, and support staff. This might sound strange, but it's crucial. The automation design and control software must ensure that human interactions occur only through well-defined APIs. Further, on the outside of this interface, where the human resides, no automation or control logic should execute, such as logic related to shutdown or automated recovery.
It is not necessary to simulate everything that a human could do with the user interface, only the passing (and failure to pass) of new, changed, or missing information between the human and the rest of the software. If human interactions are isolated and simulated, then arbitrarily complex and exhaustive testing of the core software, testing that could never be accomplished by users manually exercising the system, can be automated.
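For illustration, a minimal Java sketch of such a human-interaction API follows; the interface, class names, and behavior are hypothetical rather than taken from any particular device. A scripted test double answers prompts so that automated tests exercise the same code paths a nurse or technician would trigger.

```java
// Hypothetical sketch: operator interaction confined to a narrow API so that
// automated tests can substitute a scripted responder for a real user.

/** All operator interaction flows through this interface; no control logic lives outside it. */
interface OperatorInterface {
    /** Ask the operator to confirm an action; returns true if confirmed. */
    boolean confirm(String prompt);
    /** Report a status message to the operator. */
    void notifyStatus(String message);
}

/** A production implementation would delegate to the real user-interface layer. */
// class TouchScreenOperator implements OperatorInterface { ... }

/** Test double that answers prompts from a prepared script and records what it saw. */
class ScriptedOperator implements OperatorInterface {
    private final java.util.Queue<Boolean> scriptedAnswers = new java.util.ArrayDeque<Boolean>();
    final java.util.List<String> observedMessages = new java.util.ArrayList<String>();

    void willAnswer(boolean answer) { scriptedAnswers.add(answer); }

    @Override public boolean confirm(String prompt) {
        observedMessages.add("PROMPT: " + prompt);
        Boolean answer = scriptedAnswers.poll();
        return answer != null && answer;   // default to "no" if the script runs out
    }

    @Override public void notifyStatus(String message) {
        observedMessages.add("STATUS: " + message);
    }
}
```

Because the control software receives an OperatorInterface when it is constructed, the same code path runs whether a person or the scripted double is on the other side.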
In like fashion, simulate interactions with motors, valves, sensors, bar code readers, hand and foot controllers, external system interfaces, and every other subsystem or external system with which the software interacts, through APIs. In every case, start small, perhaps with only start and stop interactions, and then expand the interface and the simulation hand in hand as the system grows.
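A minimal sketch of this start-small approach, using hypothetical names, might look like the following. The simulator begins with little more than start and stop, plus a way to reset it to a known state and to inject a failure on demand.

```java
// Hypothetical sketch: begin with the smallest useful hardware API and let the
// simulator grow alongside it.

/** Minimal pump abstraction; new capabilities are added to the interface and simulator together. */
interface PumpDriver {
    void start();
    void stop();
    boolean isRunning();
}

/** Simulated pump: trivially simple at first, extended only as the control software needs more. */
class SimulatedPump implements PumpDriver {
    private boolean running;
    private boolean failOnStart;            // lets tests inject a start failure on demand

    /** Bring the simulator to a fully known state before each test. */
    void reset(boolean shouldFailOnStart) {
        running = false;
        failOnStart = shouldFailOnStart;
    }

    @Override public void start() {
        if (failOnStart) {
            throw new IllegalStateException("simulated pump failure on start");
        }
        running = true;
    }

    @Override public void stop()         { running = false; }
    @Override public boolean isRunning() { return running; }
}
```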
“Buggy” software, discovered late in the integration or system test cycle, is costly and time-consuming to fix. The only way to know for sure that an addition or change hasn't broken the software is to test it. Manual testing is slow, expensive, and rarely as thorough as automated testing can be. The only reliable way to maximize the speed of turnarounds for new features and enhancements is to automate testing maximally within reason. And the only way to automate fully is to decouple human interaction.
Each simulated entity needs to be quickly settable to known states or statuses to support fast testing and debugging. If an implementation uses a relational database, make the network location of the database externally configurable to enable multiple prepopulated instances of the database to be accessible. This way, tests can start immediately from a known database state. The point is to simulate each external entity in as many known test/debug setup states as needed to accelerate progress.
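As one possible (hypothetical) way to make the database location externally configurable, the connection URL can be read from a system property with a production default, so a test run can point the software at any prepopulated instance:

```java
// Hypothetical sketch: select a prepopulated database instance from configuration
// rather than hard-coding its location. The property name and URLs are examples only.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

class DatabaseConfig {
    /** Falls back to the production URL unless a test run overrides the property. */
    static String jdbcUrl() {
        return System.getProperty("device.db.url",
                "jdbc:postgresql://prod-db-host/devicedb");   // example default only
    }

    static Connection open() throws SQLException {
        return DriverManager.getConnection(jdbcUrl());
    }
}

// A test run can then start immediately from a known database state, for example:
//   java -Ddevice.db.url=jdbc:postgresql://test-host/devicedb_midtreatment ...
```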
The advantages of up-front software-only testing are test speed, thoroughness, convenience, and reduced expenses (each developer does not need expensive equipment). Figures 1 and 2 show that the test-simulation system can be used to insert problems that can be difficult, expensive, or impossible to replicate on demand on the physical equipment. The main purpose of most automated testing with simulated external systems is to shake down the software's functions, as opposed to testing throughput for instance. Not all problems can be detected through simulation; only execution on real hardware can turn up everything. In addition to functional testing, automated tests with simulators can be used to artificially scale up demands on the software, thereby revealing inherent performance bottlenecks and memory-usage problems, before integration and product release.
It is essential to recognize that the software inside the APIs does not change at all between figures 1 and 2, and the software does not know whether its current configuration is for production or for test.
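For illustration only, a JUnit 4 test might use a simulated sensor to inject a fault, such as an occlusion pressure, that would be difficult or unsafe to reproduce on the physical device. All class names and thresholds below are hypothetical.

```java
// Hypothetical JUnit 4 sketch: use a simulator to inject a fault condition that is
// hard to reproduce on demand on real hardware.
import org.junit.Assert;
import org.junit.Test;

public class OcclusionHandlingTest {

    /** Minimal in-test simulator: reports whatever pressure the test sets. */
    static class SimulatedPressureSensor {
        private double kPa;
        void setPressure(double value) { kPa = value; }
        double read()                  { return kPa; }
    }

    /** Toy stand-in for the control logic under test. */
    static class InfusionController {
        private final SimulatedPressureSensor sensor;
        InfusionController(SimulatedPressureSensor s) { sensor = s; }
        boolean shouldAlarm() { return sensor.read() > 100.0; }   // hypothetical threshold
    }

    @Test
    public void alarmsOnSimulatedOcclusion() {
        SimulatedPressureSensor sensor = new SimulatedPressureSensor();
        InfusionController controller = new InfusionController(sensor);

        sensor.setPressure(150.0);          // inject the fault condition entirely in software
        Assert.assertTrue(controller.shouldAlarm());
    }
}
```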
The cost-effectiveness of simulation and automated testing depends heavily on the choice of test framework. As a rule, a batch-oriented test framework such as JUnit/JSystem, xUnit, or STAF works better than a general-purpose automation tool or programming environment. Commercial test-framework products can be expensive, difficult to master, or centered on proprietary languages. Conversely, a home-built framework can be expensive to develop and maintain, but it provides exactly the functionality the system needs.
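As a small illustration of the batch-oriented style, JUnit 4 suites built around the simulators can be launched unattended from a simple command-line entry point, for example on every nightly build. The runner below reuses the hypothetical OcclusionHandlingTest class from the earlier sketch.

```java
// Hypothetical sketch: a batch entry point that runs simulator-based test suites
// from the command line and reports a pass/fail exit code to the build system.
import org.junit.runner.JUnitCore;
import org.junit.runner.Result;

public class NightlyTestRun {
    public static void main(String[] args) {
        Result result = JUnitCore.runClasses(
                OcclusionHandlingTest.class /*, other simulator-based suites... */);
        System.out.println(result.getRunCount() + " tests, "
                + result.getFailureCount() + " failures");
        System.exit(result.wasSuccessful() ? 0 : 1);
    }
}
```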
Isolate Real Code from Test and Simulation Code. In the push to get simulation and test running at the same time as the real code, it may be tempting to mix simulation and test code into the automation and control source. This turns out to be a real mistake. The actual run-time code should never know if it's running with real hardware versus in simulation or test mode—or even a combination.
There are two ways to mix test and simulation code directly with the rest of the code. One is through conditional compilation; the other is through flags tested at run time that indicate the current mode. In both cases, the code that runs during test or simulation is different from the code that runs with real hardware. Why should this matter? Executing slightly different paths through the source during test or simulation subtly changes the moment-by-moment details of the internal state of the software. Bugs that would have occurred in production do not show up during developer testing, but only in later stage testing, when the cost of discovery is massively higher. Other bugs that would not have occurred can be introduced. These can cost time and money to identify and resolve, which is a waste if the defects are only in test code, not the real code. Simulation should be done indirectly through hardware abstraction layers and programmatic interfaces (APIs). It should not be done through mixing special conditional or flagged code in-line with the actual code.
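The contrast can be sketched as follows; the names are hypothetical. In the preferred form, the production class depends only on an abstraction, and the decision to use a real or simulated sensor is made where the system is assembled, so the tested code path is identical to the code path that runs on the device.

```java
// Anti-pattern (avoid): production logic branches on a test flag, so the code
// exercised in test differs from the code that runs on the device.
//
//   if (simulationMode) {
//       celsius = fakeTemperature;      // test-only branch inside production code
//   } else {
//       celsius = thermistor.read();
//   }

// Preferred (hypothetical sketch): production code depends only on an abstraction;
// the real-versus-simulated choice is made when the system is wired together.
interface TemperatureSensor {
    double readCelsius();
}

class SimulatedTemperatureSensor implements TemperatureSensor {
    private double celsius;
    void setTemperature(double value) { celsius = value; }
    @Override public double readCelsius() { return celsius; }
}

class HeaterController {
    private final TemperatureSensor sensor;
    HeaterController(TemperatureSensor sensor) { this.sensor = sensor; }  // injected; never checks a mode flag

    /** Identical code path whether the sensor is real or simulated. */
    boolean overTemperature() { return sensor.readCelsius() > 41.0; }     // hypothetical limit
}
```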
Enable Deterministic Execution for Debugging. Most modern automation and control systems are event-driven, meaning they use asynchronous messaging on multiple threads to push information to consuming entities both internally and externally. This differs from polling systems, in which controllers continually pull state information synchronously from elsewhere and take action when specific changes occur. Polling is inefficient because it wastes CPU cycles asking for information that may not have changed. Asynchronous messaging is economical and elegant because information is passed only when needed—and it is the only viable choice when using multiple-processor hardware.
A challenge with asynchronous communication is that the sequence of delivery of threaded event information is usually not deterministic. That is, two runs through the system or a subsystem, starting from identical initial states and using identical inputs, can have slightly different outcomes, or even radically different ones. Why? Consider two events, A and B, that can occur in the system. In one run, the operating system (OS) context at run time can lead to event A being delivered to a subscriber before event B. Another run from the same starting point might pass through different OS contexts, in which event B happens to be delivered before event A. Internally, the software goes through two different series of states. Some of the most difficult bugs come from this nondeterminism of event delivery, and it can be maddeningly hard to untangle what actually happened.
There is no single way to deal with synchronization problems in a system. If the pace of the system is slow and there are few events, variations in event delivery sequences might not matter at all. Or they might be crucial. In many medical devices, chances are good that the pace is high and there are many events, and therefore the challenge is almost certainly very important.
Whatever technique is used to handle synchronization, the principle is that potential problems due to nondeterminism must be addressed from the beginning. Build in software mechanisms that let developers step through event processing during debugging in the exact time sequence in which events were generated and processed.
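One possible mechanism, sketched below with hypothetical names, is to stamp each event with a sequence number at the moment it is generated and, in a test or debug configuration, drain the events one at a time on a single thread, in exactly that order.

```java
// Hypothetical sketch: events carry a generation sequence number; in debug/test
// configurations they are processed one at a time, in generation order.
import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.atomic.AtomicLong;

class SequencedEvent implements Comparable<SequencedEvent> {
    final long sequence;
    final String name;
    final Runnable action;

    SequencedEvent(long sequence, String name, Runnable action) {
        this.sequence = sequence;
        this.name = name;
        this.action = action;
    }

    @Override public int compareTo(SequencedEvent other) {
        return Long.compare(sequence, other.sequence);
    }
}

class DeterministicEventQueue {
    private final AtomicLong nextSequence = new AtomicLong();
    private final PriorityBlockingQueue<SequencedEvent> queue =
            new PriorityBlockingQueue<SequencedEvent>();

    /** Called by producers on any thread; the sequence number records generation order. */
    void publish(String name, Runnable action) {
        queue.add(new SequencedEvent(nextSequence.getAndIncrement(), name, action));
    }

    /** In test/debug mode, a single thread steps events in exactly the order they were generated. */
    void drainOneStep() {
        SequencedEvent event = queue.poll();
        if (event != null) {
            event.action.run();   // a breakpoint here sees one event at a time, in order
        }
    }
}
```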
A related useful technique for dealing with nondeterminism is to divide and conquer by testing subsystems one at a time and independently, in software, through many different distinct paths of deterministic execution. Problems found in the whole system can often be more rapidly isolated and fixed if there is confidence that, internally, the subsystems themselves are solid.
Design Logging Facilities for Debugging and Diagnostics. An all-too-common approach to logging in medical devices goes like this: The developers know from the start that they'll need tracing and logging facilities to debug the code. They pull something together quickly not long after day one to serve the needs of debugging prototype code. They may or may not adopt consistent standards among members of the software team (more likely not). Getting the first prototype system running is viewed as much more important than designing tracing and logging subsystems. But this tactic will probably start dragging the project down not long after that first prototype is running, unless the team has the foresight to clean up logging before coding in earnest.
The principle is to design and implement tracing and logging from the start so that they provide accurate, human-readable diagnostic information that enables engineers to reconstruct system behavior and determine quickly what went wrong during failures.
Logging to files can incur a significant computational load. In addition, turning on logging can change the low level details of what is happening in the system, causing the bug being tracked to disappear. To prevent this, a valuable technique is to trace all relevant information to a port, but monitor that port and persist its information in log files only when the system is configured to do so. This way, although there will always be overhead for tracing, the relatively higher costs of file persistence can be put on another processor without affecting the software's inherent performance.
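A sketch of the port-based idea, with a hypothetical port number and message format: the instrument always emits lightweight trace datagrams, and a separate monitor, which can run on another processor, persists them to a file only when it is running.

```java
// Hypothetical sketch: trace messages are always emitted as small UDP datagrams;
// a separate monitor process persists them to a log file only when enabled.
import java.io.FileWriter;
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

class UdpTracer {
    private static final int TRACE_PORT = 5140;   // example port only
    private final DatagramSocket socket;
    private final InetAddress monitorHost;

    UdpTracer(String monitorHostName) throws IOException {
        socket = new DatagramSocket();
        monitorHost = InetAddress.getByName(monitorHostName);
    }

    /** Cheap fire-and-forget; no file I/O happens on the instrument side. */
    void trace(String message) {
        byte[] bytes = message.getBytes();
        try {
            socket.send(new DatagramPacket(bytes, bytes.length, monitorHost, TRACE_PORT));
        } catch (IOException ignored) {
            // tracing must never disturb the control software
        }
    }
}

/** Run separately (possibly on another processor) only when log persistence is wanted. */
class TraceMonitor {
    public static void main(String[] args) throws IOException {
        DatagramSocket socket = new DatagramSocket(5140);
        FileWriter log = new FileWriter("device-trace.log", true);
        byte[] buffer = new byte[2048];
        while (true) {
            DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
            socket.receive(packet);
            log.write(new String(packet.getData(), 0, packet.getLength()) + "\n");
            log.flush();
        }
    }
}
```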
Additional design techniques for your tracing and logging include the following:
•Logging should have a number of “verbosity” levels that range from the minimum acceptable during production runs to the maximum useful for complex debugging. The level should be changeable at run time from an administrator user interface to the system (through an API, of course). For maximum flexibility and power when tracking down problems, make verbosity configurable for certain modules and subsystems.
•Logging should be able to capture all communication with each third-party device or subsystem. In case of a failure, this record can be critical to determining whether the error was in the application or the third-party device.
•Logging should provide a record of when each significant event in the system actually occurred, not just when the event was processed by another component and not just when the event was made known to the logging facility. This was discussed earlier in the deterministic debugging section.
•Logs should be easily parsed by both humans and programs, for fast and efficient searching and analysis.
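With java.util.logging, for example, per-module verbosity can be adjusted while the system runs; the module names below are hypothetical.

```java
// Sketch using java.util.logging: verbosity is configurable per module and can be
// changed at run time, for example from an administrator API.
import java.util.logging.Level;
import java.util.logging.Logger;

class LogVerbosityService {
    /** Called from an administrator interface to raise or lower detail for one subsystem. */
    static void setModuleVerbosity(String moduleName, Level level) {
        // Note: handlers attached to the loggers must also permit this level
        // for the records to actually be published.
        Logger.getLogger(moduleName).setLevel(level);
    }
}

// Example: detailed tracing for the fluidics subsystem only, while the rest of
// the system keeps production-level logging.
//   LogVerbosityService.setModuleVerbosity("com.example.device.fluidics", Level.FINE);
//   LogVerbosityService.setModuleVerbosity("com.example.device.ui", Level.INFO);
```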

If a software team considers these matters carefully and designs logging accordingly from the start, projects will see significant net time and money savings. If you do not design and build logging well from the start, there's an excellent chance that you never will do it, because the perennial obstacle of “we don't have time for that now” will always stand in your way. That will be your loss and your competitor's gain.

Conclusion
Simulation and automated testing require significant work and time. For this reason, they must be treated as real software projects or subprojects, not just as a set of side tasks for engineers or interns.
It might appear at first that this much investment in testing and debugging amounts to expensive overkill. But the goal is to produce a cost-effective medical device that meets market goals, and the reliability gained through mature processes and thorough software testing is worth the price. Methodical automated testing delivers that reliability cost-efficiently, and it prevents problems from occurring in real situations, on real patients, once a device is released.



Karl Aeder is principal software architect for Foliage (Burlington, MA), a software product development firm. His e-mail is [email protected]. Tim Bosch is vice president, architecture and consulting, for the firm. He can be reached at [email protected]. Wayne Lobb is engineering director at Foliage, and his e-mail is [email protected].
[Figures 1 and 2 (images u376_72943.jpg and u37f_72901.jpg)]