Test System Engineering for Medical Devices: A Guide
January 1, 2002
Originally Published MDDI January 2002
TEST SYSTEMS
Developing test systems for R&D through production requires a combination of preparedness and ongoing evaluation.
Tore Johnsen
A large number of medical device manufacturers use automated test and measurement systems during each stage of the product cycle, from R&D to production testing. These systems play an important role in improving the quality of products and helping speed them to market. Purchasing the test equipment is often less expensive than putting it into use; it can cost more to develop the software to run a system than to purchase the instrumentation. Many companies choose to outsource all or a portion of their test system development. Whether it's done internally or by an outside vendor, the critical factors for success remain the same: maintaining good communication between developer and client, following an efficient development process, selecting appropriate development tools, and recruiting people with the skills to do the job correctly.
This article provides a broad overview of test and measurement system development for the medical device industry. Included is a discussion of commonly used instrumentation and tools and an overview of the skills and practices necessary for successful test system development.
HOW AND WHY TEST SYSTEMS ARE USED
In the medical device manufacturing industry, test and measurement systems are used for a wide range of purposes. Some examples include those developed to:
Demonstrate proof-of-concept prototypes for new equipment used to cauterize heart tissue.
Research the shrinkage of new dental filling materials as they cure.
Verify that such implantable devices as pacemakers and neurostimulators function according to specifications.
Condition and test batteries for implantable devices.
Simulate the pressing of buttons, turning of knobs, collecting of patient data, and other functions while monitoring and recording a blood-treatment machine's operation.
In order to carry out these tasks, the required systems must be capable of performing such functions as automatically controlling switches and relays, setting and reading temperatures, measuring or generating pulse widths and frequencies, setting and measuring voltages and currents, moving objects with motion control systems, using vision systems to detect misshaped or missing parts, and others. The medical device industry's need for test equipment and related sensors and technologies is as varied as the industry itself. No matter what the specific need, however, compliance with the quality system regulation is mandatory.
TEST SYSTEM TRENDS
Some trends have emerged as the popularity of automating test systems has grown. For instance, integrating test systems with a corporate database is becoming more common. A number of test stations can be networked, and data can be stored in the corporate database. From that database, important statistical process data can be gathered and analyzed.
Additionally, manufacturers are growing more interested in using standard Web browsers to view data and in remotely controlling their test systems. Data are routinely passed between test applications and standard office applications such as Microsoft Excel and Word. Both the complexity and number of technologies being incorporated into the systems are growing, and, consequently, so are the demands on systems developers and project managers. Test system developers are using more-efficient software development and project management methodologies to meet these increased demands.
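The networking idea described above can be sketched in a few lines of code. This is an illustrative sketch only: `sqlite3` stands in for a corporate database such as Oracle or SQL Server, and the station names, table layout, and test limits are invented for the example.

```python
import sqlite3

def init_db(conn):
    # Shared results table that every networked test station writes to.
    conn.execute(
        """CREATE TABLE IF NOT EXISTS test_results (
               station    TEXT,
               serial_no  TEXT,
               test_name  TEXT,
               value      REAL,
               passed     INTEGER,
               tested_at  TEXT DEFAULT CURRENT_TIMESTAMP)"""
    )

def record_result(conn, station, serial_no, test_name, value, low, high):
    # Each station judges its measurement against limits and logs the outcome.
    passed = low <= value <= high
    conn.execute(
        "INSERT INTO test_results (station, serial_no, test_name, value, passed) "
        "VALUES (?, ?, ?, ?, ?)",
        (station, serial_no, test_name, value, int(passed)),
    )
    return passed

conn = sqlite3.connect(":memory:")
init_db(conn)
record_result(conn, "station-1", "SN1001", "battery_voltage", 3.71, 3.60, 3.80)
record_result(conn, "station-2", "SN1002", "battery_voltage", 3.45, 3.60, 3.80)

# Statistical process data can then be gathered from the shared table.
yield_pct = conn.execute(
    "SELECT 100.0 * SUM(passed) / COUNT(*) FROM test_results"
).fetchone()[0]
print(f"first-pass yield: {yield_pct:.0f}%")
```

Because every station writes to one table, yield, trend, and station-to-station comparisons become simple queries rather than manual spreadsheet work.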
Standardizing specific test system development tools and instrumentation is another increasingly popular way to keep costs relatively low. Using the same development tools from R&D to production has obvious benefits: it reduces training costs, allows for technology reuse, and makes it easier to shift employees from one area to another, depending on demand. If executed with diligence, maintaining consistency also facilitates the creation of a library of reusable code. Standardizing makes compliance with quality systems requirements easier, too.
TEST SYSTEM INSTRUMENTATION
Figure 1. Test instrumentation is typically built around a PC or a chassis.
A typical test system is created in one of two ways. It can be built around a PC using a plug-in data acquisition board (DAQ), serial port instruments, or general-purpose interface bus (GPIB). Alternatively, it may be built around a chassis with an embedded computer and various plug-in modules such as switches, multimeters, and oscilloscopes (see Figure 1).
The newest member of the latter family of instrumentation systems is the PXI chassis, which uses the standard PCI bus to pass data between the plug-in modules and the embedded computer. This technology offers high data acquisition speeds at a relatively low cost. Originally invented by National Instruments (Austin, TX), it is now an open standard supported by a large number of instrument vendors. (For additional information on this system, visit http://www.pxisa.org.)
Manufacturers should keep in mind that several variations on these schemes exist. To select the optimal instrumentation for a given project, a company must consider both the cost and performance requirements of the project and the need for compliance with internal and external standards.
An important system-selection criterion is the availability of standardized ready-made software modules (i.e., instrument drivers) that can communicate with the instruments. Because the major instrument vendors are currently involved in a significant instrument-driver standardization effort, it makes sense for companies to check the availability of compliant instrument drivers before purchasing an instrument. The Interchangeable Virtual Instruments Foundation (http://www.ivifoundation.org) works on defining software standards for instrument interchangeability. The foundation's goal is to facilitate swapping of instruments with similar capabilities without requiring software changes. Creating a custom instrument driver can take from days to weeks depending on the complexity of the instrument; if it's not done correctly, future maintenance and replacement of instruments might be more difficult than need be.
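The interchangeability goal behind standardized drivers can be illustrated with a small sketch. This is a hypothetical example, not code from any actual IVI class specification: the class and method names are invented, and the "readings" are simulated rather than taken from real hardware.

```python
from abc import ABC, abstractmethod

class DmmDriver(ABC):
    """Generic digital-multimeter interface that test code is written against."""
    @abstractmethod
    def configure_dc_volts(self, range_v: float) -> None: ...
    @abstractmethod
    def read(self) -> float: ...

class VendorADmm(DmmDriver):
    def configure_dc_volts(self, range_v):
        self._range = range_v   # would send vendor A's command set here
    def read(self):
        return 3.3              # simulated measurement

class VendorBDmm(DmmDriver):
    def configure_dc_volts(self, range_v):
        self._range = range_v   # vendor B's commands differ, but the interface doesn't
    def read(self):
        return 3.3              # simulated measurement

def measure_supply(dmm: DmmDriver) -> float:
    # Test code depends only on the generic interface, never on a vendor class.
    dmm.configure_dc_volts(10.0)
    return dmm.read()

# Either instrument can be plugged in without touching measure_supply().
for dmm in (VendorADmm(), VendorBDmm()):
    print(measure_supply(dmm))
```

Because the test code calls only the generic interface, replacing a discontinued multimeter means writing one new driver class, not revisiting every test that uses it.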
DEVELOPMENT TOOLS
Development tools designed specifically for PC-based test system development have been in existence for more than a decade. Specialized graphical languages for drawing, rather than writing, programs are useful. One such product from a major instrumentation vendor is LabVIEW (National Instruments). Developing everything from scratch in C is probably not a good idea—unless there is plenty of time and money to spare.
As alternatives to specialized graphical languages, several add-on packages exist that can make a text-based programming language more productive for test system development. For example, one can buy add-ons for both Visual Basic and Microsoft Visual C++. If one's preference is C, but the benefits of a large library of ready-made code for instrumentation, analysis, and presentation are desired, LabWindows/CVI from National Instruments is a tool to consider.
Figure 2. Test-executive architecture.
If an advanced system is needed that can make automated logical decisions about which test to run next, perform loops and iterations of tests, and store the results in a database, then a test executive that works with the test code is a wise option. Figure 2 shows an example of test-executive architecture. The test system developer writes the test modules and then uses the test executive's built-in functionality to sequence the tests, pass data in and out of databases, generate reports, and make sure all test code and related documentation are placed under revision control (i.e., configuration management).
Although this option still requires significant effort to develop the test modules and define the test sequences, using a standard test executive is, in many cases, far more cost-effective than making one from scratch. This is especially true for such large, ongoing projects as design verification testing and production testing, which require regular modifications of tests and test sequences.
THE NECESSARY SKILLS
A test system development project requires a multitude of skills to achieve success, including project management skills and good communication to keep the project on track and to ensure that all stakeholders' needs and expectations are addressed. Understanding and practicing good software development methodologies are also needed to ensure that the software that is built will actually meet the user's requirements. Test system development also requires that engineers have a thorough understanding of software design techniques to ensure that the software is both functional and maintainable, and an understanding of hardware and electronics to design the instrumentation and data acquisition portions of the system.
Before a test system can be put into production, it needs to be tested and validated. This means that the development team also needs the expertise to put together a test plan and to execute and document the results in a report. The engineers who built the system are not necessarily the best people to test it, so additional human resources are often needed for testing. Finally, because documents are created during the development process, documentation skills are also necessary.
When one considers that the typical project team for a midsize test system consists of two to four developers, one realizes there are more major skills required than there are team members; therefore, one of the challenges is to locate individuals with sufficiently broad skills and abilities to supply both technical and managerial leadership. To ease this burden, make the tasks less daunting, and increase the chances of project success, defining a development process is key. If the test system is used for regulated activities, such as production testing of medical devices, then the test system itself is subject to the quality system regulation and a defined development process is not only desirable, it's mandatory.
THE BENEFITS OF COLLABORATION
Outsourced projects are most successful when the developers and the clients collaborate. Keeping the client involved is the most efficient way of making sure that the system meets the client's needs. It also helps avert surprises—at either end—down the road. Collaboration requires honest and direct communication of issues, successes, and problems as they occur.
Miscommunication sometimes happens even with good collaboration. While it is important to keep the communication channels open so the developers and their clients can discuss issues without too much bureaucracy, it can be hard to keep track of who said what if too many parallel communication channels exist. And when engineers on both sides have ideas of features they would like to add to a particular system, controlling feature creep can become difficult.
Designating a single point of contact for discussing both a project's scope and its content is recommended, and making sure new solutions are reviewed before being accepted can also prevent problems. Instituting a change-control procedure is yet another important step to minimizing unnecessary changes.
THE PROJECT PROCESS
The goal for any project is to add only as much process overhead as is absolutely necessary to satisfy the objectives. When a process must be added because regulations mandate it, the involved parties should keep in mind that the process isn't being instituted merely to satisfy FDA or other agencies; it's being done to build better and safer products. Structure and process improvements can have a significant positive impact on the quality of the finished test system.
The Software Engineering Institute has defined the following key process areas for level 2 ("Repeatable") of the capability maturity model: requirements management, project planning, project tracking and oversight, configuration management, quality assurance, and subcontractor management.1 The foundations for a project's success are good requirements development and good project planning; if the requirements aren't right, or if a company can't determine how to get the project done, then the project is essentially doomed. What follows is a description of the progression of a few types of test system development projects as well as a discussion of requirements development.
Figure 3. The traditional waterfall life-cycle model.
Phases of Test System Development. Whether a formal documented development process is followed or not, there are a few major tasks (i.e., project phases) that must be addressed: requirements development, project planning, design, construction, testing, and release. Should this particular order of tasks always be followed? Probably not. During the 1980s the software industry saw a number of large projects go significantly over budget, become significantly delayed, or be cancelled because they were inherently flawed. One cause of these problems was companies' strict adherence to the waterfall life-cycle model (see Figure 3). In this type of life cycle, the project goes through each phase in sequence, and the phases are completed one at a time. The waterfall model presumes that the requirements development phase results in nearly perfect requirements, the design phase results in a nearly perfect design, and so forth.
Unfortunately, projects are normally less predictable and run less smoothly than the waterfall model assumes. For example, a company doesn't always know enough at the beginning of a project to write a complete software requirements document. The sequence of actions necessary for project success depends to a large extent on the nature of the project. Because every project is unique, those involved must analyze the project throughout its phases and adapt the process accordingly.
Keeping that in mind, software companies have done much to improve software development methods since the 1980s. Today, one can find descriptions of a number of life-cycle models useful for different project characteristics. Choosing the appropriate life-cycle model depends on the nature of the project, how much is known at the start of the project, and whether the project will be developed and installed in stages or all at once. Of course, mixing and matching ideas from different life-cycle models can be an effective strategy as well. Even if a company has decided upon and made standard a particular life-cycle model, small modifications should be made to that model when a particular project necessitates it. The trick is to identify high-risk items and perform risk-reducing activities at the start of the project.
Test System Characteristics. The test systems used in the three device development stages—R&D, design verification, and production—each have their own characteristics.
R&D Systems. R&D test systems range in development time from a few days to many months. Scientists use the systems to explore new ideas; R&D test systems are also used to perform measurements and analyses not possible with off-the-shelf equipment and to build proof-of-concept medical equipment. Others are used by physicians for medical research. Vastly varied in both scope and technologies, most R&D test systems have one thing in common: the need for continuous modification and development. As the research progresses, the scientist learns more, generates more ideas, and might decide to incorporate a new functionality or new algorithms, or even try a different approach. Clearly the waterfall life-cycle model doesn't fit such developments. With R&D test systems, one doesn't know what the final product will look like at the start of the project. In fact, there might not be a final product at all, just a series of development cycles that terminate when that particular research project is over and the system is no longer needed.
Figure 4. The evolutionary-delivery life-cycle model.
Assuming that a reasonable idea of the scope of the R&D project is known at the start, one possible life-cycle model to follow is the evolutionary-delivery model (see Figure 4). This model includes the following steps: defining the overall concept, performing a preliminary requirements analysis, designing the architecture and system core, and developing the system core. Then the project progresses through a series of iterations during which a preliminary version is developed and delivered, the client (i.e., the researcher) requests modifications, a new version is developed and delivered, et cetera, until revisions are no longer needed.
Of course, it's wise to try to pinpoint potential changes at the beginning of the test system development project so that the software architecture can be designed to handle the changes that might come later on.
Design Verification Test Systems. There often is a blurry line between R&D and design verification test (DVT) systems. In the final stage of DVT system usage, the output is verification and a report stating that the medical device performs according to its specifications. Before that stage is reached, however, it is not uncommon to encounter several DVT cycles, each delivering valuable performance data back to the device's designers, and each resulting in modifications either to the device's design or to its manufacturing process.
Figure 5. The staged-delivery life-cycle model.
It may be desirable to use the DVT system to test parts of the device or portions of its functionality as soon as preliminary prototypes are available, but it may not always be possible to have the complete test system ready for such applications. In these cases, the staged-delivery life-cycle model (see Figure 5) may be the best choice. According to this model, test system development progresses through requirements analysis and architectural design, and then is followed by several stages. These subsequent stages include detailed design, construction, debugging, testing, and release. The test system can be delivered in stages, with critical tests made available early.
Production Test Systems. A production test system needs to be validated according to an established protocol.2 Such a test system is therefore developed and validated using a well-defined process, and the system can normally be well-defined in a requirements specification early on. There is still, however, a long list of possible risk factors that, if realized, can have a serious negative impact on the project if a strict waterfall development life cycle is followed. Research has shown that it costs much more to correct a faulty or missing requirement after the system is complete than it does to correct a problem during the requirements development stage.
A risk-reduced waterfall life cycle might be an appropriate model to follow when developing a production test system. In this life-cycle model, main system development is preceded by an analysis of risks and a performance of risk-reducing activities, such as prototyping user interfaces, verifying that unfamiliar test instruments perform correctly, prototyping a tricky software architecture, and so forth. Iterations are then performed on these activities until risk is reduced to an acceptable level. Thereafter, the standard waterfall life cycle is followed for the rest of the project—unless it is discovered that some new risks need attention.
Requirements Development. As the aforementioned life-cycle models show, requirements development directly influences all subsequent activities. It's important to remember that the requirements document also directly influences the testing effort. Writing and executing a good test plan are only possible when a requirements document exists that clearly explains what the system is supposed to do.
Developing a software or system requirements document is important, but there is no one perfect way to do it. Depending on the nature of the project, the life-cycle model selected, and how well the project is defined at its early stages, the requirements document might use a standardized template and be fairly complete, it might be a preliminary requirements document, or it might simply take the form of an e-mail sent to the client for review. No matter how it's done, putting the requirements in writing improves the probability that both parties have the same understanding of the project.
Test system developers also are well advised to create a user-interface prototype and prototypes of tangible outputs (e.g., printed reports, files, Web pages) from the system. These might take the form of simple sketches on paper or actual software prototypes. The purpose of the user-interface prototype is to make sure the software maintains the correct functionality. Often, the first time clients see a user interface, they remember features they forgot to tell the developer were needed, and they realize that the system would be far more valuable if greater functionality were added. Creating a user-interface prototype is perhaps the most efficient method for discovering flawed or missing functional requirements. Both parties will want this discovery made during the requirements development phase, not upon demonstration of the final product.
To the greatest extent possible, developers should identify any items that are potential showstoppers, such as requirements that push technology limits or the limits of the team's abilities. Identifying such problems might require some preliminary hardware design to ensure the system actually can be built as specified. High-risk items should be prototyped and developers should try to identify ways to eliminate the need for requirements that push the limits. Waiting until the final testing stage to find out that some requirements cannot be met is not a good idea. Even waiting until after the requirements are signed off to find that some cannot be met is unpleasant—especially if all it would have taken to prevent the problem was a few hours' research.
For outsourced development projects it is essential that the test system developer get feedback from the client and iterate as needed until an agreement is reached and, in some cases, the requirements are signed off. While performing the activities described above, the developer also should review any solutions suggested or mandated by the client. For instance, if the client says it already has the hardware and only wants the test developer to provide the software, the first thing the developer should do is request a complete circuit diagram of that client's hardware solution and carefully explain why it's necessary to fully understand the client's hardware in order to build a good software system. Flaws in the test instrumentation design are very costly to fix after the test system is built, yet it costs comparatively little to review the design ahead of time. Of course, an in-house test system developer also should evaluate the hardware design carefully before starting the software design.
Project Review Timing. It's likely that outside developers who get this far have dealt solely with the client's project team. If the project is large, however, it is not uncommon for the client to bring more people into the picture and conduct a project review after the system is complete. Some of the newcomers will have insights and desires that would result in changes—sometimes expensive ones.
If possible, this type of situation should be avoided. The system developer should insist on a project review by all affected parties before the requirements stage is concluded. It is not enough to just send around the software requirements specifications; people are often too busy with other projects to really go through them as meticulously as they should. A better strategy is to bring everybody together and show them the user-interface prototypes, the report prototypes, and any other important components of the project. Representatives of the end-users should be present as well. Although they should have been consulted during the requirements development process, the end-users are still likely to contribute valuable insights during the review.
By now it should be evident that there is more to requirements gathering than just writing a requirements document and getting it signed off. If the system doesn't work to the client's satisfaction at delivery, then it doesn't matter who is to blame. The project will be remembered by both parties as a painful experience with no winners.
PROJECT PLANNING
Every project needs a plan. The first step in project planning is to define the deliverables, then to create a work breakdown structure (WBS) hierarchy of all the project's required tasks. The WBS is then used to develop a timeline, assign resources, and develop a task list with milestones. A good project plan will also clearly define roles, responsibilities, communication channels, and progress-report mechanisms. It certainly helps to have some background or training in project management in order to plan and control the execution of the project. Some basic project management training is recommended for anyone in charge of a test system development effort. Seminars and classes in project management based on the Project Management Institute's standards are offered worldwide.
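The WBS described above is naturally a tree of tasks that rolls up into effort estimates and a milestone list. The sketch below is illustrative only; the task names, estimates, and milestones are invented, not taken from any real project plan.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """One node in a work breakdown structure."""
    name: str
    days: float = 0.0                 # effort estimate for leaf tasks
    milestone: bool = False
    subtasks: list = field(default_factory=list)

    def total_days(self):
        # Roll estimates up from the leaves.
        if self.subtasks:
            return sum(t.total_days() for t in self.subtasks)
        return self.days

    def milestones(self):
        # Flatten the tree into an ordered milestone list.
        found = [self.name] if self.milestone else []
        for t in self.subtasks:
            found += t.milestones()
        return found

wbs = Task("DVT test system", subtasks=[
    Task("Requirements", subtasks=[
        Task("Draft requirements", days=5),
        Task("Requirements sign-off", days=1, milestone=True)]),
    Task("Design", days=10),
    Task("Construction", days=20),
    Task("Validation", subtasks=[
        Task("Write test plan", days=3),
        Task("Execute and report", days=4, milestone=True)]),
])

print(wbs.total_days())
print(wbs.milestones())
```

A structure like this is the raw material for the timeline and milestone list the article describes; scheduling tools essentially perform the same roll-up with calendars and resource assignments added.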
CONCLUSION
Successful test system development requires attention to both process and technology. Both clients and developers need to understand and appreciate good software engineering practices. Collaboration and communication are critical for success. Clearly defining roles and responsibilities, using efficient development processes and tools, and handling project risks early on permit problems to be addressed at a stage when their effect on cost and schedule is minimal.
REFERENCES
1. Capability Maturity Model for Software, Version 1.1, Technical Report CMU/SEI-93-TR-024, ESC-TR-93-177 (Pittsburgh, PA: Software Engineering Institute, Carnegie Mellon University, February 1993).
2. Medical Device Quality Systems Manual: A Small Entity Compliance Guide (Rockville, MD: FDA, 1997).
BIBLIOGRAPHY
A Guide to the Project Management Body of Knowledge. Newtown Square, PA: Project Management Institute Standards Committee, 2000.
McConnell, Steve. Rapid Development. Redmond, WA: Microsoft Press, 1996.
McConnell, Steve. Software Project Survival Guide. Redmond, WA: Microsoft Press, 1998.
Wiegers, Karl E. Software Requirements. Redmond, WA: Microsoft Press, 1999.
Tore Johnsen is technical director and engineering manager at Computer Solutions Integrators & Products in Woodbury, MN.
Figures 3 and 4 adapted from Rapid Development, with permission of the author, Steven C. McConnell.
Figure 5 adapted from Rapid Development, with permission of the author.
Copyright ©2002 Medical Device & Diagnostic Industry