MDDI Online is part of the Informa Markets Division of Informa PLC


Avoid Glitches: Validate Your Process Automation Software

Process validation in the medical device sphere is no simple matter. To help attendees tackle tough software process automation glitches, MD&M West will offer two sessions on overhauling process validation. On Thursday, February 12, from 1:00 to 1:30 p.m., software professionals can participate in a hands-on session focused on validating process automation software. Led by Vincent DeFrank, software validation manager at Philips Healthcare, the session will cover which kinds of software must be validated, the 21 CFR 820.70(i) requirements for performing process automation software validations, and the demands of 21 CFR Part 11.

In the following guest blog, DeFrank outlines some of the key issues involved in validating process automation software.

*     *     *     *     *

Using commercial off-the-shelf (COTS) software or in-house tools is essential for building quality medical device products. Companies often have an economic motive for using COTS: such software lets them take advantage of existing operating systems, program stacks, and test automation solutions. In-house tools, meanwhile, allow companies to gather data, test performance, and communicate with medical devices undergoing testing. Qualifying an automation process involves comparing the hardware and/or software system's intended-use documentation against the system requirements. Reports generated from such systems typically rely on digital records and signatures. Even for experienced software groups, validating automation software can involve a significant amount of work.

The first step in using COTS or in-house tool software is to evaluate the associated risk. Most quality groups follow a step-by-step procedure to perform such evaluations, which usually includes ranking the software or classifying the vendor. For example, the ranking criteria can be based on quality policy adherence, safety risk, error detection, and upstream/downstream activities. To determine the rigor of the assessment, the organization should also consider how much it will rely on the tool. In principle, the COTS manufacturer could supply reliable objective evidence showing that it has completed testing, or allow the quality group to audit its life-cycle processes, but this is rarely the case. Thus, the software user must usually produce its own requirements, plan, and procedure documents.
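The step-by-step evaluation described above can be sketched as a simple weighted scoring scheme. The criteria names, weights, and thresholds below are hypothetical illustrations, not values from any regulation or standard:

```python
# Illustrative sketch of a step-by-step COTS risk evaluation.
# Criteria, weights, and thresholds are invented for this example.

CRITERIA_WEIGHTS = {
    "quality_policy_adherence": 1,  # does use of the tool touch the quality policy?
    "safety_risk": 3,               # could a tool failure contribute to harm?
    "error_detection": 2,           # would a tool error go unnoticed downstream?
    "process_reliance": 2,          # how heavily does the process rely on the tool?
}

def assess_rigor(scores: dict) -> str:
    """Map per-criterion scores (0-3 each) to a validation rigor level."""
    total = sum(CRITERIA_WEIGHTS[name] * scores.get(name, 0)
                for name in CRITERIA_WEIGHTS)
    if total >= 15:
        return "full validation (requirements, plan, procedures, report)"
    if total >= 8:
        return "reduced validation (documented intended-use testing)"
    return "record rationale; no formal validation"

# Example: a test-automation tool whose errors are caught downstream.
print(assess_rigor({"quality_policy_adherence": 1, "safety_risk": 1,
                    "error_detection": 1, "process_reliance": 2}))
```

A real quality system would define these criteria and cut-offs in a controlled procedure; the point is only that the ranking turns subjective judgment into a repeatable, auditable decision.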

The key to writing requirements is not to write more than is needed. Initially, the user should think through what is truly required. Consider, for example, a real-time operating system that has many built-in features, including a mailbox function. If the development group is not going to use the mailbox because it has chosen another way to exchange messages, this detail should not be included in the requirements subject to future testing. Nonetheless, the user should include a provision in the plan to address functionality that might be modified; such functionality can be covered by regression testing when new COTS versions arrive. For automation technologies, the plan must state the strategy and how objective evidence will be collected to show that the automation is working correctly. One way to do this is to establish expected results and confirm the actual functionality manually in a procedure.
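The expected-results approach above can be illustrated as a minimal evidence-collection sketch, where each procedure step records the expected value alongside the actual one. The step names and values here are hypothetical:

```python
# Minimal sketch of collecting objective evidence for an automated step:
# expected results are established up front, then compared with actuals.
# Step names and values are invented for illustration.

def run_step(step_name: str, expected, actual) -> dict:
    """Record one procedure step as pass/fail objective evidence."""
    return {
        "step": step_name,
        "expected": expected,
        "actual": actual,
        "result": "PASS" if actual == expected else "FAIL",
    }

evidence = [
    run_step("message exchange uses shared queue", "queue", "queue"),
    run_step("sample rate reported by tool (Hz)", 1000, 1000),
]
print(all(e["result"] == "PASS" for e in evidence))  # → True
```

Keeping the expected value in the record, rather than only a pass/fail flag, is what makes the evidence objective and reviewable after the fact.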

When writing and reviewing procedures for large validation efforts, it is a good idea to split the procedures into several sections within a document or to prepare several procedure documents. Breaking up the test procedures makes the trace review between requirements and testing easier to manage. For small-scale software, the requirements, plan, and procedures can sometimes be combined into a single document. A best practice is to have someone other than the author perform a 'dry run' before the procedure is run for score (final execution) on a fresh system with software downloaded from a controlled source. Because configuring the system for data, equipment, users, and reporting format involves many details, the setup instructions for collecting evidence must be clear.
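The trace review mentioned above amounts to checking that every requirement maps to at least one test procedure. A minimal sketch, with invented requirement and procedure IDs:

```python
# Hypothetical trace-review sketch: map requirement IDs to the test
# procedures that cover them and flag any requirement left untested.
# All IDs are invented for illustration.

trace = {
    "REQ-001": ["TP-A-01", "TP-A-02"],
    "REQ-002": ["TP-B-03"],
    "REQ-003": [],  # not yet covered by any procedure
}

untested = sorted(req for req, tests in trace.items() if not tests)
print(untested)  # → ['REQ-003']
```

Splitting procedures into sections keeps each slice of this matrix small enough to review line by line, which is exactly why the article recommends it for large efforts.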

Electronic records and signatures matter both within the automation software itself and as a review mechanism. Consider, for example, an oscilloscope application developed as a PC tool that uses connected data-acquisition hardware. The software creates an image representing the sampled waveform and stores it in a file along with the tester's information and other details such as date, time, and location. The resulting record (in report format) can be printed, reviewed, and signed manually. Alternatively, the record can carry an embedded electronic signature. A number of vendors offer electronic-signature solutions, including PDF formats that allow signatures to reside securely on a server. Both electronic records and signatures must undergo intended-use validation.
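One core idea behind electronic records is that the record must be tamper-evident. The sketch below illustrates this with a plain SHA-256 digest over a canonical form of the record; the record fields are hypothetical, and a real 21 CFR Part 11 implementation additionally needs controlled user identities, audit trails, and a vetted signature solution:

```python
# Illustrative sketch of making an electronic test record tamper-evident
# by hashing its canonical form. This shows the idea only; it is NOT a
# 21 CFR Part 11-compliant electronic signature on its own.

import hashlib
import json

record = {
    "tester": "J. Smith",
    "date": "2015-02-12",
    "instrument": "oscilloscope-PC-tool",
    "waveform_file": "capture_0042.png",
}

def seal(rec: dict) -> str:
    """Return a SHA-256 digest over the record's canonical JSON form."""
    canonical = json.dumps(rec, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

digest = seal(record)
# Any later change to the record produces a different digest.
print(seal({**record, "tester": "A. Jones"}) == digest)  # → False
```

Vendor signature solutions layer identity and non-repudiation on top of this kind of integrity check, which is why the signature mechanism itself also falls under intended-use validation.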

The validation report presents and summarizes the results of the test procedures, with all the data and results attached. It also documents any deviations that occurred during execution of the validation plan or the test procedures. A good practice is to monitor the software for defects after release to ensure that it continues to meet its intended use.
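Summarizing results and deviations for the report can be sketched as a simple roll-up over per-procedure records. The result records below are hypothetical:

```python
# Sketch of rolling up procedure results and deviations for a
# validation report summary. All records are invented examples.

results = [
    {"procedure": "TP-A-01", "passed": 12, "failed": 0, "deviations": []},
    {"procedure": "TP-B-03", "passed": 7, "failed": 1,
     "deviations": ["DEV-004: retest after fixture recalibration"]},
]

total_failed = sum(r["failed"] for r in results)
all_deviations = [d for r in results for d in r["deviations"]]
print(f"procedures: {len(results)}, failures: {total_failed}, "
      f"deviations: {len(all_deviations)}")
```

The summary belongs in the report body; the full per-step evidence stays attached so reviewers can trace every failure and deviation back to its procedure.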
