Conducting Process Validations with Confidence

January 1, 1998


Medical Device & Diagnostic Industry Magazine

An MD&DI January 1998 Column

PROCESS VALIDATION

By using statistical techniques during process validation, medical product manufacturers can improve quality, cut costs, and increase confidence in results.

Manufacturers perform process validation to achieve a high level of quality for product characteristics that cannot be measured on every product. The most likely candidates for validation are processes, such as sterilization, whose results can't be measured without rendering the product unusable.

An intuitive definition for process validation is "a set of steps that provide assurance that key manufacturing processes produce product the same way most of the time." As companies move mechanically through the steps for process validation, they may lose sight of the original motivation and of the intuition that lie behind the regulatory requirements. Some of the questions manufacturers may ask themselves at the outset include: Do processes that are operated anywhere within allowable ranges produce acceptable products? Are products produced near a specification limit just as good as products produced near a target value? Which processes associated with a product would pose a hazard to patients or users if they were to fail? Are measurements accurate? Are calibrations frequent enough? How many measurements should be taken?

Most of these questions carry statistical undertones. By applying the appropriate statistical technique to the process validation steps in FDA's regulatory framework, you will improve the quality of your products, increase your confidence in those products, and reduce overall product costs.

A FRAMEWORK FOR PROCESS VALIDATION

In the new quality system regulation (QSR), FDA defines process validation as "establishing by objective evidence that a process consistently produces a result or product meeting its predetermined specifications." In the section on process validation (820.75) of that same regulation, FDA states the requirements:

(a) Where the results of a process cannot be fully verified by subsequent inspection and test, the process shall be validated with a high degree of assurance and approved according to established procedures. . . .

(b) Each manufacturer shall establish and maintain procedures for monitoring and control of process parameters for validated processes. . . .

In the 1987 Guideline on the General Principles of Process Validation, FDA provides a methodological framework for process validation.1 FDA suggests that medical device manufacturers include some preliminary considerations and five formal elements in a prospective process validation. The five formal elements are an equipment installation qualification, a process performance qualification, a product performance qualification, a system to ensure timely revalidation, and documentation. Process validation can be considered the umbrella term for the successful completion of appropriate preliminary efforts and all five formal elements (Figure 1).

Preliminary Considerations. As might be expected, preliminary considerations encompass all those activities that should be completed prior to undertaking a formal process validation, including defining the product in terms of its performance characteristics, translating product characteristics into specifications, and considering the product's end use during the design phase. These activities are the same as those now required as part of the QSR requirements for design control and should be documented in a design history file (DHF). A DHF includes documents on design and development for both product and processes. The preliminary work completed early in a process validation is sometimes referred to as a prequalification, a prevalidation study, or simply a range-finding study.

Equipment Installation Qualification. The first formal element of process validation, equipment installation qualification, provides documented verification that process equipment is installed correctly, that measurement and monitoring equipment are calibrated, and that process equipment operates acceptably at use settings. Use settings should include worst-case setup conditions so that the full range of expected use is verified. Some manufacturers separate the equipment installation qualification into installation and operational qualifications (IQs and OQs), but both should be included in this step.

After completing an installation qualification, there is usually some pressure to immediately complete a process performance qualification. If the process is well understood and stable, the process performance qualification may go smoothly. More often than not, difficulties will arise. Even in the best process development programs, the transition to production can be rough. Often, a prequalification study may be required after the installation qualification and before the process performance qualification (Figure 1).

Figure 1. Typical flow of process validation. Illustration by Logicon RDA



Process Performance Qualification (PQ). The second element of process validation is a documented demonstration that a process is reproducible and effective. A typical PQ measures results for normal operating conditions as well as results when process parameters are set at extreme allowable operating limits or challenge conditions. Since most manufacturing procedures allow a number of process parameters to vary within particular operating windows, the difficulty for most manufacturers is to judiciously choose a reasonable set of extreme operating conditions for the qualification. During this step, a question also arises about how many lots or batches should be tested (i.e., sampled). This question should be answered with some statistical rationale; in the absence of that rationale, the commonly quoted "three batches" is simply a suggested minimum.

Product Performance Qualification. During this step, manufacturers must document that a product produced by the process in question operates as intended in an actual or simulated-use environment. Because this qualification is specified only for medical device manufacturers, it introduces some confusion when the pharmaceutically based IQ, OQ, and PQ acronyms are applied. (In the pharmaceutical industry, PQ is the process performance qualification. There is no common acronym that describes the product performance qualification.) A second source of confusion is that actual use tests are often performed on subassemblies or on devices composed of components produced by many processes. Therefore, the documentation may not fit neatly within one distinct set of process qualifications.

Most manufacturers avoid these areas of confusion by using complete names—instead of acronyms—for all qualifications and by assessing product performance separately from the process performance. The new QSR provides the potential for combining the product performance qualification with design validation.

A System to Ensure Timely Revalidation. In an ideal world where validated process parameters are monitored as required in the QSR and no changes are made to the process, there should be no need to revalidate. However, actions such as preventive maintenance, a change in raw materials or vendors, or a process change can trigger a need for revalidation. A typical approach is a validation procedure that addresses revalidation either as an independent action or as part of the general requirement for process validation. Some manufacturers assess the need for revalidation each time an engineering or production change occurs. The criteria a manufacturer chooses to ensure timely revalidation form the fourth element of process validation.

Documentation. As the old adage goes, if you did not document it, you did not do it. Typical process validation files include:

  • A process validation standard operating procedure (SOP) providing general guidance and responsibilities.

  • A process validation master plan that addresses more specific responsibilities, processes, priorities, and schedules.

  • Protocols for each qualification describing specific acceptance criteria, measurement and analysis procedures, and test procedures.

  • A report for each qualification or for the entire validation. Reports should contain the data analysis and conclusions.

APPLYING STATISTICAL TECHNIQUES

By applying the appropriate managerial and statistical techniques to each qualification in the above framework, you can be more confident in your results. The most critical preparation tasks are to identify the processes that must be validated, identify the process parameters that most influence measured outputs, and ensure that the measurement of outputs is repeatable and reproducible.

Determining Which Processes to Validate. Three general sources can help identify processes that must be validated. Regulations and guidelines are an obvious place to begin. In addition, common industry practices provide examples and set expectations, and risk assessment techniques point to processes where failures could cause eventual harm.

According to the QSR, companies must validate processes where the results "cannot be fully verified by subsequent inspection." The most obvious candidate processes are those where the measurement of results destroys the product. Sterilization is a classic example of a process requiring validation.

The training currently being provided to industry and FDA by the Association for the Advancement of Medical Instrumentation (AAMI) probably best summarizes current industry practices.2 The AAMI compendium lists a number of examples of typical processes to be validated. A few of those processes are lab test methods, filtration, filling, calibration, packaging operations, injection molding, wave soldering, plastic bonding, welding, and software-controlled processes. Examples are helpful to companies trying to balance regulatory and business concerns and to FDA investigators trying to create consistent enforcement among all districts.

A final method of identifying and prioritizing processes to be validated is through the assessment of risk. Risk assessment techniques such as hazard analysis examine potential hazards and then identify causes. If the source of the hazard can be traced to a manufacturing process, that process becomes an immediate candidate for validation. Design control requires risk assessments to be conducted as part of design and development. Once a process is slated for a validation, it should be characterized in enough detail to identify the parameters that significantly affect a measured output.

Identifying Influential Process Parameters. Process characterization defines the relationship between process parameters and measured outputs, including which parameters most directly influence outputs. If any key parameters are allowed to vary in the manufacturing process, they can be used to challenge the process during the process performance qualification.

The most efficient approach to characterizing a process is by applying design of experiments (DOE). DOE is both a philosophy and a statistical technique. As a philosophy, DOE is a systematic approach to setting up a minimum number of test cases by first examining extreme combinations of process parameter settings and increasing the number of tests only as needed (Figure 2). As a statistical technique, DOE-based protocols provide information on the effect each process parameter or interaction between process parameters has on measured results.



Figure 2. Tree diagram showing the possible combinations of two process parameters when they are configured at extreme settings for use in design of experiments.
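
As a rough illustration of the DOE philosophy just described, the short sketch below enumerates the extreme low/high combinations of a few process parameters into a full-factorial set of test cases. The parameter names and ranges are hypothetical, standing in for whatever parameters a characterization study actually identifies.

```python
# A minimal sketch of the DOE setup in Figure 2: enumerate the extreme
# (low, high) combinations of the process parameters as candidate test cases.
# Parameter names and ranges are illustrative assumptions.
from itertools import product

parameters = {
    "pressure_psi": (20, 25),    # (low, high) allowable settings
    "time_s": (30, 60),
    "pretemp_c": (150, 180),
}

names = list(parameters)
# Full factorial over the extreme settings: 2^3 = 8 candidate runs.
runs = [dict(zip(names, combo)) for combo in product(*parameters.values())]

for i, run in enumerate(runs, start=1):
    print(f"Run {i}: {run}")
```

Only as many of these runs as the characterization genuinely requires need to be executed; the point of the philosophy is to start from the extremes and add tests sparingly.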

Figure 3 shows a Pareto chart and a contour plot generated during the analysis of the results of an injection molding characterization DOE. The Pareto chart clearly separates the influential process parameters from the less influential. The contour plot maps the expected process output (interior lines) for various settings of two process parameters assigned to the horizontal and vertical axes. In this example the expected part shrinkage depends on the settings for time and pretemp. Of course, even the use of DOE assumes that it's possible to consistently measure process results.

Figure 3. The Pareto plot ranks the influence of each process parameter using bar lengths. The contour plot shows the mathematical relationship between time, pretemp, and a measured output. Different values for measured outputs are displayed as interior lines. The plot can identify settings for the two most important parameters that will achieve a selected output.
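
The ranking behind a Pareto chart such as the one in Figure 3 can be sketched by estimating each parameter's main effect from two-level factorial results and sorting by magnitude. The run data and parameter names below are illustrative assumptions, not the study data behind the figure.

```python
# A hedged sketch of main-effect estimation from a 2-level full factorial.
# Columns are coded settings (-1 = low, +1 = high) for time, pretemp, and
# pressure; shrinkage values are made up for illustration.
import numpy as np

levels = np.array([
    [-1, -1, -1], [+1, -1, -1], [-1, +1, -1], [+1, +1, -1],
    [-1, -1, +1], [+1, -1, +1], [-1, +1, +1], [+1, +1, +1],
])
shrinkage = np.array([2.1, 1.6, 2.4, 1.8, 2.0, 1.5, 2.3, 1.7])

factors = ["time", "pretemp", "pressure"]
effects = {}
for j, name in enumerate(factors):
    high = shrinkage[levels[:, j] == +1].mean()
    low = shrinkage[levels[:, j] == -1].mean()
    effects[name] = high - low   # change in output going from low to high

# Sort by absolute effect size -- the ordering a Pareto chart displays.
for name, eff in sorted(effects.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:10s} effect = {eff:+.2f}")
```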



Ensuring Repeatable, Reproducible Outputs. Repeatability and reproducibility studies examine the consistency of measured outputs. A result is repeatable if the same operator or instrument continues to obtain the same measurements from samples from the same lot. A result is reproducible if the same values are obtained when measuring samples from different lots or by different operators. Through repeated measurements, it is possible to establish the expected variability in results due to the measurement process. Those results can be quantified using statistical variance or standard deviation. By repeating the measurement using different instruments, operators, or lots, one can establish the variability in results that seems to be due to those differences.

Comparison of results is usually accomplished by examining statistical means and variances. The statistical t-test compares means directly, while analysis of variance (ANOVA) compares the variation between groups with the variation within groups. Both techniques seek to determine whether the spread of the average results is larger than the spread of individual results would explain. Figure 4 illustrates the measurements made by two operators, sometimes referred to as two "treatments." Both statistical tests assume that the operators should be making identical measurements. In this example, the t-test shows that the difference between the treatment averages is large and is a statistically rare event. There is only a 0.03% chance of finding a bigger difference, leading us to conclude with 99.97% confidence that the operators measure differently. The ANOVA reaches the same conclusion. The expected variability plays a big part in determining sample sizes for later qualifications.

Figure 4. Statistical tests for comparing means and variances. The t-test confirms that the differences in the averages of the two groups of data points (left) is large and statistically rare. The analysis of variance (ANOVA) draws the same conclusion by comparing the spread of averages of each group with the spread of the four data points in each group.
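
A comparison like the one in Figure 4 can be reproduced with standard statistical software. The hedged sketch below uses Python's scipy.stats with made-up readings for two operators, four measurements each; it is not the actual data behind the figure.

```python
# A minimal sketch of the two-operator comparison: t-test and one-way ANOVA.
# Measurement values are hypothetical.
from scipy import stats

operator_a = [10.2, 10.4, 10.3, 10.5]   # four repeat readings, operator A
operator_b = [10.9, 11.1, 11.0, 10.8]   # four repeat readings, operator B

# Two-sample t-test: is the difference between the averages larger than
# the scatter of the individual readings would explain?
t_stat, t_p = stats.ttest_ind(operator_a, operator_b)

# One-way ANOVA: compares the spread of the group averages with the spread
# within each group. For two groups, F equals t squared.
f_stat, f_p = stats.f_oneway(operator_a, operator_b)

print(f"t = {t_stat:.2f}, p = {t_p:.4f}")
print(f"F = {f_stat:.2f}, p = {f_p:.4f}")
# A small p-value (e.g., < 0.01) indicates the operators measure differently.
```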



Since a process validation is associated with a manufacturing process, there is an expectation that the validation will be performed on production equipment. If this is the case, the equipment installation qualification should be finished as early as possible so that a large part of the preliminary work can be completed on qualified equipment.

Applying Statistics to the Equipment Installation Qualification. During the equipment installation qualification, the philosophy of DOE may be applied to establish candidates for worst-case operating conditions for the equipment. From this candidate list, companies may decide to test one, some, or all extreme operating conditions. When possible, this qualification is performed without processing product. Once the installation qualification is complete, the process performance qualification is initiated.

Applying Statistics to the Process Performance Qualification. Most validators consider process qualification to be the meat of the validation and will refer to production runs performed as part of this qualification as "validation runs." We prefer to call them qualification runs, reserving the term validation as the umbrella for the set of qualifications associated with each particular process. Regardless of the term used to describe these runs, during this phase it must be proved that a process produces effective, reproducible results. Both effectiveness and reproducibility can be demonstrated using statistical techniques, which can continue to be used to monitor validated process parameters on a frequent basis, as required by the QSR.

Reproducibility is clearly a reflection of the process variability. Variability can be introduced during a process or while measuring the results of the process. Preliminary activities focused on process characterization and measurement repeatability and reproducibility are an attempt to identify and quantify those items that affect process results.

During the process performance qualification, the items that affect process results are varied within expected ranges to challenge the process. For example, if a manufacturing procedure allows operators to vary machine pressure between 20 and 25 psi and pressure is known to be a parameter that affects measured results, then product should be made at both pressure settings during this qualification. If more than one key parameter is known to affect measured results, the possible combinations of challenge conditions can be identified using DOE. Whether all of these combinations must be challenged is a matter of good engineering judgment. However, the number of samples that must be collected during each challenge is a statistical judgment.

Traditionally, sample size is based on three factors: the confidence desired in the results (e.g., 99%, 90%, and so forth), the variability currently observed in a particular measured result, and the greatest acceptable change in measured output. For most processes, a tolerance or a percentage of the specification range will satisfy the last factor. Samples from each challenge condition can then be compared using the t-test, ANOVA, or statistical process control (SPC) charting techniques. Used since the 1940s to monitor production results, SPC charts provide a very visual method for identifying samples that are outside of normal, expected variability. Figure 5 illustrates the effects that eight challenge conditions had on the average process output for three lots of product. Standard SPC charts provide lower and upper control limits that bound 99.74% of statistically expected results.
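
One common way to turn those three sample-size factors into a number is a normal approximation; the sketch below assumes illustrative values for the desired confidence, the observed standard deviation, and the largest acceptable shift in the output, and should be adapted to the statistical rationale documented in the protocol.

```python
# A hedged sketch of a sample-size calculation from the three factors above:
# desired confidence, observed variability, and largest acceptable shift.
# All numbers are illustrative assumptions; a normal approximation is used.
import math
from scipy import stats

confidence = 0.95     # desired confidence in the result
sigma = 0.8           # standard deviation observed in earlier runs
max_shift = 0.5       # greatest acceptable change in the measured output

# Two-sided normal quantile for the chosen confidence level.
z = stats.norm.ppf(1 - (1 - confidence) / 2)

# Samples needed so the estimate of the mean resolves a shift of max_shift.
n = math.ceil((z * sigma / max_shift) ** 2)
print(f"Approximate samples per challenge condition: {n}")
```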



Figure 5. This statistical process control chart demonstrates process reproducibility for eight challenge conditions on each of three product lots. The upper and lower control limits (UCL and LCL) act as statistical out-of-bounds markers for the plotted average results for each challenge condition.



Figure 6. Process capability (Cpk) blends statistical results with process specifications. A process with a Cpk of 2 (left) is more capable than one that has a Cpk of 1 (right).

If enough samples are available during validation, or once a process is validated and operating routinely, manufacturers can use process capability (Cpk) to provide a statistical measure of effectiveness. Cpk is a combined measure of how well process results are centered within a specification range and how well variability is controlled. Figure 6 shows two sets of process results; each set has the same average value, which is also the target value. When results vary greatly, a greater percentage of the results tend to fall outside of specifications, meaning the process is less capable, and the Cpk will be lower. Although FDA does not specifically require Cpk measurements, it does refer to the use of statistical measures in section 820.250, "Statistical Techniques":

(a) Where appropriate, each manufacturer shall establish and maintain procedures for identifying valid statistical techniques required for establishing, controlling, and verifying the acceptability of process capability and product characteristics.

Once the process performance has been qualified, SPC charts and Cpk can be used to monitor production continuously as required by the QSR. Products produced by a qualified process are ready to enter a product performance qualification.
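
As a simple illustration of that ongoing monitoring, the sketch below computes conventional 3-sigma control limits for individual measurements and a Cpk estimate from a set of routine readings. The specification limits and data are assumed for the example.

```python
# A minimal sketch of routine monitoring: 3-sigma control limits and Cpk
# computed from production measurements. Values and spec limits are
# hypothetical.
import numpy as np

samples = np.array([10.1, 9.9, 10.2, 10.0, 9.8, 10.1, 10.3, 9.9, 10.0, 10.2])
lsl, usl = 9.0, 11.0          # lower/upper specification limits (assumed)

mean = samples.mean()
std = samples.std(ddof=1)     # sample standard deviation

# Conventional 3-sigma limits bound about 99.74% of expected results.
ucl, lcl = mean + 3 * std, mean - 3 * std

# Cpk: how well the process is centered and controlled within the spec range.
cpk = min(usl - mean, mean - lsl) / (3 * std)

print(f"UCL = {ucl:.2f}, LCL = {lcl:.2f}, Cpk = {cpk:.2f}")
```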

Applying Statistics to the Product Performance Qualification. As discussed above, during product performance qualification a product must successfully perform as required in an actual or simulated-use environment. For some devices, a clinical evaluation or trial may be necessary; for others, lab or bench tests may be acceptable. Obviously, this qualification will vary substantially and depend on the environment in which the device will be operated and on the technology incorporated into the device. Although the ground rules for statistical techniques do not change, production volume and batch sizes will temper their application. With the advent of design controls, FDA field investigator interest in this qualification should increase.

The latest edition of the Medical Device Quality Systems Manual states: "Design validation can be conducted using finished products made during process validation studies and will satisfy the need for product performance qualification."3 For validators searching for time and cost efficiencies, a well-planned product performance qualification can thus satisfy both requirements.

CONCLUSION

Process validation is an umbrella phrase for the preliminary activities and five formal elements defined in the 1987 Guideline on the General Principles of Process Validation. The prevalidation activities and the formal product and process qualifications can be confidently accomplished by applying a few very effective statistical techniques.

Design of experiments, statistical tests, analysis of variance, statistical process control charts, and process capability measures are being used by an increasing number of medical product manufacturers to improve quality, reduce costs, and increase confidence in results. When you apply these techniques to your processes, you also can have reliable process validations.

REFERENCES

1. Guideline on the General Principles of Process Validation, Rockville, MD, FDA, Center for Drugs and Biologics and Center for Devices and Radiological Health, pp 5-9, May 1987.

2. The Quality System Compendium: GMP Requirements and Industry Practices, Arlington, VA, Association for the Advancement of Medical Instrumentation, pp 117-121, 1996.

3. Medical Device Quality Systems Manual: A Small Entity Compliance Guide, HHS Publication FDA 97-4179, Rockville, MD, FDA, p 7, December 1996.

Daniel L. Weese is department manager for commercial services at Logicon RDA (Colorado Springs, CO). Vera A. Buffaloe is a principal of Buffaloe Consulting (Arvada, CO).

Copyright ©1998 Medical Device & Diagnostic Industry
