


Improving Quality with Integrated Statistical Tools

Medical Device & Diagnostic Industry Magazine | MDDI Article Index

An MD&DI October 1996 Feature

When used appropriately, statistical tools can make a significant contribution to the improvement of quality and productivity in medical device manufacturing. In the design phases of a product's life cycle, for instance, such tools as risk analysis can be used to evaluate potential problems with a particular design approach, saving vast amounts of time and effort by eliminating faulty approaches from further consideration. In the preproduction phase, design of experiments (DOE) can be used to refine processing specifications, thus speeding the transition to full-scale production. And in the postmarket phases of a product's life, trend analysis can enable manufacturers to rapidly identify adverse events that must be reported to FDA and to determine appropriate corrective actions.
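As a simple illustration of the trend-analysis idea, the sketch below flags a month whose complaint count rises sharply above recent experience. The data and the three-standard-deviation rule are hypothetical choices for illustration only, not a validated reporting threshold:

```python
import statistics

def flag_trend(monthly_counts, window=6, sigma_limit=3.0):
    """Flag months whose complaint count exceeds the trailing mean
    by more than sigma_limit standard deviations.

    Illustrative rule only; an actual trend-analysis procedure
    should be validated for the product in question."""
    flags = []
    for i in range(window, len(monthly_counts)):
        history = monthly_counts[i - window:i]
        mean = statistics.mean(history)
        sd = statistics.stdev(history)
        if sd > 0 and monthly_counts[i] > mean + sigma_limit * sd:
            flags.append(i)
    return flags

# Stable complaint rate, then a sharp rise in the final month:
counts = [4, 5, 3, 4, 6, 5, 4, 5, 21]
print(flag_trend(counts))  # → [8], the index of the final month
```

A rule like this is only a screen; flagged periods still require engineering review to determine whether an adverse event is reportable.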

In recognition of this important role for statistical tools, their use is strongly recommended in both the ISO 9000 family of quality systems standards compiled by the International Organization for Standardization (ISO) and FDA's proposed revision of its good manufacturing practices (GMP) regulation.1,2 Nevertheless, very few device companies presently operate quality systems programs into which statistical tools have been integrated to achieve the greatest possible efficiency and productivity. In our experience, when the performance of biomedical companies is evaluated against the 20 elements of ISO 9000, it is in the section on statistical techniques that they consistently score lowest.

Where device companies do make use of statistical tools, the function is commonly limited to basic training in statistical process control (SPC) for manufacturing employees and to the use of control charts and inspection sampling programs in key manufacturing processes. Such limited strategies rarely result in widespread or effective use of statistical tools. To make the best use of such tools, the developers of quality systems should give greater attention to integrating them into those systems.

Unfortunately for those quality systems developers, very little literature is available to assist them in creating an effective statistical program. Subpart O (statistical techniques) of FDA's revised GMP regulation provides an outline for integrating statistical tools into the quality improvement system, but few details. ISO 9000 section 4.20 offers nearly the same advice, in the same depth. And although ISO is currently working to identify statistical tools that are appropriate for quality improvement, this effort has not yet resulted in a document that could be adopted as an ISO standard.3

The task for device manufacturers is thus a daunting one. And even after an ISO standard on the selection of appropriate statistical tools is issued, device manufacturers will still be faced with the work of integrating them into their quality systems, so that they can become widely and effectively used throughout the organization. This article describes the strategy and results of just such an effort conducted at nine facilities belonging to Medtronic, Inc. (Minneapolis), a manufacturer of medical devices worldwide. The effort to create a successful statistical program comprised three distinct elements:

  • Identification of appropriate statistical tools.
  • Development of procedures to assign responsibility for the implementation, control, and review of statistical tools.
  • Assessment of the quality system for use of good statistical practices.


Both the revised GMP regulation and ISO 9000 require that each manufacturer identify valid statistical tools for establishing, controlling, and verifying the characteristics of its products and the capabilities of its manufacturing processes. Although the process of identifying such tools can be helpful, the act of documenting the results is often more helpful.

The first step in identifying the tools that are to become part of the quality system is to compile a list of all candidate techniques and divide them according to the phases of the product life cycle (see Figure 1). Accomplishing this step will provide the manufacturer with a ready-made list of alternative techniques that can be applied to those phases whenever appropriate to answer specific questions.

Once the list has been compiled in the first step, the second step is to refine it, stage by stage, and determine how the company's quality systems manual will reflect the use of each statistical tool (see Table I). This step takes the manufacturer from the many possible techniques to the few that it intends to consider for application. Depending upon the type of data generated by a particular statistical tool, the rationale for requiring its use may include regulatory compliance, improvement of product design, refinement of manufacturing processes, or other reasons.

As part of this step, the manufacturer should also determine when it plans to apply a particular tool. In general, companies will gain the greatest benefits from powerful tools such as DOE if they are used as early in the product life cycle as possible. For such tools, then, pushing the use "upstream" should be a guiding principle. This principle is illustrated in Table I, where the design control stage includes the use of DOE, while corrective action consists mostly of the seven basic tools.
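The power of DOE early in the life cycle can be sketched with a minimal two-level full factorial design and main-effect estimates. The factor names and seal-strength responses below are hypothetical, chosen only to show the mechanics:

```python
from itertools import product

def full_factorial(factors):
    """Generate a two-level full factorial design.
    Each run maps a factor name to -1 (low) or +1 (high)."""
    names = list(factors)
    return [dict(zip(names, levels))
            for levels in product([-1, 1], repeat=len(names))]

def main_effect(runs, responses, factor):
    """Average response at the high level minus average at the low level."""
    high = [y for run, y in zip(runs, responses) if run[factor] == 1]
    low = [y for run, y in zip(runs, responses) if run[factor] == -1]
    return sum(high) / len(high) - sum(low) / len(low)

# Hypothetical 2x2 experiment on a heat-sealing process:
runs = full_factorial(["temperature", "dwell_time"])
seal_strength = [8.1, 9.0, 11.9, 13.2]  # one response per run, in run order
print(main_effect(runs, seal_strength, "temperature"))
print(main_effect(runs, seal_strength, "dwell_time"))
```

With just four runs, the experiment separates the contribution of each factor; a real study would add replication and interaction analysis.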

Figure 1. Statistical tools appropriate to the phases of a product life cycle.

One way to identify and assign applications to statistical tools is to survey the company's managers, engineers, and technicians to discover which tools they are already using or believe they should be using. For instance, DOE can be used to improve quality during several phases of the product life cycle, so company personnel may recommend that its use be required or considered at a number of points in the quality systems manual. Another useful way to identify the tools to be used is to review the results of previous internal and FDA audits. For example, FDA auditors often focus on a manufacturer's use of statistical tools for process validation, so it would make sense to include qualification and validation techniques as part of the design control and process control activities.

The process of considering which statistical tools to use can be reinforced by establishing procedures that remind company personnel of what tools to use and when to use them. As part of the design review process, for instance, the company's standard operating procedures could include a requirement that the review team consider the use of DOE or design failure mode and effects analysis (FMEA). Even if use of a particular tool is not a firm requirement, such procedures should ensure that employees will consider using it at appropriate times.


FDA's proposed revision of the GMP regulation and ISO 9000 both require that manufacturers develop and maintain procedures to establish, control, and verify the acceptability of product characteristics and process capability. A procedure for a statistical tool can accomplish any or all of the following:

  • Identification: ensuring that appropriate tools are considered for the application.
  • Implementation: ensuring that the tools are implemented at the right time and place.
  • Control: verifying that the tools are being applied appropriately.
  • Review: evaluating the use of statistical tools on a regular basis to be certain they are appropriate and are being used optimally.

Whenever statistical tools are being used to satisfy one or more of these purposes, quality systems developers should write procedures that define such use. Writing effective procedures is a key step toward integrating statistical tools into a quality improvement system. A procedure for use of a statistical tool should work to achieve the highest possible level of quality and productivity, encourage creative use of the tool, and add value to the process. A procedure that employees view as a burden that does not add value to their work will inevitably be disregarded.

The length and complexity of procedures can vary greatly, depending upon what is needed to describe the proper use of the tool in question. A procedure can consist of a short and simple statement, such as a single line in a process development checklist requiring that the process engineering team consider the use of DOE. A procedure may also be a lengthy and complex document, such as an acceptance sampling procedure that defines responsibilities for implementation, control, and review of sampling plans; includes a guideline for the rationale to be used in selecting the appropriate sampling plan; and provides work instructions for the operators who will use the procedures to perform the acceptance sampling.
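The statistical core of an acceptance sampling procedure like the one described above can be sketched by computing a plan's operating characteristic: the probability that a lot with a given defect rate is accepted. The single-sampling plan parameters below (n = 125, c = 3) are hypothetical:

```python
from math import comb

def accept_probability(n, c, p):
    """Probability that a lot with defect rate p is accepted by a
    single-sampling attributes plan: inspect n units, accept if
    at most c are defective (binomial model)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Sketch of an OC curve for a hypothetical n=125, c=3 plan:
for p in (0.005, 0.01, 0.02, 0.05):
    print(f"p={p:.3f}  P(accept)={accept_probability(125, 3, p):.3f}")
```

Tabulating these probabilities over a range of defect rates is exactly the rationale a written procedure can capture for why a particular plan was selected.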

Procedures may be written either as separate, stand-alone documents or as subsections of a larger protocol or process. In either case, it is important that the manufacturer keep track of the location of the procedure and be certain to place it under change control. This ensures that the proper version of the document is always available, and that superseded versions are withdrawn from use.

Stand-alone procedures are most useful when the statistical tool in question is an essential part of several processes and is used repetitively throughout the product life cycle. Such tools might include acceptance sampling, process capability studies, process qualification and validation studies, repeatability and reproducibility studies, and SPC. Including procedures for the use of a statistical tool within a larger process is a good choice when the tool is used only in conjunction with that process, or when the procedures may change from use to use. For example, it would be difficult to write a stand-alone procedure for DOE, because its implementation varies considerably each time it is used. Thus, the best way to integrate DOE into the process development phase might be to write a protocol specifying that part of the process development review will include consideration of DOE.

To write a good procedure for the use of statistical tools, quality systems developers should ensure that each of the following elements is addressed.

  • What tool is to be applied.
  • Who will apply the tool.
  • Where the tool is to be applied.
  • When (or how often) the tool is to be applied.
  • How the tool is to be applied.

The second element, who will apply the tool, is arguably the most important aspect of any procedure relating to statistical tools. The manufacturer should ensure that every procedure clearly assigns responsibility for every step that must be taken with regard to the use of the tool. For example, a procedure might need to specify who will be responsible for reviewing inspection stations used for inspection sampling, who is responsible for taking action when an out-of-control condition is indicated by a control chart (and what action they should take), or who is responsible for reviewing existing sampling plans and control charts to ensure that they are appropriate and are being used optimally.
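The out-of-control condition referred to above can be sketched with the usual 3-sigma rule for a control chart. The baseline center line, sigma, and fill-volume readings below are hypothetical:

```python
def out_of_control(points, center, sigma):
    """Return indices of points beyond the usual 3-sigma control limits.
    In practice, center and sigma come from a qualified baseline period."""
    ucl = center + 3 * sigma  # upper control limit
    lcl = center - 3 * sigma  # lower control limit
    return [i for i, x in enumerate(points) if x > ucl or x < lcl]

# Hypothetical subgroup means for a fill volume (ml),
# baseline center 10.0 and sigma 0.05:
readings = [10.02, 9.98, 10.01, 10.21, 9.99]
print(out_of_control(readings, center=10.0, sigma=0.05))  # → [3]
```

A procedure would then name the person responsible for responding to the flagged point and the action to be taken, which is precisely the assignment-of-responsibility question discussed above.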

In general, it is most effective to assign responsibility for statistical tools to the level of production that is closest to their actual use, and to train personnel accordingly. Assigning responsibility for all such uses to the quality assurance department (a practice that is all too often the first instinct of quality systems developers) usually proves unproductive.


A manufacturer's assessment of its use of statistical tools should consist of a formal, documented examination of current statistical practices and procedures, and an evaluation of future plans for improvement of the company's quality system. To be useful, this assessment should go beyond the compliance-oriented approach that is commonly seen in quality audits. Following are some key objectives that manufacturers should consider for their assessment of good statistical practices:

  • Determine the company's current state of compliance with the GMP regulation and ISO 9000.
  • Determine impediments to compliance with the GMP regulation and ISO 9000.
  • Raise awareness of the GMP regulation and ISO 9000.
  • Measure improvement over time.
  • Discover the best statistical practices in use throughout the company and share them with the rest of the company.
  • Provide advice on incorporating statistical tools into the quality improvement system.

To accomplish these objectives for their company, the authors developed a systematic approach to conducting a statistical practices assessment. The following sections describe the basic steps that were included in that approach, with some suggestions that others may use to implement such a program in their own companies.

Participate in Formal Company Audits. Formal audits of the company's quality system are required by both the GMP regulation and ISO 9000. By participating in these audits, and making the statistical assessment a part of them, assessors can help to alleviate undue stress that can result when a department has to undergo a separate evaluation. In addition, since audits and assessments consume valuable resources, combining the two enables the company to conserve resources and improve its efficiency.

Focus on Key Areas. For most medical device companies, the key production areas in which statistical tools are used are design control, process control, incoming inspection, testing and measurement, and corrective action. Sometimes practices in all of these areas cannot be assessed in a single visit; assessing process control and incoming inspection alone, for example, can take as long as three days. Thus, assessors should begin with the most important areas and let the others follow as time and resources permit.

Follow a Procedure. Adopting and following clearly defined procedures can help assessors to ensure that their assessments are consistent, even when they are carried out at a variety of sites and perhaps over a long time. Consistency is important if the company is attempting to compare the practices of several divisions or production areas, in part because it ensures that the company will have a clear understanding of the best practices available within it. Being consistent can also help to allay the natural anxiety that can arise when the practices of one department are being compared to those of another; an inconsistent assessor will soon find his or her results called into question from many sides. A written assessment procedure that spells out what areas are to be inspected and what questions will be asked can help to resolve these problems. Figure 2 shows samples of the authors' assessment forms related to process control.

Apply a Metric. Nothing gets management's attention like a meaningful metric, and nothing is quite so good at conveying the differences among various practices. By using a metric to indicate the sophistication or quality level of a department's statistical practices, assessors can make the job of comparing sites and communicating results much easier. Here again, consistency is a key element, so assessors should make sure to follow a procedure that will ensure regularity in their scores. Table II shows the authors' scorecard for assessing the use of statistical practices for process control.

Provide Advice and Assistance. Auditors almost never give advice. But during statistical assessment interviews, assessors should feel free to go beyond the normal limits of an audit and offer whatever observations and advice they can. Company personnel often misunderstand the appropriate and optimal use and application of statistical tools. Most interviewees thirst for this information, and assessors should give it to them whenever possible.

Document Results. Assessors should prepare a formal report of their work and give copies to the assessed departments. This will aid company personnel in understanding the important and often difficult points involved in applying statistical tools. The report can also be used to establish benchmarks against which future performance can be compared.

Identify Best Statistical Practices. As assessors gather information from a number of divisions or functional areas, they should take the opportunity to compile a record of the best practices they discover and share them throughout the company. Circulation of information about a company's best practices should not be limited by departmental boundaries; sometimes practices used in one department can be usefully adopted by others. A procedure for acceptance sampling, for instance, may work as well for in-process and final inspection as it does for incoming inspection. Our assessments have revealed some best practices that have been well worth sharing, including an excellent procedure for gage repeatability and reproducibility studies, a quality manual with a very good section on the identification of statistical tools, and the report of a project to identify appropriate inspection stations for acceptance sampling.


In most medical device companies, there is a need for greater integration of statistical tools into the quality system. Although some departments and facilities may argue that their existing procedures minimally comply with the GMP regulation and ISO 9000, and are therefore adequate, there is no doubt that a well-structured statistical program can improve quality and productivity and reduce costs beyond anything that might be obtained by a minimally compliant procedure.

Such improvements often come as a surprise to personnel in assessed departments. At one facility visited by the authors, staff were amazed to learn that use of a variables sampling plan could reduce sample sizes by an order of magnitude from previous levels. With such improvements in sight, the authors intend to continue conducting such assessments in conjunction with the company's series of ongoing internal audits.
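The kind of sample-size reduction described above can be illustrated by comparing an accept-on-zero attributes plan with a known-sigma variables plan, using standard textbook formulas. The AQL, LTPD, and risk levels below are hypothetical choices for the sketch:

```python
from math import ceil, log
from statistics import NormalDist

def attributes_n_c0(ltpd, beta):
    """Smallest n for an accept-on-zero (c = 0) attributes plan such that
    a lot at defect rate ltpd is accepted with probability at most beta:
    (1 - ltpd)^n <= beta."""
    return ceil(log(beta) / log(1 - ltpd))

def variables_n(aql, ltpd, alpha, beta):
    """Known-sigma variables plan sample size (standard formula):
    n = ((z_alpha + z_beta) / (z_aql - z_ltpd))^2,
    where z_p is the standard normal quantile at 1 - p."""
    z = NormalDist().inv_cdf
    n = ((z(1 - alpha) + z(1 - beta)) / (z(1 - aql) - z(1 - ltpd))) ** 2
    return ceil(n)

# Hypothetical requirements: AQL 0.4%, LTPD 2.5%, alpha 5%, beta 10%:
print(attributes_n_c0(0.025, 0.10))          # attributes sample size
print(variables_n(0.004, 0.025, 0.05, 0.10)) # far smaller variables sample size
```

Because a variables plan uses the measured values rather than a pass/fail count, it extracts more information per unit inspected, which is where the dramatic sample-size savings come from.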

The role of the statistician in developing the use of statistical tools includes determining which ones are applicable and which ones are best, developing easy instructions for use of the tool, and determining whether it is necessary and is being applied optimally. These are challenging tasks for statisticians, and they are even more difficult for nonstatisticians. In fact, the difficulty involved in writing procedures for the use of statistical tools is one of the main reasons they often don't exist where they are most needed. To help personnel cope with these challenges, assessors should consider providing templates of procedures that can be modified for use by all of a company's facilities.

In general, the inclusion of statisticians as members of an internal audit team is likely to be well received by the departments being assessed. With their training in GMPs, ISO standards, and quality auditing, these statistician-assessors can offer valuable advice of a sort that company auditors are usually unable to provide. On the other hand, statistician-assessors should not assume that they always know best. Each assessment tour should provide new examples of a company's best statistical practices, and assessors should learn as much as possible with every inspection. The assessment process is dynamic, requiring flexibility and an ability to adapt as well as consistency. Assessors should remember that their ultimate objective is continuous quality and productivity improvement, and that the application of statistical tools is but one means to that end.


1. "Working Draft of the Current Good Manufacturing Practice (CGMP) Final Rule," Rockville, MD, FDA, Center for Devices and Radiological Health, Office of Compliance, July 1995.

2. "Quality Systems—Model for Quality Assurance in Design, Development, Production, Installation and Servicing," ISO 9001:1994, Geneva, International Organization for Standardization, 1994.

3. Wadsworth HM, "Standards for Tools and Techniques," in Proceedings of the 48th Annual Quality Congress, Milwaukee, American Society for Quality Control, 1994.

John S. Kim is director of corporate statistical resources and Michael Larsen is senior statistician at Medtronic, Inc. (Minneapolis).

Copyright © 1996 Medical Device & Diagnostic Industry

FDA's New MDR Regulations: What Manufacturers Need to Know

Edward M. Basile and Elizabeth A. Schmidtlein




See Box 3: Mechanics of MDR Filing

In addition to the basic reporting requirements for manufacturers and device user facilities, the new regulations contain several administrative and procedural provisions. These are summarized in Table II.

Table II. Summary of other MDR requirements. (The requirement for designation of a U.S. agent has been stayed indefinitely.)

Requirement: Designation of U.S. agent
Applies to: Foreign manufacturers
Summary: Foreign manufacturers must designate an agent in the United States who will register and submit MDR reports, conduct or obtain information about investigations, forward reports to the manufacturer, and maintain complaint files on behalf of the manufacturer. The agent is subject to the same requirements as a manufacturer. (Requirement stayed.)
Changes from 11/26/91 proposal: Clearly delineated responsibilities of agent.

Requirement: Exemptions, variances, and alternative reporting
Applies to: User facilities and manufacturers
Summary: Exemptions, variances, or alternatives to any or all of the reporting requirements may be granted upon request or at the discretion of FDA.
Changes from 11/26/91 proposal: Added variances.

Requirement: Files
Applies to: User facilities and manufacturers
Summary: Records of complaints and MDR reports must be kept for two years or, for manufacturers, the expected life of the device, if longer.
Changes from 11/26/91 proposal: None.

Requirement: Written MDR procedures
Applies to: User facilities and manufacturers
Summary: Written procedures must be developed, maintained, and implemented for identification, evaluation, and timely submission of MDR reports, and compliance with recordkeeping requirements.
Changes from 11/26/91 proposal: Eliminated training and education programs for user facilities and manufacturers.

Written Procedures and Recordkeeping. The new MDR regulations require both manufacturers and user facilities to develop, maintain, and implement written procedures that set up systems (1) for the timely and effective identification, communication, and evaluation of adverse device events; (2) for a standardized review process to evaluate the reportability of an event; and (3) for the timely filing of reports with FDA.39

The regulations also specifically require written procedures for documentation and recordkeeping. These procedures must include requirements for documenting and retaining (1) any information evaluated to determine if an event is reportable, (2) all MDRs and information submitted to FDA and to manufacturers, and (3) any information evaluated for the preparation of semiannual reports or certification. In addition, the procedures must ensure access to information that facilitates timely follow-up and inspection by FDA. Obviously, all of these procedures will be reviewed as part of future FDA inspections.

Manufacturers and user facilities are also required to establish and maintain prominently identified MDR event files in written or electronic form that contain information and documentation relating to adverse events. According to FDA's draft guidance document, manufacturers' MDR event files should contain the following (or references to where such items may be located):

  • A copy of the initial complaint record containing the reportable information.
  • Documentation of the entity's attempts to follow up and obtain additional information about the event.
  • When information cannot be obtained and submitted, an explanation of why it cannot.
  • Copies of any relevant test and lab reports, service reports, and reports of investigations.
  • Documentation related to the deliberations and decision-making processes used to determine the reportability of the event.
  • Documentation of the final assessment of the event and any corrective action taken.
  • Copies of MDR reports and other information submitted to FDA, distributors, or manufacturers.40

User facilities must maintain MDR event files for two years from the date of the event, while manufacturers are required to maintain these files for either two years from the date of the event or the expected life of the device, whichever is greater. FDA has defined expected life of a device as "the time that a device is expected to remain functional after it is placed in use." How to calculate the expected life of a device has not been defined, but presumably companies will do so based on their experience marketing the device. In FDA's guidance on manufacturer reporting, the agency stated that there are some categories of devices that will have an indefinite expected life, since their end of life cannot be estimated. Manufacturers of devices with an indefinite expected life may want to request a variance from the record retention requirement to avoid having to retain report files for these devices indefinitely.

Requests for Additional Information. Under the new regulations, FDA may request additional or clarifying information if it determines that such information is necessary to protect public health.41 The request must clearly relate to a reported event, state the reason or purpose for which the information is being requested, and include the date by which the information is to be submitted.

HIMA criticized this requirement as being vague, lacking adequate guidance about the conditions under which additional information might be required, and providing no assurance that entities will be provided sufficient time to comply. HIMA suggested that, at a minimum, the regulation should be amended to make clear that the agency will allow a company a reasonable amount of time to comply with the request and that the company will not be considered to be in violation of the regulations if it is unable to obtain the requested information after making a reasonable attempt to do so. These suggestions, however, were rejected.

Public Availability of Reports. Mandatory reports submitted to FDA and records of telephone reports are subject to public disclosure via Freedom of Information (FOI) requests. Before releasing the reports, however, the agency is required to delete certain information, including confidential commercial or financial information, personal or medical information (including serial numbers of implanted devices) that might constitute an invasion of privacy, and names or other identifying information of third parties that voluntarily submit an adverse event report. Under certain conditions, the new regulations also prohibit FDA from disclosing the identity of a device user facility that has filed an MDR report.42

Computerized Forms and Electronic Reporting. FDA is encouraging companies to computerize the forms required for MDR reporting, but firms cannot make major changes to the appearance and format of the forms and must receive written approval from FDA prior to using them. According to FDA's guidance document on manufacturer reporting, the Center for Devices and Radiological Health is also encouraging companies to submit MDR reports electronically, subject to prior written consent, and is now developing an Electronic Data Interchange protocol to facilitate such reporting. Electronic reports include disks, magnetic tape, and computer-to-computer transmissions.

Enforcement. Noncompliance with the MDR regulations is prohibited under the FD&C Act and may result in a variety of FDA enforcement actions ranging from warning letters to injunction proceedings, civil penalties, and criminal penalties.

Effect on Product Liability. For user facilities, 21 USC 360i(b)(3) states that no report by device user facilities, their employees or individuals officially affiliated with them, or physicians "shall be admissible into evidence or otherwise used in civil action involving private parties unless the facility making the report . . . had knowledge of the falsity of the information contained in the report." There is no such statutory protection for manufacturers or distributors. However, the MedWatch form contains a general disclaimer statement, and the regulations permit a reporting entity to submit its own disclaimer denying that the report or information constitutes any type of admission that the device or reporting entity or the entity's employees caused or contributed to the event reported.

As a practical matter, manufacturers should be aware that violations of the FD&C Act can have significant product liability implications. When a manufacturer violates the statute or an implementing regulation, a court can find, without any further evidence, that the manufacturer was negligent in the context of a product liability case.43 Indeed, a failure to file a required MDR report may have product liability implications even when FDA does not take regulatory action.44


The new MDR regulations contain several substantial new requirements for medical device manufacturers and will add considerably to every manufacturer's cost of complying with MDR reporting. They require manufacturers to provide FDA with a greater quantity of information, and the required information is of considerably greater specificity. There is likely to be substantially more paperwork, as well as additional work in investigating and analyzing whether particular events are reportable.

The benefits to be derived from providing all this information to FDA are dubious at best. When Congress enacted SMDA, it felt that additional postmarket surveillance of products was necessary to balance what the 1990 Congress perceived as SMDA's relaxed premarket review requirements. In fact, premarket review under SMDA has become more rigorous and thus the need for additional postmarket surveillance is questionable. After some experience with the new MDR regulations, it would be prudent for Congress to revisit the MDR requirements to see whether the public health has really benefited. If evidence of public health benefits is lacking, the regulations should then be scaled back substantially.

In the meantime, however, manufacturers will have to cope with an array of increasingly complex regulatory requirements that, like all of FDA's regulations, are likely to be strictly enforced. Indeed, manufacturers should expect to find MDR event files and MDR procedures at the top of FDA investigators' lists of documents to review during routine inspections in the months to come.


1. Federal Register, 60 FR:63578 (Final Rule, Medical Device User Facility and Manufacturer Reporting).

2. However, in a separate rule published in the Federal Register last July, FDA revoked the distributor reporting certification requirement that went into effect on May 28, 1992. See 61 FR:38346 (Final Rule, Medical Device Distributor and Manufacturer Reporting, Stay of Effective Date and Revocation of Final Rule on Certification).

3. 49 FR: 36326 (Final Rule, Medical Device Reporting).

4. Federal Food, Drug, and Cosmetic Act (FD&C Act), sect. 519, 21 USC 360i, as amended by the Medical Device Amendments of 1976, Pub. L. 94-295, 90 Stat. 539, 1976.

5. FD&C Act, sect. 519, as amended by the Safe Medical Devices Act of 1990 (SMDA), Pub. L. 101-629, 104 Stat. 4511, 1990.

6. FD&C Act, sect. 519(a)(1) and (2), as amended by the Medical Device Amendments of 1992 (MDA of 1992), Pub. L. 102-300, 106 Stat. 238, 1992.

7. FD&C Act., sect. 519(a)(3), as amended by MDA of 1992.

8. 56 FR: 60024 (Tentative Final Rule, Medical Device User Facility, Distributor, and Manufacturer Reporting).

9. 21 CFR 807.20.

10. See 61 FR: 38348–38349 (Proposed Rule, Medical Device Reporting, U.S. Designated Agents).

11. 21 CFR 803.50.

12. 21 CFR 803.3(c).

13. 60 FR: 63583.

14. Medical Device Reporting for Manufacturers (draft guidance document), Rockville, MD, FDA, Center for Devices and Radiological Health, May 1996, at 19–20 [hereinafter Medical Device Reporting for Manufacturers].

15. 21 CFR 803.20(c).

16. See 21 CFR 820.162 and 820.198.

17. See 21 CFR 803.22.

18. 60 FR: 63583.

19. Medical Device Reporting for Manufacturers at 51.

20. Medical Device Reporting for Manufacturers at 11.

21. 21 CFR 803.3(d).

22. See 60 FR: 63582.

23. 21 CFR 803.3(aa).

24. 21 CFR 803.3(m).

25. 21 CFR 803.50(a)(2).

26. See Medical Device Reporting for Manufacturers at 13.

27. 21 CFR 803.3(c) and 803.53(a).

28. 21 CFR 803.3(y).

29. See Medical Device Reporting for Manufacturers at 20.

30. 21 CFR 803.53.

31. 21 CFR 803.50(b).

32. 21 CFR 803.56.

33. 60 FR: 63591.

34. 61 FR: 38346.

35. 61 FR: 38348.

36. 61 FR: 39868.

37. 21 CFR 803.3(e).

38. 21 CFR 803.58.

39. 21 CFR 803.17.

40. See Medical Device Reporting for Manufacturers at 38–39.

41. 21 CFR 803.15.

42. 21 CFR 803.9(c); see also FD&C Act, sect. 519(b)(2).

43. See Toole v. Richardson-Merrell, 60 Cal. Rptr. 398, 1967.

44. See Stanton v. Astra Pharmaceutical Products, 718 F.2d 553, 3d Cir. 1983.

Copyright© 1996 Medical Device & Diagnostic Industry

Selecting Materials for Medical Products: From PVC to Metallocene Polyolefins

Medical Device & Diagnostic Industry Magazine
MDDI Article Index

Originally published October 1996

Sherwin Shang and Lecon Woo

Polyvinyl chloride (PVC) and polyolefins are among the most popular polymers used in medical applications. In 1996, PVC is projected to make up around 27% (750 million pounds) of the total medical plastic volume consumed in the United States (Figure 1). Another 36% of that total will be made up of polyolefins, including high-density polyethylene (HDPE) at about 12%, low-density polyethylene (LDPE) at about 7%, and polypropylene (PP) at about 17% (Figure 2). However, most of the dollars spent for medical plastics go for minor components made from specialty polymers and engineering plastics. PETG, for example, is the predominant material used for rigid trays, while polyester and nylon are commonly used in medical films and packaging.

PVC is a versatile plastic that can satisfy a wide range of product function, safety, performance, and cost criteria. Plasticized PVC has been widely accepted for use in flexible medical products, and many products made from it have passed critical toxicological, biological, and physiological testing. Nevertheless, because of its connection with toxic by-products of processing and postuse incineration, PVC continues to receive increasing criticism.

As a prospective replacement for PVC, the family of polymers known as metallocene polyolefins has shown great potential. Metallocene polyolefins can deliver many of the same material properties and functions as plasticized PVC. When considering the use of components made from metallocene polyolefins, however, medical producers will also need to assess their suitability from the viewpoints of design, processing, and product performance.

This article discusses the advantages and disadvantages of PVC and metallocene polyolefins for use in flexible medical products. The challenges that metallocene polyolefins must meet in order to succeed in today's medical industry are also explored.


As previously mentioned, the factors that govern the development of medical products can be categorized into four distinct areas: material selection, design, processing, and product performance. The detailed requirements of these four areas are listed in Table I on page 134.

These considerations can greatly restrict a manufacturer's choices for developing a product. In the area of material selection, for instance, firms must consider design flexibility, cost-effectiveness, and finished-product safety, quality, and performance.

On the processing side, manufacturers need to consider yield potential as well as a material's ability to be extruded, molded, bonded, sealed, assembled, and sterilized. Additional concerns include water-vapor transmission and barrier properties, oxygen and carbon dioxide permeability, leachables, processing window, operating-temperature range, biocompatibility, heavy-metal content, regrind percentage, shelf life, and interference with drugs and solutions.

Selecting Materials. In order of priority, the material selection process for medical products emphasizes safety, performance, and cost. Safety issues are dominated by concerns about possible interactions between plastics and drugs, proteins, blood, biological cells, and medical solutions. Cost has gradually taken on greater importance in recent years because of the cost-reduction pressures related to health-care reform.

The process of selecting suitable materials for medical products begins with the creation of a precise and accurate definition of the product's material and functional requirements (see Table II on page 136). For example, finding the right polymer for an enteral-fluid package or blood container requires simultaneous consideration of design, processing, and performance needs.1 Other critical factors considered at the material selection stage include biocompatibility, leachability, drug-plastic interaction, oxygen and moisture barrier protection, optical clarity, ultraviolet (UV) stability, shelf life, the end-use environment, and total system costs. In addition, designers must consider the demands of downstream operations such as component bonding, assembly, sterilization, shipping, storage, and postuse disposal.

Recent progress in metallocene technology, including the ability to produce inexpensive metallocene catalysts, has led to the development of cheaper metallocene-based polyolefin materials. Metallocene polyolefins have the potential to achieve much better performance than existing polyethylene (PE) and PP formulations. Because they have properties similar to many specialty polymers and engineering plastics, metallocene polyolefins have the potential to replace PVC and some expensive engineering plastics, particularly for medical products requiring high impact strength and ductility at low temperatures. As a result, this polyolefin family is showing great potential for use in the medical and health-care product industries.

To be useful in today's medical products, however, metallocene-based polyolefins must also fulfill product design, processing, and performance criteria simultaneously. To meet these criteria with high product quality at the lowest possible cost will require a broad spectrum of material characteristics and processing capabilities.

Material Performance Versus Product Performance. A polymeric material can be processed in many ways to achieve a desired set of functional characteristics. For this reason, the characteristics of the base material may not be reflected in the performance of the finished product. In reality, that performance reflects the combined influence of material, design, and processing.

For example, a plastic film with a low glass-transition temperature (Tg) absorbs impact energy better than a film with a high Tg. Provided it is designed and processed properly, a finished product made with a low-Tg film can therefore be expected to have better cryogenic impact resistance than one made with a high-Tg film.

This example indicates that, in addition to a product's material, its design and processing can affect the performance of a finished medical product. Accordingly, it is critical to consider material, design, process, and performance simultaneously at every phase of product development and production.


Device manufacturers that wish to consider using metallocene polyolefins in their products or packaging will need to look at a wide range of characteristics. While the major traits of PVC have been established through a long history of use, those related to metallocenes are still emerging as the technologies develop and improve. Following are some of the key advantages and disadvantages of each, as their respective technologies now stand.

Advantages of PVC. PVC can be used to produce a variety of medical products ranging from rigid components to flexible sheeting. The type and amount of plasticizer used determine the compound's Tg, which in turn defines its flexibility and low-temperature characteristics and thereby establishes its range of suitable applications.

Because rigid and flexible PVC components have the same material structure, they can be easily assembled by solvent bonding. The two solvents most commonly used in PVC bonding are cyclohexane and methyl ethyl ketone (MEK). Rigid parts that have been molded of PVC are suitable for ultrasonic bonding, while flexible extruded or calendered PVC films can be sealed using heat or radio-frequency (RF) sealing.

Medical products made from PVC can be sterilized by steam, ethylene oxide, or gamma radiation. Plasticized PVC can have a Tg as low as –40°C and still be suitable for steam sterilization at 121°C. Additional characteristics that make PVC attractive include its low cost, broad Tg spectrum, wide processing-temperature range, high seal strength, thermoplastic elastomer–like material properties, high transparency, wide range of gas permeability, and biocompatibility. Medical products made from PVC have passed critical toxicological, biological, and physiological testing. In sum, PVC is one of the best medical materials in terms of cost and function. No other single material has such broad material latitude.

Disadvantages of PVC. Even though many medical products have been made from PVC, the material continues to receive criticism.2 The most commonly cited shortcomings involve toxic effluents produced during manufacture, and the generation of hydrogen chloride (HCl) during incineration. Because HCl is a component of acid rain, postuse disposal costs for incinerating PVC can be quite high. Other concerns related to PVC depend largely on the type and amount of plasticizers used. For some PVC compounds, there is evidence of plasticizer leaching to medical solutions, chemical interaction with drugs, water-vapor loss during long-term storage of medical solutions, and gas permeability.

Although these disadvantages sound serious, most can be eliminated or managed using existing technologies. For instance, current PVC manufacturing techniques can reduce residual vinyl chloride monomer levels to less than 1 ppm, thus minimizing the toxic effects of the compound. Similarly, modern emission-scrubbing equipment can adequately prevent releases of HCl and other effluents during incineration disposal.

With regard to the leaching of the plasticizer DEHP, however, expert opinion remains divided. California's Safe Drinking Water and Toxic Enforcement Act of 1986 raised concerns about the toxicity of DEHP. But a long-term hemodialysis study that covered more than 7 billion patient-days of exposure resulted in no widely accepted data linking DEHP exposure to carcinogenicity in human beings.3

Advantages of Metallocenes. The revolution in polyolefin materials spurred by new metallocene-catalyst technologies has created a great opportunity for medical and health-care industries. High yield, high clarity, high impact resistance, and low extractables are just a few useful characteristics of this plasticizer-free polyolefin family.

Metallocene PP is one group of compounds for which research has shown great potential. Unprecedented control over the microstructure of PP has led to commercial production of syndiotactic PP (s-PP); material scientists are also exploring new elastomeric PP using oscillating catalysts.4–6 The material properties of these two new PPs are similar to those of thermoplastic elastomers (TPEs), particularly the oscillating catalyst compound, which requires only a propylene monomer. This is different from current commercial PP elastomers that are based on the monomer C3 but have C2 and C4 as comonomers.

Metallocene PE (m-PE), on the other hand, has been targeted for use as a film in the medical packaging industry.7,8 Enhanced clarity and reductions in both initial seal temperature and crystallinity certainly create many advantages for the packaging industry. Metallocene PE is also expanding into packaging applications traditionally dominated by ethylene-propylene-diene monomer and ethylene-propylene rubber.

There are a number of other characteristics that make metallocene-based polyolefins attractive for use in medical packaging. Most important, the TPE-like materials are chemically inert and do not interact with drugs. Their narrow molecular-weight distribution (MWD) results in low leaching and extractable levels, and their high thermal stability minimizes the need for stabilizers. The materials accommodate gamma radiation, and impact-resistant s-PP film tolerates steam sterilization. Lastly, the compounds are environmentally sound and can be cleanly incinerated or recycled, thereby reducing disposal costs.

Potential Metallocene Disadvantages. The current formulations of metallocenes have a number of disadvantages that researchers may in time overcome. For instance, concern over metal residues makes researchers' efforts to reduce the use of the cocatalyst methylaluminoxane a matter of some urgency. The usefulness of some formulations may also be limited by processing concerns: because of its narrow MWD and long crystallization half-time, for instance, s-PP is difficult to process.

Sealing presents another difficulty. Metallocene polyolefins are suitable for heat sealing, but not for solvent bonding or RF sealing, which are required steps in assembling medical device kits. Similarly, metallocene PE cannot be autoclaved because of its low melting point (Tm). Finally, metallocene technology must still confront the key challenge of cost reduction to meet market constraints.


Although their potential is great, metallocene polyolefins will have to overcome a number of challenges before they gain wide acceptance in the medical and health-care industries. These include concerns in the areas of product safety and resin quality, product design and processing, and product performance.

Safety and Quality. Although metallocene polyolefins can supply many desirable properties and carry out many functions, at present their safety standing is not fully understood because there are not enough historical data. Establishing biocompatibility is perhaps the most important challenge ahead.

Similarly, medical product manufacturers cannot deliver high-quality products without the lot-to-lot consistency provided by resin suppliers. Metallocene resin quality, however, will not be established overnight; it will take time and teamwork to learn, fine-tune, and troubleshoot the production lines. Additives and stabilizers different from those used with traditional polyolefins may be required to formulate metallocene medical products that achieve the desired characteristics. Not every commercial additive and stabilizer is suitable for medical applications. Metallocene engineers will therefore have to develop suitable additive and stabilizer packages while avoiding toxicity.

Product Design and Processing. In order to meet the design and processing requirements of medical product manufacturers, researchers will need to develop metallocene-based polymers that have suitable material properties and fit into existing production systems. To develop a new metallocene part or to replace a PVC component with one made of metallocene, for example, designers will need to address the material's compatibility with existing bonding, assembly, and sterilization techniques. Challenges for the future will include how to overcome the limitations of metallocene material properties, how to provide the latitude needed to meet various product design and processing constraints, and how to integrate metallocene materials into an existing pack or kit.

For example, medical products such as blood-collection units and solution bags require clarity, high cryogenic impact strength, and autoclavability.9,10 Metallocene s-PP is known to have a low modulus and high impact strength at room temperature; however, it has a Tg of –5°C (compared to –40°C for plasticized PVC) and is therefore not a suitable replacement for low-temperature applications (see Table III). Fortunately, an impact modifier used with metallocene PE can enhance its cryogenic impact resistance. As shown in Table III, the low beta peak of m-PE indicates that this formulation can achieve low ductile-brittle transition temperatures. However, the Tm of m-PE is too low to permit steam sterilization.

Metallocene PP and PE by themselves may not meet the bonding and sealing requirements of medical products. Unlike PVC, the molecular structures of metallocene polyolefins have no dipole, and for this reason cannot be sealed using RF. This difficulty can be overcome by enhancing copolymerization and reactor technology or by blending metallocenes with other polyolefins or specialty polymers. To make metallocenes competitive with PVC for RF sealing, metallocene researchers must find a way to copolymerize polar functional groups and introduce dipoles into the molecular structures of PE and PP.

Product Performance. Safety, quality, integrity, functionality, and cost are the key factors that determine a medical product's success in a competitive market. A high-quality product at a low cost is the goal of medical device manufacturers throughout the world. To achieve and maintain this goal, manufacturers working with metallocene-based polyolefins must take care to prevent material–drug interactions, maintain strict quality control and product integrity, guard against changes in dose concentration, and keep extraneous costs to a minimum.

Product cost is a function of total system costs, which include costs related to materials, processing, assembly, sealing, scrap, sterilization, QA/QC, packaging, shipping, storage, shelf life, and the end-use environment. The key to making metallocene polyolefins competitive in the medical market may be not the cost of the materials themselves, but the total system costs of products made with them.
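The total-system-cost idea can be made concrete with a rough back-of-the-envelope model. The sketch below compares two hypothetical materials; every category name and figure is invented for illustration and is not taken from the article or from real resin pricing.

```python
# Illustrative total-system-cost comparison. All figures are hypothetical,
# chosen only to show how a resin that is cheaper per pound can still lose
# on a total-system basis once downstream costs are counted.

COST_CATEGORIES = [
    "material", "processing", "assembly_sealing", "scrap",
    "sterilization", "qa_qc", "packaging", "disposal",
]

def total_system_cost(costs):
    """Sum per-unit costs across all categories (missing ones count as 0)."""
    return sum(costs.get(c, 0.0) for c in COST_CATEGORIES)

# Hypothetical per-unit costs in arbitrary currency units. Here the cheaper
# resin carries a higher disposal cost (e.g., incineration with scrubbing),
# while the costlier resin incinerates cleanly.
material_a = {"material": 0.40, "processing": 0.20, "assembly_sealing": 0.10,
              "scrap": 0.05, "sterilization": 0.10, "qa_qc": 0.05,
              "packaging": 0.05, "disposal": 0.30}
material_b = {"material": 0.50, "processing": 0.20, "assembly_sealing": 0.10,
              "scrap": 0.05, "sterilization": 0.10, "qa_qc": 0.05,
              "packaging": 0.05, "disposal": 0.02}

print(f"Material A total: {total_system_cost(material_a):.2f}")  # 1.25
print(f"Material B total: {total_system_cost(material_b):.2f}")  # 1.07
```

On these invented numbers the costlier resin wins on a system basis; with different assumptions the ranking could easily reverse, which is exactly why the comparison must be made on total system costs rather than resin price alone.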


Progress in metallocene polyolefin technology has encouraged the medical device industry to take advantage of these newly developed TPE-like materials. But before deciding to use metallocene polyolefins in their medical products, manufacturers will need to simultaneously consider the material, design, processing, and product performance characteristics of the compounds as they relate to every phase of product development and production.

Metallocene polyolefins have a great potential to replace existing PVC formulations and traditional engineering plastics in medical applications. The ability to achieve this goal will depend on cooperation among medical product manufacturers and metallocene resin suppliers.


1. Carmen R, "The Selection of Plastic Materials for Blood Bags," Transfusion Med Rev, 7(1):1, 1993.

2. Goodman D, "Global Markets for Chlorine and PVC: Potential Impacts of Greenpeace Attacks," J Vinyl Technol, 16(3):156, 1994.

3. Finney DC, and David RM, "The Carcinogenic Potential of DEHP in Humans: A Review of the Literature," Med Plast Biomat, 2(1):48, 1994.

4. Shamshoum ES, Sun L, Reddy BR, et al., "Properties and Applications of Low Density Syndiotactic Polypropylene," in Proceedings of the Worldwide Metallocene Conference, Metcon '94, Spring House, PA, Catalyst Consultants, p 30, 1994.

5. Shamshoum ES, "Syndiotactic Polypropylene Catalyst: Properties and Possible Applications," in Proceedings of the Second International Business Forum of Specialty Polyolefins, SPO '92, Brookfield, CT, Society of Plastics Engineers, p 199, 1992.

6. Borman S, "Elastomeric Polypropylene: Oscillating Catalyst Control Microstructure," C&EN, January 16, p 6, 1995.

7. McAlpin JJ, and Stahl GA, "Applications Potential of Exxpol Metallocene-Based Polypropylene," in Proceedings of the Worldwide Metallocene Conference, Metcon '94, Spring House, PA, Catalyst Consultants, p 7, 1994.

8. Knight GW, and Lai S, "Constrained Geometry Catalyst Technology: New Rules for Ethylene Alpha-Olefin Interpolymers—Unique Structure and Property Relationships," Polyolefins, vol VIII, Brookfield, CT, Society of Plastics Engineers, p 226, 1993.

9. Shang SW, "What Makes Clear Polypropylene Discolor?" Med Plast Biomat, 2(4):16, 1995.

10. Woo L, and Ling MTK, "Cryogenic Impact Properties of Medical Packaging Films," SPE ANTEC '90, p 1116, 1990.

Sherwin Shang is program manager in the Biotech Group, Fenwal Div., and Lecon Woo is the Baxter distinguished scientist at the Medical Materials Technical Center, of Baxter Healthcare Corp. (Round Lake, IL).

Figure 1. Projected U.S. consumption of all medical plastics in 1996. Source: The Freedonia Group

Figure 2. Projected U.S. consumption of medical polyolefins in 1996. Source: The Freedonia Group

Table I. Medical product development considerations.
Material Selection Meet requirements of safety, design, processing, and performance.
Material compatibility with other components used in the same pack.
Drug and solution contact.
Biocompatibility and chemical inertness.
Leachables and oligomer residues.
Material aging, particularly after sterilization.
Additive chemicals and catalyst residue.
Lot-to-lot consistency from resin supplier.
Environmental friendliness.
Technical service from supplier.
Design Flexibility for medical product design.
Easy assembly.
No built-in residual stress in plastic components.
Bonding/assembly capability among product components.
Easy quality control by visual inspection or instrumental sensor.
Processing Extrusion/molding/thermoforming capability.
Large-scale manufacturability.
High production output rate.
Wide processing operation window.
Compatibility with the plant's existing manufacturing systems.
Assembly technology.
Sterilization methods.
Product Performance Safety and quality.
Cost/performance ratio.
Function orientation.
Market competition.
Customer satisfaction.
Cosmetic appearance.

Table II. Medical products and packaging, with corresponding materials requirements.
Medical Product Product Requirements
Intravenous solution pack Flexible moisture barrier film.
No interaction with medical solution.
Blood-collection units, containers, and packs Breathable film for platelets.
Low-temperature ductile impact film for plasma.
Peritoneal dialysis solution pack Flexible moisture barrier film.
No interaction with medical solution.
Disposable stem cell container Biocompatibility sufficient for incubating cells.
Breathable film for cells to grow and multiply.
Biohazard bag Puncture resistance.
High drop impact strength and autoclavable.
Pharmaceutical blister pack, bottle, and container Barrier for moisture, oxygen, and carbon dioxide.

Table III. Material characteristics of metallocene polyolefins.
Material | Tg (°C) | Tm (°C) | Beta Peak (°C) | Crystallinity
s-PP-1 | –5 | 120–130 | — | 21%
s-PP-2 | –5 | 146–151 | — | 29%
i-PP | –5 | 135 | — | —
C8 m-PE | — | 55–121 | –34 | 13–55%
C4 m-PE | — | 72 | –34 | 21%
C4 m-PE | — | 70 | –33 | 20%

Copyright© 1996 Medical Device & Diagnostic Industry

Coming Soon to a Monitor Near You: A New Information Source

Medical Device & Diagnostic Industry Magazine
MDDI Article Index

Originally published October 1996

MD&DI's longtime contributing editor Michael Wiklund jokingly calls the World Wide Web the World Wide Waste of Time. Anyone who's done much web surfing knows he has a point. But experienced surfers also know that the value and accessibility of the information on the web are substantial, and growing every day.

For members of the medical device industry, this truth will be demonstrated with a splash on October 15, when Medical Device Link (MDL) makes its Internet debut.

Users familiar with our previous site will notice some significant differences. That site, like many first-time Internet efforts, was designed to get a simple on-line version of our print information onto the web. MDL goes beyond this approach, having been designed from the ground up as a completely new information tool, one that takes full advantage of this exciting medium.

Although you'll be able to find all of MD&DI's articles from current issues and archives going back, eventually, to 1993, there will be many more resources on MDL. Also available will be articles from MD&DI's sister publications IVD Technology, Medical Plastics and Biomaterials, and Medical Electronics Manufacturing, along with featured products and services from Medical Product Manufacturing News and European Medical Device Manufacturer. In addition, you'll read news and original articles unique to MDL.

To help you find suppliers of products and services, MDL will include a searchable database of more than 1000 leading companies and consultants. You'll be able to browse through this list by predefined product categories, or you'll be able to simply type in the name of the product or service you're seeking (and, optionally, restrict your search to a specific geographic region), and receive a list of companies that match your query. When you find the company you want, you'll be able to contact it directly through the web site.

Additionally, you'll find not only published articles, but also related materials that—because of length restrictions—could not be printed in the magazines. But we don't plan to be the only ones adding to Medical Device Link's content. Via the site's Forum feature, we encourage you to offer your own feedback and observations, enriching the site's store of information and helping to build a vital on-line community.

Other notable features of the site will be an on-line bookstore where you can buy publications from Canon Communications (publisher of MD&DI) as well as from other publishers, and an Expo Center where you can receive information about and register to attend upcoming Medical Design & Manufacturing trade events.

The number of features offered through MDL will continue to grow throughout the next year. The possibilities are endless, and we look forward to exploring them with you.


In last month's column I said that patent law seemed hip to me; unfortunately, I wasn't fully hip to it. Noting wisely that I had just begun to learn about the intricacies of patents, I then made a beginner's blunder. Let me correct the record here: the United States currently has a first-to-invent patent system that proposed legislation would change to a first-to-file system, not vice versa.

I thank Norm Best, president of RoboDisk Corp. in Burbank, CA, for his gracious note pointing out my mistake. He added the following eloquent observation: "The United States has been on the first-to-invent system since its constitution gave Congress the explicit power 'to promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.' When the power of the clerk exceeds the power of the inventor, we will have lost a significant right. The standard of first-to-file would probably not withstand a constitutional challenge."

John Bethune

Copyright© 1996 Medical Device & Diagnostic Industry

Managing Positive Biocompatibility Test Results

Medical Device & Diagnostic Industry Magazine
MDDI Article Index

Originally published October 1996

Nancy J. Stark

Go to Checklist for Investigating Causes of Toxicity

When the biological evaluation of medical devices is approached as a routine function involving nothing more than successfully passing a series of biocompatibility tests, there is little opportunity for innovatively managing positive test results, which indicate that a material is toxic. But blind compliance is not the intent of the International Organization for Standardization (ISO) or FDA.1–3 Manufacturers have the freedom, and the responsibility, to apply the ISO biological evaluation standard in a way that ensures the biological safety of their devices while conservatively managing resources. Neither ISO 10993-1 nor the FDA memorandum on its use specify pass/fail criteria for biological testing, recognizing that it is almost impossible to set general criteria and that manufacturers are in the best position to determine what level of toxicity is acceptable for their products.

Based on careful comparisons of biocompatibility and clinical data, some companies have determined the highest safety test score relating to unacceptable performance for a specific class of products and use this value as the pass/fail criterion. At many companies, however, there is a tendency to panic when biological safety test results are positive. The possibility of a positive result can call into question carefully considered material choices, threaten costly delays in product development schedules, and raise doubts about test strategies. Manufacturers may hastily pursue many directions at once, ending with an array of conflicting information. Or they may forget that the goal of biological safety testing is to determine whether a material or device is safe for its intended use, and not (necessarily) to determine the cause of toxicity.

The best approach to any situation where the ideal may be unobtainable is to follow a planned course of action, first confirming the facts at hand, then considering the options for future actions. Applied to the problem of positive biocompatibility test results, this means methodically confirming that the test procedure was followed as intended and that the test result is reproducible, and then considering whether the toxicity can be eliminated or is acceptable (see Figure 1). The steps in this strategic approach are described below.
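The planned course of action just described can be sketched as a simple decision flow. The function and its outcome strings below are illustrative paraphrases of the strategy in this article, not a procedure prescribed by ISO 10993-1 or FDA.

```python
# Illustrative decision flow for handling a positive biocompatibility test
# result, paraphrasing the strategy described in the text:
#   1) confirm the test procedure was followed as intended,
#   2) confirm the positive result is reproducible,
#   3) decide whether the toxicity can be eliminated or is acceptable.

def handle_positive_result(procedure_followed: bool,
                           reproducible: bool,
                           toxicity_eliminable: bool,
                           toxicity_acceptable: bool) -> str:
    if not procedure_followed:
        return "retest: correct the procedural error and repeat the test"
    if not reproducible:
        return "investigate: treat the original positive result as suspect"
    if toxicity_eliminable:
        return "modify: change the material or process to remove the toxicity"
    if toxicity_acceptable:
        return "accept: document the justification for the intended use"
    return "reject: the material is unsuitable for the intended use"

# A confirmed, reproducible positive that cannot be designed out but is
# within the manufacturer's acceptance criteria leads to documentation.
print(handle_positive_result(True, True, False, True))
```

The ordering matters: procedural confirmation comes first because it is the cheapest check, and acceptability is considered only after elimination has been ruled out, mirroring the sequence in Figure 1.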


Confirming that the positive test was carried out according to the specified procedure requires an investigation of several points. If the test was conducted according to good laboratory practices (GLPs), the signed test protocol can be examined for possible sources of error.4 In many cases, however, the standard protocol may be quite brief, and a closer look at study details will be necessary. If the test was not conducted per GLPs, as is the case for many 510(k) devices, the following questions can be used as a guide.

Was the Correct Test Article Evaluated? Mix-ups regarding the samples supplied to a testing laboratory are more likely in the early stages of product development programs, when several similar materials are being screened. Old lots of materials may have deteriorated or similar-looking materials that were stored near the intended sample materials may have been erroneously sent out for testing. Of course, there is always the possibility that the testing laboratory mixed up some samples upon receipt. The manufacturer should insist that the laboratory review its log sheets to confirm exactly what was tested.

Was the Correct Formulation Evaluated? Formulations such as those for wound dressings, electrode gels, bone cements, or dental fillings are likely to go through many iterations, and many will fail performance testing before safety testing is even undertaken. When several candidate formulations are in process, it is easy to send the wrong one out for testing unintentionally. It is also possible that errors were made during formulation. For example, when an electrode gel formulation had a surprisingly positive test result, subsequent analysis showed that the formulator had inadvertently added the wrong kind of potassium salt, raising the pH and producing a highly corrosive material.

Was the Test Article Manufactured Correctly? Manufacturing processes that can affect safety test results can be categorized into three groups: those that alter a material mechanically; those that potentially introduce, or are intended to remove, chemical entities; and those that alter the surface chemistry of the test material. When a positive test occurs, the manufacturer needs to determine whether mechanical processes have introduced configuration anomalies, whether chemical processes have introduced or failed to remove toxic moieties, or whether energy processes have introduced surface changes, any of which may have an unexpected and untoward effect on material biocompatibility.

Mechanical processes such as stamping and die-cutting can introduce sharp burrs or edges on devices, which secondary processes such as polishing and burnishing are intended to remove. If a processing problem occurs and the sharp burrs or edges remain, they can cause skin irritation when whole test articles are applied to an animal, or cause cell damage or death when applied directly to cells in cytotoxicity tests.

Many processes that introduce new chemical entities, such as epoxy glue, or that remove chemical entities, such as organic cleaners, can also affect test results. Medical tapes present a good example: During manufacturing, tapes are passed through large ovens to evaporate and remove organic solvents trapped in the adhesives. If the tapes pass through the oven too rapidly or if the oven temperature is too low, some residual solvents may remain, resulting in an intrinsically toxic product.

Finally, other manufacturing steps can alter the chemical structure of the test material by breaking and reforming organic bonds. Ion implantation to increase lubricity or alter other surface properties is one example. Gamma and electron-beam sterilization, which bombard products with high-energy radiation or electrons, are also likely to alter the material surface, and sterilization with ethylene oxide creates by-products that may be absorbed into the material surface.

Was the Test Article Clean? Although all device manufacturers recognize the importance of supplying clean, sterile products to their customers, the same respect for cleanliness is not always extended to test articles. A material may be sent for testing as received from the supplier, without any concern for its state of cleanliness. In other cases, a test article may have been handled with ungloved hands or laid on a desktop, risking exposure to food or eraser crumbs. It is critical that designers place test articles in appropriate packaging and never handle them directly.

Was the Test Article Properly Identified? The distinction between product and packaging is usually clear to the manufacturer, but not always to the laboratory, which may inadvertently extract both packaging and product to prepare the test sample. The test article should be clearly identified, such as "a guidewire shipped in polyethylene-tube packaging," and the sample preparation instructions should clearly state that the packaging is to be discarded prior to extraction.

Was the Test Article Stored Properly? Some materials must be stored frozen, humidified, dehumidified, or away from light. Test articles should be properly packaged to optimize storage conditions and clearly labeled to communicate storage requirements to the laboratory.

Was the Correct (Intended) Extractant Used? Because the nature of the extractant can have a profound effect on safety test results, specific extractants may be necessary for certain types of products. For example, a drug excipient may be the preferred extractant for materials used in drug-delivery devices, artificial saliva may be preferred for dental devices and artificial perspiration for electrode gels, and a minimal essential medium containing serum that mimics wound exudate might be preferred for wound dressings. The test laboratory should confirm that the specified extractant was actually used.

Were the Correct (Intended) Extraction Conditions Used? Many new materials behave differently than traditional ones; superabsorbers, for example, defy the usual rules relating volume of extractant to grams or surface area of material extracted. Following traditional extraction ratios for such materials will result in a sample as viscous as syrup, which may cause instantaneous death when injected into mouse tail veins (the result of myocardial infarction as the injected sample travels as a bolus directly to the heart). Extraction conditions should be developed specifically for a new type of material and used consistently throughout the product history.
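For conventional materials, the volume of extractant is typically tied to the surface area or mass of material extracted. The sketch below illustrates the arithmetic only; the ratios used (3 cm² or 0.2 g of material per milliliter of extractant) are assumptions for illustration, and the applicable standard should be consulted for the ratios that govern a given device type.

```python
def extractant_volume_ml(surface_area_cm2=None, mass_g=None,
                         cm2_per_ml=3.0, g_per_ml=0.2):
    """Extractant volume for a conventional material, using a
    surface-area ratio when the area is known and a mass ratio
    otherwise. Default ratios are illustrative assumptions;
    superabsorbers and other novel materials need custom
    conditions developed specifically for them."""
    if surface_area_cm2 is not None:
        return surface_area_cm2 / cm2_per_ml
    if mass_g is not None:
        return mass_g / g_per_ml
    raise ValueError("supply a surface area or a mass")

print(extractant_volume_ml(surface_area_cm2=60))  # 20.0 ml
print(extractant_volume_ml(mass_g=4))             # 20.0 ml
```

A superabsorber swallowing that 20 ml would yield the syrup-like sample described above, which is why such materials fall outside these simple ratios.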

Was the Protocol Followed? If a test is carried out under GLPs, any deviation from the protocol should be documented. But deviations must be recognized before they can be recorded, and custom protocols may contain procedures that are unfamiliar to laboratory technicians. For example, materials such as casts and dental cements are cured in situ, with curing beginning as soon as the packaging is broken and the components are mixed together. Breaking open the package and mixing the components are critical steps in sample preparation, but they fall outside the technician's normal routine. Compliance with such unusual steps in a protocol should be examined carefully if test results are variable or other than what was expected. If the laboratory's standard protocol was used rather than one customized for the test material, that protocol should be reviewed carefully to ensure that it does not contain procedures incompatible with the testing requirements of the article in question.

In addition to addressing these questions, the manufacturer should ascertain from the testing laboratory whether the results of positive and negative controls included in the test run were normal, and whether there were any unusual results observed in the test run as a whole. The test procedure is considered confirmed if nothing in the investigation indicated a deviation from what was intended when the test was ordered. Obviously, if the intended procedure wasn't followed, the test should be repeated correctly.


The next step in the strategic approach to managing a positive safety test result is to confirm its reproducibility. There is a tendency for companies to simply send out a duplicate test article with the notion that if the result is again positive, the material is indeed toxic, but if the result is negative, the material is safe. There's a fundamental problem with this thinking—with only one positive and one negative test result, which can legitimately be believed? On the other hand, manufacturers can create problems by testing too many articles with too many variables at too many laboratories. The sometimes-positive, sometimes-negative outcomes result in conflicts that may require expensive research projects to resolve. In spite of conformance to standard test methods, each different laboratory used will introduce new variables to a test. It can be very difficult, if not impossible, to sort out why a material passes a test at one laboratory and fails the same test at another.

The middle road is to submit two additional test articles to the original laboratory. The two test articles should be separated from each other and the original sample by time and space; for example, they might be from separate manufacturing lots, made on separate days, taken from the start and end of a run, or made by separate shifts. The test articles should be identical to each other and the original article in all other aspects of composition and the manufacturing process. The results of the additional tests can be considered to either confirm or deny the toxicity of the original test article. Product development can usually proceed if the results of two out of three tests are negative.

It is important at this point to remember the purpose of the additional testing. Analysts may be tempted to embark on a series of experiments, testing various formulations or methodically eliminating one ingredient or process at a time, in order to pinpoint a causative agent. But the goal in this second step is simply to confirm the reproducibility of the result, not to investigate its cause.


If the toxicity of the material is confirmed by repeated testing, the ideal solution is to eliminate the toxicity. If the test article is a formulation or assembly, or is subjected to manufacturing processes, any one ingredient or process step may be causing the positive test result. Thus, the next step in the management process is to identify the causative agent.

The manufacturer should begin this investigation by reviewing all the information available regarding the test article's components. There may be clues to a toxic moiety in the drawings, material safety data sheets, vendor technical sheets, or chemical formulation. For example, one vendor added a cadmium stabilizer to a rubber formulation without directly informing the manufacturer. In another case, a job shop added a rigid material to the interior of a tube to stabilize the structure, which the manufacturer discovered when the device was cut in half to generate a drawing. If toxicity was observed with whole devices but not with extracts, the configuration may be the cause of the problem.

The available information regarding the manufacturing processes should also be reviewed. Could the chemical or energy processes used in manufacturing the test article be introducing toxic moieties, creating them, or removing them incompletely? The lot history record of the test article should reveal whether anything unusual happened during this particular manufacturing run.

Finally, the manufacturer should seek out new information. A literature search on the key components in the test article may reveal whether any of them have a history of toxicity. Obtaining infrared profiles with an isopropyl alcohol extraction before and after processing can determine processing effects.5 Discussing test results with the material supplier is also important. Very frequently a supplier knows why a material may fail biocompatibility testing.

Once the information review has pinpointed the cause of toxicity as closely as possible, then one or more of the following changes can be implemented to eliminate toxicity.

  • Use different ingredients or a different ratio of ingredients to obtain a formulation that is not toxic. It may be possible to reduce the percent contribution that a particular component makes to the total formulation, or to use an alternative ingredient.
  • Improve quality control measures for mechanical process steps to ensure that burrs and sharp edges are not introduced or allowed to remain.
  • Make changes to the manufacturing process that eliminate the addition of toxic contaminants or ensure the complete removal of existing contaminants. For example, solvent solutions used to clean metal parts must be completely removed and the coagulant used to precipitate latex onto mandrels must be thoroughly washed away.
  • Replace a material with another that can serve the same function without contributing to toxicity. This is frequently possible when the material functions more or less independently within the product design.


Although eliminating toxicity is the ideal, there are situations where this is not possible. Toxicity may be intrinsic to the product and impossible to eliminate without compromising product function. One familiar example is the electronic componentry of pacemakers and cochlear implants. The toxic circuitry in these devices must be contained within a nontoxic case so it cannot leach out and injure the implant recipient. Another example is a simple product called an adhesive remover. No matter how it is formulated, the product is always a mixture of organic solvents that carry with them the possibility of systemic toxicity subsequent to skin absorption.

Medical devices that must cure in situ are also intrinsically toxic. The curing process of products such as casts, dental cements, and bone cements may involve the generation of free radicals or other reactive chemical moieties, or may be exothermic. In addition, implants made from nickel alloys carry an intrinsic level of toxicity. Nickel is a cardiac toxin, an oxytocic agent, and a common sensitizing agent (an estimated 5% of the population are allergic to nickel contact).6 The possibility of nickel being released into the biological environment always poses the risk of toxic response.

There are also some devices whose functions result in injury—for example, a medical tape designed to hold an appliance onto the skin. If the appliance is life-supporting, the tape will be expected to adhere to the skin with a high degree of tenacity so that the device will not fall away. This high adhesion level is likely to result in skin injury when the tape is ultimately removed.

In each of these examples, the toxicity is intrinsic to the device: Suitable (nontoxic) alternative materials do not exist, or the device will not function as intended if the injurious material is removed. The manufacturer is left with no alternative but to accept the toxic material. The strategy becomes one of justifying its use.

Justifying Use of the Material. There are three approaches to justifying the use of a toxic material in a medical device. The first is to compare the level of toxicity of the material to a comparable material that is currently being used by the manufacturer. If the new material has a lower level of toxicity than the current one and the current one has a safe history of use in the marketplace, the use of the new material may be justified because it is a move in the direction of decreased toxicity and increased biological safety.

The second approach is to compare the level of toxicity of the material to a comparable material that is currently being used in a competitive product. Again, if the new material has a lower level of toxicity than the competitive material in the same biological safety test, and the competitive material has a safe history of use in the marketplace, the use of the new material may be justified.

In the third approach the maximum dose and the no-observable-adverse-effect level (NOAEL) for the material are calculated and then compared.7 To determine the level at which no adverse effect occurs, the sample is titrated by using decreasing amounts in the test system. The highest concentration of sample at which no effect is observed is the NOAEL, which can be expressed in units, surface area, weight, or volume of material. The maximum dose of a material equals the units, surface area, weight, or volume of material to which a patient will be exposed during a typical course of therapy. If a material's maximum dose is 100-fold less than its NOAEL, the material is considered safe for use. (The 100-fold criterion is based on a 10-fold variation between species and a 10-fold variation within species.) The comparison must be repeated for each biological safety test giving a positive response.8
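The NOAEL comparison reduces to a simple ratio test. The sketch below is a minimal illustration of the article's stated criterion, not part of the cited method; the function names and sample values are hypothetical, and both quantities must be expressed in the same units (units, surface area, weight, or volume of material).

```python
def margin_of_safety(noael, max_dose):
    """Ratio of the no-observable-adverse-effect level to the
    maximum dose a patient receives in a typical course of
    therapy, with both values in the same units."""
    return noael / max_dose

def is_acceptable(noael, max_dose, factor=100):
    """Apply the 100-fold criterion: a 10-fold allowance for
    variation between species times a 10-fold allowance for
    variation within species."""
    return margin_of_safety(noael, max_dose) >= factor

# Hypothetical example: a 500-mg NOAEL against a 2-mg maximum dose
# gives a 250-fold margin, which exceeds the 100-fold criterion.
print(is_acceptable(noael=500, max_dose=2))  # True
```

As the article notes, this comparison must be repeated for each biological safety test that gave a positive response.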

The NOAEL approach has been employed to justify the use of nickel alloys in implants. The amount of nickel released by in situ corrosion is compared with the maximum permissible amount of nickel that can be given per day in intravenous fluids, which was determined from intravenous injection of nickel in dogs.6 If the release of nickel through corrosion is less than the amount that can be safely given in intravenous fluids, the alloy is considered safe for implantation.

Dealing with Risk. Accepting a level of toxicity in a medical device carries with it some level of risk to patients. A responsible company will want to assess this risk level and determine whether or not the benefits of the device outweigh it.

Many devices that pose biocompatibility risks have become widely accepted because of their benefits. For example, a certain percentage of the population will experience life-threatening anaphylactic shock from the contrast media that are injected in preparation for x-rays. Nevertheless, since suitable alternatives do not exist, the medical community judges the potential benefits of the procedure to outweigh the risk. Another example is implantable heart valves, for which thromboembolism is the most feared adverse effect. Thromboembolisms occur in about 3% of patients who receive mechanical heart valves, a rate that is deemed the standard risk.9

To determine the risk/benefit ratio for a new product, the positive biological test result that identified its toxicity must be related to an actual effect that might take place in a patient using the device. For example, is the potential effect cardiac arrest, sensitization, or skin irritation? What will be the outcome—duration, level of pain, and degree of disablement—if injury occurs to a particular patient? What expenses will there be for the patient, both in medical costs and in loss of income? If the severity of the risk is considered to be the sum of some dollar value placed on the effect, the associated expense, and the expected outcome, a value for severity can be calculated using the equation:

Severity = Effect + Expense + Outcome

Then, by estimating the frequency of the effect for the patient—will it be a single occurrence or will the injury be repeated with each use of the device?—risk can be calculated as the product of the severity level and frequency:

Risk = Severity × Frequency
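The two equations can be combined in a short sketch. The dollar values below are hypothetical placeholders, assigned only to show the arithmetic; in practice each term would come from the analysis of effect, expense, and outcome described above.

```python
def severity(effect, expense, outcome):
    # Severity = Effect + Expense + Outcome, each expressed
    # as a dollar value
    return effect + expense + outcome

def patient_risk(effect, expense, outcome, frequency):
    # Risk = Severity x Frequency, where frequency is the
    # expected number of occurrences over the course of use
    return severity(effect, expense, outcome) * frequency

# Hypothetical skin-irritation scenario: $50 effect, $20 expense,
# $30 outcome, recurring with each of 10 expected uses.
print(patient_risk(50, 20, 30, frequency=10))  # 1000
```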

Placing a monetary value on the risk provides a numerical way to compare it to the product's benefits. Of course, if the risks to patients are morally unacceptable or are not outweighed by the benefits of device use, the manufacturer should refrain from offering a device for sale.

Any manufacturer that wishes to stay in business will also want to assess the risk to the company in the event of injury to patients. In this case, the applicable equation is:

Risk = (Loss of sales) + (Loss of goodwill) + (Cost of legal action × Number of suits) + (Cost of recall × Likelihood of recall)

One author has estimated the cost of a recall to range from $200,000 to $500,000, with an average cost of $300,000.10 Monetary values can easily be assigned to the other considerations, too, to derive a value for company risk. A high risk value that will not be outweighed by product sales should dissuade the manufacturer from offering the device for sale.
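The company-risk equation can be sketched the same way. Apart from the $300,000 average recall cost cited above, every figure below is a hypothetical placeholder used only to show how the terms combine.

```python
def company_risk(lost_sales, lost_goodwill, cost_per_suit,
                 num_suits, recall_cost, recall_likelihood):
    # Risk = (Loss of sales) + (Loss of goodwill)
    #      + (Cost of legal action x Number of suits)
    #      + (Cost of recall x Likelihood of recall)
    return (lost_sales + lost_goodwill
            + cost_per_suit * num_suits
            + recall_cost * recall_likelihood)

# Hypothetical figures, except the $300,000 average recall cost.
risk = company_risk(lost_sales=1_000_000, lost_goodwill=250_000,
                    cost_per_suit=200_000, num_suits=3,
                    recall_cost=300_000, recall_likelihood=0.1)
print(risk)  # 1880000
```

A risk value of this size would then be weighed against projected product sales, per the decision rule in the text.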

Similar analyses can be applied in order to calculate a risk to any other entity or person. In some cases, a manufacturer might want to estimate the risk to the environment, the risk to caregivers, the risk to hospitals, or the risk to other third parties.

Labeling for Safety. Every manufacturer should have a mechanism for reviewing product labeling for its consistency with device biological safety. Specifically, the directions for use, package inserts, and advertising should be reviewed. Some departments within the company may deem this unnecessary and observe that it lengthens the product development phase, delaying market entry. However, without a system of checks and balances some very strange claims can creep into product labeling. Clear, unambiguous labels are critical for devices that have potentially toxic effects.


Finally, whatever strategic approach and ultimate decisions a company makes with regard to managing biological safety testing, it is important to be logical, defensible, and consistent in decision making. The justification for accepting a toxic material must be based on sound physical, chemical, immunological, biological, and analytical principles. Any deviations from standard practice, FDA memorandum G95-1, or ISO 10993-1 should be documented and filed. And decisions should be applied consistently across the product line and across company functions.


1. "Biological Evaluation of Medical Devices—Part 1: Guidance on Selection of Tests," ANSI/AAMI/ISO 10993-1:1994, Arlington, VA, Association for the Advancement of Medical Instrumentation (AAMI), 1995.

2. "Use of International Standard ISO-10993, 'Biological Evaluation of Medical Devices, Part 1: Evaluation and Testing,'" Blue Book Memorandum G95-1, Rockville, MD, FDA, Center for Devices and Radiological Health (CDRH), Office of Device Evaluation (ODE), 1995.

3. Seidman B, "Manufacturer Use of ODE's Blue Book Memorandum on Biocompatibility Testing," Med Dev Diag Indust, 18(6): 58–66, 1996.

4. Code of Federal Regulations, 21 CFR 58, "Good Laboratory Practices for Nonclinical Laboratory Studies."

5. Wallin R, "In Vitro Testing of Plastic Raw Materials," Med Dev Diag Indust, 15(5): 126–132, 1993.

6. Sunderman FW, "Potential Toxicity from Nickel Contamination of Intravenous Fluids," Ann Clin Lab Sci, 13(1):1, 1983.

7. Ecobichon DJ, The Basis of Toxicity Testing, Boca Raton, FL, CRC Press, 1992.

8. "Method for the Establishment of Allowable Limits for Residues in Medical Devices Using Health-Based Risk Assessment," AAMI/ISO/CD-V 14538, Arlington, VA, AAMI, 1996.

9. "Draft Replacement Heart Valve Guidance," Appendix K, Rockville, MD, FDA, CDRH, Div. of Cardiology, Respiratory, and Neurological Devices, 1994.

10. Wood BJ, and Ermes JW, "Applying Hazard Analysis to Medical Devices, Part II: Detailed Hazard Analysis," Med Dev Diag Indust, 15(3):58–64, 1993.

Nancy J. Stark is a Chicago-based consultant specializing in medical device biological safety and clinical research. This article is based on a portion of her book, Biocompatibility Testing & Management, 2nd ed, Chicago, Clinical Design Group, 1996.

Figure 1. Flowchart of steps to be followed in managing positive biocompatibility results.
Copyright© 1996 Medical Device & Diagnostic Industry

Beyond the Elections: The Device Industry in "Little-Unit" America

Medical Device & Diagnostic Industry Magazine

Originally published October 1996

Ted Mannen
Executive Vice President, Health Industry Manufacturers Association, Washington, DC

Over the past two years, the device industry has made legislative reform of FDA its top public policy priority, and appropriately so. But this intense effort has masked broader trends that will shape the industry's policy environment, whatever the outcome of FDA reform and the November elections.

These trends grow out of changes in our economy and politics. And as they affect America as a whole, so do they influence the device industry's ability to innovate and improve patient care. As a result, the industry needs a focused strategy for keeping pace with the shifting sands of its environment.


Alvin and Heidi Toffler have popularized the notion that America has progressed from an agricultural economy to a manufacturing one to a "third-wave" economy that today is based increasingly on information. The periods between the waves produce wrenching transitions—farm to factory, factory to keyboard—that in turn produce strong political reactions. In 1896, the reaction focused on industrial power and its social consequences (to which adoption of the first food and drug laws can be traced). In 1996, the concerns center on free speech in cyberspace and the power of technology to stimulate too much productivity (contributing to downsizings not altogether different from those to which the Luddites so famously objected).

Today's economic transition is being played out against a new political backdrop—one made clear after the 1994 elections. Columnist and veteran political observer Michael Barone sees America shifting from a top-down orientation to one localized in the grass roots:

We are moving from what has been the exception in American political life back to what has been the rule ... a political regime significantly closer to that envisaged by the Founding Fathers than the political regime we have grown accustomed to in the 60 years following the New Deal of Franklin Roosevelt.

Barone believes these changes will be unaffected by election results and the political complexion of Congress and the White House. While not everyone will subscribe fully to Barone's thesis, it is difficult to deny signs of his "little-unit" America: government that rules with a lighter touch, greater reliance on markets and the individual, and more decentralization, devolution, and local action. Reinforcing these trends are third-wave information technologies. They exert a decentralizing, empowering effect that links and animates the little units, giving new vitality to E. M. Forster's dictum, "Only connect."

These broad trends apply especially to the device industry, where little unit is very nearly a literal descriptor. Despite the recent spate of consolidations, most device companies are still small, and they often organize politically into little units—as evidenced by the more than 20 regional device associations that have formed in the last five years (bringing to 48 the number of states where biomedical groups of some sort exist). Finally, device issues themselves—FDA reform notwithstanding—are driving toward this same little-unit level.


Only two years ago, our reimbursement lexicon was filled with such exotic health-care reform terms as the "national average per capita current coverage health expenditure." Concepts like these strained us not only phonetically but also politically, leading to repudiation of President Clinton's plan and of centralized, government-controlled health care.

This has left the field clear for the marketplace, through managed care, to triumph. And while managed care represents the kind of decentralized, pluralistic, market-driven system that innovators have long said they preferred, the new system is not an unqualified blessing. As the editor of MD&DI has aptly observed, "Industry concerns about FDA will gradually fade in recognition of the much more intractable problems posed by the restructuring of the health-care provider marketplace."

This is a marketplace, notes analyst Kenneth Abramowitz, in which "Washington is virtually irrelevant." The focus is instead on specific units at the local level, many of which are consolidating in order to compete. Medical Data International (Irvine, CA), in a May 1996 report titled All Healthcare Is Local, pointed out that "managed care's growth occurs through enrollment of specific populations in local health-care plans often affiliated with specific local employers [leading to] consolidation of specific independent health-care facilities."

In this marketplace, technology is not (as some in Washington would have it) an overarching policy construct; it is a collection of specific products that result in specific clinical and economic outcomes. Even national programs like Medicare (in which some 70,000 beneficiaries join managed-care plans each month) are increasingly being conducted in product-specific terms: payment levels for power wheelchairs; competitive bidding for clinical laboratory equipment; coverage changes for wound-care products. It is no surprise, then, that the in vitro diagnostics members of the Health Industry Manufacturers Association—as part of their response to the managed-care environment—are compiling a repository of studies on the outcomes of laboratory testing.

Finally, ambitious centralized technology assessment programs are devolving into the pluralistic, information-sharing regimes the device industry has long sought. The federal Agency for Health Care Policy and Research, for example, no longer writes clinical guidelines itself, but instead serves as a clearinghouse that links little-unit guideline developers around the country. Linkages like these are being powered by new information technologies that, as Paul Ellwood recently noted in MD&DI (June 1996), are rapidly turning U.S. health care into a "massive, continuous clinical trial" with many moving parts.

The future will perhaps bring a national backlash against managed care and the level of quality it offers patients. But today, the political jostling over quality characteristically occurs at the state level. As far as the eye can see, the reimbursement environment is fragmented, specific, and local. And even as managed care spurs consolidation, this consolidation gets its energy from the pluralistic, little-unit forces that are drawing decisions away from top-down policy-making in Washington.


Just as reimbursement policy has devolved from the broad, centralized approach of the Clinton health-care plan, so also is the liability environment changing from one that was increasingly based on a single, comprehensive principle to one driven by diverse, case-by-case specifics.

The principle grew out of a Federal Food, Drug, and Cosmetic Act provision that says that FDA's requirements preempt a state's requirements. In 1993, this provision was interpreted to allow preemption of a state's court-made product liability law. This meant, for example, that a company with a device in compliance with applicable FDA labeling requirements could not be held liable in a tort suit for failure to provide adequate warnings.

This so-called Collagen doctrine (Collagen Corp. pioneered the legal reasoning) was embraced by a growing number of courts. In less than four years, device companies used the doctrine to prevail over plaintiffs in more than 70 formal judicial decisions (more than twice the number of times the doctrine failed)—and to secure any number of favorable settlements. While the doctrine's protections generally expanded with the degree of FDA regulation, Collagen was potent not only against claims involving premarket approval (PMA) devices but also against those involving investigational and 510(k) devices (the latter most often Class III, but also Class II and, in one case, Class I).

In all, the Collagen doctrine was a unique and powerful legal tool—a bullet that, for many device companies, packed far more magic than the widely heralded but politically untenable liability reforms introduced in the 104th Congress. In fact, in 1994 one device company lawyer argued against such reforms, telling Congress that "tort reform has already arrived for medical device manufacturers" and that legislation could only weaken the Collagen doctrine.

Today, the scope of Collagen is in question, victim of the June decision in Medtronic, Inc. v. Lohr, in which the U.S. Supreme Court decided that federal preemption did not protect a Class III 510(k) device. A few days later, in a less-publicized case, the Supreme Court declined even to affirm a lower court's ruling that a PMA device was protected. Instead, the Court returned the case to the lower court for reconsideration in light of the reasoning in Lohr. Justice Stephen Breyer, who provided the crucial swing vote on key aspects of Lohr, wrote that FDA's rules "will sometimes" preempt a tort suit. Deciding when this is or is not the case, however, will require slogging through the specifics of individual disputes, a form of trench warfare that could be slow, arcane, and risky.

And so what had increasingly been a single, all-purpose answer to industry's liability concerns could well degenerate into little-unit litigation that yields many conflicting answers. As one legal expert explained: "There is lots here for lawyers and law professors to play with for years to come."


There will always be an FDA, and FDA will always regulate devices. But just as the agency's rules do not always exert a national, preemptive effect on state laws, neither do they always sweep uniformly across the device industry. In the future, FDA regulation will be less a monolithic, "big bang" Washington production and more a mosaic of many pieces, including the following.

Specific Products. The device law's architecture—three product classes, three levels of risk—has always been based on differences among specific technologies. Today, these differences are being sketched in sharper relief as implementation of the existing device law matures (a process largely unaffected by legislative reforms). For example, as the law requires, FDA has mandated submission of safety and effectiveness information on more than 100 types of preamendment Class III devices and their substantially equivalent successors—a step likely to trigger waves of reclassification petitions, each built around the risk characteristics of a specific type of device. As they unfold, activities like these will underscore the fact that regulation always has its truest meaning in relation to individual products.

Regional Flavor. When FDA commissioner David Kessler made good on his vow to "take enforcement up a notch," he sharpened industry focus on the agency's regional and district operations. One factor that has contributed to the rise of regional device associations has been the need for a local forum where companies, as a group, could engage FDA field officials—an approach that has blossomed into pilot programs aimed at longer-term improvements in the industry-agency relationship.

No passing fad, FDA field activities will grow even more important in the future. FDA's exemption from the 510(k) requirements of some 280 categories of lower-risk devices in the last two years means that factory inspections are now the principal locus of agency regulation for one-third of all classified devices. And even for many devices requiring 510(k) premarket clearance, FDA, through its third-party pilot program, has delegated key review responsibilities to private contractors. In working with these contractors, device companies will be looking, at least in the earlier phases of the review process, not to Washington, DC, but to where the contractors are located: California, Colorado, Pennsylvania, Illinois, Maryland, Minnesota, and the United Kingdom.

Global Information. Much of FDA regulation is premised on the belief that information can be held in place while experts study it. But today information travels rapidly around the world, passing through the hands of experts and laymen alike. So just as FDA stalks the Internet to pursue improper product claims, so do patients use this same medium to locate one another.

Cyberspace is a medium that encourages a single-minded focus on single topics. As a result, one finds on the Internet today a wide range of medical news groups—electronic gatherings where the world's patients and their families can share information about treating AIDS, prostate cancer, heart disease, arthritis, juvenile diabetes, and dozens of other conditions. This same power to narrowcast threatens to replace the traditional peer-review process with a system that is, as one clinical expert notes, "much faster, more open, and more democratic." For the Internet can elicit not just a handful, but hundreds of comments to identify faulty research—a capability that at least one journal, the Medical Journal of Australia, has already put to use. In the end, centralized FDA power will necessarily erode as aging-but-empowered baby boomers learn they need only click their mouse buttons to deliver feedback to decision makers and the media.

FDA, with its roots in second-wave industrialization, will no doubt survive the third-wave information economy. But there is nothing inevitable about the form the agency's work will take.


The years between World War II and the World Wide Web formed an era of relatively clear choices, a time characterized by two superpowers dueling over two distinct and mutually exclusive political visions. But the Cold War's end has left a hazier landscape—one in which a variety of interests are conducting tactical skirmishes over issues cast less in black and white than in shades of gray.

This is the sort of landscape where the device industry must find its footing as it straddles the 20th and 21st centuries. The issues of the future will less often be cosmic, all-or-nothing imperatives like FDA reform. They will instead be more specific, discrete, and scattered—for instance, the decisions of managed-care plans regarding clinical pathways for individual procedures, court rulings about how federal preemption interacts with new good manufacturing practices requirements, or FDA findings on what medical device reporting (MDR) baseline data mean for specific types of devices.

At the same time, state capitals will emerge as prime venues for important issues. The states are already addressing such issues as device licensure fees (made more likely by a recent federal court ruling) and the assessment of experimental care and related technologies (witness the incipient program in Washington state). And product liability represents a particularly promising state opportunity. Ohio and Michigan recently enacted Collagen-like protections against punitive damages, and other states may be ready to follow suit.

None of this is to say that major national issues will go away or that Washington, DC, will become irrelevant. It is simply to say that a directional shift is taking place, and that industry should shift its strategy accordingly.

The future will reward us more for focused maneuvers than for broad and sweeping movements, for little-unit linkages instead of mountaintop pronouncements, for self-help tools rather than prefab solutions. Advocacy will increasingly become a test of speed—of moving fast enough to get those at the grass-roots level the information they need to be effective. And with many things to do and many forums to cover, industry will be tempted to be of many minds. The trick will be to maintain focus as our portfolio grows more diverse, episodic, and contradictory.

One of the larger contradictions is posed by international issues. While regulatory harmonization remains an important long-term goal, the most pressing issues today relate to reimbursement, and they are being framed in ways distinctly inharmonious to the pluralistic, market-driven reality in the United States. Accustomed to central direction of their health systems, other nations are considering—or have already implemented—big-unit programs, including certificates of need (China), diagnosis-related groupings (Germany and South Korea), centralized technology assessment (France, Japan, and Taiwan), centrally administered price controls (Japan and Taiwan), and limits on clinical trial reimbursement (Germany and Japan). In response, industry must export its knowledge on these issues to new hot spots around the globe. And more broadly, as industry struggles to focus on an increasingly diverse issue mix in the United States, it must cope with a larger dissonance between the United States and other nations.

To be sure, next month's U.S. elections will directly affect the policy environment of the device industry. This, in turn, will carry ramifications for industry's policy positioning, relationship building, and day-to-day advocacy.

These are all important matters, but they should not deflect us from formulating a focused strategy for dealing with the deeper, more lasting currents running through our industry and our country.

Copyright© 1996 Medical Device & Diagnostic Industry


Managing Positive Biocompatibility Test Results

Nancy J. Stark

Boxed Information

  • Drawings
  • Material safety data sheets
  • Vendor technical sheets
  • Chemical components
  • Configuration
  • Manufacturing processes
  • Lot history record
  • Literature search
  • Physicochemical profile
  • Vendor consultation


Working toward Global Harmonization—One Standard at a Time


Originally published October 1996


Wolfgang Müller-Lierheim
President, mdc medical device certification, Memmingen, Germany, and Chairman, ISO TC 194

Writing international standards is anything but easy, especially when one considers the logistics of assembling representatives from various nations and gaining input and agreement from all of those involved. Even so, the importance of establishing a globally harmonized system for medical device regulation is well recognized throughout the industry, and achieving such a system may prove critical to the success of device companies in the years ahead.

Aware of these factors, Wolfgang Müller-Lierheim, chairman of the International Organization for Standardization's (ISO) technical committee on the biological evaluation of medical devices (ISO TC 194), is in a position to affect the timely development of one such worldwide standard. In addition to overseeing the committee charged with ISO 10993, Müller-Lierheim has become intimately familiar with international standards through his years as president of a medical device testing company that he founded 16 years ago, as well as in his prior role as head of product development for what has since become CIBA Vision. He has also served on several standards-writing organizations in the device industry and is involved with the European Commission's working groups on the practices of notified bodies and other issues concerning medical devices.

In this interview, Müller-Lierheim discusses his vision for ISO TC 194 as well as his thoughts on how ISO 10993 may be used throughout the global device community.

How does the work of ISO TC 194 in the field of biocompatibility relate to other standards around the world?

International as well as national standards are voluntary. Under the so-called new approach, harmonized standards—which are European standards that have been referenced in the Official Journal of the European Communities—play a key role in the European system of directives. European authorities, including notified bodies, have to assume that a product complies with the essential requirements of the relevant directive addressed by a harmonized standard if the product meets the requirements of that standard.

The European Committee for Standardization (CEN) obtains mandates for the development of harmonized standards from the European Commission. The ISO 10993 series of standards addresses the commission's mandate for the development of harmonized biocompatibility standards for medical devices.

We now have two committees that handle the biocompatibility of medical devices. One is CEN TC 206, which is the European committee in the medical device field, and the other is the international committee, ISO TC 194. As we see in other fields of international standardization, it is not always clear whether the majority of delegates of technical committees are moving toward international harmonization or whether they're choosing to stay with their own American, Japanese, Chinese, or European ways.

In the case of ISO TC 194 and CEN TC 206, we are fortunate that CEN TC 206 has agreed that this mandated standard should be created on an international level, so long as the ISO committee is able to obtain approval for an international standard and meet the time goals that have been set by the European Commission.

How does the work being done by ISO TC 194 relate to standards in countries outside Europe?

First of all, the work of ISO TC 194 started with looking at what existed. The entire work of the ISO 10993 series started with the so-called Tripartite Agreement adopted by the United States, Canada, and the United Kingdom. And until May of last year, this agreement was the basis of biocompatibility assessment by FDA.

During 1989 and 1990, we reviewed the standard and tried to identify its strengths and weaknesses as well as what was lacking. Working groups (WGs) were then formed to review and improve on what existed. The result of their efforts became ISO 10993-1, which was then adopted as an AAMI [Association for the Advancement of Medical Instrumentation] standard in the United States and later replaced the Tripartite Agreement for FDA use.

That's ISO 10993-1, on the selection of test methods?

Yes, that's the one adopted by FDA. And some other parts have also been adopted by AAMI.

ISO 10993 was originally expected to contain only 12 parts but has now expanded to include 16. Does TC 194 plan to continue evaluating what biocompatibility standards are needed within ISO 10993 and add sections accordingly?

On that, I can only give you a very personal opinion. We started with what we thought was most urgently needed. About a year ago, we reached a point where we had covered most of the issues and thought about how these standards are used internationally. We concluded that some standards work might still be needed.

On the other hand, the existing documents were developed under quite a bit of time pressure, and we were aware from the beginning that an early revision would be necessary—especially because all the standards were being developed simultaneously and would need to be checked against each other for consistency.

Was the time pressure because of the CEN mandate?

Yes. The European Medical Devices Directive (MDD) became effective on January 1, 1995, and we have a transition period until June 13, 1998, after which all medical devices have to be CE marked and assessed. There is no grandfather clause in Europe, so all devices will have to be evaluated.

So that put pressure on the committee to complete the initial group of standards so they could be referenced in the MDD?

Harmonized standards are not directly referenced in the MDD but become "harmonized" by reference in the Official Journal of the European Communities. The time pressure is associated with the deadlines defined in the European Commission mandates given to CEN.

ISO 10993-1:1992 was accepted as a European standard and published as EN 30993-1 in September 1993, meaning that it has been recognized by CEN. But in order to become a so-called harmonized standard, it has to be recognized by the European Commission, which is the governing body for Europe. The standard will be considered harmonized only after it has been published in the Official Journal of the European Communities, and this is still pending.

ISO TC 194 managed to publish ISO 10993-1 as early as 1992—only three years after the technical committee's establishment. Regulatory authorities accepted this international standard, but not without slightly modifying the approach. FDA put in some additional requirements, Japan provided additional guidance, but the European Commission has not yet decided that ISO 10993-1 is a harmonized standard, although other parts of ISO 10993 have been recognized as such.

Does that mean that TC 194 needs to go back and revise ISO 10993-1 to take into account what FDA and the Japanese have done?

Yes and no. TC 194 recognized that the extent of guidance given in ISO 10993-1 was not sufficient. We discussed two options: either completely revise ISO 10993-1 to include all the variations brought in by different parties or initiate a minor revision and address the additional need for guidance in a separate part of the standard. TC 194 favored the second option, so we established WG 15 to develop a strategic approach to biological assessment of medical devices.

Because it was very important to get all parties involved, we had the people responsible in FDA attending the meeting in Stockholm last April. And they demonstrated that they were prepared to discuss their ideas and concerns with the international community. The Japanese government representative was also available to discuss a fair way of interpreting the document, so having this meeting in Europe with the two regulatory bodies was a very positive situation.

FDA supported the development of ISO 10993-1 but decided to modify it for implementation in the United States, placing U.S. manufacturers in the awkward situation of still not having a harmonized standard. Will this new effort with the agency's involvement be likely to erase that discrepancy?

I am sure we will be able to find an international agreement, because the question of a person's safety cannot be different in Europe, the United States, or the Far East. It's only the way of thinking that's different. Even if there are five experts from FDA involved in the development of a standard, it does not necessarily mean that they reflect the opinion of the whole organization.

On the other hand, with international standards, we have involved regulatory authorities, industry, health professionals, and patients, and they all have their own interests, peers, and constituencies. So every international standard is a compromise of the whole community. Now assume that five representatives of one large organization have done their best to achieve an optimum-quality standard. It does not necessarily mean that they do not get criticism from their own organizations. That's just the way it is. We have to find what's best for an international community, and I'm quite sure we will achieve it.

How are the ISO and European standards' voting and adoption systems structured?

There are more than 150 national member bodies within ISO. A majority of qualified votes in ISO means that an ISO document is adopted. In this case, the United States has one vote, just like Israel or Zimbabwe.

By contrast, there are only 18 members of CEN. And the European standardization system has a weighted voting process, so larger countries have more votes than smaller countries, although we have strong protection of minorities. We need much more consensus in Europe to get a standard.

Additionally, an ISO standard is absolutely voluntary; individual countries are free to decide whether to adopt it as a national standard. It's completely different in Europe; every European standard has to be transformed into a national standard within six months of publication.

This is a tight schedule, and it means that if there is a European standard—whether or not it differs from an ISO standard—the European standards bodies have no choice but to accept it. Adopting the ISO standard or any other standard is not an option. This is why it is so fortunate that TC 194 has a good relationship with its sister committee, CEN TC 206, under the so-called Vienna Agreement.

You mentioned the new WG 15. How is it structured?

TC 194 decided to establish a management team for WG 15, with one representative each from the United States, Japan, and Europe. We also wanted to be sure that there would be an equilibrium between industry and government. For example, Akitada Nakamura from Japan is a leading member of the Japanese regulatory body, whereas Barbara Krug is a representative of European industry. Barry Page from the United States, who was given overall convenorship, is from neither government nor industry, although he has a history in industry with Becton Dickinson as well as with the Health Industry Manufacturers Association. He is very well known and has a good track record in international standardization work. He was formerly convenor of TC 194/WG 11, which was responsible for compiling 10993-7 on ethylene oxide residuals.

Also, Pauline Mars from the Netherlands is the secretary, so we really linked ISO TC 194 to the different economic areas, as well as to the European standards work.

So the working group is structured such that it receives the necessary input to ensure that when it's finished, everybody has signed off on it?

This is what we tried very hard to do. Forming such a group was not easy.

In looking back at ISO 10993-1, what has the committee found to be missing?

ISO 10993-1 clearly states what is needed in medical device biological evaluation, and one of the requirements is to consider existing information. While the standard includes this as a requirement, it does not give sufficient guidance on how to do it. Now, in this kind of a vacuum, an FDA group under the direction of Mel Stratmeyer developed a draft proposal on how to use existing information. The document is an interesting and thoughtful approach, and we wanted to include it in our international discussion because it was, to my knowledge, the first paper that extensively addressed this aspect.

FDA had hoped to revise that document for issuance as a guidance for U.S. industry. Will using it as the basis for a larger international framework give the document more importance than it might otherwise have had?

Absolutely, and I understand that FDA is willing to accept input from other countries and experts to improve the document to where it can achieve international acceptance. And that is exactly what we aim to do as an international community and committee.

So last December's draft guidance has provided a starting point for the working group. Where do you expect it will go from there?

There is one more point that is missing in ISO 10993-1. We assumed we had a guidance on how existing information was to be used, as well as a guidance on which tests may have to be performed. But there is not sufficient guidance on how to interpret test results, so this is another issue that WG 15 should address.

One approach is to say, "This is the individual expertise of the so-called expert." But why shouldn't we try to formulate part of a standardized way to assess test results? It will certainly take time to resolve this issue.

Another point that is not clearly addressed is the question of materials characterization. This also relates to Mel Stratmeyer's paper on how to use existing information, but WG 14 under the convenorship of John Lang from the United Kingdom would have to address how to do that from a scientific point of view.

How does the work of TC 194 relate to FDA's planned biomaterials compendium, and what level of characterization do you think is needed for the materials used in medical devices?

To our way of thinking, the materials manufacturer should be primarily responsible for providing the level of materials characterization necessary to begin biological assessment. So this information should be released to the device manufacturer, but it should also remain proprietary.

All we want to standardize is the kind of data that must be provided in order to start assessment. Frequently, a test report will describe a test method and result but will not clearly say what kind of material has been tested. This leads one to ask whether the material really is representative of the product that comes onto the market. If a regulator gets a test report, how does he or she know that it really relates to the product entering the market?

There has to be some guidance given on how to describe the product. There are different ways to do that. One way would be for the manufacturer to keep on file the exact characterization of a material that relates to a certain batch number, or something like that, and then give the test results. Another way would be for a manufacturer to disclose this information to the test lab, or for the test lab to perform its own characterization of the materials. In no way has this been resolved; it must still be addressed.

Another area where we found weaknesses relates to the question of reference materials. There is not a sufficient number of well-characterized reference materials and positive and negative controls for all types of biological evaluation. We became aware of this during compilation of the standard on material preparation and reference materials. So in this case it is not sufficient just to revise the document; some basic work still needs to be done.

And last but not least, the whole issue of biological evaluation has evolved from the testing of chemicals. Some of the test methods we are currently using have been validated for chemicals and active substances but not for medical devices.

The question of sample preparation has to be addressed by the working group on sample preparation, and the validation of test methods has to be coordinated by each individual group describing these methods. This will not change the committee's work in principle, but it will make the standards more understandable to those parties that were not directly involved in their development. The standards will become clearer to both regulators and industry, and I think this is necessary.

One of the ways FDA modified ISO 10993-1 when it accepted the standard was by including a flowchart to lead people through the process. Are working groups planning to incorporate similar information?

Yes. WG 1, which is responsible for ISO 10993-1, is including a flowchart now. The working groups are willing to include whatever intelligent input they obtain. Similarly, international standards committees should review whatever ideas are available, and members of working groups should apply their national experience.

Do the committees have problems dealing with the FDA approach because FDA is a regulatory body and not a standards-setting body?

Not necessarily. It may create problems, but the ways assessments are done in different regulatory areas will eventually be harmonized. Part of the contribution of an international standards committee is to question the ways in which assessments are made. Even without TC 194, every regulatory authority is changing, and the way FDA made assessments in the 1980s is not the same as in the 1990s.

So this is very much an ongoing process of feeding in new information and then using it to improve the standards that are written and the way they're used?

Definitely. And all the members of these international committees have to learn about the differing systems of various countries. There are some countries, like the United States, with very well developed systems, and there are others that are still learning. Nevertheless, maybe those countries that are still learning will simply improve on all that exists in a well-operating system.

Are other nonmembers watching what the committee does, or do they think its efforts don't apply to them?

I doubt nonmembers feel that the committee's efforts don't apply to them. It's more an economic question of whether a country can afford to have a sufficient number of well-trained experts as well as resources for international travel. This is clearly a situation in which only the developed countries can afford to have people directly involved in the standards writing. Other countries are watching and using some of the work, but they're not participating actively.

What about the way the tests for biocompatibility and so on are accepted internationally? Presumably a U.S. company would be able to submit its device for a specific required test and, once that has been successfully accomplished, the results would be accepted internationally, without question.

That is exactly the aim. We now have one major difference between the U.S. and European systems of assessment. In the European system, the MDD requires that each manufacturer make its own risk assessment, which is not currently required by FDA.

While this is a minor difference, the European system is based on the assumption that manufacturers should have sufficient expertise to be the best people to evaluate their products. Only if the manufacturer's own risk assessment seems to be weak does the notified body have to interfere. If the product is Class III according to the MDD, the notified body does have to take a thorough look at the assessment and follow it, whereas with low-risk products more responsibility lies with the manufacturer.

As I understand it, in the U.S. system the responsibility for risk assessment is mainly put on the shoulders of FDA.

So the manufacturer should be able to test its own devices but, in cases where a third-party independent assessment is required, a testing house is used?

Yes, but the manufacturer is not expected to have all test facilities in-house. Rather, it is expected to have sufficient understanding of its own products to be able to decide which tests have to be performed and to select the appropriate testing houses.

While many Americans are familiar with the term notified body, they are still confused when it comes to exactly how those bodies function.

A notified body has two tasks: certification of quality management systems, and certification of the performance and safety of products. Certification of the performance and safety of products does not necessarily mean that the notified body has to perform the tests itself. Rather, it must be capable of showing that the tests that are done are done properly, and give sufficient evidence of the product's safety and performance. The European system of accreditation and certification gives an opportunity for a testing house to be accredited by the government for certain tests.

For example, you'd have to go to a notified body that was notified for the electromagnetic compatibility (EMC) directive in order to have that body go to a testing house; the notified body couldn't do the testing for EMC itself?

You see, FDA never does tests itself; it accepts test results from others. Nevertheless, FDA is expected to have the expertise to judge the results. It's much the same with notified bodies, although they have the option of operating in-house test facilities.

So FDA is more like a combination of competent authority and notified body?

Yes. The whole area of postmarketing surveillance—in the United States it's called the medical device reporting system—is the competent authorities' business in Europe. Notified bodies are not directly involved. So it is just that the task has been divided in Europe among different groups.

In Europe, are there standards that a testing house for biocompatibility and toxicology would have to meet in order to be considered acceptable?

In Europe, such a standard does exist, and not only for biocompatibility; it covers all kinds of third-party testing. We have the European standard, EN 45001, which outlines how to operate a test laboratory. It is similar to ISO Guide 25, and as I understand it, ISO Guide 25 is now under revision to bring it into closer harmony with EN 45001.

Are those the equivalent of the good laboratory practices (GLP) regulation in the United States?

Yes, but ISO Guide 25 is closer to EN 45001 than to the GLP regulation. For example, EN 45001 gives clear guidance on how a test lab must demonstrate its independence from commercial interests. The GLP says more about organization.

How does this work if the lab is in the United States?

This is one of the major difficulties we have to resolve: how can a European notified body make sure that a U.S. laboratory meets the necessary requirements? U.S. laboratories have approached notified bodies, and some agreements have been reached in an effort to overcome the difficulty, but so far we haven't arrived at a very satisfactory solution.

So the United States has to catch up with Europe so that its labs can demonstrate that they are of sufficient quality?

I think some assistance from the U.S. government is needed. If there is a system in place, we can negotiate about common standards, but as long as we do not see a system that we can recognize, it is very hard from a European point of view to distinguish among different U.S. labs. Now what happens is the same thing that happened to European contract sterilizing companies. Those companies did not have a governmental acknowledgment, which meant that they were visited by a number of European notified bodies. And this is somewhat similar to what may happen to major U.S. laboratories.


Why Choose Color Displays?


Originally published October 1996

Michael Wiklund and William Dolan

There is an entire class of medical devices, such as infusion pumps, blood gas analyzers, and ventilators, that could be considered technological centaurs—part mechanical device and part computer. Intermixing mechanical and computer components has greatly extended the functionality of these devices over the past decade, enabling physicians to develop entirely new therapy regimens. It has also led to a shift in the way people interact with those products. Information display and control actions previously assigned to dedicated-purpose meters, counters, knobs, and switches are now handled via interactions with software. The ubiquitous control panel has been replaced with the increasingly ubiquitous computer display and input device, such as a trackball or arrow keys. Turning a knob has been replaced by "highlighting and selecting."

Initially, these centaurlike products incorporated very small displays, typically a 2-line by 16-character liquid crystal display (LCD) or its equivalent. These little displays had considerable value, but they also introduced significant usability problems. If interaction with a medical product is compared to a person-to-person conversation, the little displays made that conversation overly terse, leading to miscommunication. Because of the lack of screen titles and meaningful prompts, device users found themselves confused about their place in a sequence of tasks, unsure of the meaning of terms abbreviated to fit the available space, and puzzled about the next step of a task.

Fortunately, as soon as prices came down, manufacturers started to incorporate larger displays to resolve, among other things, usability problems. The displays of choice over the past couple of years have been the full-size (640×480-pixel) VGA monochrome electroluminescent panel or LCD and their quarter-size cousins (320×240 pixels) that are cut from the full-size glass pieces. These displays have become relatively inexpensive and widely available, largely because of their extensive use in notebook computers.

Now, many manufacturers are pondering the same next step that the laptop computer industry has already taken: upgrading to color. An upgrade to color has considerable allure. Color certainly enriches communication between user and device, boosting a device's usability and desirability. For example, end users state a strong preference for color-coded warning messages (e.g., white text on a red banner to indicate a dangerous condition), which are highly attention-getting and begin communicating their message (i.e., that there is a problem) at first glance. These benefits of color have already been proven in higher-end products, such as integrated patient monitors and advanced imaging devices, which are clearly special-purpose computers. But the considerable difference in price between monochrome and color displays is still a concern to manufacturers of lower-cost medical devices, the aforementioned centaurs. Consequently, design decisions regarding the use of a color versus a monochrome display often stall and become the subject of endless debate and waffling.


Recognizing that others will champion the cause of low-cost monochrome displays (see sidebar), the balance of this article offers arguments in favor of stepping up to color. Some of the arguments may be familiar ones, but others may not. The number of advantages offered by color versus monochrome displays may be surprising.

Image Quality. Image quality varies widely between and within general classes of displays, including cathode ray tubes (CRTs), electroluminescent panels (ELPs), vacuum fluorescent displays, and LCDs. Although there are exceptions to the general rule, image quality increases as display cost increases. For example, a monochrome LCD (twisted nematic/passive matrix) is relatively inexpensive, but has several image quality shortcomings. Such displays are not as bright as others, are prone to viewing-angle limitations, and refresh slowly, blurring dynamic effects such as cursor movement or list scrolling. By comparison, a color LCD (thin-film transistor/active matrix) provides a reasonably bright image that refreshes quickly and affords a wider viewing cone. It is important to note, however, that some monochrome displays such as CRTs and ELPs also provide high-quality images.

Visual Appeal. As common sense would suggest, people generally regard colored screens as more visually appealing than monochrome screens. Anecdotal remarks and quantitative ratings by participants in user studies consistently reinforce this conclusion. For example, if you conduct a focus group or usability test with prospective customers and show them a version of an information display in color versus monochrome (gray scale), almost all will prefer the color screen. In some sense, this preference is comparable to people's general preference for stereo over monaural sound. Color adds an extra dimension to the visual experience, enabling people to draw upon the full capability of their senses. As such, customers will exhibit a natural attraction to, and preference for, colored displays over monochrome displays, all other things being equal.

Graphic Simplicity. Color monitors afford designers a greater opportunity to produce simple-looking screens. The reason is that adjacent colors of differing hue (spectral wavelength composition) and similar value (lightness versus darkness) create natural borders, whereas adjacent gray tones are markedly less effective at doing so (see Figure 1). As a result, colored elements do not always need the edge demarcation that monochrome screens require to assure distinctiveness. In comparison with color screens, most monochrome displays are therefore burdened with extra on-screen elements, adding to their visual complexity.

Color Coding. Color is a terrific method of coding important information. When color is not available, one must resort to other coding techniques. Alarm messages, for example, can be made larger and bolder than other text, physically segregated from other information, highlighted by means of demarcation lines or an inverse video effect, or presented using a symbol (e.g., a bell or horn) in place of text. However, as Figure 2 illustrates, color is the most compelling coding technique when it comes to detecting alarm information embedded within other information. In fact, human factors studies have shown that color outperforms other visual codes, such as size, shape, and brightness.1 It performs particularly well when the task is to search for an item or group of items among many, or to track a moving target.

More Information. Color displays offer the potential to present more information at a time than monochrome displays. As discussed above, it is simply a matter of having an additional dimension for communicating information. Christopher Goodrich is a senior industrial designer with Ohmeda, Inc. (Madison, WI), a manufacturer of anesthesia delivery systems. He considers a color display essential to the usability of medical devices that present a lot of information in parallel. "Color gives you an effective way to visually separate groups of information so that the information is easier for users to find," he says. "This helps you avoid a trade-off imposed by monochrome screens—putting in less information to avoid visual clutter."

Competitive Advantage. Competing manufacturers often engage in so-called feature wars. In such wars, manufacturers arm their products with extra features (usually software functions that can be enabled or disabled) that appeal to customers initially but rarely get used. As an everyday example, consumers are drawn to CD players that can play songs from several CDs in a preprogrammed sequence, but few use the feature once they own the device. In the medical world, the valued but unused feature might be a special data-analysis feature.

Color displays are different. They provide a continual benefit to users, much the way a good sound system rewards the listener. Color displays also provide a significant competitive advantage, particularly in head-to-head comparisons that take place in clinical environments and trade shows. As many marketing managers can attest, devices equipped with color displays tend to draw more casual interest among people walking the trade-show floor. The competitive advantage can be expected to shift toward products incorporating a monochrome display only when minimum cost is the dominant purchase criterion.

Some companies may consider color displays a necessity only when technology matures and manufacturers start to compete on the basis of design quality as opposed to functional capability. As such, companies holding an initial technological advantage may equip their products with monochrome monitors, expecting to sell just as many units and maximize profit. However, this approach offers future competitors an opportunity to break into the market with a me-too product equipped with a color display, meeting a demand for higher user-interface quality. Therefore, an early commitment to color before it becomes a necessity may be an effective way to ward off future competition while also boosting user-interface quality.

Progressive Image. Medical and diagnostic device manufacturers work hard to establish a positive image for their company in the marketplace. Most seek a progressive image, positioning themselves on the leading edge in terms of technology, perceived quality, and actual quality. It is not clear that marketing a product incorporating a monochrome display will erode a progressive image, but it could. For instance, when customers check out products incorporating monochrome displays for the first time, they are prone to label them old-fashioned or cheap. This perception may be a case of transference from people's experience with black-and-white versus color television. In comparison, the same product equipped with a color display might be labeled as progressive and user-friendly.

Meeting Customer Expectations. Today, such consumer electronics as palmcorders, pocketable TVs, and digital cameras incorporate color displays. This has raised the ante for industrial product manufacturers. Customers are becoming accustomed to high-quality color displays in place of monochrome displays. Therefore, some customers may feel neutral at best about using a product incorporating a monochrome display. However, other customers may regard the product as less user-friendly.

A spokesperson for Aksys, Ltd. (Libertyville, IL), a start-up company developing a system for hemodialysis, agrees that color has a real draw for customers. "Color can raise a customer's perception of product quality and user-friendliness—no question about it. Our customers are certainly used to products in their daily lives, such as televisions and home computers, that use color monitors. Therefore, their in-home experience raises the expectation for color in a medical product intended for use in the home. So, even though competitors might not be forcing us toward color at this time, we have to look seriously at the customer's preference for color."

Goodrich is also convinced that forces outside the medical industry are compelling medical device manufacturers to use color displays. "Customers are starting to expect medical devices to incorporate color displays, largely due to their experience with consumer products. This shift in expectation stands to make monochrome displays obsolete for application to products that require large displays." Speaking for himself and the hundreds of customers he has interviewed, Goodrich adds, "Monochrome displays are just so lacking in overall appeal, you want to avoid them except in cases where the wide-angle viewing capability of ELPs is essential."

Future Enhancement. In the near future, the struggle of choosing between color and monochrome displays should end. The cost differential will shrink and color displays will be the de facto standard for high-tech devices. Accordingly, the late 1990s represents a transition period, posing a shelf-life problem for manufacturers that choose monochrome for their next-generation products. What are manufacturers going to do when the marketplace forces them to upgrade to color? For many, the hardware needed to support a monochrome display will not support an upgrade without major redesign and retooling. Display mounts, power supplies, and display controllers may be incompatible, although some manufacturers have started to engineer commonality into their monochrome and color product lines. Also, a subsequent switch to color may require considerable software changes. So, initial reductions in development and manufacturing costs may be counterbalanced by the costs of design changes or introducing yet another next-generation product to market ahead of the intended schedule.

To hedge their bets, manufacturers can take a hybrid approach: designing for monochrome today, allowing for an easy upgrade to color tomorrow. This approach may result in some overengineering at the outset, but is likely to reduce product development and manufacturing costs in the future. Overengineering may come in the form of a flexible approach to display mounting and higher-capacity display drivers, for example. It may also come in the form of extra software code with setup functions that enable the appropriate code to produce a monochrome versus color image. However, it is possible to develop a single version of the software that works with either a color or monochrome monitor. The effect is akin to starting with a color monitor and adjusting the colors so that everything looks monochromatic. Making this approach work requires care in picking colors of equivalent value (lightness versus darkness) so that the monochromatic image has a consistent appearance.
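The palette discipline described above can be sketched in code. The following is purely illustrative and not from the article: all names and color values are invented. It collapses a color palette to gray levels the way a single shared code base might when driving a monochrome panel instead of a color one, and checks that critical screen elements keep enough value (lightness) separation to remain legible once hue is gone.

```python
def luma(rgb):
    """Approximate perceived lightness (ITU-R BT.601 luma coefficients), 0-255."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

# Hypothetical palette: each user-interface role maps to a color chosen
# with its grayscale value in mind, not just its hue.
palette = {
    "background": (32, 32, 48),
    "normal_text": (220, 220, 200),
    "alarm_banner": (200, 40, 40),
    "warning_banner": (180, 120, 0),
}

def to_monochrome(palette):
    """Collapse the palette to gray levels, as the monochrome build would."""
    return {name: round(luma(rgb)) for name, rgb in palette.items()}

grays = to_monochrome(palette)

# The alarm and warning banners must stay distinguishable from the
# background even after color is removed.
for name in ("alarm_banner", "warning_banner"):
    assert abs(grays[name] - grays["background"]) > 40
```

A check of this kind, run against the product's palette table during development, is one way to catch color choices that would turn into indistinguishable grays on the monochrome variant.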

Customer Choice. Rather than making a display decision, some manufacturers offer both color and monochrome displays and let customers decide between them. The extra costs of engineering and manufacturing both types of units can be recovered by marking up the color units, for example, presuming that some customers will pay the extra amount. Then, as the cost of color displays decreases to meet the cost of monochrome displays (due to economies of scale and competition), the monochrome monitor can be dropped from the inventory. From the beginning, this approach pays off in terms of establishing a high-tech image while also being able to offer a reduced price on an entry-level product. Typically, this approach requires designers to introduce redundant information coding, augmenting a color code with some other coding technique, such as shape-coding or size-coding. This ensures that a colored screen will still work in monochrome. Redundant coding also benefits people who have impaired color vision (8–10% of Caucasian males, about 4% of non-Caucasian males, and about 0.5% of all females).2
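Redundant coding is straightforward to express in software. The sketch below is illustrative only; the priority names, symbols, and message text are invented, not drawn from the article. Each message priority carries both a color code and a non-color code (a prefix symbol plus inverse video), so one rendering routine serves color units, monochrome units, and users with impaired color vision.

```python
# Each priority is coded redundantly: by color AND by a symbol/inverse-video
# treatment that survives on a monochrome display.
PRIORITY_CODES = {
    "alarm":    {"color": "red",    "symbol": "!!", "inverse_video": True},
    "warning":  {"color": "yellow", "symbol": "!",  "inverse_video": False},
    "advisory": {"color": "white",  "symbol": "",   "inverse_video": False},
}

def format_message(text, priority, color_display=True):
    """Render a message using color when available; on monochrome units,
    the redundant symbol and inverse-video codes alone carry the priority."""
    codes = PRIORITY_CODES[priority]
    label = f"{codes['symbol']} {text}".strip()
    if color_display:
        return (label, codes["color"])
    return (label, "inverse" if codes["inverse_video"] else "normal")

print(format_message("Occlusion detected", "alarm", color_display=False))
# ('!! Occlusion detected', 'inverse')
```

Because the priority lives in a lookup table rather than being scattered through the drawing code, dropping the monochrome variant later (or adding a color one) touches only the table.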

Avoiding Supply Problems. Already, there is concern over the availability of monochrome monitors in the future. Many manufacturers wonder if they will face a problem similar to finding low-end microprocessors (e.g., 286 chips) in a market that has moved on to higher-powered microprocessors (e.g., 486 chips). In fact, small monochrome CRTs are approaching the obsolescence point. However, those who choose color today face a potential problem that stems from volatility in the notebook computer market. Specifically, color display manufacturers are likely to adjust their product lines according to the needs of their dominant buyers—notebook computer manufacturers. In the near future, this may lead some manufacturers to abandon 10.4-in. diagonal displays, for example, in favor of larger ones. This potential reinforces the need for medical device manufacturers to engineer flexibility into both their products and manufacturing processes.

Device Integration. Many medical devices are used in conjunction with others. This is particularly true of devices used in the operating room and intensive-care environments. Such environments are not known for high levels of integration between products. However, there is a growing trend toward integration. For example, anesthesia workstations used to be a conglomerate of devices placed on various shelves. Now, companies are working together to produce integrated solutions. If one extrapolates this trend, one can envision entire systems of components that work together, sharing data and control capabilities. Products equipped with color displays fit well with this vision of high-quality user interactions with technology. By contrast, products equipped with monochrome displays may be viewed as impoverished relative to other, higher-end devices.

Third-Party Applications. With the advent of medical information highways within hospitals, many products will be sharing information. As a result, displays previously dedicated to a specific function, such as presenting vital signs and waveforms, may also display laboratory results and E-mail. In such cases, the use of a color display will be essential to ensure that third-party applications look correct.

Economies of Scale. Because of the costs involved, equipping certain lower-end products that have limited user-interface requirements with color monitors may be viewed as overkill as well as detrimental to sales. However, companies that also market higher-end products equipped with color displays may discover economies of scale favoring across-the-board use of color displays. The economies may not accrue strictly on cost of goods. Rather, they may accrue from reduced inventory, engineering, servicing, and software development costs.


Applications for monochrome monitors remain. However, for those manufacturers weighing the trade-offs between color and monochrome displays, there are abundant reasons to select color. Presuming a professional application of color in screen design, a color display opens up new dimensions in user-interface design that enhance a product's overall usability and appeal. Still, the added dimension of color may pale in comparison to the benefit of minimizing the cost of a product that must compete in a price-sensitive market. This trade-off is what makes the decision between color and monochrome displays so difficult.

For some product manufacturers, the cost considerations may lead to the continued use of monochrome displays for a few more years while the cost of color displays continues to drop. However, individuals calculating the benefits and costs should take care to consider all of the potential benefits of color that extend beyond the visual appeal of a given product or even the economics of a single product line. Moreover, decision makers should ask themselves the pointed question: Which type of display would I choose if I had to look at it every day for many years? Chances are that the very same individuals use a desktop computer equipped with a color display.


1. Thorell LG, and Smith WJ, Using Computer Color Effectively—An Illustrated Reference, Englewood Cliffs, NJ, Prentice-Hall, p 7, 1990.

2. Thorell LG, and Smith WJ, Using Computer Color Effectively—An Illustrated Reference, Englewood Cliffs, NJ, Prentice-Hall, p 117, 1990.

Michael Wiklund directs the Usability Engineering Group and William Dolan serves as Usability Test Laboratory manager at American Institutes for Research (Lexington, MA), which provides user-interface design consulting services to manufacturers of medical, scientific, and consumer products.

Figure 1. Differentiating graphical elements using color versus line.

Figure 2. Using color to highlight critical information.


Some of the advantages of using monochrome displays are listed below. As is the case with any product or component, it's up to the manufacturers to decide what's best for their companies and customers.

  • Lower display cost.
  • Lower cost of associated hardware (controller, power supply, microprocessor).
  • Simplified software coding (depending on the nature of graphical elements).
  • Wider viewing angles (depending on the technology).
  • Established technologies (e.g., LCDs, CRTs, ELPs) that are less subject to rapid change.
  • Longer display life cycle (certain technologies).
  • Appearance of design frugality in the eyes of price-sensitive customers.
  • Subdued appearance that may not be as distracting to patients.


The relatively low cost of monochrome displays is often what keeps manufacturers from changing to higher-priced color displays. Following is a cost comparison.

Monochrome LCDs $200–$250
Monochrome ELPs $300–$400
Color TFT (active matrix) $600–$900

Source: Cost data shared by established medical device manufacturers that have performed benefit-to-cost analysis of VGA display alternatives.


Information displayed on-screen can be enhanced with the use of color. In many instances, color also allows more information to be presented on-screen. Following are some examples.

  • In a breathing circuit diagram, the lungs can be colored pink, making them more recognizable as a body part as well as indicating proper oxygenation.
  • Bar graphs associated with anesthesia gas flows can be colored to match standards for the labeling of compressed gases (e.g., green for oxygen in the U.S. market).
  • An arterial blood pressure waveform can be colored red to reinforce an association with oxygenated blood.
  • On-screen start and stop buttons can be colored green and red, respectively, to reinforce a traffic light metaphor (i.e., green means "go" and red means "stop").
  • To indicate high priority, important messages can be printed on a yellow rectangle to resemble Post-it notes.
  • To help users recognize components, maintenance diagrams can depict the actual appearance of the product using as many colors as necessary or even show a digitized color photograph.
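The second example above, matching bar-graph colors to compressed-gas labeling standards, lends itself to a simple lookup keyed by market, since gas color conventions differ between markets (for instance, oxygen is green under U.S. convention but white under ISO/European convention). The sketch below is illustrative only; the function and dictionary names are invented.

```python
# Hypothetical market-keyed lookup of gas label colors. Keeping the
# colors in data rather than hard-coding them lets one software build
# serve markets with different gas color standards.
GAS_COLORS = {
    "us":  {"oxygen": "green", "nitrous_oxide": "blue", "medical_air": "yellow"},
    "iso": {"oxygen": "white", "nitrous_oxide": "blue", "medical_air": "black_and_white"},
}

def bar_graph_color(gas, market="us"):
    """Return the label color for a gas-flow bar graph in the given market."""
    return GAS_COLORS[market][gas]

print(bar_graph_color("oxygen"))          # green (U.S. market default)
print(bar_graph_color("oxygen", "iso"))   # white
```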


Copyright© 1996 Medical Device & Diagnostic Industry

FDA Document Leaks Revisited

Medical Device & Diagnostic Industry Magazine

Originally published October 1996

James G. Dickinson

After seven months of inaction by FDA's Office of Internal Affairs (OIA), the FBI has taken over investigation of the most serious breach of security at FDA since the generic drug scandal. At the center of the case is last November's leak of proprietary product approval documents belonging to laser manufacturer Visx (Santa Clara, CA) to its competitor Summit Technology (Waltham, MA). Now, it will fall to the bureau to identify and locate the as-yet unknown FDA employee who leaked the documents.

This development, along with questions troubling to anyone who has entrusted secrets to FDA, was revealed at a July 31 hearing of the House Commerce oversight and investigations subcommittee, chaired by Joe Barton (R–TX). "If we can't maintain the confidentiality of documents at the FDA, then we should abolish the FDA," Barton commented.

Both Summit and FDA respectfully declined to appear at the hearing, citing their sensitivity to the ongoing investigation. But the only two witnesses who did appear—Visx CEO Mark Logan and former FDA device reviewer Mark Stern, who reviewed the Visx premarket approval (PMA) application—provided more than enough food for thought. Their testimony provoked both Barton and subcommittee minority leader Ron Klink (D–PA) to vow that the subcommittee will track the scandal to its end, regardless of the outcome of the November elections. Issues that particularly concerned the congressmen included the following:

  • Why would Summit CEO David Muller, who has said he received the leaked documents in an envelope at his home last November 24, throw away the envelope? Barton expressed disbelief that anyone receiving such documents would do so.

  • Disregarding previous insinuations, subcommittee members accepted the assumption that Stern was not the person who leaked the documents to Summit, since his sympathies, if any, lay with Visx. Barton said he regarded Emma Knight, the lead reviewer of Summit's PMA application, now reassigned to FDA's biologics center, as a "major suspect." Knight was no longer handling the Visx application at the time she was purportedly faxed the Visx "approvable" letter that was found among the documents leaked to Muller.
  • Stern testified that while working on Visx's PMA application, he inferred that his assignment as an inexperienced reviewer was intended to slow down that application's progress while the Summit application was getting speedier treatment from Knight, a far more experienced reviewer. He was not permitted to see any of the Summit application's documents. (One possible explanation for this constraint may have been FDA Center for Devices and Radiological Health (CDRH) management's suspicion of Stern. Testimony was given that Visx CEO Logan had been heard to brag that he had Stern in his pocket and that Stern's mentor at Columbia University had been Steven Trokel, MD, a paid Visx consultant.) Stern said he was rebuffed in his attempts to bring his concerns to the attention of FDA commissioner David Kessler and deputy commissioner Michael Friedman.
  • CDRH Office of Device Evaluation director Susan Alpert recognized the gravity of the leak as soon as she heard of it. She implemented new document security measures and convened several office meetings at which she unsuccessfully begged the perpetrator to step forward.
  • According to subcommittee sources, FDA's OIA investigated the case as if it involved little more than employee misconduct. FDA shared no information with the subcommittee, and not until Barton advised Kessler on July 15 of the scheduling of the July 31 hearing did FDA turn the investigation over to the Health and Human Services inspector general. In less than 24 hours, the inspector general decided it was a criminal matter requiring referral to the FBI. According to a memorandum of understanding between the Department of Health and Human Services and the Justice Department, FDA is required to refer cases immediately when there is a suspicion of criminal conduct.

A subcommittee analysis of FDA records of OIA activities appears to show that the office stopped taking official actions against FDA staff at about the time the Summit case began. For the fiscal year 1995, the office reported 4 employee terminations, 5 resignations, 7 suspensions, 10 letters of reprimand, and 5 letters of admonishment. For the eight months to June 30 of this year, OIA reported only one letter of reprimand.

While not conclusive, the subcommittee analysis nevertheless paints a picture of an agency struggling hard to sweep a major embarrassment under the rug just as Congress was considering reform legislation.

The embarrassment in this case was a federal crime carrying a 12-month prison term, whether or not bribery was involved: the deliberate leaking of a company's confidential documents to one of its direct competitors. There has been no suggestion that Summit Technology or any intermediary on its behalf made a "cash or kind" payment or offer of a payment or favor to anyone at FDA. But such evidence is not necessary to make the leak a felony, as FDA should have known at the outset.

Visx insists that the FDA leak greatly damaged it in the marketplace, especially with respect to a future competitive advantage it expected to have over Summit. But it is society that stands to lose the most, if this episode erodes public confidence in FDA's integrity.

This new incident is strongly reminiscent of the 1988 generic drug scandal in most respects other than evidence of any transfer of gratuities.

  • In both cases, FDA employees allegedly favored certain companies while retarding the reviews of others.
  • In both, some companies were alleged to have such good connections inside the agency that they could work their wills and gain market advantage through manipulated FDA actions.
  • In both, upper agency management repeatedly turned a deaf ear to both trade complaints and protests from honest FDA employees.
  • In both, OIA (formerly the Division of Ethics and Program Integrity) participated in procedures that had the end result of actually sheltering, for a time, agency staff who had violated the law.

Apparently to forestall suspicions that the laser scandal reflects a dysfunctional agency unable to discipline itself, FDA associate commissioner for public affairs Jim O'Hara asserted after the July 31 hearing that the agency had conducted "a very intensive and thorough investigation, in coordination with other agencies" including the FBI. Because FDA statements might jeopardize the ongoing FBI investigation, he added, the agency had decided to remain silent even if it meant looking bad, confident that in the end a retrospective look would show that it had acted properly and diligently throughout the laser ordeal.

That seems a doubtful prospect. FDA has always kept its decision-making processes opaque, especially where its interactions with other agencies (in this case, the FBI) are involved. No public accounting of FDA's investigational decision making in the generic drug scandal has ever emerged. Neither has the agency publicly described its decisional roles in any of the major criminal investigations it has referred to the Justice Department in recent years.

Typically, after an investigation moves to another agency for further development, such as prosecution, FDA steps into the background and stays there. Judging from all the media reports that flowed from the generic drug scandal, for instance, one could conclude that FDA had virtually no role in any of the investigations. All of the public statements emanated either from Congress or the U.S. Attorney's Office in Baltimore. Yet the agency in fact assembled a special team of excellent investigators who were absorbed into the U.S. Attorney's office, and who did almost all of the case development against dozens of individuals in industry.

If a day of final reckoning in the laser scandal is reached, the behavior of the OIA will likely remain as much a mystery then as it is today. Questions that FDA will be no more willing to answer then than it is now include the following:

  • Why wasn't the obviously criminal leaking of documents immediately referred to the FBI?
  • Why did OIA wait five months before beginning serious field investigations, such as taking fingerprints of people who might have handled the stolen documents and interviewing those outside the agency who had knowledge of events (such as Visx CEO Mark Logan and Summit vice president of regulatory affairs Kim Doney)?
  • Why did FDA not give Congress the same degree of cooperation, within the constraints imposed by the ongoing criminal investigation, as during the generic drug scandal?
  • Why did it take FDA more than nine months to publish the approval notice for Summit Technology's laser and to release its safety and effectiveness summary?
  • Why was FDA reviewer Emma Knight not reprimanded if, as has been alleged, she leaked that approval to Summit before it had been internally cleared? (Stern alleged that she gave the company the letter prematurely in order to help Summit's $110 million secondary stock offering on October 24, 1995.)

There are many other issues intertwined in this investigation. Most will probably not be addressed by the FBI, but should be picked up by Congress as soon as the FBI completes its work. None of these issues was the focus of the recent FDA reform debate. They raise questions about FDA's basic operating procedures, how it trains its employees, the effectiveness of FDA managers, the adequacy of security measures for company documents, and the efficiency of the agency's communications, both internally and externally.

Unless these issues are dealt with effectively and soon, there may well be more leaks. And then, as chairman Barton said, we might as well not have an FDA. From his perspective, leaks by FDA make the case for third-party product marketing reviews stronger and less controversial. At least a nongovernmental reviewing organization would have legal liability for any leaks by its employees.

James G. Dickinson is a veteran reporter on regulatory affairs in the medical device industry.

Copyright© 1996 Medical Device & Diagnostic Industry