
CE Marks/IVD Directive, Computer Issues, and Complaint-Rate Data versus Risk Analysis

Medical Device & Diagnostic Industry Magazine

An MD&DI January 1999 Column

HELP DESK

Alan P. Schwartz, affiliated with mdi Consultants Inc. (Great Neck, NY), has been providing consulting services to the medical device industry for more than 20 years. Here, he answers several questions concerning the use of the CE mark.

My company manufactures and distributes products in the rapid diagnostic category—for example, HCG (pregnancy), hepatitis, and HIV tests. The lancets that are used to stick the patient's finger have the CE mark. Do the devices that perform the in vitro test also require the CE mark?

Although all medical devices, such as lancets, sold within the European Economic Area are required to have the CE mark on their label, the in vitro diagnostics used in test kits are covered by the IVD Directive (IVDD), which was adopted by the European Council of Ministers on October 5, 1998. The IVDD was published in the Official Journal of the European Communities on October 27, 1998. When publication occurred, the directive became law and manufacturers could start registering their IVD products.

Member states will have until December 7, 1999, to transpose the IVDD into national law. Beginning June 7, 2000, member states may start applying these provisions. IVDs that conform to current national legislation may still be marketed until 2003 and put into service for an additional two years.

However, experience has shown that this time frame is often not followed. In fact, Belgium has yet to transpose the Medical Devices Directive into its national law. Although your company could meet the IVDD's requirements any time between now and five years from the date of its publication, in vitro testing devices will not be required to have the CE mark until that final date. However, based on the medical device industry's recent experience, it may be a good idea to start complying with this new directive as soon as possible.

What are the labeling requirements for using the CE mark on the boxes for both natural and synthetic rubber examination gloves? Is there a required dimension for the CE mark?

Device labeling is one of the most misunderstood aspects of CE marking. Whenever possible, the CE mark should be a minimum of 5 mm in height. The CE mark should be printed as a distinguishable element of the device labeling. This is described in paragraph 13.3 of Annex I of the Medical Devices Directive.

Many manufacturers are also concerned about the number of languages and international symbols used in labeling. The greater the number of international symbols used, the fewer foreign languages are necessary. A label should be printed in the languages of the countries to which you intend to ship or distribute. It appears that a total of six languages is an acceptable number. A manufacturer should determine which six languages to use for each particular product.

We manufacture Class II blood glucose-monitoring systems. The European Commission ruled that all medical products sold in Europe as of June 13, 1998, must bear a CE mark to comply with the Medical Devices Directive. We already have CE marks based on the EMC standard EN 50082-1:1992. Is this enough to comply with the new regulations?

If your glucose-monitoring system is not an in vivo monitoring system, it would not fall under the Medical Devices Directive but rather under the new IVD Directive (IVDD), and you have time to implement the CE mark for your products (see the first question and answer for additional details). If the product is an in vivo monitoring device, a CE mark under the Medical Devices Directive is required. Be aware that complying with the MDD is different from complying with EN 50082, which covers electromagnetic compatibility (EMC); electrical products have been required to bear a CE mark for EMC since 1996. You should use the CE (EMC) mark on your label to show compliance with EN 50082, which would distinguish it from the CE marks required for the MDD or the IVDD.


Wayne Rogers is a principal investigator at Rogers Medical (Temecula, CA), and a member of MD&DI's editorial advisory board. He discusses two computer-related issues.

How are computer chips validated?

Computer chips per se are not validated. However, FDA has previously asked Intel, the manufacturer of Pentium chips, to perform a self-assessment to ensure that specific iterations were not repeated where intensive calculations and iterations occur. A device manufacturer should assess its requirements with the supplier to determine that the computer chips will meet the intended end use and provide the quality necessary. It should be part of purchasing control to provide evidence that the chips will meet their intended end use. The computer chips used in the final design must be included in the final verification or validation of the device.

Please provide some guidance on how to select a discrete-event-simulation software program that would be most appropriate for a medical device manufacturer. What companies offer this software?

The manufacturer is responsible for selecting the software program that is designed for an intended end use. There is no standard per se; FDA simply says that the program must work. The Internet is a good place to start gathering information. There are more than 80 manufacturers of discrete-event-simulation software programs. Contact some manufacturers and assess which of their programs best fit the application for your device's intended end use.


David Link is the executive vice president of Expertech Associates (Concord, MA). He discusses the importance of conducting risk analysis.

We manufacture Class I, low-risk devices and have received very few customer complaints involving injury. Can we use data showing this extremely low complaint rate per millions of devices sold instead of doing a risk analysis?

Analysis of complaints, while a valuable source of demonstrated risks under field conditions, is not a substitute for risk analysis. The design control section (820.30) in the FDA quality system regulations calls for risk analysis, where appropriate, when new products are being designed or major device modifications are being developed. The Medical Devices Directive, now mandatory in the EU, also includes requirements for risk analysis. Regardless of one's complaint experience, it would be prudent to conduct a risk analysis on all existing products to determine whether there are any risks that should be eliminated or reduced through redesign or managed by revised labeling. The risk analysis approach frequently used is found in the European standard EN 1441.


"Help Desk" solicits questions about the design, manufacture, regulation, and sale of medical products and refers them to appropriate experts in the field. A list of topics previously covered can be found in our Help Desk Archives. Send questions to Help Desk, MD&DI, 11444 W. Olympic Blvd., Ste. 900, Los Angeles, CA 90064, fax 310/445-4299, e-mail helpdesk@devicelink.com. You can also use our on-line query form.

Although every effort is made to ensure the accuracy of this column, neither the experts nor the editors can guarantee the accuracy of the solutions offered. They also cannot ensure that the proposed answers will work in every situation.

Readers are also encouraged to send comments on the published questions and answers.


Copyright ©1999 Medical Device & Diagnostic Industry

Choosing Conducting Material Interfaces for Seams and Joints

Medical Device & Diagnostic Industry Magazine

An MD&DI January 1999 Column

EMI FIELD NOTES

Selecting proper materials minimizes corrosion-related EMC problems.

Increasingly, the medical electronics industry—along with the rest of the electronics industry—is being forced to use shielding to meet electromagnetic compatibility (EMC) needs. Effective shielding requires enclosure seams and joints that mate conductively along the joining boundaries. This can cause considerable difficulty for mechanical designers, because it requires the surfaces to mate at frequent intervals. Designers must either provide frequent positive contact points or interpose a conductive resilient gasket.

Even after taking such a step, however, designers face one more hurdle: corrosion. When corrosion occurs at the mating interface, conductive contact is lost and, thus, the shield's effectiveness is degraded. Worse yet, the corrosion may not occur during development but rather surface only after the product has been in the field for some time. So, it is wise to consider the issue of materials compatibility during the design phase, when more options are available and sensible design decisions can be made.

Even with effective shielding, it is still important to consider materials compatibility. Photo courtesy of Tecknit (Cranford, NJ).

The purpose of choosing the proper conducting material interfaces for seams and joints in electronic products is to minimize the penetration of the electromagnetic wave either into or out of the product and to minimize the impedance at the seam interface. The failure mode that produces electrical discontinuity at the otherwise conducting material interfaces is galvanically induced corrosion, which results in nonconductive film growth between the conducting interfaces. Between 10 and 20% of EMC design problems are the result of failures caused by corrosion. The potential effects of corrosion on EMC can include one or more of the following:

  • Circuits that fail to work in the early design phases.
  • Features that do not operate as intended.
  • Excess emissions and susceptibility that cause interference with other circuits or other electronic products.
  • Products that fail regulatory tests required by IEC 60601 or FDA.
  • Limits on the life of electronic products and features.
  • Delays in market introduction and subsequent loss of market share.
  • The need for last minute, expensive fixes (field redesign).
  • Possible shutdown of manufacturing operations.

This article discusses four methods for choosing appropriate conducting material interfaces for seams or joints.

THE NATURE OF CORROSION

Galvanic corrosion can occur when two metals are joined in the presence of an electrolyte or humidity. When corrosion occurs, the impedance of the mating surfaces increases. In extreme cases, one of the metals will corrode (e.g., aluminum corrodes when mated to steel, a fact discovered by anyone who places an aluminum camper onto the steel bed of a pickup truck).

To combat such corrosion, some industries (notably the military-supply and automotive industries) have established test methods to determine materials compatibility and have published lists of compatible materials. These methods are summarized below.

Method 1. Military Standard 889B presents a limited choice of metals and alloys that are evaluated as compatible in marine and industrial environments. The difficulties with this standard are that no impedance values are given, the choice of metals or alloys is limited, and no surface treatments are recommended.

Method 2. Military Standard 1250A lists a number of metals and alloys, but recommends no surface preparations. Metals and alloys are considered compatible when the galvanic potential is equal to or less than 0.1 V (a few combinations at 0.25 V are also listed). No impedance values are given, and no environments are cited.
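To make the 0.1-V compatibility rule concrete, the short sketch below compares the galvanic (anodic-index) potentials of two candidate finishes. The index values used are approximate, commonly tabulated figures included only for illustration; they are not taken from MIL-STD 1250A, which should be consulted for authoritative data.

# Minimal sketch of the compatibility rule described above: two finishes are
# treated as galvanically compatible when the difference in their galvanic
# (anodic-index) potentials is <= 0.1 V. Index values below are approximate
# figures for illustration only.
ANODIC_INDEX_V = {
    "gold": 0.00,
    "silver": 0.15,
    "nickel": 0.30,
    "copper": 0.35,
    "tin": 0.65,
    "aluminum": 0.90,
    "zinc": 1.25,
}

def galvanically_compatible(metal_a, metal_b, limit_v=0.1):
    """Return True if the galvanic potential difference is within the limit."""
    delta = abs(ANODIC_INDEX_V[metal_a] - ANODIC_INDEX_V[metal_b])
    return delta <= limit_v

print(galvanically_compatible("nickel", "copper"))    # 0.05-V difference -> True
print(galvanically_compatible("aluminum", "nickel"))  # 0.60-V difference -> False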

Method 3. The Society of Automotive Engineers (SAE) ARP 1481 standard provides a more complete listing of metals, alloys, and surface preparations than MIL-STD 889B or 1250A. It states pressures and impedances but gives no environmental conditions.

Method 4. The present article is based on a fourth method, which provides more definitive data. Three studies have been performed using this method and their results have been reported.1–3 Although all three used the same general method, differences in test fixturing preclude direct comparison. Accordingly, results from all three sources are provided.

Impedance range (see scale below)          Conductive interfaces
Initial    % Change    Final               M1/M2
1          0           1                   Sn/Sn
1          0           1                   Sn/PbSn
2          0           2                   Ni/Ni
2          0           2                   Ni/Zn with yellow chromate
3          0           3                   Ni/Al with yellow chromate
3          0           3                   Ni/Sn
2          83.33       3                   Ni/PbSn
2          83.33       3                   Ni/AlZn
1          238.46      3                   Sn/Zn with yellow chromate
1          238.46      3                   PbSn/Zn with yellow chromate
3          77.27       4                   Ni/Zn with blue chromate
                                           PbSn/AlZn or Zn with blue-bright chromate
1          500.00      4                   PbSn/Passivated 304 stainless steel
                                           PbSn/Al with clear or yellow chromate

Scale (transfer impedance, mΩ): 1 = 0.5–0.8; 2 = 0.8–1.6; 3 = 1.6–2.8; 4 = 2.8–5.0


Table I. Range of transfer impedance, both initial and final, after exposure to environment. Pressure and area were not stated, but the same type of fixture was used throughout these experiments. Exposure to hydrogen sulfide (10 ppb), nitric oxide (200 ppb), and chlorine (10 ppb) corresponds to a 5–8-year lifetime for a commercial computer product.1

Impedance (mΩ)                             Conductive interfaces
Initial    % Change    Final               M1/M2
0.01       0           0.01                Sn/Sn (copper base)
0.01       100         1.01                Sn/Sn (steel base)
1.30       35          2.00                Al/Al (clad)
0.20       1500        3.20                304 stainless steel/304 stainless steel
1.20       442         6.50                Zn/Zn
0.10       2900        3.00                Ni/Ni


Table II. Transfer impedance, both initial and final, after exposure to environment. The environment was 40°C at 95% relative humidity for 1000 hours.2

Impedance (dB at 1 GHz)                    Conductive interfaces
Initial    % Change    Final               M1/M2
68         5.88        64                  Sn/304 stainless steel
78         19.23       63                  Sn/Ni
59         1.69        58                  Sn/304 stainless steel
98         40.81       58                  Sn/Ni


Table III. Transfer impedance, both initial and final, after exposure to the environment. The first two samples were exposed to a Battelle Class 3 flowing mixed gas, whereas the last two samples were exposed to 40°C/90–95% relative humidity for 240 hours.3

Basically, each study addressed the transfer impedance of the mating surfaces before and after the test. The test itself exposes the surfaces to various atmospheric conditions designed to encourage corrosion. Impedances of some mating surfaces increased significantly after the test. Note that some corrosion occurred even when joining similar metals, an indication that galvanic corrosion is not the only factor to consider.

The data from the studies are provided in Tables I–III. These data include the interfacial pairings, initial impedance, percentage change in impedance with aging, and final impedance after aging. These data—based on changes within each study's findings—are consistent for impedance per unit area and for pressure. However, they are not easily translated from author to author because of the differences in transfer impedance fixturing.
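For reference, the percentage-change figures in Tables I–III follow the usual definition of change relative to the initial value; the one-line sketch below, using the Zn/Zn row of Table II, shows the relationship.

# Percent change in transfer impedance, as tabulated above.
initial, final = 1.20, 6.50                       # Zn/Zn row of Table II (milliohms)
percent_change = (final - initial) / initial * 100
print(f"{percent_change:.0f}%")                   # prints 442%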

As presented here, the base material is treated as independent of the conductive interface. This is not always the case, since the base can sometimes become a part of the galvanic corrosion reaction. Base materials are usually selected separately according to factors such as cost as well as structural, mechanical, and jointing properties (e.g., welding). However, under certain conditions, the base may diffuse to the conductive interfaces, causing electrical discontinuity. When considering porous films such as chromates, the base material is especially important, because the electrolyte can penetrate the film and corrode the base material.

All three investigators measured the transfer impedance before and after exposure to corrosive environments. The measurement technique is discussed in IEEE P1302 and by Kunkel.2

CONCLUSION

Although the three tables address different materials and test conditions, they all indicate that compatibility of conductive mating surfaces must be considered during the design phase to prevent initially satisfactory mating from degrading and becoming ineffective. Once the bond of a seam or joint becomes ineffective, the device will no longer perform properly and may require expensive redesign to address EMC issues.

REFERENCES

1. B Archambeault and R Thibeau, "Effects of Corrosion on the Electrical Properties of Conducted Finishes for EMI Shielding" in Proceedings of the IEEE EMC Symposium (Piscataway, NJ: IEEE, 1989), 46–51.

2. G Kunkel, "Gasket and Gasketed Joint Considerations for EMI Shielding," ITEM (1994): 38–48.

3. WD Peregrim, "Comparison of RF Joint Corrosion in Different Environments" in Proceedings of the Third International Symposium on Corrosion and Reliability of Electronic Materials and Devices (Pennington, NJ: The Electrochemical Society, 1994), 134–141.

Richard Haynes is the principal of Richard Haynes Consultants (Princeton, NJ). In addition to consulting on EMI issues, he conducts several seminars focusing on corrosion and reliability issues.


Copyright ©1999 Medical Device & Diagnostic Industry

Conductive Plastics for Medical Applications

Medical Device & Diagnostic Industry Magazine

An MD&DI January 1999 Column



SPECIAL SECTION

Polymer materials compounded with a variety of additives provide design flexibility in protecting against static accumulation, ESD, and EMI/RFI.

The rapid growth of thermoplastics in medical markets is a testament to the suitability of these materials to meet the demands of today's healthcare industry. Thermoplastics can be compounded with a variety of common and specialty fillers, reinforcements, and modifiers to yield specific properties in a wide range of applications.

Among these additives are electrically conductive modifiers that, when compounded with thermoplastics, can provide protection against static accumulation, electrostatic discharge (ESD), and electromagnetic and radio-frequency interference (EMI/RFI). Although conductive thermoplastics are traditionally found in electronic, business-machine, computer, and industrial applications, the medical community is realizing enhanced performance and value in using these specialty materials for everything from tools to trays.

STATIC AND EMI/RFI

The effects caused by static and EMI/RFI are as familiar as sparks jumping from fingertip to doorknob, static cling in fabrics and films, and electronic noise in communications networks. Static accumulation and discharge and EMI/RFI can be either man-made or naturally occurring phenomena and may not necessarily pose a problem.

However, when present in, on, or near electronic circuitry, moving materials, or flammable environments, they create hazards that must be controlled or eliminated. ESD can damage or destroy sensitive electronic components, erase or alter magnetic media, and initiate explosions or fires in flammable environments. Accumulated static charge can halt mechanical processes by clogging the flow of materials. Static-attracted contaminants can affect the purity of pharmaceuticals.

A conductive thermoplastic compound from RTP Co. (Winona, MN) is used in the main housing, battery door, and end cap of this remote heart-monitoring device manufactured by GE Marquette Medical Systems (Milwaukee, WI). The transmitter gives patients freedom of movement while electronically linking them to a remote computerized output system.

Electromagnetic and radio-frequency waves radiate from computer circuits, radio transmitters (including cellular phones), fluorescent lamps, electric motors, lightning, and many other sources. They become undesirable when they interfere with the operation of electronic devices. Consequences can include corruption of data in information storage and retrieval systems, inaccuracy in diagnostic equipment, and interruption of medical devices such as pacemakers.

MATERIAL SOLUTIONS FOR STATIC PROBLEMS

Static accumulation and electrostatic discharge are controlled or eliminated by adjusting electrical characteristics of at-risk materials or their immediate environment. Conductive thermoplastic compounds prevent static accumulation from reaching dangerous levels by reducing a material's electrical resistance. This allows static to dissipate slowly and continuously rather than accumulate and discharge rapidly—perhaps as a spark.

MATERIAL SOLUTIONS FOR EMI/RFI

Shielding of electronic circuitry controls electromagnetic or radio-frequency interference, thus ensuring operational integrity and electromagnetic compatibility (EMC) with existing standards. Shielding preserves operational integrity by preventing electronic noise from penetrating to susceptible circuitry, and provides EMC by preventing emissions from escaping to adjacent susceptible equipment.

Figure 1. EMI/RFI is reflected off the source side of a shield or is rereflected off a second shield surface.

Conductive thermoplastic compounds provide this shielding by absorbing electromagnetic energy and converting it to electrical or thermal energy. These compounds also function by reflecting electromagnetic energy from the source side of the shield and also by rereflecting it from the second surface of the shield (Figure 1).

STRUCTURE OF CONDUCTIVE THERMOPLASTIC COMPOUNDS

A conductive thermoplastic compound is a resin that has been modified with electrically conductive additives, including carbon-based powder and fibers, metal powder and fibers, and metal-coated fibers of carbon or glass. Varying the percentage or type of conductive additive used in the compound permits one to control the degree of electrical resistivity (Figure 2).

Figure 2. Additive concentration effect on conductivity in a typical thermoplastic (nylon 6/6).

Recently, unique conductive additives such as metal oxide–coated substrates, intrinsically conductive polymers (ICPs), and inherently dissipative polymers (IDPs) have found commercial use in conductive thermoplastic compounds. Metal oxide–coated substrates were initially introduced as colorable substitutes for carbon black powder–filled plastics. When compounded into thermoplastics, these additives are able to provide a wide range of conductive properties and colors. ICPs are polymers with strong electrical conductivity. The newest type of additive, they are expected to play significant roles in conductive applications from static protection to EMI shielding. IDPs exhibit weaker electrical properties than ICPs; when compounded with other resins, they can impart antistatic properties to molded articles. IDP-containing compounds generally have lower ionic- and metallic-contaminant levels than conductive compounds containing traditional additives and are preferred for static-protective packaging of sensitive products.

SELECTION OF CONDUCTIVE ADDITIVES

Conductive thermoplastics are generally designed to meet physical performance criteria in addition to static or EMI/RFI control. Often, these materials must perform some structural function, meet flammability or temperature standards, or provide a wear- or chemical-resistant surface. In addition, conductive compounds may need to pass purity standards prior to acceptance in medical applications because of concerns with outgassing of volatile substances and contact with ionic or metallic contaminants.

The conductive additive for any application is chosen based on performance criteria of the molded article. If conductive performance is the only specification, almost any conductive additive can be used, and cost will ultimately control the selection. When some of these other criteria are included, the selection is determined by whether the cumulative effects of various additives are acceptable for the application. The specialty compounder should have qualified and experienced engineering personnel available to aid in the additive selection process.

MECHANICS OF CONDUCTIVITY

The mechanism of conductivity in plastics is similar to that of most other materials. Electrons travel from point to point when under stress, following the path of least resistance. Most plastic materials are insulative: that is, their resistance to electron passage is extremely high (generally >10^15).

Conductive modifiers with low resistance can be melt blended with plastics—in a process called extrusion compounding—to alter the polymers' inherent resistance. At a threshold concentration unique to each conductive modifier and resin combination, the resistance through the plastic mass is lowered enough to allow electron movement. Speed of electron movement depends on modifier concentration—in other words, on the separation between the modifier particles. Increasing modifier content reduces interparticle separation distance, and, at a critical distance known as the percolation point, resistance decreases dramatically and electrons move rapidly.
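The percolation behavior described above can be illustrated with a simple power-law model, as in the sketch below. The threshold volume fraction, exponent, and conductivities shown are placeholder assumptions, not measured values for any particular resin or additive.

# Illustrative percolation model: below a threshold loading the compound stays
# essentially insulating; above it, conductivity rises sharply.
def compound_conductivity(phi, phi_c=0.15, t=2.0, sigma_filler=1.0e3, sigma_matrix=1.0e-15):
    """Rough compound conductivity (S/cm) versus filler volume fraction phi."""
    if phi <= phi_c:
        return sigma_matrix
    return sigma_filler * (phi - phi_c) ** t

for phi in (0.05, 0.14, 0.16, 0.20, 0.30):
    print(f"volume fraction {phi:.2f}: ~{compound_conductivity(phi):.2e} S/cm")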

Conductive additives and conductive levels are available for the following resins (common abbreviation); see notes a–d:

Polypropylene (PP)
Nylon 6/6 (PA)
Nylon 6 (PA)
Nylon 11 (PA)
Nylon 6/12 (PA)
Nylon 12 (PA)
Nylon 6/6, impact modified (PA)
Polycarbonate (PC)
Polystyrene (PS)
Acrylonitrile butadiene styrene (ABS)
High-density polyethylene (HDPE)
Low-density polyethylene (LDPE)
Acetal (POM)
Polysulfone (PSO)
Polybutylene terephthalate (PBT)
Polyethylene terephthalate (PET)
Polyurethane thermoplastic elastomer (TPUR)
Polyphenylene sulfide (PPS)
Polyethersulfone (PES)
Polyester thermoplastic elastomer (TPE)
Polyphenylene oxide, modified (PPO)
Acrylic (PMMA)
Polyetherimide (PEI)
Polyetheretherketone (PEEK)
Polyurethane, rigid (PUR)
Polycarbonate/ABS alloy (PC/ABS)
Styrenic thermoplastic elastomer (TES)
Olefinic thermoplastic elastomer (TEO)
Polyvinylidene fluoride (PVDF)
Liquid crystal polymer (LCP)
Polyphthalamide (PPA)
Polyphthalamide, hot-water moldable (PPA)
Polysulfone/PC alloy (PSO/PC)
Aliphatic polyketone (PK)
Syndiotactic polystyrene (SPS)

a Includes both permanent and nonpermanent (migratory, hydrophilic) additives.

b 10^10–10^12 ohm/square. c 10^6–10^12 ohm/square. d Less than 10^6 ohm/square.


Table I. Thermoplastics that can be used in conductive compounds.

THERMOPLASTICS IN COMMON CONDUCTIVE COMPOUNDS

Nearly every type of polymer can be compounded with conductive fillers (Table I). The following materials are some of the more common medical polymers that can be rendered electrically conductive.

Polyetheretherketone (PEEK). PEEK is sterilizable via autoclave, EtO gas, or high-energy radiation and offers good chemical resistance. Common uses include catheters, disposable surgical instruments, and sterilization trays.

Polyurethane. Available in a wide range of hardnesses, polyurethane is a high-clarity polymer that can be sterilized using dry heat, EtO, or radiation. Medical applications include tubing, catheters, shunts, connectors and fittings, pacemaker leads, tensioning ligatures, wound dressings, and transdermal drug-delivery patches.

Polycarbonate (and Polycarbonate Blends). Capable of being sterilized by all common methods, polycarbonate has especially good toughness and impact resistance. Equipment housings and reservoirs are among the most common medical components made from the material.

Polysulfone (PSO). Possessing excellent thermal stability and toughness, polysulfone is resistant to a variety of chemicals and can be supplied in transparent grades. The polymer can be sterilized using autoclave, EtO, or radiation. Applications include instrument handles and holders, microfiltration devices for immunoassays, reusable syringe injectors, respirators, nebulizers, prosthesis packaging, sterilizer trays, and dental tools.

Liquid-Crystal Polymer. High strength and stiffness are among the notable physical properties of liquid-crystal polymers. These materials can be sterilized by all common methods and are used in products such as dental tools, surgical instruments, and sterilizable trays.

FEATURES OF CONDUCTIVE THERMOPLASTICS

Conductive thermoplastics offer a number of advantages compared with other materials, such as metals, for ESD protection or EMI/RFI shielding (Figure 3). Finished parts are lighter in weight, easier to handle, and less costly to ship. Fabrication of finished parts is typically easier and less expensive, and all common thermoplastic processing methods can be employed. Conductive plastic parts are less subject to denting, chipping, and scratching and often demonstrate more-consistent electrical performance than painted metal parts.

Figure 3. Conductivity values of thermoplastic compounds fall between those of unmodified plastics and metals.

A common misperception is that conductive plastics are always colored black; this is not the case. In fact, most conductive thermoplastics can be made in a wide variety of colors. With a precolored conductive thermoplastic, the color is inherent in the material rather than added as part of a secondary operation.

In addition, specially developed additives offer both conductivity and matched substrate color when electrostatic painting is required for critical color matching of devices assembled from dissimilar materials. Matching the color of the conductive compound to the paint makes scratches, chips, and abrasions less noticeable and maintains a homogeneous surface appearance. These conductive compounds significantly improve paint transfer efficiency and eliminate the need for conductive primers, leading to dramatic reductions of volatile-organic-compound (VOC) emissions. Electrostatic painting also significantly reduces overspray, saving cleanup and disposal costs.

Neither is opacity the only option, as a number of conductive thermoplastic compounds retain transparency while exhibiting static-control properties. Particular static-control additives can match refractive indices of some thermoplastic polymers, rendering clear or translucent parts. Contact clarity—the ability to read objects through a directly contacting plastic material—is a desirable property that can be achieved in packaging applications, enabling bar code imprints or laser markings to be accurately detected and read by automatic equipment. Contents of packages can also be identified by color coding, without violating the package seal.

For environments in which ionic contamination and ESD can cause millions of dollars in damage to electronic components, pretested thermoplastics and conductive additives can be compounded to meet tight tolerances for a wide range of impurities. Such high-purity formulations adapt well to today's ultrasensitive electronics that often feature high device speeds, small geometries, and dense storage capacities.

TESTING FOR CONDUCTIVITY

Three major characteristics are used to evaluate the electrostatic properties of ESD compounds. These are resistivity, both volume and surface; electrical resistance; and static decay rate. EMI/RFI shielding materials are additionally evaluated by shielding-effectiveness testing.

The most common test method to determine the conductivity of plastics has been ASTM D 257, which measures both volume and surface resistivity. Since electrostatic charge is a surface phenomenon, surface resistivity tends to be the more meaningful of the two. Surface resistivity is the measured resistance between two electrodes forming opposite sides of a square, and is reported as ohm/square. Volume resistivity (also referred to as bulk resistivity) is measured resistance through the sample mass. It is an indicator of how well a conductive additive is dispersed, and is expressed as ohm-centimeter.
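The two resistivity figures can be related to a raw resistance reading as in the sketch below, which assumes a simple rectangular electrode geometry for the surface measurement and a through-thickness measurement for the volume figure; ASTM D 257 fixtures define the exact geometry and guarding, so this is only an illustration.

def surface_resistivity(resistance_ohm, gap_cm, electrode_width_cm):
    """Ohm/square: resistance between parallel electrodes, scaled by width/gap."""
    return resistance_ohm * electrode_width_cm / gap_cm

def volume_resistivity(resistance_ohm, area_cm2, thickness_cm):
    """Ohm-cm: through-thickness resistance times electrode area over thickness."""
    return resistance_ohm * area_cm2 / thickness_cm

print(surface_resistivity(5.0e8, gap_cm=1.0, electrode_width_cm=10.0))  # 5e9 ohm/square
print(volume_resistivity(2.0e7, area_cm2=20.0, thickness_cm=0.3))       # ~1.3e9 ohm-cm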

Electrical resistance is defined as opposition to the flow of electricity. The EOS/ESD Association Draft Standard 11.11 measures surface resistance as opposed to surface resistivity.

Static decay is measured with Federal Test Method 101, Method 4046. This test measures how quickly a charge is dissipated from a material under controlled conditions, which is one parameter of actual electrostatic performance.

Shielding effectiveness is evaluated under ASTM D 4935-89, in which coaxial transmission-line methodology analyzes planar specimens under far-field conditions over a frequency range from 30 MHz to 1.5 GHz. Shielding effectiveness is represented as the ratio of power received with and without a candidate material present and is expressed in decibels of attenuation.
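Expressed as code, the decibel figure is simply ten times the base-10 logarithm of the power ratio described above; the power levels in the sketch below are hypothetical.

import math

def shielding_effectiveness_db(power_without_shield, power_with_shield):
    """Attenuation in dB: ratio of power received without and with the specimen."""
    return 10.0 * math.log10(power_without_shield / power_with_shield)

# A specimen that passes one-millionth of the reference power provides 60 dB.
print(shielding_effectiveness_db(1.0e-3, 1.0e-9))   # 60.0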

APPLICATIONS

Suitable for a variety of applications, conductive thermoplastic compounds can satisfy the medical industry's need for miniaturized, high-strength parts. Most can withstand state-of-the-art sterilization procedures, including autoclave, and many are certified for purity and pretested to minimize ionic contamination. Medical applications under evaluation or currently using conductive thermoplastics include:

  • Bodies for asthma inhalers. Because the proper dose of asthma medications is critical to relief, any static "capture" of the fine-particulate drugs can affect recovery from a spasm.
  • Airway or breathing tubes and structures. A flow of gases creates triboelectric charges, which must discharge or decay. A buildup of such charges could cause an explosion in a high-oxygen atmosphere.
  • Antistatic surfaces, containers, and packaging to eliminate dust attraction in pharmaceutical manufacturing.
  • ESD housings to provide Faraday cage isolation for electronic components in monitors and diagnostic equipment.
  • EMI housings to shield against interference from and into electronics.
  • ECG electrodes manufactured from highly conductive materials. These are x-ray transparent and can reduce costs compared with metal components.
  • High-thermal-transfer and microwave-absorbing materials used in warming fluids.

CONCLUSION

Conductive thermoplastics offer medical product designers unrivaled freedom in the control of ESD and EMI/RFI. These compounds do not generate high levels of static charge, can dissipate charges before dangerous levels accumulate, and can provide electrostatic and EMI/RFI shielding. Properly formulated, the materials can provide desired conductive characteristics while maintaining other required physical and mechanical properties. Already used in a varied range of applications—from strong, thin-walled sterilizable components to flame-retardant, precolored parts that can be electrostatically painted—conductive compounds are certain to become even more common as electronic devices proliferate and the technology evolves to meet new cost or performance imperatives.

BIBLIOGRAPHY

"Electromagnetic Shielding—A Material Perspective," Innovation 128 Tech Trends (Innovation 128, January 1996).

Huang, JC. "EMI Shielding Plastics: A Review." Advances in Polymer Technology 14, no. 2 (1995): 137–150.

Weber, ME. "The Processing and Properties of Electrically Conductive Fiber Composites." PhD diss., McGill University, 1995.

Larry Rupprecht is a senior product development engineer and manager of the Conductive Materials Group at RTP Company (Winona, MN). Connie Hawkinson is RTP's marketing communications manager.


Copyright ©1999 Medical Device & Diagnostic Industry

Process Considerations in the Extrusion of Microbore Tubing

Medical Device & Diagnostic Industry Magazine

An MD&DI January 1999 Column

COVER STORY

Successful processing of small-diameter medical tubing requires careful control over a multitude of variables.

With the evolution of medical science, the demand for smaller-diameter or microbore tubing has increased significantly. More diagnostic as well as therapeutic procedures are being performed using microbore tubing, some of which features complicated lumen geometries. The structural performance and tolerance requirements of many of these components can be difficult to achieve by means of conventional extrusion processes. Polymer extrusion is a highly involved, multivariable process, in which process variables often interact with one another in complex ways.

A statistical process capability index is a measure relating the actual performance of a process to its specified performance. The process is a combination of the manufacturing plant or equipment, the process or method itself, the people, the materials, and the environment. The minimum requirement is that three process standard deviations (3σ) on each side of the process mean be contained within the specification limits. This will ensure that 99.7% of the process output will be within the tolerances.
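A minimal sketch of that check, using the standard capability indices Cp and Cpk, is shown below; the specification limits and readings are hypothetical.

import statistics

def capability(samples, lsl, usl):
    """Cp and Cpk: capability is marginal at 1.0, i.e., mean +/- 3 sigma just fits."""
    mean = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical tubing OD readings (in.) against a 0.0250 +/- 0.0005 in. specification.
readings = [0.0249, 0.0251, 0.0250, 0.0252, 0.0248, 0.0250, 0.0251, 0.0249]
cp, cpk = capability(readings, lsl=0.0245, usl=0.0255)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")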

Many processes are found to be out of statistical control when closely examined using established control chart techniques. The root causes may be many, having different origins. Out-of-control conditions are often caused by an excessive number of adjustments made to the process. This behavior, commonly known as hunting, causes an overall increase in variability from the process (Figure 1). If the process is initially set at the target value µa and an adjustment is made on the basis of a single test result A, then the mean of the process will be adjusted to µb. Subsequently, a single test result at B will result in a second adjustment of the process mean to µc. If this behavior continues, the variability or spread of the results from the process will be greatly increased, with a detrimental effect on the ability of the process to meet the specified requirements.

Figure 1. Increase in process variability caused by frequent intentional or nonintentional process adjustments.
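A small simulation of the hunting behavior shown in Figure 1 appears below, with arbitrary numbers: fully correcting the process mean after every single reading inflates the spread of the output by roughly 40% compared with leaving a stable, on-target process alone.

import random
import statistics

random.seed(1)
TARGET, SIGMA, N = 0.0, 1.0, 10_000

# Stable process left alone.
untouched = [random.gauss(TARGET, SIGMA) for _ in range(N)]

# Same process, but the mean is "corrected" by the full apparent error after
# every individual reading (the hunting behavior of Figure 1).
adjusted, mean = [], TARGET
for _ in range(N):
    x = random.gauss(mean, SIGMA)
    adjusted.append(x)
    mean -= (x - TARGET)

print(f"sigma, untouched:     {statistics.stdev(untouched):.2f}")   # ~1.0
print(f"sigma, over-adjusted: {statistics.stdev(adjusted):.2f}")    # ~1.4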

When a process is found to be out of control, the first action must be to investigate the assignable cause or special causes of variability. This may require, in some cases, the charting of process parameters other than the product parameters that appear in the specification. For example, it may be that the tubing's dimensions vary because of pressure variations in the die region caused by variations in the polymer's viscosity. A control chart of the die pressure, with recorded changes in the process temperature, may be the first step in breaking down the complexities of the relationships involved. It is important to ensure that all adjustments to a process are recorded, that the relevant data are charted, and that the instruments collecting the data are accurate and calibrated. Bad data are worse than no data!

Extruding small-diameter tubing requires extremely precise process control.

There are many potential assignable causes that can be responsible for a polymer extrusion process being out of control or incapable of producing products to the required dimensional specifications. This article addresses those process variables and their interactions that can adversely affect the variability and quality of a polymer extrusion process as well as the end product that it produces. Opportunities to improve overall process control through the optimization of extruder screw and tooling geometry, process sensors, instrumentation, and control tuning are also discussed.

MATERIALS

Selection and Characterization. Proper selection of the polymer or polymers to be used is imperative. An inappropriate grade of material (in molecular weight, molecular architecture, density, etc.) for an application can result in structural, dimensional, and/or cosmetic deficiencies in the finished part. One should look carefully at the intended use(s) of the finished part and then apply sound engineering skills in selecting those polymers that can meet the targeted performance requirements and be processed with the available equipment. The polymers should also be cost-effective.

Appropriate characterization methods should be applied to ensure uniformity in the molecular-weight distribution (MWD) of the material. Melt-flow-index (MFI) testing may be appropriate to determine or verify the molecular weight (MW) of the polymer; however, it is not the appropriate method for determining the MWD. Analytical gel filtration—that is, gel permeation chromatography (GPC)—serves as a reliable method for determining MWD.

Variations in the geometry of the distribution curve can be responsible for variations in a material's viscosity. A given set of extrusion process parameters, appropriate for one batch of material, may not be optimum for another batch. Variations in the MWD of a polymer may be of no consequence to the manufacturer of garden hoses but can create significant process problems for the manufacturer of microbore tubing. Appropriately administered test methods such as differential scanning calorimetry (DSC) to determine the MFI, GPC, and crystalline melting point (Tm) are extremely helpful not only in troubleshooting process variation caused by changes in the molecular architecture and MW/MWD but also in preventing unwanted process variation.

Formulation and Preparation. Proper compounding techniques are essential for ensuring the polymer's structural integrity. An improper temperature profile, screw geometry, and/or compounding method can easily be the cause of loss in MW from either mechanical degradation (excessive shear) or thermooxidative degradation, which can result from too much shear, an improper temperature profile, or a combination of the two. It is equally important to know that certain mixing or compounding techniques provide for better dispersive or distributive mixing capabilities than others. Not enough shear can result in an undesirable particle-size distribution, whereas too much shear can degrade the carrier resin. Inadequate distributive mixing capabilities can lead to a nonhomogeneous distribution of the additives in the base resin.

Materials formulation is a science in itself, quite often underrated and not seen as an important step in an overall extrusion process. In many instances, the extrusion hardware is unjustifiably blamed for being the cause of either unwanted process variation or structural or cosmetic deficiencies in the finished part. Some time spent up front in selecting the appropriate mixing/compounding equipment and process parameters is usually very cost-effective, since troubleshooting an extrusion process to determine the origin of problems caused by inadequately prepared materials can be a very lengthy and expensive undertaking.

Figure 2. Polymers manufactured by means of a condensation polymerization process must be appropriately dried to avoid the formation of volatile by-product molecules such as water, acetic acid, or hydrochloric acid.

Figure 3. An excessive amount of remaining volatiles in the resin of polymers manufactured by condensation polymerization can result in a loss of molecular weight from chain scission caused by hydrolysis.

Some materials must be appropriately dried prior to processing. These are typically polymers manufactured by means of a condensation polymerization process (e.g., PET, polyamide), in which the reaction results in the formation of a small, usually volatile by-product molecule such as water, acetic acid, or hydrochloric acid (Figure 2). An excessive amount of remaining volatiles in the resin can result in a loss of MW caused by chain scission as a result of hydrolysis (Figure 3). This phenomenon may also adversely affect the cosmetic properties of the finished part.

It is strongly recommended that the resin manufacturer's recommended drying parameters be followed closely in order to prevent this type of polymer degradation. It is equally important to maintain the dryness of the material afterward, as some polymers are quite hygroscopic in nature and will easily and rapidly absorb moisture from the ambient atmosphere.

Some of today's high-performance engineering resins (some of them quite expensive) that have come about as a result of innovative polymer chemistry can offer the end-user a variety of outstanding properties only after the material has been properly formulated and processed.

EXTRUSION HARDWARE

Extruder Screw and Tooling Design. The extruder size as well as the L/D (length/diameter) and compression ratios of the extruder screw must be optimized for successful extrusion of microbore tubing. Typical extruder output ranges from 0.5 to 1.5 lb/hr. It is important to keep the residence time of the resin inside the extruder within the appropriate limits. Excessive residence time (thermal history) in the extruder barrel can cause some polymers to quickly degrade. Care must be taken in extruder selection and screw design. Extruder screw design must take into consideration the bulk density and rheological properties of the polymer, as well as the required die pressure and polymer melt output. Typical production extruders used in the manufacture of microbore tubing range in diameter from 0.5 to 1.0 in., with L/D ratios from 15 to 24.

The extruder screw design should incorporate appropriate mixing capabilities, if needed. The primary objectives are to deliver a polymer melt of a homogeneous viscosity to the die region at a stable and uniform pressure and to ensure that additives, if required, are properly distributed and of uniform size. The shear rates imposed on the polymer must be appropriate, since excessive shear can lead to polymer degradation, whereas insufficient shear/compression can reduce the melting capacity of the screw and result in inappropriate dispersive mixing.

Figure 4. Leakage-flow mechanism in a single-screw extruder, in which pressure flow acts across the screw-flight gap.

Screw wear on both the major and minor diameters, as well as the inside diameter of the extruder barrel, should be periodically measured. Excessive clearances between the screw flight and barrel will adversely affect the heat-transfer characteristics between the barrel and the polymer melt and will increase leakage flow, which can potentially increase the thermal history of the polymer and reduce extruder output (Figure 4). Extruder output stability will also be adversely affected. Excessive wear of the minor diameter will change the plasticating and melt-conveying characteristics of the screw.

Figure 5. Breaker plate and screens for pressure control.

Tooling used (e.g., tips, dies, breaker plate) in the manufacture of microbore tubing should be designed and manufactured to ensure balanced flow. The breaker-plate design should ensure that potential dead spots in front of the plate are eliminated (Figure 5). Screen-pack choice should be made carefully. A screen pack that is too dense may impose excessive shear onto the polymer, resulting in mechanical degradation (chain scission). A too-dense screen pack can also cause excessive internal barrel pressure and reduce the extruder's output. A screen pack that is not dense enough may result in inadequate filtering or back pressure. A certain degree of back pressure is desirable, as it provides for additional polymer mixing.

Figure 6. Streamlined profile-extrusion die.

Tooling geometry should be designed with the polymer's rheological properties in mind. Excessive compression—that is, too long a land length—may result in too much imparted shear. A land length that is too short may not provide for sufficient molecular alignment and can cause excessive extrudate swell (Figure 6). Impregnation of the tooling with selected materials (nickel-Teflon, diamondlike carbon, etc.) has proven to be beneficial in reducing "slip-stick" in the die region. Ultrasonic energy applied to the die can reduce extrudate swell by facilitating molecular alignment.

Figure 7. Draw-down ratios should be calculated carefully.

Figure 8. Careful calculation of tip and die strain will help ensure appropriate molecular alignment and dimensional stability.

Draw-down ratios—typically between 2 and 10 (Figure 7)—and tip and die strain (Figure 8) should be carefully calculated. Any deficiencies in these variables can result in inappropriate molecular alignment and ultimate dimensional instability in the product.
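As a sketch of those calculations, the draw-down ratio (DDR) and draw ratio balance (DRB) are commonly computed from the die and tip dimensions and the finished tubing dimensions as shown below. The dimensions used are hypothetical, and the exact definitions used by a given tooling supplier should be confirmed.

def draw_down_ratio(die_id, tip_od, tube_od, tube_id):
    """Annular melt area at the die divided by the cross-sectional area of the tube."""
    return (die_id**2 - tip_od**2) / (tube_od**2 - tube_id**2)

def draw_ratio_balance(die_id, tip_od, tube_od, tube_id):
    """Close to 1.0 when the draw is balanced between OD and ID."""
    return (die_id / tip_od) / (tube_od / tube_id)

# Hypothetical microbore example (inches): 0.080/0.040 tooling drawn down to
# 0.025-in.-OD by 0.012-in.-ID tubing.
print(f"DDR = {draw_down_ratio(0.080, 0.040, 0.025, 0.012):.1f}")     # ~10
print(f"DRB = {draw_ratio_balance(0.080, 0.040, 0.025, 0.012):.2f}")  # ~0.96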

Visual polymer weld or knit lines resulting from the polymer's inability to weld or knit together after being divided in the flow divider or spider (in-line die) should and can be eliminated through improved tool design. Manufacturers extruding chlorinated or fluorinated polymers (e.g., PVC, ETFE, PVDF) should make sure that the wetted parts of their tooling are manufactured from the appropriate base metal in order to withstand the chemical side effects that these polymers produce during processing.

PRODUCT COOLING

Upon the exit of the extrudate from the die, the next step in extrusion typically involves a cooling process. Even cooling at the appropriate rate is paramount to ensuring dimensional stability, concentricity requirements, and proper rate of crystallization. A cascading flow of water at the entrance of the water bath will result in uneven cooling around the circumference of the tubing, which can be responsible for short-term dimensional variations and unwanted ovality. Turbulence in the water reservoir should be eliminated, since this can be a source of unwanted dimensional variation.

Proper control of the temperature of the cooling medium (air or water) is also critical. The polymer's morphology can be greatly influenced—positively or negatively—by the cooling temperature and cooling rate. For some materials, obtaining good dimensional concentricity control is only possible with air cooling. Surface imperfections are often caused by an improper water temperature. Achieving correct alignment of the microbore tubing as it exits the die and goes through the water trough or air tunnel and takeoff system is a must. Any misalignments will adversely affect the product's final geometry. Laser alignment is an inexpensive method to ensure that all components share the same centerline.

VACUUM/VACUUM-ASSISTED SIZING

Although vacuum/vacuum-assisted sizing is not commonly performed in the manufacture of microbore tubing, some manufacturers rely on this process to give them the required dimensional stability and control. Servo control of the sizing tank's vacuum is a must, especially if a feedback loop has been established between the laser scanner and the vacuum pump. Conventional pump drives (other than ac vector drives) may not have the required responsiveness, and tend to over- or undershoot the desired set point. For this technology to be of real benefit, optimum control algorithms must be in place. The algorithms must take into consideration parameters such as the line speed, time delay between the extrudate's exit from the die and measurement by the laser scanner, viscoelastic behavior of the material, and correction rate factor. Any deficiencies in either the hardware or control schemes may cause the process to make improper changes at the wrong time, thus making things even worse than they were initially.

PROCESS CONTROL

Process Sensors. As stated earlier, process control can only be as good as the feedback data (signals) received from the process. If the received data are not representative of the actual process conditions, erroneous process adjustments will be made, usually resulting in the product not meeting performance specifications. Thermocouples (T/Cs) and/or RTDs should be of the proper design and regularly calibrated. Improper location or mounting of these devices will result in an output that may not reflect actual process conditions.

A polymer melt with a uniform viscosity at a uniform and stable pressure in the die region is a must in order to achieve good dimensional control. T/Cs and RTDs are responsible for providing temperature feedback to the individual zone controllers or PLC (programmable logic controller), so that these in turn can apply the proper algorithm and subsequently modulate the heating or cooling medium. Corroded T/Cs or improperly mounted, uncalibrated T/Cs and RTDs will provide wrong information to the controllers, causing them to output erroneous signals to the heating or cooling hardware.

Pressure transducers come in a variety of designs. Several methods are employed to translate the mechanical deflection of the transducer diaphragm to the strain gauge. The use of push rods or of capillaries filled with either mercury or a sodium-potassium mixture is the most common approach. A relatively new transducer design comprises a strain gauge molecularly bonded within a sapphire wafer that is directly exposed to the polymer melt.

Response times and full-range accuracy are the important factors in transducer selection. Typical response times for various transducer types are as follows: 50–100 milliseconds for capillary style, 10–20 milliseconds for push-rod style, and 100–500 microseconds for sapphire wafers. If pressure transducer outputs are to be used in closed-loop process control (feed forward/feedback), it is imperative not only that the most accurate and responsive transducers are used, but that they are also properly sized (operating range) for the process. A 0–10,000-psi transducer used in a process location where the operating pressure range is 0–2500 psi will not provide the optimum resolution needed for proper process control.
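The resolution penalty of an oversized transducer can be seen with simple arithmetic; the sketch below assumes the transducer output is digitized by a 12-bit converter, an assumption made only for illustration.

ADC_COUNTS = 2**12   # 4096 steps across the transducer's full-scale output (assumed)

for full_scale_psi in (2_500, 10_000):
    psi_per_count = full_scale_psi / ADC_COUNTS
    counts_in_use = 2_500 / psi_per_count    # counts available over a 0-2500-psi process
    print(f"{full_scale_psi:>6}-psi transducer: {psi_per_count:.2f} psi per count, "
          f"{counts_in_use:.0f} counts over the working range")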

Pressure transducers, like T/Cs, need to be calibrated regularly at operating temperature. They should be periodically removed from their location so that degraded material can be cleaned from the mounting well. Extreme care should be taken when installing and removing transducers, and the manufacturer's instructions should always be followed.

Noncontact Dimensional Gauging. Statistical rigor must be applied when evaluating the performance of a noncontact laser gauge. Quite often, gauging systems that do not possess the resolution, repeatability, reproducibility, and thermal stability needed to perform the required tasks are placed in-line with an extrusion process. Often, the laser gauge is an integral part of a process-control loop used to modulate factors such as vacuum, lumen air, and takeoff speed, so as to maintain dimensional stability. Important criteria in selecting a noncontact gauging system include the following:

  • Appropriate total combined error associated with repeatability, reproducibility, and thermal drift.
  • Amount of internal averaging. (Less is better.)
  • Resolution. (If one wants to control to the fourth decimal—0.0001 in.—the resolution needs to be two digits better.)
  • Long-term stability. (Does the laser's output vary as a function of time—e.g., 24 hours? Tests to evaluate some of these parameters should be done with high-precision standards such as pin gauges.)
  • Appropriate available signal outputs for control interface as well as data acquisition and statistical process control (SPC).

Controllers. Most small extruders used in the manufacture of microbore tubing incorporate single-loop controllers. Today's single-loop controller technology is excellent: most current controllers feature PID (proportional-integral-derivative) control and have self-tuning capabilities, and some are capable of "adaptive control" as well. Several manufacturers market controllers that not only provide for PID control but also make use of fuzzy logic (a form of artificial intelligence) algorithms. As always, the controller's output is only as good as the quality of the input signal and the applied control algorithm. Quite often, considerable process improvement can be realized by optimizing the feedback/feed-forward control algorithms.

Control Tuning. Long-term process variations (in pressure or dimensions) often come as a result of improperly tuned controllers or hardware that is incapable of responding to the controller's output signals. Finding the optimum P, PI, or PID algorithms for a temperature or pressure controller is a lengthy exercise that requires considerable expertise. It is imperative to allow the process to come to steady-state conditions after an algorithm change. This can sometimes take 20 minutes or more.
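The sketch below shows the structure being tuned: a minimal discrete PID loop with simple anti-windup, driving a toy first-order heater-zone model. The gains, plant constants, and temperatures are hypothetical and are not tuning recommendations.

def pid(error, state, kp, ki, kd, dt, out_lo=0.0, out_hi=100.0):
    """One PID update with clamped output and conditional (anti-windup) integration."""
    derivative = (error - state["last_error"]) / dt
    state["last_error"] = error
    trial = kp * error + ki * (state["integral"] + error * dt) + kd * derivative
    if out_lo < trial < out_hi:          # integrate only while the output is unsaturated
        state["integral"] += error * dt
    output = kp * error + ki * state["integral"] + kd * derivative
    return max(out_lo, min(out_hi, output))

setpoint, temp, dt = 200.0, 25.0, 1.0    # deg C, hypothetical barrel zone
state = {"integral": 0.0, "last_error": setpoint - temp}
for _ in range(300):                     # five minutes of one-second steps
    heater = pid(setpoint - temp, state, kp=1.0, ki=0.1, kd=0.5, dt=dt)
    temp += dt / 20.0 * (2.0 * heater + 25.0 - temp)   # toy first-order heat balance
print(f"zone temperature after 5 minutes: {temp:.1f} C")   # settles near the setpoint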

Several manufacturers make self-tuning temperature/pressure controllers that work quite well. It should be noted, however, that an optimum set of algorithms for one particular material or process may be far from optimum under different conditions. Some processors find it of benefit to first develop optimized control algorithms for each of their processes, store these on disk or on paper, and then download them to individual controllers either by hand or through the appropriate computer interface. All the resources spent on hardware, screw and downstream equipment design, and materials optimization will be of no benefit unless the proper process control methods are in place.

Process-Variable Interactions. Polymer extrusion involves a significant number of variables; it is mathematically complex and often not fully understood. Some of the variables interact in a nonlinear fashion, which makes controlling them a challenge. Several important variables and their interactions either with another variable or with the product's physical properties or quality are listed in Table I.

Variation in...                    Will affect...
Melt temperature                   Polymer viscosity
Polymer viscosity                  Forming pressure within the die
Die pressure variation             Dimensional stability
Polymer viscosity                  Extrudate draw-down
Water-bath temperature             Extrudate draw-down/polymer morphology
Cooling rate                       Rate of crystallization
Ambient air temperature            Die temperature/polymer viscosity/extrudate draw-down/dimension(s)
Extruder drive regulation          Die pressure/dimension(s)
Molecular weight                   Polymer viscosity/die pressure/dimension(s)


Table I. Polymer extrusion variables.

Advanced Multivariable Control. PID control is effective for single-variable control but cannot control more than one variable at a time, is incapable of nonlinear control, and is not adaptive to changing process conditions. For applications that require multivariable control, fuzzy logic is becoming increasingly popular. Fuzzy logic owes its flexibility and versatility to the fact that it does not rely on Boolean logic ("on-off" or "0-1") but instead allows process values to be approximated. Process interactions can be geometrically described and weighted (as in a neural network), and control algorithms can be designed that precisely match the magnitude of the prevailing process values and their respective interactions.
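As a rough illustration of the fuzzy-logic idea, the sketch below evaluates triangular membership functions for a melt-pressure error and blends two rules by their firing strengths using a weighted-average defuzzification. The membership breakpoints, rule outputs, and variable names are invented for illustration and are not taken from any production controller.

```python
# Minimal fuzzy-control sketch: two rules acting on melt-pressure error.
# Membership breakpoints and rule outputs are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def screw_speed_correction(pressure_error):
    # Degrees of membership in two linguistic sets (error in psi, for example).
    slightly_low  = tri(pressure_error, -60.0, -30.0, 0.0)
    slightly_high = tri(pressure_error,   0.0,  30.0, 60.0)

    # Rule outputs (crisp corrections, rpm): raise speed if pressure is low,
    # lower it if pressure is high.
    rules = [(slightly_low, +0.5), (slightly_high, -0.5)]

    total_weight = sum(w for w, _ in rules)
    if total_weight == 0.0:
        return 0.0
    # Weighted-average defuzzification.
    return sum(w * out for w, out in rules) / total_weight

print(screw_speed_correction(-15.0))  # small positive speed correction
```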

CONCLUSION

Precision polymer extrusion is often referred to as an art. Achieving successful fabrication of a product as complex as microbore tubing, however, requires a scientific process in which every process component and interaction can be mathematically quantified and explained. This article has presented the variables involved and the necessity for attention to details—from raw materials to extruder hardware, screw and tooling design to control schemes and instrumentation. Less than optimum conditions in any of these areas will adversely affect a microbore tubing extrusion process. Although variations in the polymer's MW or MWD are often unavoidable, they can be effectively handled by incorporating adaptive techniques in the control of the melt pressure. Careful advance planning is critical, and resources spent up front in developing a robust extrusion process are always cost-effective.

BIBLIOGRAPHY

Cheremisinoff, Nicholas. Polymer Mixing and Extrusion Technology. New York: Marcel Dekker, 1987.

Murrill, Paul. Fundamentals of Process Control. Research Triangle Park, NC: Instrument Society of America, 1988.

Rodriguez, Ferdinand. Principles of Polymer Systems. New York: Hemisphere Publishing, 1970.

Hans W. Kramer, PhD, is a research scientist at Medtronic Interventional Vascular (San Diego), where he specializes in the development of novel polymer formulations and processing technologies as well as product R&D. He also serves as an adjunct professor at San Diego State University, where he is presently involved in establishing a graduate-level polymer science program.

Photo by Roni Ramos


Copyright ©1999 Medical Device & Diagnostic Industry

Section 207: Is Your Class III Designation Really Final?

Medical Device & Diagnostic Industry Magazine
MDDI Article Index

An MD&DI January 1999 Column

SECTION 207

Also known as "de novo," section 207 of FDAMA allows certain low-risk medical devices to be reclassified into Class I or Class II, thereby avoiding costly PMAs.

Almost 98% of medical devices are cleared for marketing by the premarket notification or 510(k) process, making this an extremely important process for the medical device industry. According to the Medical Device Amendments of 1976, a device is cleared for marketing when it is rated "substantially equivalent" to a suitable predicate device and comparative data on the two devices is provided. The Safe Medical Devices Act of 1990 defines a suitable predicate device as one that is legally on the market without having been cleared via a premarket approval (PMA) review.

The reclassification of the DOC Band (Cranial Technologies; Phoenix) into Class II is considered the first de novo action.

In general, CDRH has managed to make the substantial equivalence concept work quite well. Difficulties arise, however, when a simple, low-risk medical device submitted under a 510(k) has an unusual feature and CDRH is "unable to find" a predicate device. In this circumstance, the device is automatically declared to be Class III, requiring submission of a PMA application for market clearance. The problem with such a decision is that the preparation of a PMA application requires considerable capital, work, and other resources, including FDA resources. The cost of obtaining PMA for low-risk devices can become prohibitive because, unlike more-complex devices requiring such approval (e.g., a state-of-the-art pacemaker), the simple products are generally inexpensive.

SECTION 207

FDA's concerns about the problems associated with classifying low-risk devices in Class III eventually led to section 207 of the FDA Modernization Act of 1997 (FDAMA), known as "Evaluation of Automatic Class III Designation." (The term de novo is also used to describe section 207; however, this usage is more common within the device industry than within FDA.) This section provides a new way for FDA to establish a Class I or Class II designation for low-risk devices, even though no predicate is available.

Section 207 is part of the risk-based regulation of medical devices propounded by CDRH director Bruce Burlington in his reengineering program.1 According to an FDA backgrounder on the subject, the FDA Modernization Act of 1997 reinforces FDA's intent to focus its resources on medical devices that present the greatest risks to patients. Recognizing that it is not productive to require PMA for simple, low-risk devices, FDA has designated its compliance with section 207 as a priority2 and published a guideline on the subject for the industry and CDRH staff on February 19, 1998.3

FDA GUIDANCE DOCUMENTS

An FDA overview of FDAMA states that an applicant who submits a premarket notification and then receives a not-substantially-equivalent (NSE) determination, placing the device into Class III, may request in writing (within 30 days) that FDA classify the product into Class I or Class II.4 The request must include a description of the device, reasons for the recommended reclassification, and information to support the recommendation (see the sidebar below). Within 60 days from the date the request is submitted to FDA, the agency must classify the device by written order. A guidance document issued by CDRH is more specific, stating that "a signed order classifying the device should be sent to the requester by day sixty (60) following receipt of the request."5 If FDA classifies the device into Class I or Class II, this device can be used as a predicate device for other 510(k)s. Within 30 days of notifying the applicant that the device has been classified into Class I or Class II, FDA will announce the final classification in the Federal Register. If FDA determines that the device will remain in Class III, the device cannot be distributed until the applicant has obtained approval of a PMA application or an investigational device exemption (IDE).

This law imposes a very tight timeline on FDA regarding section 207 submissions. It is unclear from the law whether a designation of Class III under this part of the act would also require a notice in the Federal Register. Presumably, if FDA decided the device should be left in Class III, this decision would be published in the Federal Register as well. The case study described below resulted in a Class II designation, so this question remains unresolved.

The CDRH guidance document covering section 207 further clarifies FDA's intent regarding the process: "While the process is new, its implementation is based on the types of information and data ordinarily submitted in 510(k)s and/or reclassification petitions."5 The guidance further points out that the process limits consideration to devices that have not been previously classified under the Federal Food, Drug, and Cosmetic Act and that have been relegated into Class III by written order. The guidance emphasizes that this process is available only to devices for which a request under section 513(f)(2) has been filed within 30 days after receipt of an NSE determination on a 510(k). FDA points out that an alternative to using section 207 is the filing of a reclassification petition in accordance with 21 CFR 860.134.

Figure 1. Flowchart titled Evaluation of Automatic Class III Designation Process.5

Admittedly, the process outlined in section 207 (Figure 1) only applies to devices for which a 510(k) has just been judged NSE, as opposed to the wider applicability of a standard reclassification petition. However, a strong argument for using the section 207 process if it applies is that it imposes a very tight time restraint on FDA. Historically, reclassification petitions are not given high priority because FDA prefers to expend its resources on activities that have obvious deadlines.

A CASE IN POINT

The authors have had firsthand experience with section 207 of FDAMA. The following case is of special interest because it was the first section 207 action completed by CDRH. The device involved was a cranial orthosis, called the DOC Band, manufactured by Cranial Technologies Inc. (Phoenix). The DOC Band was developed as an alternative to surgery for the treatment of a condition known as positional plagiocephaly, meaning deformation of the head caused by persistent positioning in one orientation. Until recently, this deformity was confused with a similar condition known as craniosynostosis, which is a premature fusion of one or more sutures of the skull. Incidences of positional plagiocephaly have been increasing steadily since 1992, when the American Academy of Pediatrics recommended that infants sleep on their backs to reduce the risk of sudden infant death syndrome (SIDS).6 Whereas true craniosynostosis requires surgical intervention, surgery is rarely required for positional plagiocephaly. However, because of the lack of alternatives, cranial vault remodeling surgery was often performed on children who had positional plagiocephaly.

In order to facilitate review, the request for Evaluation of Automatic Class III Designation should include the following information:

  • A coversheet clearly identifying the submission as "Request for Evaluation of Automatic Class III Designation" under 513(f)(2).
  • The 510(k) number under which the device was found to be not substantially equivalent.
  • A statement of cross-reference to the information contained in the 510(k).
  • The classification being recommended under section 513 of the act.
  • A discussion of the potential benefits of the device compared to the potential or anticipated risks when the device is used as intended.
  • A complete discussion of the proposed general and/or special controls to ensure reasonable assurance of the safety and effectiveness of the device, including whether the product should be exempt from premarket review under section 510(k), whether design controls should be applicable, and what special controls would allow the agency to conclude the device was reasonably likely to be safe and effective for its intended use.
  • Any clinical or preclinical data not included in the 510(k) that is relevant to the request.


In order to manufacture the cranial orthosis, the manufacturer obtains a model of the infant's cranium by taking a cast of the head and then filling this negative mold with plaster of paris. The device is then formed around this model once certain refinements are made. The device works by applying a mild dynamic pressure to the prominent aspects of the infant's head, restricting growth in those directions while encouraging growth in adjacent, flattened areas.7 In a sense, the head is being persuaded to grow into a more symmetrical shape.

The 510(k) Submission. In the original submission of a premarket notification for this device on December 16, 1996, Cranial Technologies carefully tried to draw substantial equivalence to the Boston Body Jacket, a device that is used in much the same way to correct scoliosis. Although the jacket is used on the chest, the technology is the same: the manufacturer forms a cast and then creates an orthosis from the cast to encourage the body toward a more symmetrical shape. The materials used in the two devices are also the same. Because the DOC Band was a cranial orthosis, the company also drew equivalence to the halo systems used in traction and for stabilizing badly damaged vertebrae in the neck. Use of the halo is more severe because metallic pins are set into the skull so that the head can be held rigidly in one position to protect the spinal cord from further damage. Also, the halo system is commonly used with adults or older children as opposed to infants. Despite these differences, the parallel seemed fairly apparent. However, FDA disagreed that the Boston Body Jacket and the halo system were predicate devices, and on March 13, 1997, the agency rated the DOC Band as NSE. As a result, the cranial orthosis was placed in Class III requiring a premarket approval "because there was no predicate device."

Cranial Technologies requested a meeting with Office of Device Evaluation (ODE) officials to establish the agency's concerns, which ODE had not specified when releasing the NSE rating. The request for a meeting was not granted, but telephone conversations were held with an ODE supervisory reviewer, who described his concerns about the DOC Band, and with an FDA legal specialist, who concurred that an appeal of the NSE determination was permissible and reasonable.

On June 26, 1997, the company submitted an appeal of the NSE decision. CDRH acknowledged receipt of the submission on August 1, 1997, and agreed to reopen the premarket notification submission. Additionally, CDRH informed the company that they were having the submitted preamendment status information evaluated by the Office of Compliance. CDRH further stated that they would be contacting the company for additional information. On September 26, 1997, in the hope of fostering a favorable resolution of the appeal, the company decided to submit extensive, recently obtained information on the subject. This submission was followed by a few telephone discussions early in 1998.

ODE's concerns about this device were transmitted to the company in a letter dated March 12, 1998. The letter reiterated the NSE determination but showed some flexibility: "[ODE] recognize[s] that there were cranial helmets, indicated for use as protective post-surgical headgear, legally on the market before enactment of the Medical Device Amendments." Thus, FDA accepted the existence of a predicate, which was a major step forward. The letter continued that the DOC Band had been reviewed in comparison with this protective helmet, but after careful consideration, ODE had decided that the DOC Band was not substantially equivalent to devices marketed prior to May 28, 1976. According to the letter, the DOC Band "had a new indication for use that alters the therapeutic effect in comparison to [protective] cranial helmets, impacting safety and effectiveness, and is therefore considered a new intended use."

The March 12 letter provided a detailed list of FDA's concerns regarding the impact of this device on infant brain development:

  • The short-term and long-term effects on the child's growth and development.
  • The acute neurologic and dermatologic effects from treatment.
  • The risks associated with mechanical failure of the device.
  • The increased risk for head and neck trauma resulting from routine or sudden infant movements.

The letter indicated that Cranial Technologies had a right to file a submission for classification, but in view of the above concerns, "general controls would be inadequate and special controls difficult to develop." Despite FDA's response, the company decided to proceed with the request under section 207 since the alternative—a PMA application—was highly unappealing.

The De Novo Process. On March 26, 1998, Cranial Technologies submitted a section 207 de novo request for classification of the DOC Band, recommending that the device be placed in Class II. The submission closely followed the guidance document issued by CDRH on this process.5 The company provided extensive data backing its position and suggested several possible special controls to provide for safe and effective use of the device and to perhaps mitigate some of CDRH's concerns.

While CDRH's handling of the 510(k) (K964992) review and subsequent appeal had consumed 15 months, its response to the section 207 submission was prompt and highly interactive. James Dillard, deputy director, Division of General and Restorative Devices, was placed in charge of the section 207 process actions. Dillard established a working group headed by an expert reviewer from the General and Restorative Devices Division. The group also included an expert from the Division of Dental, Infection Control, and General Hospital Devices. Ensuing interactions involved many phone calls and faxes between Dillard's office and company officials and consultants. Both sides invested strenuous efforts to meet the section 207 deadlines.

The expert reviewer acknowledged receipt of the section 207 request on April 7. She also called and faxed Cranial Technologies on April 10, requesting that the company better define the device's indications for use on positional plagiocephaly, rule out related problems, and clarify other treatment options to be tried before starting orthotic treatment. In addition, she requested that the company provide, by April 17, outcome data from major clinical trials as well as the trials' protocols, summary data, discussions, and conclusions.

Although these materials were submitted on time, the expert reviewer and an ODE specialist phoned the company on April 21 requesting further information. The company faxed additional information to the expert reviewer; however, the information was not what ODE wanted. On April 27, the expert reviewer clarified what materials were wanted, and on April 30, the specified information was supplied to ODE. On May 14, the expert reviewer phoned the company requesting a new description of the device, including all variations; a refinement of treatment ages; and a revision of labeling, which had been provided earlier. The company responded on May 15 and interactions continued until May 29.

On May 29, Cranial Technologies received an order classifying the DOC Band into a new Class II device category. This new type of device was identified as a "neurology device under 21 CFR 882.5970," and described as "a cranial orthosis, which is a device intended for medical purposes to apply pressure to prominent regions of an infant's cranium in order to improve cranial symmetry and/or shape." Most orthoses are classified as physical medicine devices in Class I and are exempt from 510(k) requirements. The authors believe that the device was placed in Class II because CDRH was concerned about the sensitive nature of an infant's rapidly growing skull and developing brain. The order further stated that, although some Class II devices are 510(k) exempt (section 510(m)), premarket notification was necessary for a cranial orthosis to provide reasonable assurance of the safety and effectiveness of the device. As a result, the device is not exempt from premarket notification requirements, and companies who intend to market such a device must submit a premarket notification to FDA prior to marketing the device.

CONCLUSION

The first section 207 action was completed on July 30, 1998, when the above classification of cranial orthosis devices was published in the Federal Register.8 This action paralleled that described in the May 29 letter from ODE. Although the publication in the Federal Register occurred about 60 days after the notification letter was received (instead of the 30 days prescribed by section 207), the company viewed the date of the notification letter as the vital date and later publication in the Federal Register as ratification of the ODE action.

The authors believe that the extensive scientific and clinical data provided might have been sufficient to arrive at the same result under regular classification procedures. However, experience indicates that CDRH, especially when it has safety and effectiveness concerns, rarely down-classifies a device. The outcome of a traditional classification process would have been doubtful, and certainly would have been time-consuming.

In this first de novo action, ODE established many precedents and effectively accomplished its goals. Other de novo actions have since been initiated resulting in one device being reclassified to Class II and other devices being forced to remain in Class III. The second device reclassified to Class II was the Perio 2000 system, a generic type of device that FDA identified as an "in vivo sulfide detection device" (21 CFR 872.1870). The effective operation of section 207 also indicates that CDRH's risk-based regulation of devices is becoming a reality. Not only was the final outcome favorable for the manufacturer and the patients who needed the device, but it also enabled FDA to make its decision after establishing special controls to ensure safe and effective use of the device.

REFERENCES

1. PJ Frappaolo and KC Richter, Re-engineering: Year 1 Accomplishments and Future Plans (Washington, DC: FDA, 1997).

2. Federal Register, 63 FR:6193–6194, February 6, 1998.

3. FDA Modernization Act of 1997, Guidance for the Device Industry on Implementation of Highest Priority Provisions (Washington, DC: FDA, February 1998).

4. Overview — FDA Modernization Act of 1997 (Washington, DC: FDA, March 1998).

5. New Section 513(f)(2) Evaluation of Automatic Class III Designation, Guidance for Industry and CDRH Staff (Washington, DC: FDA, February 1998).

6. "AAP Task Force on Infant Positioning and SIDS," Pediatrics 89 (1992): 1120–1126.

7. TR Littlefield, et al., "Treatment of Craniofacial Asymmetry with Dynamic Orthotic Cranioplasty," Journal of Craniofacial Surgery 9 (1998): 11–17.

8. Federal Register, 63 FR:40650–40652, July 30, 1998.

H. Neal Dunning is president of Neal Dunning Associates Inc. (Bethesda, MD), and Timothy R. Littlefield is director of research and development for Cranial Technologies Inc. (Phoenix).


Copyright ©1999 Medical Device & Diagnostic Industry

New Methods to Assess the Performance of Prototype Form-Fill-Seal Packages

Medical Device & Diagnostic Industry Magazine
MDDI Article Index

An MD&DI January 1999 Column

PACKAGING

A study describes novel test procedures and equipment to detect packaging problems early in the development process.

The International Organization for Standardization (ISO) has defined a medical product as "the combination of both the medical device and/or additional components with the final package."1 However, the realities of product development often leave development of the package itself until long after the rest of the product has been fully defined and, perhaps, is in limited production. When a project nears the end of the development period, major changes made to the device itself are difficult and costly, which forces all aspects of device/package compatibility and package performance to be achieved exclusively through package design modifications. A further problem in this typical development pattern is that when package development is finally undertaken, pressure to bring the product to market quickly is very high, thus encouraging the package development engineer to make package design decisions based on intuition rather than on scientific evidence from tests. Even in those instances when testing is done, the large number of products required for the standard packaging tests often leads to a product release before the tests are completed.

The package chosen for testing was that of the Insyte N catheter (Becton Dickinson; Sandy, UT), comprising an EVA/K-resin/EVA bottom web and coated Tyvek lid.

The net result of these problems in package development is often a compromised packaging system. Industry warranty results indicate that up to 40% of cases involving claims on medical supplies are the direct result of faulty packaging, and that these failures result in costs in excess of $8 billion in the United States alone—confirming that the system currently used for packaging design is a major problem.2,3

The medical industry is particularly vulnerable to packaging failures. Some of the consequences of packaging failures have been identified as follows:

  • Increased risk of patient infection if product sterility is compromised by defective seals, pinholes, fragile packaging material, or packaging that shreds, delaminates, or tears upon opening.
  • Hampering of surgical procedures because of difficulties in product identification or aseptic transfer, or delays that occur when a product selected for use must be replaced because the package is either initially defective or damaged upon opening.
  • Increased hospital costs due to discarded products or excessive storage-space requirements.
  • Increased manufacturer costs for refund or replacement of damaged products and recall of products with potentially compromised sterility or integrity.4

An improved product development system would include the development of a prototype package early in the overall process, simultaneously with the development of the prototype device. However, an impediment to this early package development has been the lack of reliable tests that can be performed on only a few samples, as would be required if the tests were done when only prototype devices and packages were available.

The purpose of the study presented in this article has been the development of tests—including the testing apparatus—that will allow for early development of medical packaging, so that problems can be noted early enough in the development sequence that real consideration can be given to modifying any negative device design aspects that could otherwise impair packaging performance.

The process of designing and testing a package has been defined as the following steps:

1. Define the environment.

2. Design and fabricate the prototype product (or select an existing product).

3. Define product fragility.

4. Choose the proper protective packaging.

5. Design and fabricate the prototype package.

6. Test the prototype package.3

All of these stages were considered in the process of developing the tests outlined in this article. In keeping with these criteria, efforts were made to create tests that simulate actual conditions under which packages have been known to fail, or conditions that packages are likely to encounter and that would result in failure.

The introduction of test methods that simulate actual package-failure conditions will be of benefit in two ways. First, prospective package configurations and materials can be placed under simulated failure conditions faced in the distribution environment. Depending on the results they achieve, they can be either accepted or rejected. If the latter occurs, improvements can be made to the package design until it is found acceptable. The second benefit is that the new test methods can be used in the early stages of the product's design, before production has started. This will allow the design of the package to take place concurrently with the design of the device, making the product design truly the combination of both the device and the package.

EXPERIMENTAL PROCEDURE

The obvious practical focus of this study suggests that the films and the packaging types examined should be limited to those having common application in commercial packaging equipment. Therefore, this study was restricted to the evaluation of packages made using standard form-fill-seal machines and the widely used EVA/K-resin/EVA film (supplied by CT Film).

An important concept in the development of the tests is the identification of hazard elements that would be critical to the performance testing of packages and would be included in the procedures. Three critical hazard elements (package failure modes) were identified through discussions with medical device packaging personnel and through analysis of packages that had previously failed. These hazard elements are:

  • Flexing of the film.
  • Abrasion of the formed film caused by the device.
  • Puncture of the formed film by the device.

Flexing is a general term meant to apply to any change in the dimensions or shape of the film. For instance, the film can stretch, buckle, and twist as the package is flexed, compressed, and rotated during shipment or handling. On occasion, the forces on the package can cause the web to crease, thus creating areas of stress concentration that weaken the film and increase the possibility of cracks or pinholes forming in the film. Ship tests on typical finished products using procedures developed by the International Safe Transit Association (ISTA) have confirmed the importance of film flexing as a hazard element.5 Twelve cases of medical devices containing 2400 products were tested, and 4 packages failed the ISTA testing. Examination of the failed packages indicated that pinholes had developed where creases had formed in the film. With the formation of these pinholes, the sterile barrier of the film was compromised.

Abrasion is one of the most common hazard elements because of the vibrations that occur during shipment. A common failure from vibration results from the interaction between some feature on the device (such as a rough surface, a protruding element, or a shelf) and the film. In a shipping test using ISTA procedures on 1200 products for which abrasion was thought to be a potential problem, 19 of the product packages failed from the effects of abrasion.

Shock causes package failure when the device is forced against and then penetrates the film. These shocks can come from such incidents as takeoffs and landings of planes, railcar switching, road potholes and speed bumps, and package drops. Upon the completion of ship testing, 1800 products were examined for shock-related failures, and 3 of the failures were found to have been caused by the shock from a sharp edge of the device puncturing the film.

NEW TESTS

Two new tests were created to investigate flex, abrasion, and shock hazard elements during the early phases of product and packaging design. When testing is done at an early stage of development, the final device shape will not have been established. Therefore, a "dummy device" must be created to simulate the likely shape of the final product. This dummy device should, as much as possible at this stage of development, contain those features most likely to have a strong impact on the package (e.g., sharp corners, protrusions, etc.).

The tests are designed for single packages. Sound statistical analysis will usually require that multiple samples be tested, but only a small number of specimens, enough to support valid statistical evaluation, is generally needed. The results should, of course, be averaged using normal statistical methods.

The creation of these new tests required careful definition of the environmental conditions under which they were conducted. The conditions (which are recounted in detail as each of the tests is described) were developed using standard, finished-product test practices as guides. These guide or reference test procedures were ASTM D 4169, Standard Practice for Performance Testing of Shipping Containers and Systems; ISTA Procedure 1A, Preshipment Test Procedures; and ASTM F 392, Standard Test Method for Flex Durability of Flexible Barrier Materials.

The intent of the present article is both to report the findings of the new tests and to encourage others to investigate the use of similar tests, so that some industry standardization might be achieved. Therefore, the authors will be pleased to make available machine part drawings, parts lists, schematics of the electrical circuits, and ladder logic schematics. The detailed procedures, written in a standard test-method format, are also available.6

Flex Test. This test is to allow the investigation of flexural crack durability of a single form-fill-seal package during the early stages of product development. In order to give a realistic simulation of the most likely flexural-type forces on a package, specifications were established for the new flex-test machine so that the apparatus could:

  • Allow the combined motions of compressing, twisting, untwisting, and decompressing to be counted as one cycle.
  • Allow the package to have a range of twisting motions between 0 and 270 degrees.
  • Allow the package to have a range so that it can be crushed from 0 to 1 in.
  • Hold packages measuring from as small as 0.125 x 0.500 in. to as large as 5.00 x 12.00 in.
  • Allow the operator to run any number of cycles.
  • Run at a reasonable rate of speed, from 40 to 65 cycles per minute (fast enough to gain data quickly but not so fast that mechanical complications become paramount in the design).

A schematic diagram of the flex-test machine is presented in Figure 1. Twisting and untwisting within a specified range is accomplished using a rotary actuator, which is attached to a sliding table in which the slide length can vary from 0 to 1 in. through the use of stop collars and a positioning bolt. A programmable logic controller (PLC) is used to control the cycling of the part through the twist, compression, untwist, and decompression steps. The PLC allows the sequence of commands to be programmed and executed in the proper order and within the required cycle time necessary for the test specifications. A counter permits the test administrator to observe the number of cycles completed.

Figure 1. Flex tester.
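The following sketch mimics in software the four-step sequence (twist, compress, untwist, decompress) that the PLC executes for each flex cycle, along with the cycle counter. It is only a mock-up under stated assumptions: the real machine uses ladder logic and physical I/O, and the step timing here simply divides the cycle time evenly among the four steps.

```python
# Software mock-up of the flex-tester cycle sequence normally run on a PLC.
# Step durations and the notion that each step is a simple timed move are
# illustrative assumptions; the real machine uses ladder logic and I/O points.
import time

def run_flex_test(cycles, twist_degrees=270, stroke_in=1.0, rate_cpm=43):
    cycle_time = 60.0 / rate_cpm          # seconds per complete cycle
    step_time = cycle_time / 4.0          # twist, compress, untwist, decompress
    completed = 0
    for _ in range(cycles):
        for step in ("twist", "compress", "untwist", "decompress"):
            # A real controller would energize the rotary actuator or the
            # sliding table here; this mock-up just waits out the step time.
            time.sleep(step_time)
        completed += 1                    # counter visible to the operator
    return completed

# Example: a short 10-cycle run at 43 cycles per minute.
print(run_flex_test(10))
```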

The test method has been titled "Testing Method for Flexural Crack Durability of a Single Form-Fill-Seal Package." In brief, the test procedures and conditions are as follows:

1. Condition the packages to be tested at controlled room temperature and humidity, including using the same sterilization method to be employed with the finished part.

2. Place a dummy device in the package.

3. Place the package in the machine so that the center of the package is in line with the center of rotation of the rotary mechanism.

4. Set the machine to go through an entire test cycle (four steps).

5. Set the speed of the machine at 43 cycles per minute.

6. Decide on the total number of cycles to be run or, alternately, examine the parts for failure after a predetermined number of cycles. (The failure testing can be done with standard air and water seal leak tests.)

7. Report the findings, which should include the type of flexible film tested, initial thickness of the film before forming, sterilization method, atmospheric conditions under which the package was tested, number of cycles, settings of the test constraints (degrees of rotation and stroke length), and number of pinholes or other changes in the package.

Abrasion and Shock Test. The abrasion and shock test was created to examine the interactive effects of abrasion and shock on a form-fill-seal package. A review of the hazard elements of abrasion and shock reveals that in many cases these two elements interact with one another. For example, a package that is traveling by truck and trailer will be exposed to road vibration in its journey to the end-user but, at the same time, will also be exposed to shocks from potholes, speed bumps, railroad crossings, etc. Because of the combined occurrence of these two elements in the distribution environment and the interactive effects they have on the package, they were combined into a single test to give a more accurate account of their joint action. A sketch of the abrasion-and-shock-test apparatus is provided in Figure 2.

Figure 2. Abrasion and shock tester.

In choosing the type of vibration to use in the test, two considerations were taken into account. First, what kind of vibrations were being used in standard test methods, which presumably reflected the vibrations encountered under actual shipping conditions? Second, what type of equipment corresponding to realistic vibrational situations could be purchased at relatively modest expense?

The current test methods, such as ASTM D 4728 and ISTA 2, use vibration profiles that plot forces (in G units) against frequency for various common shipping methods (e.g., truck, rail, or air). Typically, the forces will rise to a maximum at one or more frequencies that are characteristic for the particular type of shipping encountered. Therefore, to duplicate these tests, a vibratory table capable of many vibration frequencies would be needed. However, both equipment cost limitations and a recognition of the required level of test precision suggested that a vibration table with only single-frequency capabilities should be used. The frequency chosen—21 Hz—was within the peak range for all of the common transportation methods, thus ensuring that each method would be represented, although not necessarily at its maximum vibrational level. A package mounting accessory attached to the table allows packages of different sizes to be tested.

Besides undergoing vibration, the package is also subjected to shocks that would be typical of those encountered under various shipping conditions. Because the interaction between the device and the package is so critical in determining how the packaging film resists penetration by the device, it was decided that the level of shock impact should vary so that a wide range of package/device relationships could be accommodated. This variability was achieved by mounting the dummy device on a pneumatic cylinder with a load cell attached, so that the force of the dummy device on the film could be regulated and monitored by the test administrator. The purpose of the machine is to subject the package to vibrations and then to occasional shocks from having the dummy device rapidly pressed into the film.

This second test method has been titled "Testing Method for Abrasion and Shock Durability of a Formed Bottom Web of a Form-Fill-Seal Package." In brief, the test procedures and conditions are the following:

1. Condition the packages to be tested at controlled room temperature and humidity, including using the same sterilization method to be used with the finished part.

2. Attach the dummy device to the load cell, which is in turn attached to the pneumatic cylinder so that the critical features of the device will contact the film.

3. Place the package in the machine so that the package is under the pneumatic cylinder and will come in contact with the dummy device, which is mounted on the end of the cylinder. The package should be oriented so that when the dummy device contacts it, the contact will simulate what occurs under actual packaging conditions.

4. Move the pneumatic cylinder so that the dummy device presses against the film and then increase the pressure—by adjusting the slide mechanism on the machine—until it registers 10 times the normal weight of the device.

5. Set the stroke length of the pneumatic cylinder.

6. Begin vibrations, so that the film is subjected to vibrational forces.

7. While the table (and attached package) is vibrating, activate the cylinder so that the dummy device rapidly presses into the film at the predetermined pressure at 3, 8, and 13 minutes.

8. After 15 minutes of vibrations, the pressure should be recalibrated to 10 times the weight of the device because of the natural decrease in pressure that will occur as a result of stretching of the film.

9. Repeat the cycle as many times as necessary to create failures, or—if a minimum time for acceptability has been determined—for the minimum time.

10. Report the results, including type of flexible film tested, initial thickness of the film before forming, sterilization method, atmospheric conditions under which the package was tested, frequency of vibration, cylinder stroke length, number of test cycles performed, and presence or absence of pinholes or other changes in the film. The presence of pinholes can be determined by any one of a number of standard tests for film integrity.

DATA ANALYSIS (RELIABILITY MODEL)

When a new package design has been created, one of the areas of interest lies in ascertaining when the package will fail—that is, in determining the reliability of the package. The reliability of a unit has been defined as "the probability that it will perform its intended purpose adequately for a given length of time under specified conditions."7 In the case of package testing, reliability studies are important for determining the probability of a package failing within the distribution environment at a given time.

All packaging will fail within the distribution environment if exposed to hazard elements for long enough. Prototype package testing requires determining the probability of failure at a given time, and under specified conditions, for a package that is known to work in the distribution environment. Given this information, new package designs can be tested under the same conditions, and probabilities of failure determined for the same time intervals. By comparing the failure probabilities of the prototype package to those of the known package design, a decision can be made on the reliability of the package. If the probability of failure of the prototype is lower than that of the known package, the prototype design can be accepted for continued development. However, if the probability of failure of the prototype is higher than that of the known package, the prototype design should be rejected and an alternative sought.

When modeling time to failure, it is important to choose the proper type of distribution to analyze the binary outcome variable (acceptable/pass or not acceptable/fail). For this study, the logistic distribution was chosen, because it is flexible and easy to use.8 Data collected from a known package can be analyzed using this distribution and future comparisons made to help determine whether a prototype package will survive the distribution environment.

The binary response of a package failing or not failing can be quantified using the logistic regression model:

π(x) = exp(β0 + β1x) / [1 + exp(β0 + β1x)],

where π(x) = E(Y|x).8 The quantity E(Y|x) is interpreted as the expected value of Y, given the value of x. With Y denoting the outcome variable (failure = 1, no failure = 0) and x denoting the independent variable (any time from –∞ to +∞), π(x) = P(Y = 1|x) can be read as the probability of a failure occurring at time x. This analysis method will be applied to the results obtained from testing specific packages using the two tests developed in this study.
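As a sketch of how such a model can be fitted from pass/fail data, the failure results at each test time can be regressed as shown below. The article used a standard statistical package; scikit-learn is assumed here purely for illustration, and time is expressed in hours (consistent with the coefficients reported later in the article).

```python
# Hedged sketch: fitting binary failure data with logistic regression.
# scikit-learn is assumed only for illustration; time is in hours.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Flex-test data from Table I: test time (hours) and failure (1) / pass (0).
t = np.repeat([0.5, 1.0, 1.5, 2.0, 2.5], 3).reshape(-1, 1)
y = np.array([0, 1, 0,   0, 0, 0,   0, 1, 1,   1, 1, 1,   1, 1, 1])

model = LogisticRegression(C=1e6)   # large C approximates an unpenalized fit
model.fit(t, y)

b0, b1 = model.intercept_[0], model.coef_[0][0]
print(b0, b1)                              # roughly -3.7 and 3.0 for these data
print(model.predict_proba([[0.5]])[:, 1])  # predicted failure probability at 30 min
```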

TEST PACKAGE DESIGN

The package chosen to be tested in the new performance tests was that used for Becton Dickinson's Insyte N catheter, one of the company's high-production-volume catheters. This product represents an ideal "safe" standard, since it has been on the market for many years and has not been known to have experienced any package failures caused by the distribution environment. The package, which measures approximately 6.66 in. long, 1.17 in. wide, and 0.52 in. deep, is made using an EVA/K-resin/EVA bottom web, with an 8-mil (0.008-in.) preformed thickness, and a coated Tyvek lid. Postforming wall thicknesses are approximately 3.5 mil (0.0035 in.) in the flats and 2 mil (0.002 in.) in the corners at each end.

TEST RESULTS

Data collected from testing of the Insyte N package using the new flex test are given in Table I. Fifteen packaged products were divided into five lots, so that three specimens could be tested at each test period. The test procedures outlined earlier were used, with the following test periods (minutes/cycles): 30/1290, 60/2580, 90/3870, 120/5160, and 150/6450. The integrity tests were performed using Becton Dickinson's QCGE-83 air and water seal leak test, with the number of failures also reported in Table I.

Test Period (Cycles)    Sample Number    Result (Failure = 1, No Failure = 0)    Total Failures
1290 (at 30 min)        1, 2, 3          0, 1, 0                                 1
2580 (at 60 min)        4, 5, 6          0, 0, 0                                 0
3870 (at 90 min)        7, 8, 9          0, 1, 1                                 2
5160 (at 120 min)       10, 11, 12       1, 1, 1                                 3
6450 (at 150 min)       13, 14, 15       1, 1, 1                                 3


Table I. Specimen-integrity test results for flex test.

After obtaining the results of the integrity testing performed on the flex-test specimens, a logistic regression model was created to give the probability of failure at the times tested. For the flex-test specimens, the logistic regression model is:

π(t) = exp(–3.697 + 2.987t) / [1 + exp(–3.697 + 2.987t)], where t is the test time expressed in hours.

This model was created from the logistic regression equation given above, using a standard statistical software package, and it allows predicted probabilities of failure to be calculated for the package at the different test periods used.9 The predicted probabilities of failure for the flex-test specimens at the different test periods are listed in Table II.

Test Period (min)    Predicted Probability of Failure
30                   0.099465
60                   0.329646
90                   0.686458
120                  0.906955
150                  0.977476

Table II. Probabilities of failure of flex-test specimens at given times.
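For reference, the probabilities in Table II can be reproduced directly from the reported coefficients when t is taken in hours, as in this short check.

```python
# Reproducing Table II from the reported flex-test coefficients (t in hours).
import math

def failure_probability(t_hours, b0=-3.697, b1=2.987):
    z = b0 + b1 * t_hours
    return math.exp(z) / (1.0 + math.exp(z))

for minutes in (30, 60, 90, 120, 150):
    print(minutes, round(failure_probability(minutes / 60.0), 6))
# The printed values agree with Table II to within rounding of the coefficients.
```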

The abrasion and shock test was performed on 18 Insyte N packages (the same product used in the flex test), following the procedures for this test presented earlier. The frequency of the vibration table was set at 21 ± 1 Hz, with a table amplitude of roughly 0.25 in.; cylinder stroke length was set at 0.050 in. The products were divided into six lots, with three specimens tested at each test period. Test periods were 30, 60, 90, 120, 180, and 240 minutes. After the abrasion and shock test was completed, each package was integrity tested using Becton Dickinson's QCGE-83 air and water seal leak test. Results of the abrasion and shock tests are given in Table III.

Test Period (min)    Sample Number    Result (Failure = 1, No Failure = 0)    Total Failures
30                   1, 2, 3          0, 0, 0                                 0
60                   4, 5, 6          0, 0, 0                                 0
90                   7, 8, 9          1, 0, 1                                 2
120                  10, 11, 12       0, 1, 1                                 2
180                  13, 14, 15       1, 1, 1                                 3
240                  16, 17, 18       1, 1, 1                                 3


Table III. Specimen-integrity test results for abrasion and shock test.

Once the results of the integrity testing performed on the abrasion-and-shock-test specimens were obtained, a logistic regression model was created to predict the probability of failure at the times tested. For the abrasion-and-shock-test specimens, the logistic regression model is:

π(t) = exp(–5.937 + 3.706t) / [1 + exp(–5.937 + 3.706t)], where t is again the test time expressed in hours.

Again, this model was created using standard statistical software.9 The results are shown in Table IV.

Test Period (min)    Predicted Probability of Failure
30                   0.016731
60                   0.097901
90                   0.409051
120                  0.815325
180                  0.994463
240                  0.999863


Table IV. Probabilities of failure of the abrasion-and-shock-test specimens at given times.

An examination of the failures from both tests reveals that they were consistent with those found in packages that failed in the actual distribution environment. Packages tested in the flex-test apparatus failed because of pinholes formed in the areas where the material had been stressed by constant flexing and compressing, whereas packages tested with the abrasion and shock test failed because of pinholes and tears.

Software Risk Management for Medical Devices

Medical Device & Diagnostic Industry Magazine
MDDI Article Index

An MD&DI January 1999 Column

SOFTWARE RISK MANAGEMENT

As more devices integrate software, early risk management is critical to ensure that the devices are trustworthy.

Medical devices combine many engineered technologies to deliver a diagnostic, monitoring, or therapeutic function. The number of device functions that depend on correctly operating software continues to increase. Project managers now find that software development and quality assurance make up the predominant portion of many development budgets. Even for a product with numerous mechanical or electronic elements, software can consume as much as 70% of a multimillion-dollar development budget. Even projects involving simple devices that have basic user interfaces and provide only straightforward therapy, such as the delivery of energy to the body, may allocate 40 to 50% of their budget to software and software-related activities.

The growth of software in medical systems can be traced indirectly to the increased use of commercial off-the-shelf (COTS) software. Consistent with trends in other markets, this growth encompasses both the amount of software contained in a device and the key functions to which it is applied. As software becomes a more critical component in many devices, software risk management is becoming more important. Risk-management expectations now include application-specific software embedded in a device, COTS software used in the computing environment, and software-development engineering tools.

The basic principles of risk management are based on good engineering, common sense, and the ethic of safety. Standard, judgment-based techniques yield work products that are accepted by the engineering and regulatory communities. Patterns now exist that define common responses for certain types of software failures. This article considers the key concepts, the work products that result from analysis, and the management aspects that are necessary to achieve safe software.

BASIC RISK MANAGEMENT

Effective software risk management consists of three activities. First, developers must acknowledge that certain device risks can result from software faults. Second, developers must take appropriate actions to minimize the risks. Third, developers must demonstrate that the means taken to minimize the risks work as intended. Throughout these activities, the focus is on the potential for harm to the patient, the care provider, and the treatment environment.

The basis for decision making regarding software failure risks centers on different forms of analyses. This work links a specific hazard to an envisioned software failure. Assuming a significant hazard exists, the developer must minimize the hazard by applying software or hardware technology or by undertaking other modifications during the development process. Formal tests, which measure device performance when software failures are artificially forced, demonstrate that the hazard has been adequately addressed. Analysis, safe design and implementation, and testing must all be applied fully to software to satisfy the question of having applied the best practices. It is important that these activities are seen as linked, so that once risks are understood the development team is committed to a remedial process.

Software risk assessment as described in this article is directed toward the software contained within a medical device. Product risk is usually analyzed separately from the processes necessary to understand and respond to development risks inherent in software-based projects.1 However, project risk linked to a flawed development process can result in the introduction of flaws that can lead to reduced software safety. Project risk assessment is popular in the engineering press as a means of understanding threats to meeting software delivery goals. It is structured around the subjective evaluation of many parameters relating to the development process, the tools available, and the team's capabilities; an honest acknowledgment of weaknesses in these areas can indicate potential risks within the product software. Because process flaws and team weaknesses can lead to software faults, project risk analysis is strongly recommended to minimize their effects.

Figure 1. Risk-exposure mapping.

A cornerstone to risk management is the notion of risk exposure. Exposure is defined as a function of the potential for loss and the size of the loss. The highest possible exposure arises when the loss potential and the size of the loss that might occur are both judged as high. Different risk-exposure levels arise with different values for loss probability and size. A two-dimensional plane that diagrams exposure is shown in Figure 1.

For example, assume that a design for a bottle to hold a volatile fluid has a high potential for leaking. Further, assume that should the fluid leak, a fire will start. Given this situation, the exposure is high, as illustrated by point A in Figure 1. The developer might choose to apply a gasket to reduce the possibility of leaks, which would result in the risk exposure marked as point B. Alternatively, adding another compound might reduce the volatility of the fluid if it does leak. If this is undertaken, the potential for loss would shift to the risk exposure represented by point C. A combination of actions would result in the risk being shifted to point D, providing lower risk than any single solution.

This abstract example embodies two key points about applying risk management to software. First, the assessments of the factors that contribute to the risk exposure are taken as informed judgments from individuals who understand the failure mechanisms. Second, any one of a variety of options implemented alone could reduce the risk exposure, but a combination of approaches often yields the best result. This exercise allows the developer to define the mitigation steps. Response patterns for certain software faults are becoming common, much as a gasket is the accepted solution for reducing the potential for a leak.

KEY CONCEPTS

The general concepts of hazard and risk analysis have been presented in previous articles.2,3 Applying general risk management concepts to software requires adapting approaches originally developed for analyzing systems dominated by mechanical and electrical subsystems. As with many engineering areas, risk management is easier to enact if a foundation has been built on key concepts—in this case, concepts particular to software. Such concepts are discussed below, along with a means for applying them together. It is important to understand these concepts in order to tailor risk management techniques to a particular organization. Understanding them enables a product manager to present a foundation for the risk-management plan before presenting the means of implementing it. This also implies that although techniques presented can be changed later, the end result must meet any challenge based on the fundamentals.


Software Risk Management for Medical Devices

CONTENT="Package+Testing,Integrated+Engineering,small+motor,Syringe,Pump,Integrated+Circuits,stepper+motor">

Medical Device & Diagnostic Industry Magazine
MDDI Article Index

An MD&DI January 1999 Column

SOFTWARE RISK MANAGEMENT

As more devices integrate software, early risk management is critical to ensure that the devices are trustworthy.

Medical devices combine many engineered technologies to deliver a diagnostic, monitoring, or therapeutic function. The number of device functions that depend on correctly operating software continues to increase. Project managers are now making software development and quality assurance the predominant portion of many development budgets. Even for a product with numerous mechanical or electronic elements, software can consume as much as 70% of a multimillion-dollar development budget. Even projects involving simple devices that have basic user interfaces and provide only straightforward therapy—such as the delivery of energy to the body—may allocate 40 to 50% of their budget to software and software-related activities.

The growth of software in medical systems can be traced indirectly to the increased use of commercial off-the-shelf (COTS) software. Consistent with trends in other markets, this growth encompasses both the amount of software contained in a device and the key functions to which it is applied. As software becomes a more critical component in many devices, software risk management is becoming more important. Risk-management expectations now include application-specific software embedded in a device, COTS software used in the computing environment, and software-development engineering tools.

The basic principles of risk management are based on good engineering, common sense, and the ethic of safety. Standard, judgment-based techniques yield work products that are accepted by the engineering and regulatory communities. Patterns now exist that define common responses for certain types of software failures. This article considers the key concepts, the work products that result from analysis, and the management aspects that are necessary to achieve safe software.

BASIC RISK MANAGEMENT

Effective software risk management consists of three activities. First, developers must acknowledge that certain device risks can result from software faults. Second, developers must take appropriate actions to minimize the risks. Third, developers must demonstrate that the means taken to minimize the risks work as intended. Throughout these activities, the focus is on the potential for harm to the patient, the care provider, and the treatment environment.

The basis for decision making regarding software failure risks centers on different forms of analysis. This work links a specific hazard to an envisioned software failure. Assuming a significant hazard exists, the developer must minimize it by applying software or hardware technology or by making other modifications during the development process. Formal tests, in which software failures are artificially forced and device performance is measured, demonstrate whether the hazard is controlled. Analysis, safe design and implementation, and testing must all be applied fully to the software to show that best practices have been followed. It is important that these activities are seen as linked, so that once risks are understood the development team is committed to a remedial process.

Software risk assessment as described in this article is directed toward the software contained within a medical device. Product risk is usually analyzed separately from the processes necessary to understand and respond to the development risks inherent in software-based projects.1 However, project risk linked to a flawed development process can result in the introduction of flaws that reduce software safety. Project risk assessment is popular in the engineering press as a means of understanding threats to meeting software delivery goals. It is structured around the subjective evaluation of many parameters relating to the development process, the available tools, and the team's capabilities; an honest acknowledgment of weaknesses in these areas can point to potential risks within the product software. Because process flaws and team weaknesses can lead to software faults, project risk analysis is strongly recommended to minimize their effects.

Figure 1. Risk-exposure mapping.

A cornerstone to risk management is the notion of risk exposure. Exposure is defined as a function of the potential for loss and the size of the loss. The highest possible exposure arises when the loss potential and the size of the loss that might occur are both judged as high. Different risk-exposure levels arise with different values for loss probability and size. A two-dimensional plane that diagrams exposure is shown in Figure 1.

For example, assume that a design for a bottle to hold a volatile fluid has a high potential for leaking. Further, assume that should the fluid leak, a fire will start. Given this situation, the exposure is high, as illustrated by point A in Figure 1. The developer might choose to apply a gasket to reduce the possibility of leaks, which would result in the risk exposure marked as point B. Alternatively, adding another compound might reduce the volatility of the fluid if it does leak. If this is undertaken, the potential for loss would shift to the risk exposure represented by point C. A combination of actions would result in the risk being shifted to point D, providing lower risk than any single solution.

This abstract example embodies two key points about applying risk management to software. First, the assessments of the factors that contribute to the risk exposure are informed judgments made by individuals who understand the failure mechanisms. Second, a variety of options implemented alone could reduce the risk exposure, but a combination of approaches often yields the best result. This exercise allows the developer to define the mitigation steps. Response patterns for certain software faults are becoming common—much like a gasket is the accepted solution for reducing the potential for a leak.

KEY CONCEPTS

The general concepts of hazard and risk analysis have been presented in previous articles.2,3 Applying general risk management concepts to software requires adapting approaches originally developed for analyzing systems dominated by mechanical and electrical subsystems. As with many engineering areas, risk management is easier to enact if a foundation has been built on key concepts—in this case, concepts particular to software. Such concepts are discussed below, along with a means for applying them together. It is important to understand these concepts in order to tailor risk management techniques to a particular organization. Understanding them enables a product manager to present a foundation for the risk-management plan before presenting the means of implementing it. It also means that although the techniques presented here can be changed later, the end result must still stand up to any challenge based on the fundamentals.

Safety Requirements. All medical devices must fulfill a set of operational requirements. These requirements include a subset focused on patient and provider safety. Most of these requirements are derived as a part of the initial engineering, including functional requirement needs analysis, architecture specification, initial risk analysis (IRA), and other processes used by the development team to define the initial concepts and operational requirements for the device.

Software risk analysis typically involves several processes that clarify the role of software in meeting the system safety requirements. Properly conducted, software risk analysis identifies how software failure can lead to compromised safety requirements and ultimately to patient or user hazards. Software risk analysis is applied at different levels of detail throughout product development. Therefore, this analysis supports the formulation of a systemwide risk analysis to understand how all aspects of the system support the safety specification.

Software risk analysis can identify the need for specific hardware or software devoted to supporting safety requirements. Such analysis can also pinpoint the need to modify the design or to reconfigure the software environment. Risk analysis is almost always applied to embedded software to understand its function as the primary safety-significant software. It can also be applied to design tools, compilers, automatic test software, and other supporting software that could indirectly affect system safety.

Software risk analysis assumes that the product software is organized into a hierarchical interconnection of functional building blocks. The execution of the code within a building block provides some function in support of the device requirements. A building block can be a subroutine, a function, or an object; a collection of functions, often called a module; or even a full subsystem, such as the operating system. The relationship of the building blocks—based on the way they interface and depend on one another—is also important. Although the concept of building blocks is an abstraction, this idea provides the structure needed to develop and understand the role of the software.

Trustworthiness. Software is expected to reliably perform a function. However, highly reliable software may not necessarily provide for the safe operation of the device. More importantly, software must be trustworthy. Trustworthiness hinges on the tenet that when the software fails, the device behavior must be predictable. Typically, when device software fails, the unit operation shifts so that the system is in a safe state. This is the state in which the system poses the lowest risk to the patient, the operator, and the environment. Usually, but not always, a device is considered to be in a safe state when all electromechanical operation is stopped and an alarm system is activated. This definition may not be adequate for a device such as a pacemaker; for this type of device, it might make sense to diminish software control of the device and provide only electronic control. Another alternative is to transfer control to an independent control subsystem that has a separate processor and software. Matters become more complicated if the safe state of the device is related to sequential or cyclic medical therapy. For example, a safe state for an intraaortic balloon pump depends on the balloon's inflation state. Stopping the system and activating an alarm while the balloon is inflated in the vessel would not be a safe state, whereas if failure occurs when the balloon is deflated, then stopping and alarming would be the safe state.

Since the development of software is a person-intensive, knowledge-based activity, it is common to associate highly reliable software with increased attention and effort per line of code delivered for a particular use. Some correlation also exists between the maturity of the development process—including the formal verification and validation processes—and the number of defects found in the resulting code. Highly reliable software, such as that being developed for the Federal Aviation Administration's new air-traffic control system, is estimated to cost $500 per delivered source line.4 Starting with a base of $100 per engineer hour, development costs for medical device software, written in C, seldom exceed $90 per delivered source line, even in highly critical, life-sustaining devices. Medical device industry norms do not provide the level of funding necessary to develop and formally ensure that software is highly reliable; yet a device in which a software failure places the patient in jeopardy is simply considered to be poorly engineered. Given the economic realities of the medical device business, designers therefore usually apply their efforts to achieving trustworthiness rather than NASA-level reliability.

ESTABLISHING RISK INDEXES

An important part of risk analysis is understanding how critical an unsafe condition might be. A risk index is a derived value that depends on the probability and the severity of the hazard. In traditional risk analysis, values for key parameters are multiplied to yield a numeric risk index called criticality. This method, based on the military standard MIL-STD-1629A, is typically not used for analyzing software. Instead, using guidance from the more recent system safety standard, MIL-STD-882C, a table can be constructed that provides the risk index for each combination of qualitative assignments for the occurrence probability and the loss or hazard severity. A simple version is shown in Table I. Note that this table is similar to the two-dimensional risk illustration shown in Figure 1.

                             Hazard Severity/Loss
Probability of Occurrence    Minor       Moderate     Major
Improbable                   Low         Low          Moderate
Remote                       Low         Moderate     High
Occasional                   Moderate    High         High
Reasonable                   High        High         Very high
Frequent                     High        Very high    Very high


Table I. Example of a simple risk index.
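
As an illustration of how a documented risk-index table such as Table I might be carried into a team's analysis tooling, the following C sketch encodes the table as a two-dimensional lookup. The enumeration labels and cell values are hypothetical placeholders for a team's own formally documented table.

    /* Hypothetical encoding of a Table I-style risk index. */
    typedef enum {                     /* qualitative probability of occurrence */
        PROB_IMPROBABLE, PROB_REMOTE, PROB_OCCASIONAL,
        PROB_REASONABLE, PROB_FREQUENT, PROB_COUNT
    } Probability;

    typedef enum {                     /* qualitative hazard severity or loss */
        SEV_MINOR, SEV_MODERATE, SEV_MAJOR, SEV_COUNT
    } Severity;

    typedef enum { RISK_LOW, RISK_MODERATE, RISK_HIGH, RISK_VERY_HIGH } RiskIndex;

    /* Rows follow Probability, columns follow Severity, mirroring Table I. */
    static const RiskIndex risk_table[PROB_COUNT][SEV_COUNT] = {
        /* Improbable */ { RISK_LOW,      RISK_LOW,       RISK_MODERATE  },
        /* Remote     */ { RISK_LOW,      RISK_MODERATE,  RISK_HIGH      },
        /* Occasional */ { RISK_MODERATE, RISK_HIGH,      RISK_HIGH      },
        /* Reasonable */ { RISK_HIGH,     RISK_HIGH,      RISK_VERY_HIGH },
        /* Frequent   */ { RISK_HIGH,     RISK_VERY_HIGH, RISK_VERY_HIGH }
    };

    RiskIndex risk_index(Probability p, Severity s)
    {
        return risk_table[p][s];
    }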

The use of a risk-index table to look up an identified risk combination has proven to be quite useful. It is important to remember the following points when applying this method.

  • The risk-index table should be formally documented, including a description of the qualitative parameters for each occurrence and severity.
  • A development team or quality group may define its own table with different labels and values.

The identified risks will be important information for many on the development team. Since these individuals may join the team after the risk analyses have been completed, enough detail must be provided so they can easily understand the context for the risk judgments. The values and appropriate actions can be developed so that management shares the decision responsibility for high-risk items. A separate table should describe the level of acceptability of the risk-index values (Table II). For example, each risk index can be tied to a specific hazard or loss of safety and to a cause. Because the cause might be linked to a mitigation, documents that contain risk indexes must indicate whether the indexes were assigned before or after specification of the hazard mitigation. If assigned before mitigation, the risk index can be used to indicate the need for mitigation mechanisms. If assigned after mitigation, the risk index should show how well the cause–mitigation pairing reduces the loss.

Risk-Index Value    Action
Very high           Unacceptable; requires (further) mitigation
High                Acceptable only with engineering and quality executive signoff
Moderate            Acceptable with project manager signoff
Low                 Acceptable with no review


Table II. Example of a risk-index value and action assignment table.
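
To keep the before-and-after-mitigation distinction explicit, each hazard entry can carry both indexes along with the Table II action implied by the residual index. The structure below is a hypothetical sketch that reuses the RiskIndex type from the earlier example; in practice this information lives in a controlled risk file rather than in code.

    /* Hypothetical hazard record tying a cause and mitigation to risk indexes. */
    typedef struct {
        const char *hazard;        /* e.g., "overdose delivered to patient" */
        const char *cause;         /* e.g., "runaway thread drives motor"   */
        const char *mitigation;    /* e.g., "watchdog plus lock-and-key"    */
        RiskIndex   index_before;  /* assigned before mitigation            */
        RiskIndex   index_after;   /* assigned after mitigation             */
    } HazardRecord;

    /* Table II-style action required for a given residual risk index. */
    static const char *const required_action[] = {
        [RISK_LOW]       = "Acceptable with no review",
        [RISK_MODERATE]  = "Acceptable with project manager signoff",
        [RISK_HIGH]      = "Acceptable only with engineering and quality executive signoff",
        [RISK_VERY_HIGH] = "Unacceptable; requires further mitigation"
    };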

SAFETY-SIGNIFICANT VARIABLES

Program execution typically involves setting and altering values. Many of these values have little effect on whether the device meets the system safety requirements. Some variables, such as the dosage rate for an infusion pump or the amount of energy a defibrillator should discharge, relate directly to device safety. An operator can input such values directly through a device's front panel. Computed variables containing crucial control values also play a role in device safety; one example is the stepper motor rate needed to achieve a given pump dosage. Variables whose values affect device safety are termed safety-significant variables.
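
The distinction can be made concrete with a hypothetical infusion-pump routine: the operator-entered dosage rate is a safety-significant variable, and the stepper rate computed from it is equally safety-significant, so both are bounds-checked before use. The limits and conversion factor below are invented purely for illustration.

    #define DOSE_MIN_ML_PER_HR    0.1    /* hypothetical labeled dose limits */
    #define DOSE_MAX_ML_PER_HR  999.0
    #define STEPS_PER_ML        1800.0   /* hypothetical pump geometry       */

    /* Convert an operator-entered dose (ml/hr) into a stepper rate (steps/s).
       Returns 1 if the dose is within limits; returns 0 and leaves the output
       untouched if it is not, so the caller can refuse to start therapy. */
    int compute_stepper_rate(double dose_ml_per_hr, double *steps_per_s)
    {
        if (dose_ml_per_hr < DOSE_MIN_ML_PER_HR ||
            dose_ml_per_hr > DOSE_MAX_ML_PER_HR)
            return 0;

        *steps_per_s = dose_ml_per_hr * STEPS_PER_ML / 3600.0;
        return 1;
    }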

Traditional risk analysis includes determining the probability that the system will threaten humans. Analysis performed according to MIL-STD-1629A includes multiplying numeric ratings for occurrence, severity, and detectability. This process can confuse engineers new to software risk analysis. Software risk analysis as currently practiced for medical device development does not reliably support quantification at this level.

SAFETY-SPECIFIC SOFTWARE

Software risk analysis hinges on the idea that not all software is directly involved in meeting the device's safety requirements. The support of the safety requirements is spread unevenly among the software's building blocks. Modules that fulfill the safety requirements are typically termed safety-critical or safety-significant. For example, a module that contains an algorithm for controlling the energy level applied to a patient is much more safety-critical than one that performs a background housekeeping task. The engineering literature also describes safety-related software as forming a safety net that ensures safety when safety-critical software fails.

The concept that not all software within a device is safety-critical might be difficult to accept because, for simple devices, the source code is compiled into a single monolithic block of executable machine instructions to which abstract boundaries do not apply. It is easy to see that any software failure can eventually result in the failure of software responsible for safety requirements. This threat usually boils down to three points:

  • Other software can corrupt variables affecting safety performance, which means that safety-critical information must be maintained so that corruption can be detected.
  • Other software can cause execution threads to fail, resulting in the execution of code out of normal sequence. Well-engineered, safety-critical software ensures the proper sequence of critical code segments.
  • Other poorly engineered software can consume or mismanage computing resources such as processor computation bandwidth and working memory, rendering safety-critical software nonfunctional. As C++ has become more popular, engineers must address so-called memory leaks, which result when execution threads exit without freeing memory resources. Safety-related software must be protected from memory leaks. One solution is to separate localized resource control from system control, as sketched after this list.
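
One way to localize resource control, sketched here under the assumption that the pool size and interface are free design choices, is to give the safety-critical module its own fixed, statically allocated buffer pool so that a leak elsewhere in the system cannot starve it.

    #include <stddef.h>

    #define POOL_BLOCKS  8
    #define BLOCK_BYTES  64

    /* Static pool owned by the safety-critical module; the heap is never used. */
    static unsigned char pool[POOL_BLOCKS][BLOCK_BYTES];
    static unsigned char pool_used[POOL_BLOCKS];

    void *safety_pool_alloc(void)
    {
        size_t i;
        for (i = 0; i < POOL_BLOCKS; i++) {
            if (!pool_used[i]) {
                pool_used[i] = 1;
                return pool[i];
            }
        }
        return NULL;                /* pool exhausted: a detectable, local fault */
    }

    void safety_pool_free(void *p)
    {
        size_t i;
        for (i = 0; i < POOL_BLOCKS; i++) {
            if (p == pool[i]) {
                pool_used[i] = 0;
                return;
            }
        }
    }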

MITIGATION

Risk management depends on the premise that software failure can result in increased risk. Developers must define approaches for reducing, or mitigating, that risk. This requires designers to couple potential software failures to mitigations. Such pairings are described below, ordered roughly from the least to the most dependable for reducing the effect of a potential software failure.

Inform the User. A potential risk might be linked to an information prompt. For example, "Failure: Information written to wrong area of screen buffer. Mitigation: Provide user documentation relating to expected display." Obviously, this particular mitigation is weak because it relies on activities outside the control of the development and quality teams. Developers must review the instructions to ensure that the screen layout provides a key to information found in the different screen areas.

Hazards related to displayed information are particularly difficult to mitigate. Development teams often simply indicate that a trained care provider should be able to detect presentations that do not make sense or that contain distorted values. Mitigation quality depends both on the value of the screen information to the care provider and on how well the care provider is trained. When information is critical to therapy, some designs provide a means for reading the display buffer back to ensure information validity. More dangerous is dead facing, in which the device display is commanded blank but the device continues to administer a therapy.

Development Process. A typical pairing here would be: "Failure: Flawed breathing algorithm implementation. Mitigation: Independent review." This indicates that something exceptional will be done within the development process. To complete process mitigation, the mitigation must be described in the software development plan and audited to ensure that an independent review did occur and, equally important, that any findings were subsequently acted on.

Software Mechanisms. A pairing that expresses a software problem might be presented as "Failure: Overwritten pump-speed variable. Mitigation: Variable redundantly stored, accessed, and changed by a single function." Adding such special software is common and is considered good practice because it enforces structured access and enables corruption detection on every access. These mechanisms can be weakened, however, if sloppy use—such as moving a critical variable to a locally used variable—is allowed. This reinforces the fact that software mechanisms might require parts of the development process, such as code inspection, to detect and enforce usage rules.
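
A common form of this mechanism is to store the safety-significant value together with its bitwise complement and to route every read and write through a single pair of functions, so that corruption is detected on each access. The sketch below is hypothetical; how the caller responds to detected corruption is part of the device's safe-state design.

    #include <stdint.h>

    /* Pump-speed variable stored redundantly as value plus bitwise complement. */
    static uint16_t pump_speed;
    static uint16_t pump_speed_check;

    void set_pump_speed(uint16_t value)
    {
        pump_speed       = value;
        pump_speed_check = (uint16_t)~value;
    }

    /* Returns 1 and writes *value on success; returns 0 if corruption is found,
       in which case the caller must drive the device to its safe state. */
    int get_pump_speed(uint16_t *value)
    {
        if (pump_speed != (uint16_t)(~pump_speed_check))
            return 0;
        *value = pump_speed;
        return 1;
    }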

Hardware Mechanisms. An example of a failure that calls for hardware mitigation would be "Failure: Runaway execution thread. Mitigation: Hardware watchdog timer." Installing a separate hardware safety mechanism is considered good practice because the hardware relies on an independent technology to provide the device safety net. However, if the software fails to interface properly with the watchdog circuitry, and the start-up test fails to detect the malfunction, this particular hardware mitigation could be ineffective.
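
In outline, the main control loop refreshes an external watchdog timer on every pass; if a runaway thread keeps the loop from running, the timer expires and the hardware forces the safe state independently of the software. The register address, key value, and loop contents below are hypothetical and depend entirely on the watchdog circuit chosen.

    /* Hypothetical memory-mapped watchdog refresh register and key. */
    #define WDOG_KICK_REG  (*(volatile unsigned int *)0x40001000u)
    #define WDOG_KICK_KEY  0xA5A5u

    extern void read_sensors(void);      /* assumed application routines */
    extern void update_therapy(void);

    static void watchdog_kick(void)
    {
        WDOG_KICK_REG = WDOG_KICK_KEY;   /* must occur before the timeout elapses */
    }

    void control_loop(void)
    {
        for (;;) {
            read_sensors();
            update_therapy();
            watchdog_kick();             /* skipped if execution runs away, so the
                                            hardware drives the device to safe state */
        }
    }

As noted above, this protection holds only if the start-up self-test confirms that the watchdog circuit works and that the software actually reaches the refresh call.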

Execution Diversity. This type of mitigation depends on the system's architecture having a safety supervisor. A safety supervisor employs a separate processor with its own software to ensure that the primary processor and software stream operate according to device safety requirements. It is common for this type of architecture, described in the literature for process control systems, to be found in off-the-shelf processor and software packages.5

In the European Union, the response speed of the mitigation mechanism for certain devices is considered important. For example, in a syringe infusion pump the dosage rate is proportional to the motor speed, and overdose is a common hazard for this type of device; it can be caused by software runaway inducing a high motor rate. A common form of mitigation is a watchdog timer. Depending on how the timer is implemented, the time elapsed from a software fault to an error, to detection, and finally to a safe state with the pump motor stopped could be too long to prevent a dangerous dose of a drug from being uncontrollably administered. Although a mitigation exists, it may not reduce the risk exposure if it is slower than the therapy it is protecting. A lock-and-key mechanism on the software that commands the motor speed would provide a faster mitigation and would relocate the risk exposure to a safer region of the software.

Lock-and-key software depends on safety-critical functions being executed only when the caller presents the proper key. Properly implemented, lock-and-key software also detects a jump into the middle of a function. An illegal entry at the start of the function is therefore detected before any command is issued; should entry occur further down the function, the illegal command is detected within microseconds after the motor is commanded. Combining watchdog and lock-and-key solutions provides the most protection.
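
A minimal lock-and-key sketch, under the assumption that the key value and fault handler are free design choices: the caller must pass the agreed key into the safety-critical function, and a token set only at the legal entry point is rechecked immediately before the motor is commanded, so a jump into the middle of the function is caught.

    #define MOTOR_CMD_KEY  0x5A3Cu

    static volatile unsigned int entry_token;       /* set only at the legal entry */

    extern void command_motor_hw(unsigned int speed);   /* assumed motor driver  */
    extern void enter_safe_state(void);                  /* assumed fault handler */

    void command_motor_speed(unsigned int speed, unsigned int key)
    {
        if (key != MOTOR_CMD_KEY) {          /* wrong or missing key: illegal call */
            enter_safe_state();
            return;
        }
        entry_token = MOTOR_CMD_KEY;         /* proves the entry check ran */

        /* ...reasonableness checks on the requested speed would go here... */

        if (entry_token != MOTOR_CMD_KEY) {  /* jump into mid-function detected */
            enter_safe_state();
            return;
        }
        command_motor_hw(speed);
        entry_token = 0;                     /* invalidate the key on exit */
    }

This variant rechecks the token just before commanding the motor; the same check can instead run in a fast supervisory task shortly after the command is issued, which matches the microseconds-after-commanding timing described above.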

CAUSE-AND-MITIGATION PATTERNS

One of the most recent paradigms to appear for software developers is the idea of patterns. Patterns are based on the observation that a collection of software objects can be connected in certain ways to solve common problems. Although the idea is abstract, it reinforces the point that at some level the solutions to many problems take on a similar form.

The paradigm of patterns can be applied to streamline cause and mitigation pairing. Patterns for a limited set of software failures are listed in Table III. The list of mitigation mechanisms in the table is loosely ordered from more common to less common approaches. All patterns assume a solid development process that includes code inspection and verification testing as a baseline. The patterns represent only a starting point. The challenge is to specify additional patterns that might be unique to the product's architecture or implementation environment.

Failure: Data/variable corruption
Mitigation mechanisms:
  • Redundant copies; validity checking; controlled access
  • CRC or checksum of storage space
  • Reasonableness checks on fetch

Failure: Hardware-induced problems
Mitigation mechanisms:
  • Rigorous built-in self-test (BIST) at start-up
  • Reasonableness checks
  • Interleaved diagnostic software
  • (See also illegal function entry and data corruption)

Failure: Software runaway; illegal function entry
Mitigation mechanisms:
  • Watchdog hardware
  • Lock-and-key on entry and exit
  • Bounds/reasonableness checking
  • Execution thread logging with independent checking

Failure: Memory leakage starves execution stream
Mitigation mechanisms:
  • Explicit code inspection checklist and coding rules
  • Memory usage analysis
  • Instrumented code under usage stress analysis
  • Local memory control for safety-critical functions

Failure: Flawed control value submitted to hardware
Mitigation mechanisms:
  • Independent read back with reasonableness check
  • Hardware mechanism provides independent control/safe state
  • Safety supervisor computer must agree to value

Failure: Flawed display of information
Mitigation mechanisms:
  • BIST with user review direction in the user manual
  • Read back with independent software check
  • Separate display processor checks reasonableness

Failure: Overlapped illegal use of memory
Mitigation mechanisms:
  • Explicit inspection checklist item
  • Coding rules on allocation and deallocation
  • Special pointer assignment rules


Table III. Failure patterns and mitigation mechanisms.

LINKING MITIGATION SOLUTIONS

In large projects with many developers, there is a real potential for some safety-related software not being implemented and the mitigation functions therefore not being properly supported. For this reason, applying traces to software is becoming more important. Typically, a trace links a downstream activity to something that was determined earlier in the product development life cycle.

Traces are a critical part of software risk management. At a minimum, mitigation demonstration must be linked to the tested cause-and-mitigation pair. A more conservative approach—as might be applied to software found in blood-bank devices and systems—is to link each cause-and-mitigation control and safety requirement to specific requirements in the product's software requirements specification.6 Linking safety requirements to specific logic routines that accomplish the function also ties acknowledgement to mitigation and to demonstration.

FOLLOWING BEST PRACTICES

Medical devices that perform similar clinical functions tend to have similar architectures. This is expected because dominant architectures form the basis for accomplishing a given function.7 This often means that industry expectations are set for what mitigation mechanisms will appear in a given device type. Any FDA reviewer who has looked at a number of software-controlled pumps will expect to find a pressure-limiting mechanism in a new one. A mitigation mechanism can thus become identified with best practices. When this happens, mitigation mechanisms for both software and hardware become defaults. This can lead to a design carrying defaults that do not make sense for the architecture's implementation. For example, watchdog timer circuits were developed for situations in which runaway or nonresponsive, loop-bound code represents a threat to safety. This expectation has led to watchdog timer circuits being implemented in devices in which software failure has no effect on device safety. Watchdogs have simply become the expected software default.