
The Voice From The Top: Henney's "Recall" Signals A New Era at FDA

Medical Device & Diagnostic Industry Magazine

An MD&DI January 1999 Column

Jane Henney begins her term as the new FDA commissioner by restating her commitment to ongoing agency reforms.

According to Genesis, God's first day on the job encompassed the creation of heaven, earth, light, day, and night from a stew of primeval darkness. Jane Henney had an arguably more modest agenda for her first day as the new FDA commissioner, which began in a well-lit ballroom with an address to the plenary session of the annual educational conference of the Food and Drug Law Institute (FDLI) in Washington, DC.

The mood in the room was expectant but upbeat: these days, not even FDA's harsher critics would characterize the agency—like the world in the beginning—as "without form, and void." But the anticipation in the crowd was an indication of the importance accorded the voice from the top—an acknowledgement that the future of FDA and of the industries it regulates will undoubtedly be shaped by Henney's clarity of purpose and steadiness of hand.

In returning to FDA for a second tour of duty, Henney rejoins an entity that she claims is "vastly different from the agency I left four years ago." What she described as her "lead priorities" reflect just how much the landscape has changed. "First and foremost," said Henney, she is committed to "the full and effective implementation of FDAMA, both in letter and in spirit."

While commending FDA for its implementation efforts to date, Henney specified that those efforts must continue to respect section 406(b) of the act—the provision that requires FDA to remain in active consultation with all "stakeholders" affected by its mandate. She was also explicit about the obligation to equate principles with results, not good intentions, and to judge public officials accordingly: "Policy is not what is planned, but what happens."

Henney's second priority will be "to strengthen the science base of the agency"—an oft-repeated pledge about which, Henney told the audience, she is entirely serious: "I assure you, this is not rhetoric." Among other measures, this will entail improved recruitment and retention of skilled personnel and better use of non-FDA government scientific resources. It is increasingly critical, said Henney, that FDA has adequate "scientific sophistication" to understand and regulate the multitudes of new products sure to emerge from a host of still-developing technologies. Current budgetary constraints will make a daunting goal even more difficult, given that the agency's "core budget has not risen in concert with its new responsibilities." (Sen. Orrin Hatch (R-UT), another speaker at the session, called FDA's budget "a sham.")

Even in the face of such potential problems, Henney maintains, "the best organizations find ways to constantly improve themselves," and she obviously intends for this process to prevail at the agency. What Henney terms her "forte," the reason she was "recalled" to FDA, is "the administration of large health-care organizations." But this impersonal-sounding phrase belies a staunch belief in the force of personality. What is always required, Henney says, is "much more than management. . . . It's really about leadership."

This issue of MDDI marks the debut of our special Medical Plastics and Biomaterials section. Each month, the dedicated MPB section will present in-depth technical articles and news devoted specifically to materials technology, exploring the full range of plastics and other biomaterials (ceramics, metals, composites, biologics) used in manufacturing and packaging medical products.

We're pleased to bring you something new for the New Year, and extend our best wishes for 1999!

Jon Katz

Copyright ©1999 Medical Device & Diagnostic Industry

Device and Biologic Combination Products: Understanding the Evolving Regulation

Medical Device & Diagnostic Industry Magazine

An MD&DI January 1999 Column


FDA is establishing new policies and guidelines to deal with combination device and biologic products. Manufacturers must keep abreast of the latest developments to avoid costly delays.

In recent years, regulating medical combination products has been a recurring challenge for FDA. A combination product—one that incorporates at least two of the regulated component categories of device, drug, or biologic into one product—presents FDA with unique difficulties for regulation and for determining which of its centers should have jurisdiction over the product. FDA's policies in this regard have sometimes been outpaced by the extremely fast-moving technology for combination products, especially in the subset of products that combine a device and a biologic.

The development of combination device and biologic products is a relatively new field, but one that has produced more than its share of controversy within the medical manufacturing community. As is the case with all combination products, the more time FDA spends debating jurisdiction and regulation, the more it costs the manufacturer, since the process for marketing approval cannot even begin until the agency decides which of its centers should have the power to grant that approval.

Used in the treatment of burns, TransCyte from Advanced Tissue Sciences (La Jolla, CA) is a temporary skin substitute derived from human fibroblasts.

Congress first acknowledged the need for specific regulation on combination products in the Safe Medical Devices Act of 1990 (SMDA). But the continual emergence of new, more-complex combination products quickly blurred any distinguishing lines and has complicated product designation and regulation. The issue of products combining a device and a drug, such as an asthma inhaler, has received considerable scrutiny over the past several years. But products combining a device and a biologic, such as organ replacement or assist devices, have received less attention. Recent trends, however, suggest that device and biologic combination products are quickly moving into the spotlight. A 1998 survey conducted by FDA identified hardware and tissue-engineered combination products as a rapidly growing trend in medical device technology.1

Device and biologic products—which include, among other things, cellular and tissue implants, infused or encapsulated cells, artificial and replacement organs, heart valves and pumps, and cardiac, neural, and neuromuscular stimulation devices—fit even less neatly into existing regulatory paradigms than drug and device combinations. For example, as part of the question of regulation, FDA must take into account the possibility of tissue contamination and other hazards involved in using animal-derived tissues. With the science constantly evolving, it becomes difficult for manufacturers to remain informed of changes in regulation. What follows is a general breakdown of FDA's current policies and initiatives regarding combination device and biologic products, as well as a look at the shaping of new regulatory proposals.


The three centers within FDA that regulate combination products are those for Biologics Evaluation and Research (CBER), Devices and Radiological Health (CDRH), and Drug Evaluation and Research (CDER). Determining which of these centers will have jurisdiction over any particular combination product can be an involved process. To address this issue, FDA decided that the determination of regulation for combination products would be based on the primary mode of action of that product.

To illustrate the current procedure, we can look at a product such as the encapsulated dopaminergic cell. This product uses a device component to deliver a drug (dopamine) by means of a cellular mechanism. All three of the categories are included—drug, device, and biologic—so which center would regulate the product? In this case, FDA determined that the primary mode of action for the dopaminergic cells is in fact via a cellular mechanism; thus, CBER was granted primary jurisdiction over the product with consultation from CDRH for the device component.

Although a product's primary mode of action determines which center has jurisdiction, the designated center could very well lack necessary information regarding components of the product that are outside its area of expertise. For example, while CBER could reliably analyze the cellular component of the dopaminergic cells, could it be entrusted to accurately evaluate the drug and device aspects of the product? In response to this issue, SMDA created intercenter agreements among CBER, CDER, and CDRH. The agreements allowed the center with jurisdiction over a combination product to consult with the other centers regarding product components outside its specialty area. In the case of the dopaminergic cells, before making a decision about the product, CBER would consult with CDER about the drug component and with CDRH about the device component. Because the primary mode of action of the product is a cellular mechanism, CBER would retain jurisdiction and would ultimately make all decisions regarding marketing approval, but it would first enlist the expertise of the other two centers.
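The designation-and-consultation logic described above can be sketched in a few lines of code. This is purely an illustration of the rule, not an FDA tool; the function and names are hypothetical.

```python
# Illustrative sketch of the primary-mode-of-action rule: the lead center
# is determined by the product's primary mode of action, and the centers
# responsible for the product's other component categories are consulted.

LEAD_CENTER = {"biologic": "CBER", "device": "CDRH", "drug": "CDER"}

def assign_jurisdiction(components, primary_mode_of_action):
    """Return (lead center, consulting centers) for a combination product."""
    if primary_mode_of_action not in components:
        raise ValueError("primary mode of action must match a component category")
    lead = LEAD_CENTER[primary_mode_of_action]
    consults = sorted(LEAD_CENTER[c] for c in components if c != primary_mode_of_action)
    return lead, consults

# Encapsulated dopaminergic cells: drug, device, and biologic components,
# with the primary mode of action judged to be cellular (biologic).
lead, consults = assign_jurisdiction({"drug", "device", "biologic"}, "biologic")
# lead is "CBER"; CDER and CDRH are consulted on the drug and device components.
```

The sketch mirrors the dopaminergic-cell example: CBER retains jurisdiction and final approval authority, while the other two centers contribute expertise on the components outside CBER's specialty.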

The centers involved with combination device and biologic products are CBER and CDRH. When CBER has jurisdiction over a combination product, the intercenter agreement enables it to consult with CDRH regarding the safety, effectiveness, and durability of any device components of that product. Likewise, CDRH may consult CBER about the biologic components of combination products under its jurisdiction. Generally, combination products intended for direct therapeutic application will be regulated by CDRH. Products with components that collect, separate, or process blood or blood products, analogous products, or cellular biologics—including cellular and tissue implants, infused cells, and encapsulated cells of tissue—are regulated by CBER.

Interactive wound-care products provide a useful illustration of how FDA currently regulates combination device and biologic products. Noninteractive wound-care products are regulated either by CDRH as devices (standard wound dressings), or, because they incorporate a drug, by CDER. However, the primary mechanism of interactive or biologically active wound dressings is not medicinal, but rather is achieved through a device or biologic component. These products serve as long-term skin substitutes or temporary synthetic skin and are intended to actively promote healing by interacting directly or indirectly with bodily tissues.

Interactive wound dressings can be divided into acellular products that seek to provide an enhanced environment for skin regrowth and cellular products that contain epidermal and/or dermal tissue. Examples of acellular interactive dressings include polymers or synthetic peptides linked with extracellular matrix constituents. Examples of cellular interactive dressings include products that contain allogeneic epithelial cells or fibroblasts cultured on biodegradable polymers and products that contain keratinocytes and fibroblasts that adhere to collagen substrates. These products meet both the definition of a device, since they work by mechanical mechanisms of action (e.g., they provide a macromolecular scaffold for tissue repair through temporary wound coverage), and that of a biologic, since they contain biologic components (cells). In addition, interactive wound-care products that are considered to have a drug mechanism of action, and therefore would not be classified as wound dressings, may contain other biologic acellular components, such as growth factors and enzymatic debriding agents.

Regranex gel, manufactured by OMJ Pharmaceuticals Inc. (Manati, PR), is one such interactive wound-care product that recently obtained marketing approval. The product stimulates the recruitment and proliferation of wound-repair cells, and it is intended for the treatment of diabetic, neuropathic ulcers of the lower extremities. The active component of Regranex is becaplermin, a recombinant, human platelet-derived growth factor. Because the active component is the growth factor, Regranex is regulated as a biologic and was approved under a biologics license application (BLA). It can be considered an acellular wound-care product, with a primarily biologic/druglike mechanism of action, and not a device.

On the other hand, CDRH was granted jurisdiction over Apligraf, a new interactive wound and burn dressing manufactured by Organogenesis (Canton, MA). Apligraf is an artificial skin graft used to treat serious skin ulcers caused by venous insufficiency. It is produced from bovine collagen, human keratinocytes, and fibroblasts derived from human infant foreskins. The dressing provides wound protection and fosters the growth of healthy new skin. Comprising a scaffold of cells on a collagen substrate, the product can be considered a cellular interactive wound dressing. Despite its cellular composition, however, FDA ruled that the device mechanism of the scaffold architecture constituted Apligraf's primary mode of action and determined that CDRH should regulate the product. Similarly, Dermagraft-TC (Advanced Tissue Sciences; La Jolla, CA), a dermal-replacement wound dressing for use in plantar diabetic foot ulcers, is also being reviewed by CDRH but has not yet been approved. This product consists of neonatal dermal fibroblasts cultured in vitro onto a bioabsorbable mesh.


When a combination product has a clearly definable primary mode of action, FDA can assign jurisdiction quickly, and the intercenter agreement system works as it should. Unfortunately, this is not always the case. It is often difficult for FDA to identify and agree on the primary mechanism at work in a new product, which sometimes results in lengthy disputes over jurisdiction and delays in marketing approval.

If a sponsor suspects that a new combination product might cause debate over jurisdiction or the primary mode of action, the issue can often be resolved at the program level before such disputes begin. As soon as sufficient product information exists for the agency to make a regulatory designation, the manufacturer can contact the CDRH jurisdiction and device status expert or the CBER jurisdiction liaison (CBER ombudsman). Often, one or both of these representatives will expedite the decision-making process. If that approach fails, the sponsor can then make a formal request for product designation. The request (original and two copies) should not exceed 15 pages and should be filed prior to the submission of a premarket approval (PMA) application. All relevant sponsor information is required, along with a thorough description of the product, which should include:

  • The product's classification, common name, and proprietary name.
  • An indication of any component of the product that has already received or is not subject to premarket approval or an investigational exemption.
  • The product's chemical, physical, or biological composition.
  • The status of and brief reports on any developmental work, including animal testing.
  • Descriptions of the manufacturing process.
  • The sources of all components.
  • The proposed use or indications of the product and a description of all known modes of action.
  • The sponsor's identification of the primary mode of action and the basis for that determination.
  • The schedule and duration of proposed use, and the dose and route of product administration.
  • A description of any related products and their regulatory status.
  • The sponsor's recommendation as to which center should have primary jurisdiction, with accompanying rationale for this recommendation.

The request for designation is reviewed for completeness within five working days of receipt by FDA. If it is deemed complete, the agency will inform the sponsor that the request has been accepted for filing. Within 60 days of this filing date, the jurisdiction officer must issue a letter of designation. If the sponsor disagrees with the designation, it may submit a written request for reconsideration within 15 days of receipt of the designation letter. Alternatively, the sponsor may also take its request to the FDA ombudsman for final resolution.


Although it adheres to the established regulation and designation procedures in most combination product cases, FDA is beginning to recognize the need for new programs. In an attempt to match its regulatory process with the technological development of combination device and biologic products, FDA has sought new ideas and solutions to address specific needs in this area. One such proposal resulted in the creation of the Tissue Reference Group, an intercenter reviewing committee. The group is composed of six members, three from CDRH and three from CBER. Each member specializes in the issues concerning tissue-engineered components in new products. The group reviews only those products containing tissue-engineered components, with the aim of providing a much faster response time for product designation.

Fetal cells inoculated into this cartridge replicate until they reach adult form and can take over liver functions (Vitagen; La Jolla, CA).

In addition to the Tissue Reference Group, CBER and CDRH personnel are working together to develop specific standards and guidelines for manufacturers to follow when developing human cellular and tissue-based components for combination device and biologic products. A document recently published by FDA, Proposed Approach to Regulation of Cellular and Tissue-Based Products, proposes a regulatory framework covering the broad spectrum of all uses of human tissue in the medical field.2 As it applies to combination device and biologic products, the proposal seeks to provide each center with the same clear, comprehensive set of regulatory guidelines for combination products containing tissue components. The approach applies to cells and tissues that are combined with nontissue components, are manipulated extensively, or are used for purposes other than their normal functions. It would require manufacturers who develop combination device and biologic products to maintain a level of scrutiny and rigor commensurate with the product's level of risk. The proposal is designed to achieve the following goals:

  • Prevent the unwitting use of contaminated tissues that could transmit infectious diseases such as AIDS and hepatitis. Although this point is directed more toward autologous or allogeneic tissue and does not specifically address the regulatory requirements regarding the tissue components of combination device and biologic products, it is an important factor in tissue regulation.
  • Prevent improper handling or processing that might contaminate or damage tissues. The manufacturer would be required to follow GMPs and have strict processing controls encompassing clinical safety and effectiveness concerns. The marketing application would have to contain a chemistry, manufacturing, and controls (CMC) section, unless it can be determined that safety and effectiveness requirements can be satisfied by the manufacturer meeting product specifications and processing controls.
  • Ensure that clinical safety and effectiveness are demonstrated in the use of all tissues. Manufacturers would have to submit a BLA, 510(k), or PMA and provide clinical safety and effectiveness data. For products operating via reproductive or metabolic pathways, clinical studies would be conducted under an investigational new drug (IND) application. Studies for products intended for local, structural reconstruction or repair would be conducted under an investigational device exemption (IDE).

In addition to these stipulations, all human tissue and cell product manufacturers would be required to register their establishments and list their products with CBER, regardless of whether the products are regulated as devices or biologics. Furthermore, the agency would require that all labeling and promotion be clear, accurate, balanced, and nonmisleading. Overall, the proposed guidelines attempt to standardize tissue regulation and require all manufacturers to adhere to a strict protocol in the development and testing of tissue-based products.


FDA's regulation of combination device and biologic products has often been fragmented and unclear. Current proposals for more-specific, comprehensive regulation of tissue-based products provide evidence that FDA recognizes the growing problem and is working on developing new policies and guidances. Issues pertaining to product safety—for example, viral contamination or contamination that could arise during manipulation of the tissue—are at the forefront of FDA's concerns. Until it establishes comprehensive and accurate processes for designating jurisdiction and determining a product's primary mode of action, the agency will continue to exercise its discretion on a flexible, case-by-case basis regarding the more-complex or problematic products. Manufacturers should make every attempt to keep abreast of the latest developments at FDA and should design product development and clinical testing protocols that are responsive to the evolving regulatory climate.


1. WA Herman, DE Marlowe, and H Rudolph, Future Trends in Medical Device Technology. Results of an Expert Survey (Rockville, MD: Center for Devices and Radiological Health, FDA, 1998).

2. Proposed Approach to Regulation of Cellular and Tissue-Based Products, Docket No. 97N-0068 (Rockville, MD: Center for Biologics Evaluation and Research, FDA, February 28, 1997).

Sharon A. Segal, PhD, is practice director, medical devices, for The Weinberg Group Inc. (Washington, DC).

Copyright ©1999 Medical Device & Diagnostic Industry

Making Device Software Truly Trustworthy

Medical Device & Diagnostic Industry Magazine

An MD&DI January 1999 Column

When Bill Wood examines the reason for conducting software risk analysis, he says, "We've seen devices fail and kill patients. One device that failed because of software shot high energy into a patient. Those sorts of things started piquing people's interest in understanding that the role of software had changed. And the failure of software very definitely could result in patient injury."

In his article on page 139, Wood describes how the role of software has evolved, why conducting software risk analysis is critical, and how to conduct such analysis. From a safety viewpoint, he says, what's most important is whether a device is trustworthy. "When software does fail, are there things that will stop the failure from progressing to becoming a hazard to the patient? A truly trustworthy device is one that fails in a way that I can always predict," he says.

Bill Wood emphasizes trustworthiness.

Trustworthiness in medical devices is much more important than reliability, he explains. "What we're doing here is proving the trustworthiness of the system as opposed to the reliability. A device that is not reliable will fail and will be deemed ineffective, so we must have some level of reliability. But for an engineer, you want to isolate yourself on the risk side and say, 'Why am I looking at this?' You're saying, 'It's because I want something that's trustworthy.'"

Until a few years ago, medical software development had not been on the leading edge. When it was introduced into devices in the mid-1980s, most software was only reporting status or running reports. According to Wood, one reason that software use in medical devices is growing is because simply changing the software can often improve a device. This growth, he says, has meant that software is becoming responsible for the functional capabilities of devices.

Another factor that has heightened the need for software risk management is the increased regulatory attention prompted by software failures. Off-the-shelf software, he says, has introduced components of which the medical device developers have no structural knowledge, since they had no part in developing the software. "The off-the-shelf software brings up all sorts of issues that relate to safety. We discovered that with FDA, you couldn't just do a lot of hand waving to work your way through that. You have to show the analysis."

Wood emphasizes that good software risk analysis requires that engineers focus on the user very early. "With safety, you must focus on it very early, because it will have a definite impact on your architecture. A cascading series of analyses results in safety requirements. If you can codify those requirements—write them down and respond to them by either altering your architecture or making specific requirements that drive your implementation—then as you sharpen your pencil and perform more detailed analysis with fault trees and FMEAs, you can discover holes in your thinking."

People who are good at risk analysis, he says, can imagine how a failure progresses within a system. They can then use their hypothesis to figure out what it would take to stop potential failures. Risk analysis, says Wood, enables engineers to visualize the problem and the solution, and, later, to insert defects to see if their thinking is correct.

Wood points out that patterns are emerging that provide a basis for how to resolve and reduce typical hazards. The field is starting to mature, he says, which has led to the repeated use of certain solutions, such as the expectation that watchdog hardware will always be applied. The notion of patterns, he says, arises out of object-oriented design, which applies the concept of "supersets" to different problems. "One of the reasons I decided to write an article on software risk management was so we can think about the reapplication of what we've learned."
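The watchdog pattern Wood mentions can be illustrated with a short sketch. This is a hypothetical software analogue (real devices typically use an independent hardware timer): if the control loop stops "kicking" the watchdog within its deadline, the device is forced into a predictable safe state rather than failing in an unbounded way.

```python
# Minimal sketch of the watchdog pattern: the main loop proves liveness by
# kicking the watchdog each cycle; an independent check forces a safe state
# when the deadline is missed. Names and structure are illustrative only.

import time

class Watchdog:
    def __init__(self, timeout_s, safe_state_action):
        self.timeout_s = timeout_s
        self.safe_state_action = safe_state_action  # e.g., disable the output stage
        self.last_kick = time.monotonic()

    def kick(self):
        """Called by the control loop each cycle to prove it is still alive."""
        self.last_kick = time.monotonic()

    def check(self):
        """Called from an independent context (a timer interrupt in hardware)."""
        if time.monotonic() - self.last_kick > self.timeout_s:
            self.safe_state_action()  # fail in a predictable, safe way
            return False
        return True

events = []
wd = Watchdog(timeout_s=0.05, safe_state_action=lambda: events.append("output disabled"))
wd.kick()
healthy = wd.check()   # loop is alive within the deadline
time.sleep(0.06)       # simulate a hung control loop
hung = wd.check()      # deadline missed: safe state entered
```

This is the "trustworthy, not merely reliable" idea in miniature: the watchdog does not prevent the failure, it guarantees that the failure progresses to a known safe state instead of an unpredictable hazard.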

Copyright ©1999 Medical Device & Diagnostic Industry

A Practical Guide to ISO 10993-3: Carcinogenicity

Medical Device & Diagnostic Industry Magazine

An MD&DI January 1999 Column

ISO 10993

This final installment in MD&DI's series of articles on the international biocompatibility standards discusses when and how to conduct carcinogenicity testing. Last month's installment covered Sample Preparation and Reference Materials.

THE POTENTIAL of a device material that comes in contact with a patient to cause or incite the growth of malignant cells—that is, its carcinogenicity—is among the issues addressed in the set of biocompatibility standards developed by the International Organization for Standardization (ISO). Part 3 of the ISO 10993 standards, which covers genotoxicity, carcinogenicity, and reproductive toxicity, describes carcinogenicity testing as the means "to determine the tumorigenic potential of devices, materials, and/or extracts to either a single or multiple exposures over a period of the total life-span of the test animal." The circumstances under which such an investigation may be required are indicated in ISO 10993-1, Table 2: Guidance for Supplementary Evaluation Tests. Specifically, such testing should be considered for a device that will have permanent contact (longer than 30 days) with tissues, either as an implant or as an externally communicating device. Although this definition clearly covers all permanent implants, including those that are designed to be absorbed, and extracorporeal devices that will be needed for the remainder of a patient's life, the standard further indicates that "carcinogenicity tests should be conducted only if there are suggestive data from other sources." Thus, not every implant or extracorporeal device needs to be subjected to this time-consuming and expensive testing.


Most investigators and regulators can agree on the criteria for undertaking carcinogenicity testing, but there is almost no consensus on how to actually do it. Rodents are invariably chosen as the test species because their relatively short life spans make it practical to carry out lifetime studies. Traditionally, the effects of industrial chemicals, pesticides, food additives, and pharmaceuticals are evaluated via lifetime feeding, inhalation, or dermal application studies using two rodent species. However, there are no comparable validated in vivo or in vitro models available for testing devices or biomaterials. In addition, debate continues within the scientific community about whether two rodent species are needed and, if not, which rodent strain is preferable.

ISO 10993-3 provides some guidance by referencing the Organization for Economic Cooperation and Development (OECD) protocols 451 (Carcinogenicity Studies) and 453 (Combined Chronic Toxicity/Carcinogenicity Studies). Although these protocols were written for lifetime studies in rodents to evaluate chemicals that may be introduced into the body by means other than implantation, which is the type of exposure for many devices, they can nevertheless be used to select the key elements of a study. Such elements may include the number of test animals, the kinds of observations needed, the extent of histopathological evaluations, the number of survivors required at the end of the study, and the type of statistical evaluations that would be most meaningful. Having a basic study design in mind, a manufacturer can then construct a technically sound framework for carcinogenicity testing of a device or biomaterial.

The latest committee draft of ISO 10993-3 also cites the American Society for Testing and Materials document ASTM F 1439-92: "Performance of Lifetime Bioassay for Tumorigenic Potential of Implanted Materials." This guidance document was written specifically to address device or biomaterial evaluations, but recognizes the limitations of such efforts, stating that "the recommendations given in this guide may not be appropriate for all applications or types of implant materials." The ASTM method requires a minimum of 60 male and 60 female rodents per treatment or control group and identifies a basic study design as consisting of at least one group exposed to the test material, another group exposed to a reference material, and likely a sham surgical or vehicle dose group. Thus, the total number of animals in the study would be 360. Devices designed for use solely in male or female patients may be tested in rodents of the same gender, thus halving the number of animals required. A study should last a minimum of 18 months for mice and 24 months for rats.

The standard requires that tests be "appropriate for the route and duration of exposure or contact," which raises the issue of how to expose the test animals to the test article. For an extracorporeal device that will be in direct or indirect contact with blood, the testing may be conducted by injecting extracts of the device. In such cases, it is imperative to determine what is in the extract, which extraction vehicle or vehicles best represent the expected human exposure, and what the dose should be and how often and by what route it should be given. The goals are to mimic human exposure and to exaggerate that exposure on a milligram per kilogram of body weight basis. The decisions made on these matters should be worked out in concert with the FDA personnel that will review the device prior to marketing.

Devices that will be implanted present all of the challenges indicated above, and more. Virtually all solid materials with an extensive surface area cause what are known as solid-state tumors in rodents when implanted for long periods of time. The tumorigenic effect is due to the size and shape of the implant, not to leachable chemicals. (This phenomenon is also referred to as the Oppenheimer effect.) Investigators designing long-term implant studies must find ways to work around this effect or ensure that the number of animals in the study is sufficient for pathologists to distinguish solid-state from chemically induced tumors. Whenever possible, of course, the device should be implanted in the rodent body in an anatomic location that simulates clinical use.

Because the purpose of carcinogenicity studies is to evaluate the tumorigenic effects of lifetime exposure to a device or its extracts, the condition of tissues as revealed by both gross and microscopic examination is the most important end point to evaluate. A full complement of tissues, up to 40 per animal, must be harvested and preserved for examination by a pathologist. Such evaluations are both time-consuming and expensive, however, and the extent of the histopathology necessary remains a debatable issue.


Obviously manufacturers must take very seriously the requirement to understand the carcinogenic potential of their medical devices. However, before beginning these costly, time-consuming studies, it is good practice to conduct preliminary investigations. One should explore the chemistry of the device materials with special attention to their extractables, including degradation products. Information can be gathered about the absorption, distribution, metabolism, and excretion of these chemicals. In most instances, if detailed data about the carcinogenicity of the specific chemicals in the device materials are already available in the technical literature, animal studies may not be required.


This article brings to an end this multipart series summarizing the ISO 10993 biocompatibility standards. Previous installments, which appeared in MD&DI throughout 1998, covered such topics as materials characterization; tests for hemocompatibility, cytotoxicity, sensitization and irritation, and systemic toxicity; and sample preparation. Readers are encouraged to consult those articles and to read the standards for complete details.

The standards are available in the United States from the Association for the Advancement of Medical Instrumentation, 3330 Washington Blvd., Ste. 400, Arlington, VA 22201-4598; phone 703/525-4890, fax 703/276-0793.

Paul J. Upman, PhD, is a senior scientist and Richard F. Wallin, DVM, PhD, is president of NAMSA (Northwood, OH).

Copyright ©1999 Medical Device & Diagnostic Industry

FDAMA: One Year Later
An anniversary review of the legislation



A year after the FDA Modernization Act was signed into law, a top HIMA official assesses the benefits of the new provisions and raises ongoing concerns regarding future implementation.

This article is excerpted from the testimony of HIMA executive vice president James S. Benson before the House Commerce Committee hearing of October 7, 1998—one year to the day from the passage of H.R. 1411, which ultimately became the Food and Drug Administration Modernization Act of 1997 (FDAMA).

Anniversaries are a time for reflecting on the past, assessing the present, and setting a future course. In that spirit, allow me to recall for you . . . the underlying purpose and promise of FDAMA, express our concerns about FDA's implementation of the law, and share a vision for the future.

For years prior to the enactment of the new law, the medical device industry was facing increasing development and review times, which threatened the viability of this industry. Review times were far in excess of statutory time frames. Manufacturers were faced with inconsistent, unpredictable, and overly burdensome requirements. Unacceptable bureaucratic hurdles abounded in the system. As a result, the delivery of important technological advances to patients was delayed or, in some cases, denied. Products were regularly available to people outside the United States years before American citizens could benefit from them. We feared the decline of America's primacy in medical research. We saw the beginning of an exodus of our scientific infrastructure to areas of the world with fewer regulatory barriers. It was clear that immediate steps needed to be taken to reverse what could have been a disastrous end result had Congress not intervened.

In all fairness, during the legislative process, CDRH began taking steps to right the situation. In fact, some of the reforms contained in the law codify FDA's own initiatives. Even before FDAMA was enacted, review times were turning around and the backlog of pending applications was shrinking. FDA continues to deal creatively with the challenges that invariably arise when faced with the complex issues and work load presented by regulation of medical devices. Reengineering initiatives such as the new 510(k) paradigm, the PMA modular review, and the product development protocol are designed to work synergistically with the new law to create a new and improved agency. These initiatives are having a measurable positive impact on the way the agency does business.

However, much of the progress that we have witnessed may not have been possible were it not for the interest and the work of this committee. Let's step back a moment and reflect on several highlights of FDAMA from a medical device perspective.


Clear Mission Statement. For the first time, the agency has a specific statutory mission statement that reflects the need for FDA to "promote the public health" by taking prompt action to review medical devices. In other words, FDA's mission is both to protect the public from unsafe products and to make sure safe products are available to patients "in a timely manner." Congress struck the theme. Many of the provisions of FDAMA are orchestrated to support that theme. If the theme is executed as Congress envisioned, the results will help countless patients who can benefit from new and improved medical technology.

Improved Governmental Processes. As President Clinton proclaimed on signing FDAMA, "This Act . . . will ease the regulatory burden on industries, protect consumers, and cut red tape . . . making government operations faster and more efficient." The new law holds forth the promise that taxpayer dollars will be spent wisely and well on appropriate functions. It establishes a new direction for FDA. It should help create a climate conducive to the innovative, iterative process that characterizes the evolution of devices. In other words, in the new, user-friendly environment promoted by this legislation, companies should be able to look to FDA to help them bring new helpful technology to the marketplace in the most efficient way possible.

Patient Benefits. FDAMA facilitates patient access to technology. It does so by creating efficiencies in the two major ways in which products come to market—the premarket notification (510(k)) program and the premarket approval (PMA) program. It also does so by creating new statutory rights of access by patients to investigational devices under certain circumstances. It makes the requirements governing devices for smaller patient populations with rare conditions more rational.

Predictability, Consistency, and Harmonization through Use of Standards. One of the more important provisions of FDAMA is the requirement for FDA to recognize standards developed by national and international consensus organizations. Full implementation of this provision will mean greater predictability and consistency in the review process. It will help erase the differences among various countries' regulatory requirements to the extent that international standards are recognized by FDA. It will allow FDA staff to spend more time on riskier devices. Some time will appropriately be spent on participating in the consensus standards development process. This has the corresponding benefit of contributing to FDA's need to stay on top of the science associated with standards activities.

Collaborative Process. In FDAMA, Congress explicitly provided for interactive meetings between industry and FDA at key points in the device-submission process. This collaborative approach to resolving issues involving the more complex and breakthrough devices is intended to foster device development and speed products to patients.

Efficient Use of Taxpayer Dollars. Many of FDAMA's provisions are intended to redirect FDA's increasingly scarce resources away from unnecessary functions into core tasks that are essential to its public health mission. These include exemptions of low-risk devices from the 510(k) process; third-party review of certain devices; an efficient alternative classification system for novel devices that do not require full-scale premarket approval; and the ability to file simple notices for certain manufacturing changes and changes to investigational devices, including certain changes to clinical trial protocols. These reforms should allow FDA to redirect resources to activities associated with more complex devices.


President Clinton signed FDAMA into law on November 21, 1997. Many of the provisions were to take effect 90 days later. While it is still too early in the implementation process to have a complete picture of the extent to which FDA is executing both the spirit and the letter of the law, we do have some early information suggesting trends.

Noncollaboration Policy. CDRH has done yeoman's work in implementing the many device provisions contained in FDAMA. Guidance documents and regulations have been issued in a timely manner. Many are straightforward and procedural. Others are problematic. Soon after FDAMA was enacted, we offered to work with the agency to develop guidance documents for the more complex provisions that might benefit from the expertise and experience of the device industry and others. However, earlier this year, the agency announced it would not discuss details about FDAMA implementation prior to the public issuance of implementation documents. We believe this policy to be contrary to Congress's clear intent that the post-FDAMA era was to be marked by cooperation, collaboration, and closer communication with the regulated industry. There are many examples of past collaborative efforts between industry and the agency that have resulted in excellent products that would not have been possible without the synergistic exchange that characterizes these efforts. We believe the process has suffered as a result of this edict. Perhaps the agency should have been less concerned about issuing the large volume of guidances and other implementation documents within such a short period of time and more concerned about producing the best possible documents—informed by industry and other appropriate parties in the spirit of collaboration envisioned by FDAMA.

Reviewer Commitment to Letter and Spirit of FDAMA. Another general area of concern HIMA has about implementation of FDAMA stems from reports we have received from many of our members that the clear message of FDAMA has not yet been heard by the reviewers who are at the front lines of the agency. These front line "soldiers," who have enormous responsibility and are good solid scientists, are the most immediate contact for manufacturers seeking product approval. This makes it all the more important that FDAMA's clear message of collaboration—the "new marching orders"—be heard loud and clear. Perhaps the most difficult challenge for FDA's leadership is to change the organizational culture to reflect the new spirit of the legislation.

Least Burdensome Means for Showing Effectiveness or Substantial Equivalence. One of the more important provisions of FDAMA is the requirement that the agency consider the "least burdensome" appropriate means of showing effectiveness for PMA products. For 510(k) products, the law requires the agency to consider the least burdensome means of showing substantial equivalence when there are technological differences between the device and its predicate. The agency has not included reference to this provision in any of the FDAMA implementation documents. Some reviewers seem not to be familiar with the requirement or appear to discount it. There are legitimate differences of opinion as to what constitutes an acceptable study protocol, statistical approach, or other elements of proof required by FDA. When a manufacturer suggests a less burdensome, but equally valid, means to meet the regulatory requirements of the law, it should not be summarily rejected but should be given serious consideration. We urge FDA to rethink its approach to these important provisions and promote their underlying intent and understanding with reviewers and managers alike.

Dispute Resolution. The agency's implementation of the dispute resolution provision is another issue that is troublesome to us and ties into the previous issue of least burdensome means. When there is a difference of opinion between the agency and a manufacturer, for instance, on the type of data or size of the trial needed to prove effectiveness, and the dispute is scientifically based, FDAMA envisioned an independent, stand-alone, prompt review mechanism to resolve the dispute. The proposed regulation implementing this section is no more than a minor amendment to an existing general appeal mechanism. FDA recently decided against issuing a final rule at this time on this subject, citing substantial adverse comment. We urge FDA to adopt HIMA's proposal to address this provision through establishment of a roster of experts with diverse knowledge about devices and disease states, several of whom could be convened on short notice to hear these disputes. Such experts would be expected to have current conflict of interest clearances from the agency.

Funding. The issue of funding and efficient use of agency resources is one that we raise often in the appropriations context. However, since one of the themes of FDAMA is efficient execution of the law in accordance with statutory time frames, and since such execution cannot be done without appropriate allocation of resources, we believe this issue is relevant for this committee's consideration. We are concerned that funds may be cannibalized from the device program and spent on nonstatutory functions such as tobacco programs. We urge the committee to ensure that taxpayer dollars are being spent first and foremost on implementing the clear mandate of the medical device laws. Funds should not be spent on initiatives that have not been authorized and given priority by Congress.

Dissemination of Information. We would also like to express our concern about the dissemination of information regulation. We share the dismay expressed by Senator Frist (R–TN) at the confirmation hearing for FDA commissioner-designate Jane Henney, MD, that the regulation does not carry out the intent of Congress. As presently crafted, its terms impose tight restrictions that practically nullify the ability of manufacturers to make use of this provision. We urge the committee to press the agency to go back to the drawing board and completely rethink its approach to the provision, especially in light of the recent opinion by the Honorable Royce Lamberth in the federal district court case brought by the Washington Legal Foundation. That decision struck down three FDA policy documents that imposed severe restrictions on the dissemination of information regarding off-label uses of FDA-approved drugs and medical devices.


HIMA has been actively involved in commenting on the full array of FDAMA implementation documents. We will continue our efforts to work with the agency to optimize the content of those documents. . . . As we embark on the second year of the post-FDAMA era, we want to refocus on what we hope the future holds for the medical device industry. . . . Key elements of this vision are: patient lives being saved, enhanced, and made more comfortable and less painful through advances in medical technology; productive agency-industry collaboration, cooperation, and clear communication; clearly understood and reasonable rules; speedy, efficient dispute resolution; and wise use of agency resources. We hope we can look back five years from now and all parties can take pride and satisfaction in knowing that this act and its aftermath have contributed enormously to a new and improved FDA, a productive and innovative industry, and, most importantly, a healthier patient population.

James S. Benson is executive vice president of HIMA, a medical device industry trade association based in Washington, DC. He was formerly at FDA, where he served as acting commissioner, deputy commissioner, and director of CDRH.


Design of Experiments for Process Validation



Used as one of the statistical tools for validation, design of experiments can help identify which factors need to be controlled in order for a system or product to pass the ruggedness test.

Design of experiments (DOE) has become an essential tool for the validation of medical manufacturing processes. A good description of why this statistical technique should be used is the assertion that processes "should be challenged to discover how outputs change as variables fluctuate within allowable limits."1 As an example of the benefits that such a validation tool can provide, this article describes a DOE that was run on a particular durable medical device known as a paraffin heat-therapy bath.

Figure 1. One of the paraffin therapy bath test units, which holds a gallon of molten wax.

The Therabath paraffin therapy unit (WR Medical Electronics Co.; Stillwater, MN) holds one gallon of molten paraffin wax and is used by osteoarthritis patients during physical therapy (Figure 1). To help loosen their joints, patients dip their hands into the heated paraffin, which is then allowed to slowly solidify into a wax glove. Oils in the wax help keep the heat at a comfortable level, facilitate removal of the glove, and moisturize the skin. To enhance the perceived benefit to the skin, vitamin E and various scents and colors are added for this application.


Six factors, identified by letter, were tested at low and high levels: the ratio of two component waxes (A); the ratio of wax to oil (B); the supplier of wax (C); the amount of dye (D); the amount of perfume (E); and the amount of vitamin E (F). The amounts of vitamin E, dye, and perfume were very small in relation to the amounts of wax and oil.

If every combination of factors had been tested for a full two-level factorial design, a total of 64 experiments (2⁶) would have been conducted. The investigators instead chose to run a highly fractionated (1/8), two-level factorial design, for a total of eight runs (2⁶⁻³). Settings for all six factors were specified in every test run.
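An eight-run fractional design of this kind can be constructed by taking a full factorial in three factors and generating the other three columns from interactions. The article does not state which design generators were used; the sketch below assumes the standard resolution III choice D = AB, E = AC, F = BC.

```python
# Sketch of one possible eight-run 2^(6-3) fractional factorial,
# assuming the generators D = AB, E = AC, F = BC (not stated in the
# article).  -1 is the low level of a factor, +1 the high level.
from itertools import product

def fractional_design():
    runs = []
    for a, b, c in product((-1, 1), repeat=3):  # full 2^3 in A, B, C
        runs.append({'A': a, 'B': b, 'C': c,
                     'D': a * b, 'E': a * c, 'F': b * c})
    return runs

design = fractional_design()
print(len(design))  # 8 runs instead of the 64 a full 2^6 would need
```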

Running a highly fractionated design has its drawbacks. Although the testing is completed faster, the amount of data generated is proportionately reduced as well. And while main effects may be obvious, they will be aliased (perfectly correlated) with interactions between two or more of the other factors included in the design.
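The aliasing can be seen directly by comparing columns of the design matrix: a main effect is aliased with a two-factor interaction when their plus/minus columns are identical across all runs. A sketch, again assuming the hypothetical generators D = AB, E = AC, F = BC:

```python
# Detect which main-effect columns coincide with two-factor-interaction
# columns in the eight-run fraction -- that is what "aliased" means here.
from itertools import product, combinations

runs = [{'A': a, 'B': b, 'C': c, 'D': a*b, 'E': a*c, 'F': b*c}
        for a, b, c in product((-1, 1), repeat=3)]
cols = {f: [r[f] for r in runs] for f in 'ABCDEF'}

aliases = []
for f1, f2 in combinations('ABCDEF', 2):
    interaction = [x * y for x, y in zip(cols[f1], cols[f2])]
    for main in 'ABCDEF':
        if main not in (f1, f2) and interaction == cols[main]:
            aliases.append(f'{main} = {f1}{f2}')

print(aliases)
```

Under these assumed generators, the list includes entries such as 'D = AB', meaning the dye effect is perfectly correlated with the interaction of the wax ratio (A) and the wax/oil ratio (B).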

Ten employees made up the subject panel. In order to reduce subject bias and to counteract reduced sensitivity to heat over time, the experiments were blind and in random order. Also, rather than dipping an entire hand into the paraffin, as patients would, subjects dipped only one finger into each bath and used a different finger for each test.

Following each dip, subjects noted their sensory evaluations of color, scent, heating, oiliness, and quality of the wax glove from one (worst) to nine (best). The results were analyzed by individual and then averaged. This method provides a powerful tool for discriminating changes in performance of the device, assuming that each individual's ratings are fairly consistent from one combination to the next. If needed, analysis could also reveal the relative differences between individuals.

To fulfill the purpose of validation, the outcome of the experiment should reveal no change in response caused by variation in any of the factors. Such a null result would demonstrate the system's ruggedness. Conversely, a significant result—which is referred to as a failure in this context—usually requires more experimentation to reveal the true cause or causes.


Analysis of the DOE with a commercially available statistics package revealed significant effects on user perceptions, which means that the paraffin formula did not pass the ruggedness test. Figure 2 shows a half-normal probability plot for color. Half-normal plots show the absolute value of an effect on the x-axis as square points; estimates of error are displayed as triangles. The biggest effects, those to the right, are most likely to be real (significant). The effects grouped near the zero effect level presumably occur by chance and thus represent experimental error. The y-axis is constructed to be linear in the normal scale, so the near-zero (insignificant) effects fall on the line emanating from the origin (0, 0).
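The coordinates of a half-normal plot like Figure 2 can be computed directly from the design matrix and the averaged ratings. The sketch below uses invented ratings (the article's raw data are not published) chosen to mimic the reported outcome, in which dye dominates the color response; the design generators are again an assumption.

```python
# Compute factor effects from a two-level design, then pair the ranked
# absolute effects with half-normal quantiles for plotting.
from itertools import product
from statistics import NormalDist

design = [{'A': a, 'B': b, 'C': c, 'D': a*b, 'E': a*c, 'F': b*c}
          for a, b, c in product((-1, 1), repeat=3)]
ratings = [5.0 + 1.2 * run['D'] for run in design]  # invented: dye drives color

def effect(factor):
    # Average response at the high level minus average at the low level.
    hi = [y for r, y in zip(design, ratings) if r[factor] == 1]
    lo = [y for r, y in zip(design, ratings) if r[factor] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effects = {f: effect(f) for f in 'ABCDEF'}
ranked = sorted(effects, key=lambda f: abs(effects[f]))
m = len(ranked)
points = [(f, abs(effects[f]),
           NormalDist().inv_cdf(0.5 + 0.5 * (i + 0.5) / m))
          for i, f in enumerate(ranked)]
print(points[-1])  # D has the largest |effect| (2.4 with these ratings)
```

An effect that sits far above the line through the origin, as D does here, is the candidate for a real (significant) effect.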

Figure 2. A half-normal probability plot for color shows that 99% of the effects are expected to be less than 1.6. Effects below this level are lined up in normal fashion, but the change to color registered an abnormally large effect of 2.39.

One effect stands out in Figure 2—D, the level of dye. Standard statistical analysis of variance (ANOVA) reveals a probability of less than 0.1% that an effect this significant could have been caused by chance. Although there may have been aliased interactions, we made the assumption that the responses for color would be affected only by the amount of dye (D). Obviously, the panel preferred higher levels of dye.

Figure 3. A half-normal probability plot for scent shows one significant value, although not as big an effect as the one for color (0.86 versus 2.39).

For scent, factor E—the amount of perfume—was the most important effect (Figure 3), but it did not stand out as strongly as did color. Also, a standard deviation value of 4.09 from bath 1 signals an unusual occurrence (outlier). When plotted on a graph that displays the t-value of each run—how many standard deviations apart a result is from what was expected—this outlier falls outside the recommended limits of ±3.5 (Figure 4).

Figure 4. A graph of t-values that shows an outlier detected in the scent category.

Figure 4 resembles a control chart, with the 4.09 value falling outside the upper control limit. (If graphed on a bell-shaped curve, commonly used in statistics, the value would be near the positive value end where the curve has flattened out.) Outcomes within the upper and lower control limits (or standard deviation boundaries) represent common cause variability; those values outside the limits are likely due to special causes.
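The outlier screen itself is mechanical: flag any run whose t-value falls outside the ±3.5 limits. A minimal sketch (all t-values other than bath 1's reported 4.09 are invented for illustration):

```python
# Flag runs whose standardized residuals (t-values) fall outside the
# recommended +/-3.5 limits -- likely special-cause variation.
def flag_outliers(t_values, limit=3.5):
    return [bath for bath, t in enumerate(t_values, start=1)
            if abs(t) > limit]

# Bath 1's scent response sat 4.09 standard deviations from prediction:
t_values = [4.09, 0.62, -1.10, 0.31, -0.24, 1.38, -0.85, 0.07]
print(flag_outliers(t_values))  # [1]: bath 1 is the outlier
```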

Bath 1 was assumed to be an outlier, since it's unlikely to have happened by chance. Further investigation revealed that the temperature in bath 1 was significantly high, generating more scent than usual.

After removing the outlier, perfume stood out even more clearly as the most likely cause of the effect on scent. ANOVA shows the probability of this effect happening by chance to be less than 1%. Again, it seems reasonable to make a leap of faith that factor E (perfume) was the cause for perceived changes in scent and not any aliased interactions. In other words, the panel preferred higher levels of perfume in the formula.

Figure 5. A half-normal probability plot shows there were no significant effects for the perception of heat.

Figure 6. A half-normal plot of effects for oiliness shows that a main effect and an interaction provided significant results. However, due to aliasing caused by fractionation of the factorial, further experimentation would be necessary to accurately determine the actual cause.

The statistical analysis revealed nothing significant for perception of heat (Figure 5). Perceptions of oiliness were significantly affected (Figure 6), but the aliasing of main effects made it impossible to draw any definite conclusions. For example, it makes no sense that factor E, the perfume, would affect oiliness, but one of its aliased interactions might. Unlike the results for color and scent, there was no obvious explanation for these effects. At this stage, the capability of the low-resolution design was exhausted. More experimentation was needed to uncover the true causes for the failure of the ruggedness test.


Some aliasing of main effects can be eliminated by adding a second block of experiments with all variable levels reversed (for example, high versus low amounts).2 This technique is called a foldover. The combined results normally remain somewhat aliased.

Before doing the foldover, dye (D) and perfume (E) were eliminated as factors based on the assumption that they affected only color and scent, respectively. Dye and perfume were set at their midpoint levels, and color and scent were dropped from further consideration. The addition of the eight foldover runs resulted in a full (i.e., no aliases) 16-run factorial for the remaining four factors, which avoided further aliasing.
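The foldover step is mechanical: rerun the eight fractional runs with every factor level reversed, then pool the two blocks. The sketch below (same assumed generators as earlier; the article does not list them) also checks the claim that the remaining four factors form a full 16-run factorial once the blocks are combined.

```python
# Foldover: add a second block with all factor levels reversed, which
# breaks the aliasing of main effects with two-factor interactions.
from itertools import product

def base_design():
    return [{'A': a, 'B': b, 'C': c, 'D': a*b, 'E': a*c, 'F': b*c}
            for a, b, c in product((-1, 1), repeat=3)]

def foldover(design):
    # Reverse every factor level (high <-> low) in every run.
    return [{k: -v for k, v in run.items()} for run in design]

combined = base_design() + foldover(base_design())
# With dye (D) and perfume (E) held at midpoints, the remaining four
# factors cover all 16 combinations exactly once -- a full 2^4 factorial.
settings = {(r['A'], r['B'], r['C'], r['F']) for r in combined}
print(len(combined), len(settings))  # 16 16
```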

Analysis of the combined data continued to show no significant impact on perceptions of heat. This was an important finding because, prior to the DOE, the manufacturer was concerned that users would be sensitive to variations in melt point caused by changes in the ratio of wax and oil. For this attribute, the formula passed the challenge of validation, since it was robust to expected variations.

Figure 7. A plot of combined results shows that effects for the glove are not significant, meaning that this attribute passed the ruggedness test.

Figure 8. A plot of effects for oiliness using combined data shows an unusual three-factor effect on perception (wax to wax ratio, wax to oil ratio, and amount of vitamin E).

Regarding the perception of the quality of the wax glove, the first experiment seemed to indicate some effects, but after the data were reviewed for the entire series of runs, including the foldover, none of the factors was shown to significantly affect user perceptions (Figure 7). Therefore, this attribute also passed the ruggedness test.

Figure 9. Three interaction graphs show the wax/oil relationship at low, medium, and high levels of vitamin E (left to right, respectively). Squares represent positive levels of wax to oil, triangles represent negative levels.

The final results on perception of oiliness (Figure 8) indicate a dependence on the combination of three factors: the ratio of the two component waxes, W1 and W2 (A); the ratio of total wax to oil (B); and the amount of vitamin E (labeled D in the plot). Although such three-factor interactions are very unusual, they are more likely in experiments that involve mixtures. The series of interaction graphs shown in Figure 9 demonstrates the complex behavior governing the perception of oiliness. In order to provide the best formula of paraffin wax, the highest-rated variable settings, as determined by the results of the experiment, should be combined in the product. This combination is most readily identified using a cube plot (Figure 10).

Figure 10. The cube plot shows the best combination (upper right front) for the three factors that affect oiliness. The determining factor is the highest numerical value, not the plus or minus symbols.


Based on the results from the two-step DOE, several product recommendations were made in order to arrive at a cheaper, improved paraffin blend.

  • The cheapest supply of raw wax material can be used, as this variable did not significantly affect any of the tested perceptions.
  • More color and scent should be added, which may also help mask the variability of native colors and scents.
  • The amount of vitamin E should be reduced, and the ratios of W1 to W2 wax and of wax to oil should be increased.


This study provides an example of how to apply a two-level factorial DOE to validation testing, and demonstrates the flexibility of the approach should the validation fail. In this application, the use of foldover runs offers an insight into how variations in factors can affect processes or products.

DOE is just one of the statistical tools used in validation to challenge a system and identify which factors to control. Other tools, such as statistical process control, should also be employed to show that the system can produce consistent outputs over time and meet specifications with a high level of confidence and reliability.


The authors would like to thank Dave Sletten of WR Medical (Stillwater, MN) for doing the experimental work and Patrick Whitcomb of Stat-Ease (Minneapolis) for providing valuable advice on the setup and analysis of the DOE.


1. JS Kim and JW Kalb, "Design of Experiments: An Overview and Application Example," Medical Device & Diagnostic Industry 18, no. 3 (1996): 78–88.

2. DC Montgomery, Design and Analysis of Experiments, 4th ed. (New York: Wiley, 1997), 413.

Mark J. Anderson is a principal of Stat-Ease Inc. (Minneapolis) and WR Medical Electronics Co. (Stillwater, MN), and Paul J. Anderson is vice president of research and development and a principal of WR Medical Electronics Co.


Year-End Report Notes Faster Reviews, More Changes at FDA : Device Interaction Alert : Henney Confirmation : PMA "Not Approvable" : Balloon Stent Alert : FDA One-Sidedness Continues : MRA Transition Delayed : CDRH Draft Inspection Handbook


CDRH gives itself high marks for fiscal year 1998 accomplishments.


For fiscal year 1998, FDA's Center for Devices and Radiological Health (CDRH) turned in, in its own words, an "outstanding performance," based on various measures of the time it takes to review premarket approval (PMA) applications and premarket notifications (510(k)s). Glossed over in the Center's report was one traditionally employed performance measure: absolute numbers of approvals. These were marginally down for PMAs and not even mentioned for 510(k)s—about which, more in a moment.

CDRH's report, providing a preliminary glimpse of the soon-to-be-released comprehensive annual report from the Office of Device Evaluation (ODE), focuses more on quality than on quantity, hailing 11 important diagnostic or therapeutic advances among the 46 PMAs. Four of these approvals were humanitarian-device-exemption PMAs for orphan devices intended to meet the needs of fewer than 4000 patients.

The report cites continuing significant reductions in average review times—12.4 months for a PMA compared with 16.6 months in FY 1997 and 25.9 months in FY 1996. "And the median total time to approval in 1998 was just 8.7 months," according to the report. "This year, CDRH had the shortest review time for any substantial number of PMAs in a decade."

One-third of the PMAs passed last year were approved in less than the statutory 180 days allowed, and 63% were approved in less than a year. "Furthermore, 28 of the 46 PMAs were never overdue in any review cycle."

For 510(k) applications, the average review time was 114 days in FY 1998, compared with 130 days in FY 1997 and 145 days in FY 1996. Fifty-nine percent of 510(k)s were reviewed within 90 days, about the same as the previous year (58%) and better than FY 1996 (50%).

The report gives no absolute number for 510(k)s received in FY 1998, but ODE director Susan Alpert told this writer it was 4623. This is 8.4% fewer than in FY 1997 (5049)—a decline Alpert attributes primarily to the Class I devices that were exempted from the premarket notification requirement during the year.

With the absolute number of PMA applications approved remaining virtually static (actually down by two) and 510(k)s down, CDRH is driven to soft measures of its performance, such as maintaining for the second year a zero backlog of 510(k)s, PMAs, and PMA supplements, and recitation of unquantifiable achievements such as continuing improvement in communications with industry and creative management initiatives.

Alpert stated that original IDE submissions increased significantly in FY 1998, with about 70% being approved, and "that's quite good." The new IDE review board, formed to ensure a consistent FDA approach to scientific issues, has been meeting quarterly, Alpert said, and "is working quite well."

Two IDEs have so far been appealed to this board by their sponsors. "We got some very strong recommendations," said Alpert. "In one, the board agreed with the division in terms of what was needed, but made some suggestions about how to communicate that better to the company. In the second case, there were several issues raised by the division that the board felt weren't necessarily showstoppers—and that's the kind of leveling that we want to get from this board."

Alpert declined to give more specifics because the identity of investigational devices is confidential. The IDE review board comprises all of the division directors, representatives of Alpert's office, senior clinical people from the divisions, and a senior manager from one of the other FDA program centers. In addition to any company appeals that may come in, the board routinely examines the IDE letters most recently issued by the office as to their scientific and communications qualities.

Among other achievements that don't show up in hard numbers, Alpert cited the elimination of "a lot of old PMAs. . . . We have worked through them or worked with the companies to understand their fate. We're not carrying a lot of deadwood in the system anymore. And all of the ones we're carrying are active. That means that the staff is working on things that are current and have a goal. This has been a tremendous benefit."

The ODE director also hailed the new real-time review process for supplements, a process in which reviews are conducted live with company participation and final decisions made within days rather than weeks or months. "People don't even think of it as something new anymore—they just automatically do it," Alpert said. "That releases a tremendous amount of resources. It changes the entire interaction, and it gives you a focused time, a focused issue, and it gives you dialogue. We've done more than 70."

Alpert acknowledged that the process "wouldn't be possible without industry's willingness to change the way it works with FDA. They have to send in clean and complete submissions, and they have to do a different kind of work to do a real-time review. And they've done it."

In October, CDRH director Bruce Burlington issued a "dear colleague" alert advising that minute ventilation rate-adaptive implantable pacemakers may occasionally interact with certain cardiac monitoring and diagnostic equipment, causing the pacemakers to pace at their maximum programmed rate.

The CDRH letter cited Telectronics' (St. Jude Medical) META and Tempo series, Medtronic's Legend Plus and Kappa 400 series, and ELA Medical's Chorus RM and Opus RM series as examples. It referenced several reported incidents in the preceding year in which minute ventilation rate-adaptive pacemakers paced at their maximum rate when patients were connected to cardiac monitoring and diagnostic equipment.

While none of the incidents resulted in patient death or injury, CDRH said it was "concerned that this unexpected rise in the pacing rate could be misdiagnosed as clinically significant tachycardia, resulting in unnecessary therapy, or that patients with compromised cardiac reserve (e.g., with unstable angina or myocardial infarction) may poorly tolerate the higher pacing rates."

The cause of the problem, according to the letter, was interference from the bioelectric impedance-measurement signals of other devices in the vicinity of the pacemakers—for example, cardiac monitors, echocardiograph equipment, apnea monitors, respiration monitors, or external defibrillators.

CDRH urged deactivation of the minute ventilation sensor during treatment with such other devices, selection of the appropriate maximum pacing rate to minimize risk, and cautioning of pacemaker patients.

As its last act of official business before adjournment, the U.S. Senate confirmed the Clinton administration's nomination of Jane E. Henney as FDA's new commissioner. In the partisan wrangling over the 1999 federal budget, abortion, tobacco, and other issues, there had been doubt that her nomination would reach the floor at all.

In the end, only Senate majority whip Don Nickles (R–OK) stood in the way of Henney's confirmation. The nominee flew to Washington to assure him, with reinforcement from HHS secretary Donna Shalala, that she would not be, as he feared, "a tool of the administration to push its liberal political agenda"—especially on using FDA to find a commercial sponsor for the abortion drug RU-486. Nickles relented, issuing a statement that he believed Henney "won't try to implement legislation through regulation."

Several major Washington industry associations issued statements immediately after the Senate vote applauding her success. The Health Industry Manufacturers Association (HIMA) was not among them. Its only comment on Henney thus far came last June, when her nomination was officially announced. At that time, HIMA merely "expressed hope that the president's nominee for FDA commissioner, Dr. Jane Henney, will make a priority of new initiatives intended to speed up delivery of medical technology to patients."

The younger, smaller, and more aggressive Medical Device Manufacturers Association (MDMA), however, quickly issued a statement welcoming Henney as the new commissioner. In a public letter to her, chairman Wayne Barlow and executive director Stephen Northrup told Henney she now had "the opportunity to make a tremendous difference in the lives of our nation's citizens, the vast majority of whom at one point in their lives will rely upon the life-enhancing and lifesaving products developed by the medical device industry. As you prepare the FDA to meet the promise and the challenges of the next century, MDMA and its members stand ready to work with you as we together seek to bring safe, effective, and innovative medical technologies to patients with all deliberate speed."

Anika Therapeutics (Woburn, MA) has received word from CDRH that its PMA for Orthovisc sodium hyaluronate, a device for treating osteoarthritis of the knee, is not approvable, and that additional clinical data are needed to show effectiveness. In its letter to the firm, CDRH suggested an agency consultation in the design of a new study.

With a nationwide total product recall under way by Boston Scientific/Scimed (Natick, MA) less than two months after approval, CDRH director of surveillance and biometrics Larry G. Kessler issued a "dear colleague" letter on October 8 about the firm's NIR ON Ranger w/SOX Premounted Stent System.

Kessler said his office had received "reports of device failures, including balloon ruptures leading to vessel dissection, balloon leaks resulting in incomplete stent deployment and/or stent migration, and difficulty deflating and removing the stent delivery system." As of October 8, Boston Scientific/Scimed had knowledge of one patient death and 26 patient injuries associated with these failures, Kessler said.

"We have recently learned that the balloon portion of the delivery catheter develops pinhole leaks and ruptures at inflation pressures as low as 3 atm," Kessler wrote. "This problem manifests during the stent deployment procedure. Preliminary failure investigation conducted by the manufacturer indicates that the cause of the balloon problem appears to be related to the SOX manufacturing process."

He urged immediate discontinuation of any use of the product and return of unused catheters to the manufacturer for a no-cost exchange with the company's NIR ON Ranger without SOX.

Through the so-called grassroots regulatory partnership movement, CDRH reengineering, the reinvention of government, and the FDA Modernization Act, FDA has come a considerable distance toward meeting industry halfway on many issues.

On one overture from industry, however, the agency will not be budging: FDA will not present the affected company's side of the case when it posts warning letters on its Internet home page.

The suggestion came from a Glaxo Wellcome attorney at an FDLI seminar in September. He argued that simple fairness demanded that firms at least be given the option of including their responses to such agency letters, either by simultaneous attachment to the warning letter itself on FDA's site or by a link to the company's own home page.

This, he felt, would mitigate some of the marketplace damage that occurs when trade competitors use FDA letters to disadvantage rivals in the purchasing community. His comment did not address the fact that most companies in the past have seemed reluctant to add weight to those unfortunate letters by offering any public comment on them whatsoever—but perhaps the new communications opportunities provided by the World Wide Web will change that traditional hesitancy.

An internal FDA working group took up the idea in October but ruled against it. FDA communications staff director William Rados said the working group felt there were too many legal and logistical concerns. FDA would have to delay the posting of its letters if it were to include the company responses, and there were confidential commercial data issues as well as potential copyright issues to be negotiated in specific cases.

The transition period for medical device and drug inspections under the U.S.-EU mutual recognition agreement, already the subject of congressional skepticism because of its ambitious scope, had been scheduled to begin on November 1 but was delayed for at least a month. The cited reason: "minor technical issues" that had held up a triggering exchange of letters between FDA and the EU that should have taken place before October 1.

CDRH has released its draft handbook for performing inspections under the center's Quality Systems Inspections Technique (QSIT). An 18-month pilot program implementing the technique in three FDA districts started in October. The 85-page handbook is available on-line.

Copyright ©1999 Medical Device & Diagnostic Industry

Joining the Game: Orthopedist Brings Science, Strategy to Device R&D



As both an athlete and a researcher in biomechanics, Charles Dillman can personally relate to sports injuries—whether a slipped disk or a torn ligament. His 20 years in orthopedics have focused on sports medicine, culminating in his appointment as group vice president of research and development in Huntersville, NC, for Orthofix International N.V., a company specializing in minimally invasive therapy for bone repair and reconstruction. "My job is to combine the R&D efforts of the four companies that make up Orthofix and essentially make the whole greater than the sum of its parts," Dillman explains. "As I build up the individual technologies of these companies, I'm also looking for synergies among them and trying to introduce new technologies."

This balancing act has given Dillman the opportunity to learn about the business world and develop new skills. Prior to working for Orthofix, Dillman worked mainly as a biomechanics researcher with orthopedic surgeons. Since 1995, he has been professor of surgery at the University of Massachusetts Medical Center and has also directed graduate programs on the biomechanics of human performance relating to sports activities at the University of Delaware and the University of Illinois. He was intrigued by the position at Orthofix because it introduced him to the medical device industry while enabling him to use his background in surgery, and it gave him a new perspective on the differences between the business world and the world of academia: "Business is much more organized, and it has good systems and approaches to solving problems. In academia, you select a problem and conduct research, but there's not a lot of thought about the consequences of what you're going to do. In business you've got to analyze the project before you start, defend it, and make sure that it's going to have some return."

Charles Dillman's fascination with the mechanisms of the human body keeps him motivated—in research and business.

Dillman applies this forward-looking approach to his role within the company: "The goal is to manufacture products for today's market but also try to balance this with an eye toward the future. Most companies in my experience have had to favor satisfying immediate needs. Although they're interested in the future, they want to hedge their bets and put everything into the present. The challenge for me is to try and convince the company to invest more resources in the future."

At the same time, Dillman is faced with the challenge of bringing high-impact products to market quickly. Athletes, like the general population, want treatments that allow them to recover as quickly as possible. An avid athlete himself until he injured his back, Dillman can identify with that desire. "Being in this field forces you to come up with more-effective treatments that will allow the injured person to return to his or her normal status. But research by its nature is a slow process. The question is how do you choose a product that's going to make an impact and increase sales quickly, as opposed to choosing a product that may take four or five years to develop." Dillman encourages the company to develop relationships with surgeons and researchers at leading universities to explore new ideas and concepts that extend beyond present-day thinking. He is also involved in his own research in shoulder mechanics aimed at helping design better surgical procedures.

Dillman's interest in sports medicine also led to a position as director of science and medicine for the U.S. Olympic Committee, which in turn led to an invitation to serve as a member of the medical commission for the International Olympic Committee. As such, he is in charge of running various educational programs geared toward disseminating information on sports medicine to other countries, especially underdeveloped countries. His work has recently earned him the Olympic Order, the highest award the International Olympic Committee presents, for his outstanding contribution to the world Olympic movement.

With researchers gaining a greater understanding of biological mechanisms and how the body repairs itself, Dillman foresees a move from minimally invasive surgery to noninvasive therapies. Among other advances, he expects a host of new products to more effectively treat complex fractures. For example, Orthofix is currently involved in research on pulsed electromagnetic fields and how they appear to increase blood flow and activate certain cellular growth factors.

Dillman attributes his success in biomechanics research to his keen interest in such topics—inquiries that explore the human body and how it works. "It's so complex and interesting," he relates. "That's what keeps me motivated."

Kassandra S. Kania is assistant editor of MD&DI.


Digital Imaging Heralds Waning of Film Era


Advances in digital detectors drive the evolution of radiography.

As digital technologies begin to emerge in radiography, a century of total dependence on x-ray film is drawing to a close. Several companies are now selling medical products that generate electronic radiographs. More are in the final stages of development.

But unlike other digital imaging technologies such as magnetic resonance imaging (MRI) and computed tomography (CT), there is as yet no clearly superior way to make digital radiographs. Engineers have available a bewildering array of solid-state x-ray sensors. Even those devices in the mainstream differ markedly. They may use different materials, such as selenium or silicon, which may be applied in various ways and shapes. Some sensors rely on charge-coupled devices (CCDs) similar to the ones built into digital cameras. These CCDs can be optically linked or stitched into a mosaic of chips. Or a CCD array might be transported mechanically across the patient.

Figure 1. Cutaway view of the DirectRay detector array packed into a flat box. An amorphous selenium coating is placed over the thin-film-transistor matrix and the associated readout electronics (Sterling Diagnostic Imaging; Greenville, SC).

The engineering challenges are similarly diverse. Silicon requires a scintillator to generate flashes picked up by photodiodes. Selenium records the impact of x-rays directly but must be electrically charged to record the x-ray strikes and recharged before the next exposure. Both silicon- and selenium-based systems are expensive to manufacture—and vulnerable to defects. A single dust particle can short-circuit a whole line of data.

CCDs are a mature technology and, as a result, are inexpensive. Their manufacture is relatively consistent and defects, consequently, are minimal. But their small size requires either tiling or optical coupling to a scintillator to record the flashes indicating that x-rays are passing through the body, and both of these solutions have drawbacks.

Meeting these challenges, while difficult, offers extraordinary opportunity. The demand for digital data is growing. Physicians are looking for increased efficiency through networking and the rapid transmission of data from point to point. Digital detectors promise improved diagnostic confidence, and even expanded clinical utility as pathologies not apparent on film appear in computer-enhanced images.


The promise of such enhanced imaging is already apparent in the three digital products now on the market. Not surprisingly, these digital systems differ substantially in how they capture digital data. The first modern digital x-ray system, Thoravision from Philips Medical Systems (Eindhoven, Netherlands), records x-ray data on a thin film of selenium wrapped around a drum. The selenium directly records the impact of x-rays as electrical charges, which are read by a probe and transmitted to a computer for reconstruction into a two-dimensional image. This system, commercialized some four years ago, can handle the largest patient, providing 19 x 17-in. coverage.

"We have done millions of patients," notes Hans Kleine Schaars, marketing manager of the common imaging subsystems group at Philips Medical Systems. "The only issue is that it isn't flat."

Figure 2. Developed for fast diagnosis and therapeutic intervention, this system includes an integrated CT scanner, a solid-state x-ray detector (center, above the table), and an articulated arm. The silicon detector produces real-time images to guide physicians during interventional procedures (Picker International; Cleveland).

The drum-based approach also makes the system heavy and difficult to site. Ideally, digital systems should fit neatly into existing radiography suites. Such is the case with the radiography product from Sterling Diagnostic Imaging (Greenville, SC), called iiRAD. This product, like Thoravision, uses a selenium-based detector. But the sensor is fashioned into a thin-film-transistor (TFT) array and packed into a flat box that fits into standard radiographic tables or hangs vertically like a conventional chest-film Bucky. Charges deposited by x-rays in the selenium are recorded and read off the TFT for interpretation by a computer. Sterling began selling a general-purpose and a dedicated chest system earlier this year. "A lot of orders [for these systems] are being tied to PACS [picture archiving and communications systems]," says Jim Culley, a product manager for Sterling Diagnostic. "We are getting into major hospitals."

The digital detector, called DirectRay, will see even wider distribution as OEM deals with Sterling kick in (Figure 1). DirectRay is being provided to device companies, including Fischer Imaging (Denver), for integration into other x-ray machines under their own labels.

The AddOn-Multi-System from Swissray International (Hitzkirch, Switzerland) does not use selenium but rather four CCDs, which record flashes of light that occur when x-rays strike a scintillator. A fiber-optic system conveys the flashes to the CCDs, each consisting of photodiodes configured into vertical and horizontal rows. Each photodiode stores an electrical charge proportional to the intensity of the flash, then transfers this energy row by row to the edge of the CCD, where the signal is amplified and sent on to a computer.

These three devices may soon be joined by digital products using a markedly different type of technology. Unveiled this past November at the annual meeting of the Radiological Society of North America, these products will each incorporate a sensor made largely from amorphous silicon. This film of silicon is matched to a TFT array. A scintillator coating the silicon produces flashes with each x-ray strike; the TFT then turns these flashes into signals, which are conveyed to a computer.

GE Medical Systems (Milwaukee), Siemens Medical Systems (Erlangen, Germany), and Philips are each expected to unveil a dedicated chest or general-purpose radiography suite based on flat panels made from amorphous silicon. On October 27, GE announced receipt of FDA clearance for a digital radiography system optimized for chest exams. And radiography is likely to be only the first step.

Panels made from amorphous silicon have the potential to support applications in real-time x-ray imaging technologies, including general fluoroscopy, angiography, and cardiac catheterization. Picker International (Cleveland) may be the first to commercialize such a real-time imaging solution, as part of its advanced system that combines an 8 x 10-in. silicon-based detector, made for fluoroscopy, with a CT scanner (Figure 2). The flat-panel fluoroscopy is designed to allow real-time guidance for interventional procedures, while the CT provides three-dimensional or slice-based information. Clinical tests of this integrated system began in the fall.

"The oncology market is really embracing this technology for procedures such as brachytherapy [in which radioactive seeds are implanted near tumors]," says Richard Silver, sales and marketing manager for the Picker x-ray division. "We will continue to focus on the fluoroscopic applications, taking it into larger fields of view as the flat-panel technology evolves."


Detectors made of amorphous silicon are also being groomed to play key roles in digital mammography. But they will likely be preceded in the marketplace by other types of digital detectors. Trex Medical (Danbury, CT) has developed a full-breast digital detector composed entirely of CCD chips. Charge-coupled devices have a long history in mammography, having been applied more than five years ago in stereotactic biopsy equipment. In this role, the CCD is used essentially for targeting suspicious lesions. Its major constraint is size: in order to cover an entire breast, Trex engineers have stitched the relatively small CCD chips together like a high-tech quilt. A coat of cesium iodide, acting as a scintillator, is deposited directly on a fiber-optic taper. This taper is bonded to the CCDs, which read the flashes of light resulting from x-ray impacts.

Fischer Imaging is clinically testing a mammography system that uses a CCD but in a much different design than that of Trex. Fischer uses a slot-scanning device, which emits a narrow beam of x-rays that fan out laterally, exposing breast tissue between the x-ray source and a slot-shaped CCD detector. The detector sweeps across the breast, creating in its wake a stream of digital data that are compiled into an electronic image.

The Trex and Fischer designs, while different, both address the same problem inherent in CCDs—a lack of coverage. Coverage is an even greater challenge when doing radiography. Swissray engineers solved the problem by optically coupling the CCDs to a scintillator so that the total chest could be captured.

But there are drawbacks to these solutions. The abutments between CCDs in the Trex detector create a checkered pattern of thin lines where data cannot be recorded. Algorithms that interpolate data from one point to another fill in the missing pixels, smoothing over these lines to create a homogeneous image. In reality, some data are lost—but the loss is so small that it is not likely to affect the diagnosis, according to company executives.

"Any adverse effect on imaging is kept well below any level of significance," says Richard Bird, director of clinical and product development at Trex.

Swissray boasts that no data are lost in its use of CCDs, thanks to the optical couplings, which actually provide overlapping data sets. A scintillator is divided into four regions, each covered by rapid scanning optical systems. Flashes in these regions are recorded by each of the CCDs, which create corresponding electrical signals for computer reconstruction.

The challenge is to process the data into a cohesive image, subtracting redundant data and then creating an accurate image optimized to depict bone and soft tissue clearly. All this processing is done in less than 20 seconds, according to Ueli Laupper, CEO of Swissray America Inc. "When the images appear, the appropriate algorithms are already applied," Laupper says. "The radiologist can immediately start his diagnostic study. No change of contrast or windowing or leveling is needed."

The accurate presentation of these data must take into account the differences in sensitivity that inevitably occur from using more than one CCD. Even though the manufacture of these devices is routine, performance parameters can vary slightly from one CCD to another. CCDs in the Trex mammography detector are also susceptible to this problem. But the digital nature of the technology provides the solution. Calibration software has been written to align the different CCDs while adjusting the data accordingly during image acquisition.
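The article doesn't detail the calibration math, but aligning tiles with slightly different sensitivities is typically a flat-field correction: acquire a dark frame and a uniformly exposed flat frame, then normalize each pixel's response. A minimal sketch under that assumption (names and sample values are illustrative):

```python
import numpy as np

def flat_field_correct(raw, dark, flat):
    """Subtract the dark frame, then divide by the normalized
    flat-field response so detector regions with different gains
    report consistent values for the same exposure."""
    gain = (flat - dark) / np.mean(flat - dark)
    return (raw - dark) / gain

# Two tiles seeing the same uniform exposure, but with
# gains that differ by 20%:
raw = np.array([[100.0, 120.0]])
dark = np.array([[0.0, 0.0]])
flat = np.array([[100.0, 120.0]])
print(flat_field_correct(raw, dark, flat))  # both pixels -> 110.0
```

In practice the dark and flat frames would be recalibrated periodically, since sensitivity drifts with temperature and age.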

The Fischer Imaging device is spared these problems. Slot scanning creates neither voids nor overlaps in the data. Instead, the sensor creates myriad individual pictures of small areas of the breast that are then pieced together by the computer to form a mosaic of the breast.

The several seconds needed to traverse the breast, however, make the system susceptible to motion artifacts created by movement of the patient or by gremlins in the mechanism that transports the detector and x-ray source. Fischer engineers reduce—if not eliminate—the risk of diagnostic error resulting from motion artifacts by compartmentalizing the individual images, addressing them as a series of subsecond, high-resolution snapshots. Additionally, the Fischer engineers designed the breast tray to ensure that the source and detector would remain at the same distance throughout the sweep. In the process, they designed a machine that causes less discomfort to the patient.

"They've designed the breast support tray to be slightly curved and it is that curvature that keeps the array and tube at exactly the same distance throughout the sweep," says Cynthia Malin, a product marketing manager at Fischer Imaging. "As a side point, when they did that they made a more comfortable mammography machine, because the compression is more even across the breast, with less force being applied."


Amorphous silicon plates would seem to have the edge over competing technologies. Flat-panel sensors come off the production line at EG&G Amorphous Silicon (Santa Clara, CA) as 41 x 41-cm sheets that require no stitching, optical coupling, or mechanical transport across the body. When plugged into their surrounding electronics, these panels can capture an entire chest image in a single shot of x-rays (Figure 3). Alternatively, the company can precut these sheets into smaller sizes for detectors to be used in mammography, angiography, or even bone-densitometry products.

"It's the same concept as a semiconductor manufacturer that makes a bunch of devices on a single wafer and then cuts up [the wafer]," says Andres Buser, the general manager of EG&G Amorphous Silicon.

Getting to this point, however, has not been easy—or cheap. The basic research to develop this detector technology cost GE Medical Systems more than $100 million during the past decade. The cost of developing and building the manufacturing line at EG&G Amorphous Silicon is not publicly known but is likely to be in the tens of millions of dollars. And production still has a long way to go. At full capacity, the EG&G plant can produce only a few thousand detectors per year.

"We run them in lots of eight, but there are some steps in the process that take a whole day," Buser explains.


If digital x-ray systems are to be widely adopted, the cost per unit must come down. The end-user products now in the market—analog devices that use film—are among the least expensive of all diagnostic imaging systems. By comparison, a full-breast diagnostic mammography system from Trex, when approved by FDA, will cost more than $350,000.

The need to generate digital images and the promise of clinical efficiencies possible with digital imaging are expected to help justify the cost of digital x-ray equipment. But such justifications can go only so far.

Figure 3. GE Medical Systems (Milwaukee) and EG&G Amorphous Silicon (Santa Clara, CA) have teamed up to manufacture a digital x-ray detector that produces x-ray images without film. Cynthia Landberg, PhD, of the GE Research and Development Center mounts a prototype detector in an apparatus designed to test functionality.

Vendors must control the cost of the sensor if they are to be successful. That harsh economic fact may be the greatest challenge facing this technology. It has already claimed the financial health of one supplier of sensor components, OIS Optical Imaging Systems (Northville, MI).

The company shut down its manufacturing plant in early September, citing unacceptable losses from the production of flat-panel sensors and displays. The plant reopened late that month by order of the U.S. Department of Commerce, which forced company owners to initiate production to meet government contracts to make displays for military weapons systems. The federal order was a reprieve for Sterling Diagnostic Imaging, which only weeks earlier had begun selling its iiRAD system, whose digital detector requires the selenium plate made by OIS.

"This is the best news," said Sterling spokesperson Jayne L. Seebach in response to the government's order. "But you better believe we will be simultaneously finding another supplier."

It will be a tough search. Only two companies other than EG&G Amorphous Silicon and OIS have the in-house expertise and equipment necessary to make solid-state imaging detectors—dpiX and Trixell SAS. One, dpiX, is the manufacturing arm of Xerox PARC (Palo Alto, CA), which can draw from deep financial pockets at Xerox, if necessary, until the market is fully developed. Trixell is a consortium formed in January 1997 by Siemens Medical Engineering Group, Philips Medical Systems, and Thomson Electroniques. Based in Moirans, France, Trixell has a built-in market for its solid-state detectors—Philips and Siemens—and hopes to sell them to other OEMs as well.

Other companies—the makers of LCD flat panels, for example—have the fundamental resources to make the components, but they will have to be convinced to do so. Simply put, the global market for digital detectors is measured in the tens of thousands of units per year, as opposed to the millions of LCD panels per year demanded by the computer industry alone.


At the very least, the vendors of imaging equipment are committed to digital x-ray. They have little choice. The practice of medicine is growing increasingly dependent on computers and networking. To be cost-effective and efficient, these networks must include x-ray images, which account for about 70% of all the imaging studies done in the United States. Additionally, as doctors grow increasingly accustomed to the benefits of digital image display in other modalities such as MRI, CT, and ultrasound, they will demand the same from radiography and the various applications of x-ray fluoroscopy.

Copyright ©1999 Medical Device & Diagnostic Industry

Avoiding Gaps in Clinical Trials Liability Insurance

Medical Device & Diagnostic Industry Magazine

An MD&DI January 1999 Column


When companies test high-risk procedures or devices, the potential for costly liability claims skyrockets. Having the right liability insurance can be vital to a company's success.

When developing an insurance plan, a biomedical or medical device company must take its clinical trials department into special consideration. It surprises most people to learn that while their company may be covered for general liability, it might not be covered for clinical trials liability—a specific, high-risk area that requires unique coverage to provide the maximum insurance protection. A company needs to purchase additional liability insurance specifically designed to cover clinical trials.

Even if it does have separate insurance for clinical testing, a company must be certain that the policy is accurately tailored to cover all of the additional exposures and liabilities that clinical testing brings.

The terminology and design of clinical trials liability insurance can vary considerably from policy to policy; knowing exactly where a company does and does not have coverage can mean a difference of millions of dollars in a potential judgment. What follows is a general breakdown of clinical trials liability insurance—information that every company should understand to ensure that it receives the best coverage possible.

Finding the right insurance plan for clinical trials liability can be difficult and time-consuming, but it is crucial. In the long run, selecting the best plan will save the company time and money. Good clinical trials insurance not only protects the company from liability exposure, it also provides an increased level of comfort for investors by demonstrating prudent financial management. Assessing the company's specific insurance needs and then choosing the right coverage plan, however, is easier said than done.

The first step to obtaining full insurance protection from liability occurs within the company. In addition to identifying precisely where it needs specific coverage, a company must make sure it has done everything possible within its clinical trials department to minimize the risk of a lawsuit. Even the best insurance plan cannot protect a company that strays from safety measures, informed consent rules, or proper test procedures. While consistently following procedures cannot guarantee a company immunity from all liability claims, it can considerably decrease the risk involved.

To avoid unnecessary risk, a company must establish and maintain a policy of strict adherence to the required clinical trials protocol. All personnel should be fully aware of the issues concerning informed consent, safety measures, and testing procedures. In an already high-risk business, failure to follow regulations greatly increases a company's odds of being successfully sued and leaves potential juries little cause for sympathy on that company's behalf.

Once a company has verified that it follows all protocol regulations to the letter, it is time to search for possible insurance carriers. Remember, a company needs clinical trials liability coverage in addition to its general liability and products liability coverage. With that in mind, a company should consider only those carriers that have demonstrated financial security and shown a commitment to serving the unique claims of the biomedical and medical device manufacturing industries.

Clinical trials insurance is a highly specialized product, and because of this, not every carrier can fulfill the needs of companies searching for such specific coverage. However, if the company's current insurance provider offers clinical trials insurance, it is worth analyzing the specifics of that policy to determine how comprehensive the coverage is. Although there are occasional reasons to use separate insurers, it is generally safer to have both products liability and clinical trials coverage with the same carrier. This avoids coverage disputes over claims and reduces the possibility of gaps in coverage.

It is crucial to understand that clinical trials liability coverage is tailored specifically to clinical testing and is not a general-coverage plan. Once in place, the plan should cover bodily injury and/or property damage resulting from the insured's negligence while testing the product. The insurance is limited to the clearly specified trials of clearly listed products and covers no other tests or products. Before purchasing a plan, a company must be certain that the policy covers all necessary areas and must know which areas are not covered.

After choosing the most appropriate policy, the company must decide on a coverage limit. There is no set rule for establishing coverage limits or minimums, but the consensus in the insurance community is that a clinical trials liability policy should carry a minimum limit of $1 million; upper limits typically run from $10 million to $20 million or more. Of course, a company's specific needs, and sometimes the needs of the testing facility and its risk levels, will dictate an acceptable range for these limits.

When setting a figure, it is wise to consider the time period addressed by the policy and the possibility that future claims against the company might increase considerably over time. Remember, there is often a long process involved in developing a product and having it approved. Because of this fact, what initially may have seemed to be adequate limits may later fall substantially short. Furthermore, it is not uncommon for claims to be filed several years after the clinical trials are completed, during which time the amount of an average claim may have risen. A company should consider all these factors when deciding on a limit and choose one that will cover both its present and future needs adequately.
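The compounding effect described above can be sketched numerically. This is a hypothetical illustration only; the claim estimate, growth rate, and time horizon are assumptions for demonstration, not actuarial guidance.

```python
# Hypothetical sketch: a limit that looks adequate today can fall short
# once development, approval, and late-filed claims stretch the horizon.
# All figures and the growth rate are illustrative assumptions.

def projected_claim(today_estimate: float, annual_growth: float, years: int) -> float:
    """Compound an estimated claim size over the years between the
    trial and a late-filed claim."""
    return today_estimate * (1 + annual_growth) ** years

# A $1 million claim estimate, 5% annual growth in average claim size,
# and a 10-year horizon (development + approval + late filings):
future = projected_claim(1_000_000, 0.05, 10)
print(f"${future:,.0f}")  # roughly $1.63 million -- above a $1M minimum limit
```

Even a modest assumed growth rate pushes the projected exposure well past the $1 million consensus minimum, which is the article's point about choosing limits for future as well as present needs.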

After selecting a limit, a company is typically required to decide the amount of the deductible. At this point, it is worth considering a self-insured retention (SIR) plan rather than accepting one of the insurance carrier's deductible plans. With a standard deductible, a company loses much of its input and consultation rights when certain claims are brought against it. For example, if the claim is at or below the amount of the deductible, the insurance carrier has little vested interest and will often simply settle the claim; the insured must accept the settlement without any chance to protest or provide input. If the insured company has an SIR, however, it can use its own legal counsel to make any settlement up to the amount of the SIR. This provides significantly more control over settlements than a deductible affords, and the insured can work to arrange a settlement that might be more beneficial, for diverse reasons, to the company and to the injured parties. Too often an insurance carrier's only concern is the bottom-line cost. Selecting an SIR gives the insured, not the carrier, the ability to negotiate to the betterment of all concerned. Depending on its comfort with risk, a company typically selects an SIR with a lower limit between $10,000 and $50,000 and an upper limit as high as $150,000 to $200,000 or more.
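The control difference between a deductible and an SIR can be reduced to a simple rule. The function and threshold below are a hypothetical sketch of the behavior the article describes, not a model of any actual policy.

```python
# Hypothetical sketch of the control difference between a deductible and
# a self-insured retention (SIR): under a plain deductible the carrier
# settles small claims on its own; under an SIR the insured directs (and
# pays) claims up to the retention. Names and amounts are illustrative.

def who_controls_settlement(claim: float, retention: float, structure: str) -> str:
    """Return which party directs settlement of a claim.

    structure: "deductible" or "SIR" (self-insured retention).
    """
    if structure == "SIR" and claim <= retention:
        return "insured"  # insured's own counsel negotiates the settlement
    return "carrier"      # carrier settles; with a deductible the insured
                          # still reimburses up to the retention amount

print(who_controls_settlement(30_000, 50_000, "deductible"))  # carrier
print(who_controls_settlement(30_000, 50_000, "SIR"))         # insured
```

In both structures the insured ultimately funds claims up to the retention; the SIR changes who negotiates, which is exactly the leverage the article recommends.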

After all these decisions have been made, the insured will be presented with the often-confusing language of the various options regarding the terms and extent of coverage. The first choice to consider is whether the coverage should include defense within or outside the policy limits. If a company selects defense within the limits, the cost of defending a claim is included in the policy limit: the carrier pays defense costs only up to the coverage limit and not beyond. This option can decrease premium payments, but legal expenses will erode the total amount available for claims, and there may be inadequate coverage remaining to pay any judgment or settlement levied against the company. With defense outside the policy limits, the insurance carrier pays the cost of defense in addition to covering the judgment or settlement ultimately made on the claim, up to the full limits of the policy. This option increases premium payments but can significantly decrease the amount paid by the insured should a claim arise. A company should do careful research before deciding between the two.
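The erosion effect can be made concrete with a worked example. The figures below are hypothetical assumptions chosen to show the arithmetic, not terms from any real policy.

```python
# Hypothetical sketch of defense-within vs. defense-outside the policy
# limits: with defense within, legal fees reduce what remains for the
# judgment; with defense outside, they do not. Figures are illustrative.

def carrier_pays(limit: float, defense_cost: float, judgment: float,
                 defense_within: bool) -> float:
    """Total the carrier pays for one claim under either structure."""
    if defense_within:
        # Defense fees erode the limit before the judgment is paid.
        remaining = max(limit - defense_cost, 0)
        return min(defense_cost, limit) + min(judgment, remaining)
    # Defense outside: fees are paid in addition to the full limit.
    return defense_cost + min(judgment, limit)

limit, defense, judgment = 1_000_000, 300_000, 900_000
print(carrier_pays(limit, defense, judgment, True))   # 1000000: insured owes 200000
print(carrier_pays(limit, defense, judgment, False))  # 1200000: judgment fully covered
```

With the same $1 million limit, a $300,000 defense leaves only $700,000 for a $900,000 judgment under defense-within coverage, leaving the insured to pay the $200,000 shortfall.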

The next important condition determines the coverage structure of the policy. There are basically two forms under which liability policies are written: the "claims-made form" and the "occurrence form." Unfortunately, insurance carriers do not often allow the insured to make a choice about this aspect of coverage, but it is still important to understand each form. A claims-made form provides coverage only if the claim is filed during the policy period. As long as the claim is registered within the agreed-upon coverage dates, the insurance carrier is responsible; if, however, the claim is filed at any time after the coverage period—even if the accident occurred during that coverage period—the insurer is not responsible.

With an occurrence form, on the other hand, the insurance carrier provides coverage for incidents occurring during the coverage dates, regardless of whether the policy is still in effect when the claim is made. This is the much broader and more prudent route for a company to take, and in the unlikely event that a choice is presented, the company should opt for an occurrence policy. In the case of nearly all companies with high-risk exposure, however, insurers will not allow a choice, and the company will be automatically assigned a claims-made policy form.

Even when obliged to accept a claims-made policy, a company can still ensure that it is covered for claims filed after the policy has expired. Whenever the company buys a new policy or renews its current one, it is imperative that the retroactive date be exactly the same as it was on the original policy. This makes the insurer responsible for any incidents that occurred while the company was under coverage, regardless of when the claim was filed.

If a company changes insurance or decides not to renew a claims-made policy, it should purchase what is referred to as tail coverage or an extended reporting period. This provides coverage for any claims reported after the policy period has expired. Before selecting an insurance carrier, companies should verify that all these options are available.

Clinical trials bring with them substantial risk, and unavoidable accidents can occur. Companies that take the time to obtain the right clinical trials liability insurance will find it well worth the effort. Failure to do so exposes the company to needless risk and can cost a fortune in money, time, and other resources.

Glen B. Carlson is a vice president of Calco Insurance Brokers & Agents Inc. (Orange, CA). He has more than 15 years of insurance experience and specializes in the biotechnology, biomedical, and medical device industries.

Illustration by Ken Corral

Copyright ©1999 Medical Device & Diagnostic Industry