Hurdles to Harmonization: FDA Struggles to Balance Priorities

Originally published June 1996

International Standards

Inside the walls of the beleaguered FDA, officials are struggling to communicate to manufacturers the dynamics of FDA's structure, its priorities, and its role in both establishing and endorsing international standards. Shrinking resources have forced the agency to adopt a rigid stance in determining its priorities, complicating its participation in the harmonization process. To frustrated device companies outside its walls, the agency often appears to be dragging its feet or ignoring industry's need for harmonized requirements.

Although committed to harmonization, FDA also considers other factors when adopting standards, including whether a standard sets performance limits and whether it calls for sufficient evidence to demonstrate safety, said Donald Marlowe, director of FDA's Office of Science and Technology in the Center for Devices and Radiological Health, at the Association for the Advancement of Medical Instrumentation (AAMI)/FDA International Standards Conference on Medical Devices last March.

According to FDA, lack of resources is hampering the agency's ability to participate in international standards development. Because its resources will decline 2-3% over the next six to eight years, FDA has set priorities that put endorsing most international standards third on the list at best. "The priorities at the agency are first the revised GMP regulation and then the pilot programs. Reviewing specific standards and preparing guidances for them becomes tertiary when the agency allocates its resources," says Kim Trautman, GMP/quality systems expert in the device center's Office of Compliance.

But FDA's reduced resources are really not the issue, says Barry Page, convenor of International Organization for Standardization (ISO) technical committee (TC) 194 working group 11, which develops standards for ethylene oxide (EtO) sterilization process residuals. "It's really a matter of getting some people in FDA to recognize that there are issues outside their area of expertise that other experts in the agency understand. FDA's structure is devised so that individuals are empowered to raise questions if they have a concern, even if it's outside their specific area of expertise," he explains.

"If I were [CDRH director Bruce] Burlington," he says, "and my resources were declining as much as 2­3% and I were investing funds to support international standards, I'd be asking why we aren't supporting and using them once they are written and accepted."

Page concedes that FDA's structure differs from the regulatory scheme in Europe, making it more difficult to achieve consensus. He suggests that FDA's guidance documents could indicate that a given ISO standard should be the basis for meeting a particular requirement. For example, he says, harmonized European Committee for Standardization (CEN) standards that correspond to an ISO standard are written to meet certain essential requirements of the medical device directives for safety and performance. In a CEN standard, the final annex documents how the European Union (EU) medical device directives will be addressed. Because the United States does not have an equivalent to the EU directives, FDA is caught between laws and regulations, Page says, and interpretation is left to individuals at the agency. He points out, however, that as part of ISO's consensus process, committees address all comments during reconciliation.

He says device companies, especially small manufacturers, want access to guidance documents that can walk them through the requirements. "They want to know what FDA wants," Page says. Many small companies that lack the expertise to interpret standards look for documents from AAMI or FDA's Division of Small Manufacturers Assistance. They know it's FDA that decides whether they can market their products, says Page. He also notes, however, that manufacturers want to use the same information to get approval in Europe, Japan, and the United States.

Page adds that the ISO process of reaching consensus is designed to address committee members' concerns and to incorporate those that the majority endorses. For the EtO standard, this included addressing some member countries' and FDA's concerns about acute irritation effects from small devices. The standard states that a device must meet the biological requirements and limits in other parts of ISO 10993. The committee felt that these limits -- already required in ISO 10993-10 -- met this need.

Many individuals at FDA passionately support harmonization of medical device standards. Marlowe and Trautman are among them. They stress FDA's active participation in the process. But, once the ISO committees vote on a standard, FDA's voice is greatly diminished due to the voting structure, Trautman says. ISO committees, striving to achieve consensus, specifically define consensus as: "General agreement, characterized by the absence of sustained opposition to substantial issues by any important part of the concerned interests and by a process that involves seeking to take into account the views of all parties concerned and to reconcile any conflicting arguments. Note: Consensus need not imply unanimity."

Trautman suggests that this definition of consensus is not so easy to accept. For ISO TC 210 on global harmonization, for example, the United States has one vote. If FDA's view is slightly different from Europe's, its vote doesn't carry as much weight as Europe's 15 votes. FDA has a strong voice as a member of the global harmonization task force, she says, which enables the agency to have a greater effect on international standards than it would have as a member of a committee dealing with a specific issue.

"We try to find some middle ground. FDA may have only one of many votes on the U.S. ballot," Trautman says. To accept international standards as a matter of compromise, FDA may need to publish a guidance that interprets its requirements. That adds another step for U.S. manufacturers. But Trautman says the agency feels compelled to publish its own guidances. At the FDA/ AAMI conference, both Marlowe and Trautman reiterated FDA's desire to participate in the harmonization process and adapt agency policy as much as possible.

Trautman's explanations of an overstretched FDA may not reassure device manufacturers frustrated by the result. As the agency balances its priorities with its diminishing resources, the industry's need for harmonization and international standards continues to grow. -- Sherrie Steward

Reform Bills Set for Showdown

Originally published June 1996

FDA Reform

Advocates and opponents of comprehensive FDA reform are focusing their attention on four bills developed in recent months by congressional committees with jurisdiction over FDA--the Senate Labor and Human Resources Committee and the House Commerce Committee. If FDA reform is to occur this year, say sources, these bills are the likely vehicles.

On the Senate side, the Food and Drug Administration Regulatory Reform Act of 1996 (S. 1477) was introduced by Senator Nancy Kassebaum last December, and was reported out of the Senate Committee on Labor and Human Resources at the end of March. Floor debate on the bill has not yet been scheduled, but could take place as early as mid-June.

The House has taken a different approach, using three bills to address the product-specific areas of medical devices, drugs and biological products, and foods and animal drugs. Along with the other two, the Medical Device Reform Act of 1996 (H.R. 3201) was referred to the Commerce Committee last March. Hearings on FDA reform were held by the health subcommittee at the beginning of May, and mark-up of the three bills was expected to occur in late May or early June.

Among the more controversial provisions of H.R. 3201 are those that would permit third-party reviewers to make the final determinations on 510(k) clearances for Class I and Class II devices. Opposition to giving external reviewers final authority has been voiced by both FDA commissioner David Kessler and Senate Democrats. Also controversial are provisions in the House bill that would permit dissemination of information about off-label uses under certain circumstances; equivalent provisions in the Senate bill were removed during mark-up.

Changes in the reform bills are expected to be made during mark-up and floor consideration. According to Capitol Hill sources, a House-Senate conference on the bills is unlikely to occur before the end of June.

Clinton administration support for reform legislation may depend heavily on the nature of the compromises reached in conference. According to Sally Katzen, administrator of the Office of Information and Regulatory Affairs in the Office of Management and Budget, the administration is approaching change cautiously. Speaking at the annual meeting of the Medical Device Manufacturers Association last month, Katzen said the administration is "interested in legislative reform and not opposed to it," but cautioned that President Clinton is determined to "ensure balance and true bipartisan reform."

Information Technologies Top Hospital Purchasing Priorities

Originally published June 1996

Market Trends

According to two recent surveys, hospital financial managers are increasing their investments in information technologies. Many are purchasing nonmedical equipment such as data processing and telecommunications technologies at a higher rate than they are diagnostic and therapeutic equipment.

The results of the seventh annual health-care capital survey conducted by LINC Anthem Corp. (Chicago) indicate that hospital managers are seeking business systems and capital improvements that reduce costs by streamlining operations. Another survey, sponsored by the Healthcare Information and Management Systems Society (HIMSS; Chicago) and Hewlett-Packard Co. (HP; Andover, MA), suggests that pressures to control costs in managed-care environments are compelling many purchasers to implement computerization in their organizations.

"The trend of health-care purchasers reallocating capital expenditure for information technology instead of for medical equipment has to do with the evolution of managed care," states Jane Sarasohn-Kahn, a health-care consultant affiliated with Institute for the Future (Menlo Park, CA), a long-term forecasting and research organization. "Managed care frequently requires patient information to be delivered in real time to clinicians, nurses, and managers at a variety of sites. Such timely delivery can be accomplished by systems that integrate information obtained from physician offices, hospitals, ambulatory surgery units, and patient homes," she explains.

The current shift away from medical equipment purchases isn't all bad news for device manufacturers. Sarasohn-Kahn points out that applications are being developed that will marry information technologies to medical devices. For example, Lifestream Technologies, Inc., and Interactive Health Evaluation Systems, Inc., have jointly developed an uplink system that permits data captured by Lifestream's Cholestron cholesterol/HDL test unit to be uploaded onto a computer for processing. The partners' IntraNet uplink system gives physicians access to information related to their patients' conditions, including historical analysis, comparison studies, and lifestyle change recommendations. The American Heart Association recently signed a letter of intent for U.S. licensing and branding of the system.

In order to compete effectively, Sarasohn-Kahn suggests that device manufacturers "ally themselves with information technology companies that can help them add value to their products."

Because hospitals are increasingly interested in acquiring equipment that employs information technologies, the trend suggests that manufacturers of high-cost capital equipment may experience reduced sales growth in 1996. "The need for systems integration is taking money away from traditional purchases such as MRI and CT scanners," Sarasohn-Kahn says. Instead, many purchasers are turning to leasing agreements for such expensive devices. The LINC Anthem survey reveals that more than 50% of the respondents would rather lease equipment than purchase it. Sixty percent of hospital representatives surveyed also said they would consider obtaining refurbished rather than new equipment in order to cut costs.

"Hospitals are under incredible financial stress, and leasing equipment can lower the out-of-pocket costs," explains Casey McGlynn, chairman of the Life Sciences Group of the law firm Wilson Sonsini Goodrich & Rosati (Palo Alto, CA).

The LINC Anthem survey was based on 443 responses from chief executive officers, chief financial officers, and other top hospital administrators. Altogether, the survey represents approximately 8% of the nation's 5300 short-term general hospitals. For more information, contact Pam Welsch, LINC Anthem Corp., 312/946-7300. The HIMSS/HP survey polled more than 1200 attendees at the HIMSS annual conference and exhibition in March. For more information, contact Robert Minicucci at 508/468-1155; e-mail rminicucci@mullen.com. --Daphne Allen

Legislating a New and Improved FDA

Originally published June 1996

James C. Greenwood

U.S. Congressman (R-PA)

Two months ago, more than 300 individuals and patient groups signed an "open letter" to Congress. They made an astounding statement, one that was powerful in its simplicity: FDA delays are killing people. Consumer protection means more than just keeping unsafe and ineffective products off the market. It is at least as important that safe and effective products are made available to people who need them as quickly and as efficiently as possible.

That's not happening now, not when it takes twice as long for approval of the average new medical device as it did just six years ago. Premarket approval reviews for medical devices, which are supposed to take 180 days by statute, average 773 days, twice as long as just five years ago. The average clinical testing time for investigational drugs has increased from 2 1/2 years in the Johnson administration to six years today.

Commerce Committee chairman Thomas Bliley (R-VA) designated me to head a team spearheading the effort at meaningful, sweeping FDA reform. We began by reviewing many studies and recommendations for FDA reform and scores of proposals now pending in Congress, introduced by Democrats and Republicans alike in both the House and the Senate.

What emerged from that review, I believe, is a balanced, even-handed approach that will streamline the approval process to allow safe and effective drugs, devices, and foods to reach American patients and consumers more quickly and more efficiently. These proposals will enhance, not jeopardize, public health and safety.

At the outset, we redefined FDA's mission, stressing the objective of approving safe and effective products as quickly and efficiently as possible, and stressing the encouragement of new product development.

THIRD-PARTY REVIEW

On the issue of FDA-approved third-party review organizations, we will strike a middle ground between those who call for outright privatization of the agency and those who prefer that third-party reviews be used only as a stick when legislative carrots fail. Under our approach, applicants would have the option to use an independent FDA-approved review organization in every case, provided they bear the costs.

In my meetings with FDA commissioner David Kessler, he expressed concern about the effectiveness of the independent reviews. Under our plan, FDA will accredit qualified review organizations. FDA will monitor the activities of those organizations with tight, rigorous oversight and will review the recommendations of those organizations before any product can reach the marketplace. Such FDA-approved review organizations will use the same standards the agency uses. But unlike FDA, third-party organizations will be free of the bureaucratic delays and overload that currently slow the conventional approval process. By ridding ourselves of that bureaucracy, we can cut the average approval time for supplemental applications covering new uses of already approved products, for example, to weeks, not years.

FDA has recently announced its latest initiative in this area, a limited experiment using third-party reviews for certain simple medical devices. However, this is not enough. It is a pity that as American heart patients travel to Italy for life-saving cardiac stents and FDA delays cause one-fifth of the U.S. medical device industry to move overseas, the best the agency can come up with is a pilot program for third-party review of tongue depressors.

Besides redefining FDA's mission and creating accredited review organizations, our legislation also will take the rapid approval approach that's now used for AIDS drugs and will soon be used for cancer drugs, and make it applicable to all serious and life-threatening diseases.

FDA REFORM

In the Senate Labor and Human Resources Committee markup of Senator Nancy Kassebaum's FDA reform proposal, an amendment passed by a vote of 11 to 4 to allow a three-year experiment with third-party reviews of all medical devices, including Class III devices. This action makes a successful House-Senate conference on FDA reform a lot easier to imagine.

The House approach is more comprehensive than the Senate one. But these ideas have been kicking around for a quarter of a century, and by no means are they radical.

As for the House, we will soon begin legislative hearings in Chairman Mike Bilirakis's health subcommittee. Our ultimate goal is to report this legislation out of the Commerce Committee by mid-June.

Momentum is on our side. That's what prompted President Clinton's pronouncements about cancer drugs last March, and what prompted FDA's announcement of the pilot program for third-party reviews of some medical devices. Our goal, all along, has been to enact legislation that the president will be eager to sign.

I believe that we will pass a bill in the House before summer. A House-Senate compromise on FDA reform has become a lot more likely based on developments in the Kassebaum committee.

Industry Can Benefit from Quality in Managed-Care Plans

Originally published June 1996

An Interview with Paul M. Ellwood, Jr., MD

President and CEO, Jackson Hole Group

The health-care community is on the verge of an information revolution, one that will allow managed care to achieve the goal that its inventor, Paul M. Ellwood, Jr., first had in mind some 25 years ago. That goal is to give equal credence to cost and quality.

After 17 years of practice as a pediatric neurologist and physiatrist, Ellwood concluded that fee-for-service medicine was not in the best interest of patients, especially those beset by chronic illness. With Alain Enthoven and the Jackson Hole Group, a health policy research group, Ellwood devised the concept of managed care, coining the terms health maintenance organization (HMO) and preferred provider organization (PPO). He later developed plans that led to the creation of the Agency for Health Care Policy and Research, and most recently crafted the methods of health accountability and promoted their adoption by health plans. Implementation of the health plan accountability methodology is set to begin this summer.

In this interview with MD&DI, the inventor of managed care--who also serves as president and CEO of Jackson Hole Group and clinical professor of neurology, pediatrics, and physical medicine and rehabilitation at the University of Minnesota--explains the importance of getting information about both quality and cost into consumers' hands and how medical device companies can benefit by doing so.

What was your original concept of managed care?

Originally I had in mind that health plans would combine insurance and the delivery of health care, competing with each other on price and quality. They would have three basic characteristics. First, they would deliver comprehensive care, providing virtually anything anyone needed, from drugs to devices and from doctor visits to hospital stays. Second, they would each serve a relatively large group, several million people, over an extended period of time. And, third, they would be responsible for these people for at least a year. That idea was very different from conventional medicine, where a typical transaction might involve a visit to a doctor or a stay in a hospital, and where responsibility for managing the overall disease process would be informally shared among many health professionals.

What did you hope to accomplish with this idea?

The purposes of managed competition or managed care were to improve the quality of health care and to reduce the rate at which health-care costs were increasing. The notion was proposed in the 1970s to the Nixon administration as a way to reform Medicare, with the idea that the private sector might follow suit. As it turned out, the Nixon health reforms didn't do any better than the Clinton health reforms. Instead, over the past few years, the impetus for change has come largely from the purchasers of health care, especially large purchasers who, faced with global competition, felt they had to do something about the cost of fringe benefits and every other facet of doing business.

Until now, the major emphasis has been on cost. Where does quality fit in?

When I first proposed the HMO idea, it was my expectation that the emerging HMO industry would be required to make available to the public evidence that they were producing the best possible health outcomes. This would have been more difficult to accomplish within the conventional practice of fee-for-service medicine because of the short, compartmentalized transactions that take place in that context. It is easier to do in managed-care plans because they have an extended, comprehensive responsibility for their enrollees' function and well-being.

For a variety of reasons, the idea of holding health plans accountable for their impact on people's health has not been implemented. Health care is becoming increasingly like a commodity. Virtually all the competition between health plans is about price.

Some physicians say managed care will not continue to grow in popularity because it is reducing the quality of the care they give their patients. What's your response to such statements?

There is absolutely no evidence of managed care having had a deleterious effect on quality. Quite the contrary. In the few studies that have compared quality in the two systems, managed care has done very well.

I am disappointed that we have not made more progress in reporting on quality. One reason for this is the fact that the old system of health care had no particular incentives or capacity to follow patients over an extended period of time.

How does this relate to the various types of follow-up that medical device manufacturers are required to perform, such as device tracking for implantables?

Device manufacturers have done a better job of following people who have been fitted with their devices than has the health system as a whole. When a pacemaker or heart valve is implanted, both FDA and the manufacturer have an interest in knowing how long it lasts and whether it is really continuing to work. To do that you have to follow the patient pretty carefully.

Might this kind of patient follow-up be expanded to permit quality assessments in the practice of medicine?

We are on the verge of exactly that. In September of last year, a new organization was founded called the Foundation for Accountability (Portland, OR). It is a not-for-profit corporation whose function will be to devise systems for measuring health-care quality and making the information available to the public. These systems will be uniform, which is essential if we are to avoid having dozens of quality accountability systems.

Their first task will be to discern the health of populations--to rank the ability of the HMOs to reduce the likelihood of ill health among the entire population they are serving. The standards will also allow for comparisons of the organizations' ability to improve the health-related quality of life for patients with such conditions as asthma, low back pain, cardiovascular disease, cancer of the breast, and diabetes. And the organizations will be compared in ways that consumers can understand.

Why is this effort important for medical device companies?

No one, especially a company that is competing on quality, wants to be in a commodity market. So everything about the future of the device business depends on moving managed-care competition from a price-only consideration to one that includes quality.

Can device companies assist in doing quality assessment, perhaps by developing databases that record the effects of equipment on patient health or by designing their devices to capture certain types of information?

Both ideas are valid. The device industry is developing very large databases, and we can expect more of that in the future--huge databases for comparing treatments and outcomes. And in the case of devices this information is often very good because many devices produce digital information that can go directly into a database. In the next year, I expect the whole business of computerized medical records--computerized information being provided to doctors and patients--to just take off.

The impetus for this will be the requirement that health plans be accountable for their quality. I think there will be a widespread application of outcomes accountability in the next year, and that will trigger an information revolution in health care. This consumer-oriented information about the quality of care can also be used by health-care organizations to follow up and improve on the effectiveness of health care and the various things that are used in conjunction with it.

How can device companies contribute to outcomes accountability?

There are several ways. One is to support the efforts of the Foundation for Accountability. As the foundation proposes measurement schemes for assessing the treatment of various diseases, device manufacturers should provide it with information about their experiences in tracking the related use of medical technologies. I have met with a number of device manufacturers about this and have been impressed with the state of the art of following patients in the device industry.

Manufacturers will also be involved through the clinical trials they conduct to evaluate their products. One result of the health-care information revolution is that the nation's health system is going to become a kind of massive, continuous clinical trial. So it will be much easier to go to these managed-care organizations and say we would like to test this device; it won't be necessary to install new information systems or teach doctors how to report on the effectiveness of devices, because the managed-care organizations will already have these capabilities in place. This will make it much easier for manufacturers to find clinical sites for their device testing, and will speed the rate at which devices can be evaluated and reduce the cost of doing so.

What results should manufacturers expect to see as a result of their participation in this information revolution?

I went into the development of managed care because I realized how little we knew about the status of our patients and how poorly structured the health system was for taking care of people with chronic illnesses. It ought to be feasible to simultaneously improve quality and reduce costs, but that is not universally true.

There will be circumstances in which the more expensive drug, device, or procedure works better and, under those circumstances, that is what should be used. And that is what will be used when the public has the information that enables them to judge health-care plans on some basis other than cost.

WASHINGTON WRAP-UP

Originally published June 1996

James G. Dickinson

Two major developments in March signaled an overall rebuilding of the way this country regulates medical devices.

The first development was the announcement from both industry and FDA sources of a new consensus-developed approach to FDA inspections. Until the end of this year, firms without a violative history that continue to make individuals and documents involved in previous preannounced inspections "reasonably available" will receive notice of inspections at least five days in advance. Corrective actions taken or promised by these firms will be noted on their FDA-483 forms; however, specific observations made during the inspection will not be deleted when the company has corrected them. FDA will also send formal compliance status letters after each inspection where conclusions of no action indicated (NAI) or voluntary action indicated (VAI) were reached by the district office.

Although these may seem fairly modest accomplishments to industry reform-seekers, they actually constitute the tip of an iceberg. Last August, a collaborative grassroots process began in Denver between FDA's Southwest regional management and an industry task force spearheaded by the Colorado Medical Device Association and its president, Cobe Laboratories senior vice president Wendell Gardner.

The second major development was a legislative breakthrough on both sides of Capitol Hill, defying all earlier predictions that time had virtually run out for meaningful FDA reform to be passed this year. On the Senate side, the Labor and Human Resources Committee marked up a reform bill sponsored by committee chairwoman Senator Nancy Kassebaum (R-KS). In addition, House Commerce Oversight and Investigations Chairman Joe Barton (R-TX) unveiled his bill on FDA medical device reform; two other regulatory reform bills, one on drugs and biologics and the other on food, were announced at the same time.

Kassebaum's bill, which appeared headed for Senate floor debate in the second half of May, was strengthened by markup session amendments that would:

* Allow firms at their own election to have new products reviewed for marketing clearance by a non-FDA third party at company expense. FDA would have only 15 days (for a Class I or Class II device) or 45 days (for a Class III device) to reject this approval recommendation.

* Force FDA to approve or reject within 30 days products already approved in Europe if the agency fails to meet one of its approval deadlines and the sponsor wishes to press the issue; rejections could be appealed.

* Allow Class III device approval on the basis of only one clinical trial, instead of two.

* Require that FDA collaborate with the sponsor in designing investigational device research.

* Limit the terms of FDA commissioners to five years.

Barton's bill, considered the likeliest of the three House FDA reform bills to be acted on, parallels much of the Kassebaum bill with respect to devices; unlike Barton's bill, however, Kassebaum's covers all FDA-regulated product categories. Scheduled to be the subject of May 1-2 hearings and "early" markup after that, Barton's bill includes provisions turning good manufacturing practices (GMP) inspections over to third parties, allowing manufacturers to cite non-FDA standards of safety and effectiveness, and eliminating the need to file many 510(k)s for postapproval product changes.

Conveniently, these developments coincided with a blizzard of media publicity criticizing FDA's management of device issues. On March 28, the Washington Post attacked the device approval process in an in-depth critique of the agency's performance on pedicle screws.

The next day, Jack Anderson's syndicated column impaled Commissioner David Kessler with criticism from former aide Jim Phillips, now a Senate committee staffer with no FDA responsibilities, who charged that Kessler is mismanaging every issue faced by the agency except tobacco.

The day after that, as Congress adjourned for its 15-day Easter recess, Investor's Business Daily blasted the agency with a front-page story headlined "Have FDA Officials Gone Rogue? Agency Leaked Papers, Lost Records, Critics Say." The article focused on an unidentified court case from which court-sealed documents, clearly stamped as such, were allegedly leaked by FDA. In another incident, an FDA-leaked inspection report on Epitope, Inc. (Beaverton, OR), allegedly hurt the device manufacturer's stock price, to the benefit of a securities short-seller. The article also said that FDA concealed documents on Commissioner Kessler's contacts with Ralph Nader's Public Citizen Health Research Group.

Set against the background of furious legislative activity, this torrent of anti-FDA publicity did not bode well for the agency. The House and Senate bills, which would sharply diminish the potency of FDA's activities in the device area, seemed increasingly likely to succeed. With major party conventions and a presidential election campaign looming after the congressional recess, time was running short, and both House and Senate proponents were mustering bipartisan support for FDA reform, especially for devices.

The only discordant note was struck by the venerable Edward Kennedy (D-MA), who left the Kassebaum markup clearly unhappy with the changes made. Some observers predicted he might lead a filibuster on the Senate floor, effectively delaying a vote until time for passage had actually run out.

It's likely that by the time this column is read, all such scenarios will have been exhausted, and the outcome will be known. But regardless of the outcome, it's clear that forces have been set in motion that will radically alter the style and substance of medical device regulation in this country -- sooner or later.

FDA's Center for Devices and Radiological Health (CDRH) investigated "more than 38" medical device marketing application integrity cases under its Application Integrity Program (AIP), formerly known as the "Fraud Policy," during fiscal year 1995 (ending last September 30), according to the Office of Device Evaluation's annual report.

"Integrity issues were based, in part, upon internal inconsistencies within the submission, scientifically implausible data, contradictory information provided by scientific/clinical researchers, data inconsistent with the scientific and professional literature, information provided by employees of the applicant, and information obtained from legal documents," the report says.

Four letters were issued under AIP last year that suspended the review of all pending and future filings until the firms in question performed internal audits and implemented an FDA-approved corrective action plan.

The report acknowledges that ODE's own integrity was under scrutiny last year; 37 instances of ethics and conflict-of-interest problems at the office were brought to light. Several were attributed to manufacturers' claims of unequal treatment during the review process. Other questionable activities cited in the report included "receipt by ODE staff of free training, travel expenses, meals, cash honoraria, and other things of value from persons outside the government."

ODE and another CDRH branch, the Office of Compliance, jointly created a highly unusual seminar for center staff entitled "RS Medical: A Case Study--Lessons Learned." This seminar examined regulatory activities against RS Medical that led to "major civil litigation in which FDA was found to be responsible for improper conduct." The judgment against FDA entered two years ago awarded over $390,000 to the plaintiff, attorney fees included. The case, International Rehabilitative Sciences dba RS Medical v. David Kessler, involved a dispute over whether the company's powered muscle stimulator device required a 510(k), and credible charges were made that agency employees engaged in retaliation against the firm.

The report says that 1995 industry submissions to ODE continued a relatively flat three-year trend with 16,978 received, 73 more than in 1994. However, the number of major submissions--premarket approvals (PMAs), investigational device exemptions (IDEs), PMA supplements, IDE amendments, and 510(k)s--decreased for a third consecutive year, by 104 to 10,189 in 1995. This reduced inflow probably allowed ODE to substantially increase its work output; the office processed 12,013 major submissions last year, up from 11,045 in 1994. ODE performed an "all-time record" 7948 510(k) reviews, 813 more than in 1994, with total average review time declining from 216 days in 1994 to 178 days in 1995.

"FDA is soliciting comment on how best to communicate to its own staff and to the public the principle that guidance is not binding." That is one of five key issues that were discussed at an April public meeting convened in response to an Indiana Medical Device Manufacturers Council (IMDMC) petition last year. The document proposed that FDA make a policy of seeking public input on guidance document development.

Four other issues that FDA is soliciting public input on are: (1) The value of a standardized nomenclature for guidance documents, and what to do with existing nomenclature; (2) whether to adopt a three-tiered approach to public input (input before guidance is issued, input after guidance is issued, or public notification without formal input after guidance is issued); (3) the adequacy of FDA's current document access programs and any improvements to access; (4) whether the public is sufficiently aware of current appeals mechanisms and whether the mechanisms are sufficient for appealing decisions relating to guidance documents.

In a March 7 Federal Register notice, FDA narrowed the scope of what is meant by "guidance." IMDMC's petition broadened the definition to cover any documents used to convey "regulatory expectations," but FDA's notice says guidance covers "(1) [d]ocuments prepared for FDA review staff and applicants/sponsors relating to the processing, content, and evaluation/approval of applications and relating to the design, production, manufacturing, and testing of regulated products; and (2) documents prepared for FDA personnel and/or the public that establish policies intended to achieve consistency in the agency's regulatory approach and establish inspection and enforcement procedures."

Specifically excluded from the scope, FDA says, are agency reports, consumer information, documents relating solely to internal FDA procedures, speeches, journal articles and editorials, media interviews, warning letters, or "other communications or actions taken by individuals at FDA or directed to individual persons or firms."

On the vexed issue of FDA employees who take guidance documents as binding and seek to enforce them, the agency says it will begin an educational effort, inserting the following cautionary language within each guidance document: "Although this guidance document does not create or confer any rights for or on any person and does not operate to bind FDA or the public, it does represent the agency's current thinking...." In addition, FDA says it will attempt to ensure that all guidance documents use language that clearly conveys that they are nonbinding, avoiding the use of "compulsory language such as 'shall' and 'must,' except when referring to a statutory or regulatory requirement."

Two new guidances have been developed to help manufacturers understand human error as it relates to medical device design and to preproduction quality assurance design control. Entitled "Medical Device Design Control Guidance" (document number 995) and "Do It by Design" (document number 994), they will both complement and "aid in the implementation" of the soon-to-be-published final rule on current device GMPs, according to CDRH's Office of Compliance GMP expert Kimberly Trautman. Both guidances are available from FDA's Fax-on-Demand at 800/899-0381.

The costs of FDA medical device regulation would revert to 1991 levels, or $40 million less than current costs, if CDRH were relocated elsewhere within the Department of Health and Human Services, according to an analysis released in March by American Enterprise Institute (AEI) resident scholar John Calfee.

Calfee based this assessment on a likely reversal of CDRH's Kessler-era "drug culture," which he says is inappropriate because "drugs are unchanging entities surrounded by rapidly changing information," while devices "are continually changing because they are technology-based tools."

"Sounds like they're living in fantasyland," CDRH director Bruce Burlington commented on the AEI report. He said all of CDRH's work over the past three years has been "diametrically opposed" to the so-called "drug model" cited in the report. A recent CDRH analysis of PMAs reviewed since 1987 shows no increase in double-blind, randomly controlled clinical trials or in the size of subject groups studied (average 600 patients). "Science drives the type of testing you have to do," Burlington said.

A bipartisan bill that would liberalize export restrictions on drugs and devices lacking FDA marketing approval was loosened even more in an amendment introduced on March 15. If the amendment passes, any U.S. human drug, biologic, veterinary drug, or device that is approved in one of 24 named countries or in the European Economic Area would be permitted to be exported to any country willing to receive it. In addition, the product would have to comply with the laws of the receiving country and be labeled for export only; it could not be sold in or reimported into the United States.

The list of countries whose approval qualifies products as exportable would be subject to continuing update.

In March, the April 11 effective date for FDA's Medical Device User Facility and Manufacturer Reporting rule was extended to July 31, based on industry requests for more time to comply.

James G. Dickinson is a veteran reporter on regulatory affairs in the medical device industry.

SITE SELECTION

Originally published June 1996

Greg Freiherr

The rapid growth of young, successful medical device companies can mean frequent site changes in the search for adequate space. With long-range planning, some companies have been able to minimize these costly, frustrating moves.

One such company is Tubular Fabricators, Inc. (Petersburg, VA), which is planning to expand its operations over the next several years. During this time, employees will not be uprooted from their current building, equipment will not be moved across town or even across the street, and phone lines will not be changed. In short, the staff will encounter none of the frustrations that can accompany a change of facilities resulting from corporate growth. That's because Tubular Fabricators is staying put.

Seven years ago, the company, which makes a variety of home-health-care devices from canes and walkers to commodes, purchased a 250,000-sq-ft facility on six acres of land. The company initially occupied just 75,000 sq ft. Even today, staff and equipment make use of just over half of the available space--about 140,000 sq ft. Why did the company move into such a huge space? "It's simple," says company president Joseph Battiston. "We don't have to keep moving."

EXPANSION WITHOUT RELOCATION

By purchasing a facility large enough to handle expansion well into the future, Tubular Fabricators can forgo the costs of repeatedly investing in ever-larger facilities. The extra space is readily available for warehousing the company's products, and Battiston says that's where expansion will be needed.

Battiston recalls the first few years after founding the company in 1979. "We kept expanding, and every time we did, we were renting facilities where our costs drastically increased," he says. "When we looked at what it cost us for all those expansions, it made a lot more sense to move right up to a larger facility."

Similarly, over the last six years, the workforce at Diametrics Medical, Inc. (Roseville, MN), a maker of blood and electrolyte analyzers, has grown from 4 to 230. Pressures exerted by corporate growth have been especially great in the last couple of years, as the company payroll has literally doubled. But throughout the history of the firm, and even in these last few years of hectic growth, administrative staff have not been uprooted, nor has the manufacturing and distribution process been interrupted. Instead of switching from one site to another, Diametrics has expanded into more bays of the same building.

In 1990, the founding staff occupied the same site they do now, but they took up much less space--beginning with about 7000 sq ft. Today the company occupies about 60,000 sq ft. "Our philosophy was that if we needed more space, we would work it out with the landlord so that as other leases expired, we would pick them up and expand our facility," recalls Elier Roqueta, director of manufacturing at Diametrics.

For both Tubular Fabricators and Diametrics, the key to painless expansion was long-range planning, though their strategies differed. Tubular Fabricators chose to buy a large facility, which, according to Battiston, was affordable because it was an old building. Diametrics, on the other hand, leased a modern facility in an industrial park. Both approaches successfully balanced the need to conserve capital with the need for enough space to continue expansion.

Before developing a strategy to meet future space needs, companies should carefully examine sales forecasts and staffing projections to make sure expansion will eventually be warranted. But such estimates can give only the crudest measures of what will be needed. Also, luck can sometimes determine whether a strategy works. When in 1990 Diametrics first applied its strategy of expanding into vacated space to accommodate corporate growth, the company had no guarantees that the space would be available when it was needed. Executives knew only that the leases held by the companies bordering its original 7000 sq ft were scheduled for expiration; there was no way to be sure they would leave. Nor was there any way to be certain that Diametrics would be in a position to use the space at exactly the time it became available. "In some cases the leases expired before we needed the space, but we ended up having to take it," Roqueta explains.

CHANGES IN DIRECTION

Small, swiftly expanding companies can also be subject to unpredictable changes. When PowerStrand Wire & Cable (San Antonio, TX) began in Corpus Christi four years ago, company founder and president Dee Johnson was a one-person shop, performing every task from sales to shipping to cleaning. The company's main products, electrical wires that are inserted into electrodes for use in such devices as muscle stimulators, were produced by a contract manufacturer. Since then the company has brought manufacturing in-house, and Johnson relies on a staff of 20 to do many of the tasks she once performed. Consequently, the 1000 sq ft Johnson used at the start would not fit the needs of today's company, which has long since moved to a 5000-sq-ft facility about 100 miles from the company's birthplace. In choosing the current location, Johnson recognized the essential needs of all businesses: to be near transportation outlets such as freeways and airports, to be near major clients, and to be near shipping facilities like the UPS office that is now only four blocks from the company.

Small companies are not only likely to expand, but also to change direction quickly. Business decisions of small companies are sometimes made by the "seat of the pants," says Bill Mavity, president of Innerdyne, Inc. (Sunnyvale, CA). Innerdyne exemplifies not only the volatility of small companies, but how that volatility can affect the growth of the business and expansion of facilities.

Innerdyne's origins extend back to a company called Cardiopulmonics (Salt Lake City), a well-capitalized firm that several years ago appeared poised for rapid expansion. Its technology, a lung-assist device for acute respiratory-arrest patients, showed good potential for growth, but product development was hamstrung by the regulatory process. Meanwhile, Innerdyne had developed an alternative to the standard device used in minimally invasive surgery, the trocar, but was having trouble finding investors. Recognizing the potential of Innerdyne's technology and the difficult road ahead for the lung-assist device, "we bought Innerdyne, dropped the Cardiopulmonics name, and positioned ourselves as a minimally invasive surgery access company," Mavity says.

The decision turned out to be the right one, providing the basis for payroll to leap from 20 employees at the two companies prior to the merger to 120 at the consolidated firm. Yet the only physical expansion was about 7000 sq ft for a distribution facility in Salt Lake City, which was nearly offset by a decline of 6000 sq ft at the Sunnyvale site, a decrease accomplished by subletting the space. The net expansion, therefore, was just 1000 sq ft, despite a fivefold increase in employees.

Mavity accomplished this real estate feat by turning the Salt Lake City facility, which housed the cleanrooms originally designed for the lung-assist device, into the manufacturing plant for Innerdyne's product and consolidating corporate headquarters in Sunnyvale. "The lease for the Sunnyvale facility had been signed when Innerdyne was a stand-alone planning for growth. The facility would have supported manufacturing and a larger technical role," Mavity explains. "When we decided to move manufacturing to Salt Lake City, we simply sublet the Sunnyvale space back to the owner of the building."

UNIQUE REQUIREMENTS

Though Diametrics' leasing strategy has worked in the past, the company will soon be facing a lack of space once again, and this time it plans to move. The choice of new facilities will be determined in part by the requirements of its cleanroom manufacturing processes. In the meantime, the company is caught in a wave of lease negotiations to hold onto the space it now uses. A series of five-year leases signed in the early 1990s have been coming due in rapid succession. While the landlord would like all the contracts to run until the year 2000, Diametrics wants them to extend only long enough for the company to complete a transition to a new location. "We have to be careful what we agree to if we are going to move out of this facility," Roqueta says.

Moving is necessary because the company has finally exhausted virtually all of the space in its lease, and the rest of the space in the facility is occupied by companies that are unlikely to leave any time soon. "We're at the point where we can't expand any more," Roqueta says. "If we want to grow, we have to go someplace else."

One option is to move the administrative offices out of the current facility into a nearby office building and use the vacated space to expand the manufacturing operation. But Roqueta says doing this would not serve the long-term goals of the company. The company's cleanrooms impose a high capital expense, which makes Roqueta nervous about investing any more in a leased facility. "When you look at the money you are putting into the cleanrooms and what it would cost to move them, the best way to go is to set up a manufacturing facility in a new building and put the money into building cleanrooms there," he says.

Cleanroom manufacturing complicates the decision in more ways than one. "A lot of existing buildings are about 12 ft from floor to ceiling. To build cleanrooms, you need at least 14 ft," Roqueta explains. "So it is hard to get into existing buildings and make them work for you. The best thing is to plan a building to your specifications so you get what you want, rather than trying to conform to existing walls and ceilings."

For Battiston, however, older buildings with more than enough space offer advantages over newer ones. "A lot of people tend to make the mistake of saying they'd rather have a newer facility with less space," he says. "But that can mean tighter aisles where it's harder to work. Also, to make use of the higher ceilings in new buildings, the company has to use racks and special lift trucks. A company in a situation like that can run into a lot of other headaches that we don't have because we have ample space."

Planners of future expansions must keep in mind their companies' unique concerns, such as the need to maintain a location close to the workforce that has made the business successful. In the case of Innerdyne, that meant maintaining facilities in both Salt Lake City and Sunnyvale. "We had to ask what it would cost to move our key people to Utah," Mavity says. "That is a drain on cash that most small companies can't afford."

CONCLUSION

Successful companies share a commitment to their own future development and evolution. That commitment can be expressed by long-range planning for future space requirements. "We make this commitment in a sense to ourselves, but also to our customers, that we are going to be here well into the future," says Johnson of PowerStrand Wire & Cable. "We do it to prove that we're stable and that we can serve our customers and meet their needs."

Whether the decision is to sign a long-term lease for more space than will be needed in the near future or to move out of leased space and into a company-owned facility, taking steps to forestall multiple moves and disruptions can make corporate expansion easier.

Greg Freiherr is a contributing editor of MD&DI.

Advancing Technologies Put Device Software on the Fast Track

Originally published June 1996

Steven Halasey

In the past decade, the medical device industry has witnessed a proliferation of software-controlled products, including everything from handheld diagnostics to radiological treatment devices. Yet the inner workings of these products have remained, for most, a grand mystery. Device software, for all its importance, is still a great black box. And that, according to many analysts, is a major problem.

"It's a well-known phenomenon in the software industry that two years after a product is released, the original developer is generally no longer the person supporting the code," says Nancy George, president of SQM, Inc. (Towson, MD), a software consulting firm. "In the case of medical devices, this can cause serious problems if the system needs to be modified because of a life-threatening failure."

Fortunately, instances of such failures are rare--but they aren't nonexistent. Patients have been killed or severely injured by infusion pumps and radiation therapy units whose software algorithms failed to control dose delivery as intended. And, according to FDA, many other nonreportable events have occurred in which the software controlling health-related technologies such as blood-bank and clinical laboratory equipment failed.

FDA SCRUTINY

To resolve such software-related problems, FDA has begun to place greater emphasis on its reviews of medical device software.

"What's new in software regulation is the fact that FDA is increasing its scrutiny of the software portions of product submissions," says Dennis Rubenacker, a device-software specialist at Noblitt and Rueland (Irvine, CA), a management and software consulting firm. "Ten years ago, a 510(k) submission that included software might have skated through without intense review; but today FDA wants documented proof that the manufacturer conducted a formal requirements review, a hazard analysis, verification and validation, and so on."

However, the agency's policies do not require manufacturers to use the latest software development tools or methods, nor do they specify any particular software development model that must be followed. "FDA is more concerned that whatever process a company has, it should be in control," says Rubenacker. "FDA's philosophy is that the device company should pick a process that fits its culture, and then make certain to control all aspects of that process."

A company may decide to use multiple processes to design software for its products, in keeping with the project at hand; some may be highly sophisticated, while others may be very simple. In any case, the manufacturer should see that it has standard operating procedures for all aspects of its software development process.

The agency is equally open to various methods of documenting software development activities. "There are a variety of approaches to documentation," notes George. "But ultimately, FDA wants companies to document their standard operating procedures for software development, to be able to show design links to good manufacturing practices (GMPs) and to the product submission documents, and to provide documentation for long-term product support. These requirements pose some hard questions for manufacturers, and more specific guidance from FDA would be helpful."

Rubenacker agrees: "There is not a lot more guidance for software now than there was five years ago. The agency's 'Reviewer Guidance for Computer Controlled Medical Devices Undergoing 510(k) Review' is useful, but its latest edition was issued in 1991."

While not a guidance, a glossary of software terminology recently distributed by FDA presents the agency's interpretation of certain standard terms for the first time. In many cases the agency has adopted definitions established by the Institute of Electrical and Electronics Engineers (IEEE) or the National Institute of Standards and Technology, but in some cases the agency's definition is unique. "Device manufacturers should reorient themselves to come into compliance with these interpretations," advises George. "The glossary doesn't change definitions, but formalizes them in a way that had not been done before. The section on validation is especially useful."

Now the agency is preparing to go further, proposing a new risk-based scheme for assessing medical device software and determining what requirements developers should meet. The new scheme is expected to be developed with industry input gathered at a September workshop on device software to be sponsored by the Office of Science and Technology at FDA's Center for Devices and Radiological Health.

The new policy will update the definition of device software to include stand-alone planning and processing software. It is also expected to establish a three-tiered risk-based scheme in which the requirements for approval will be greatest for software that poses the greatest risk to patients or users. Other issues to be addressed by the policy include the clarity of the software algorithm, and whether the software affects only an individual patient (e.g., software for a surgical planning simulator) or an aggregate (e.g., software that controls equipment in a clinical reference laboratory).

Altogether, FDA's efforts to regulate device software have been a positive contribution to the advancement of the field. "The higher levels of scrutiny haven't been all bad," says Rubenacker. "Lots of companies have gotten their software processes into shape because of that scrutiny, and as a result they are now more efficient in getting their products onto the market. Companies are seeing the real benefits of the process, and they're also making better products."

MATURING INDUSTRY

Even so, software experts disagree about the readiness of the device industry to satisfy FDA's new and evolving requirements. While some companies are still in the starting blocks, needing to select and define their development processes, others are following extremely mature models for creating software.

"I'm not sure that companies are thinking too much about the process they're using," comments Bill Wood, director of product development at RELA, Inc. (Boulder, CO). "If their process works, they tend not to change it. But if a company makes a change in its products, it should also go back and look at its software development process with an eye toward making it as efficient as possible."

"Most device companies have some acceptable level of maturity in their software development process--they have to in order to get a PMA or 510(k) cleared," says Michael Bousquet, director of sales and marketing at Mesa Systems Guild, Inc. (Warwick, RI), an integrated project management, software tools, and consulting firm. "But the desire is to have a more mature, managed, and optimized process, so that each new product can improve upon the last."

A truly mature development process enables a company to select from a variety of methods to match the project at hand, he explains. "Many companies are feeling pressure to apply object-oriented methods and languages. But it takes a lot of effort to apply these evolving methods and tools in order to build good reusable and maintainable models and software objects. That effort doesn't make business sense if the product is embedded code required for a singular application. Instead, what is needed is a clear assessment of the need for reusability, portability, and maintainability prior to selecting a particular development methodology."

But in deciding what development model to use, companies shouldn't automatically throw their old system out the window, cautions Bob Kay, president of Elite Engineering, Inc. (Westlake, CA). "Economics and company culture have a great deal to do with selecting an appropriate software development methodology."

Companies can find themselves in trouble when attempting to change their development process, says Jim Barley, president of Business and Regulatory Consultants (Laguna Niguel, CA). "Often there is no one at the company who has a good grasp of FDA requirements. The company may have too many hands involved in the design process, or may not know how to implement a formal change control program. A company that has an experienced regulatory staff can sometimes come out of this okay, but too often the company tries to tweak its process and ends up making matters worse and even getting into trouble with FDA."

In some instances, the trouble comes from attempting to solve systemic problems by using more-sophisticated development tools. "Many companies seeking to solve their software engineering problems have invested heavily in more tools," says Bousquet. "But usually the root of a problem is in the engineering process, which needs to be evaluated and improved."

Companies need to avoid the trap of selecting a development process that doesn't lend itself to the tools they have, or of buying tools that don't integrate well into their existing process. These can be difficult problems because of the complex nature of the development tools that are now becoming available.

"Although industry has come a long way in the past few years, there is still a long way to go before the new tools will be standardized," says Rubenacker.

"The technology is improving," agrees Kay, "but we are still building on software platforms that are not fully understood, and there are some basic problems in software development that have still not been solved."

Recent advances in software development have included efforts to make wider use of existing methods as well as unique combinations of methods. Rubenacker notes that several companies have attempted to use graphical user interfaces, but usually not as device control software. "There have also been some attempts to create hybrids that combine object-oriented programming with other types of programming, in order to match the development culture of the company. But in medical devices, embedded software systems are typically not yet using object-oriented programming."

The most difficult problems arise because of the growing complexity of the devices themselves, says Kay. "No single person can now understand such systems well enough to track the systemic impact of making changes on a subsystem. Understanding even a simple handheld analyzer now requires the expertise of a chemist, a software engineer, and a process engineer. To make matters worse, there are no integrated tools to help perform software analysis or to carry out verification and validation."

Companies whose products encounter problems in the field can find themselves in a difficult situation. "In the short term, these companies are simply looking for a fix that will get their products back onto the market, but in the long term they need a solution that will enable the company to restructure its development process so that it consistently meets FDA requirements," says Barley.

MIL-SPEC AND BEYOND

Despite FDA's overpowering influence on the U.S. device industry, in the area of software development the agency has generally taken a back seat to others. Many of its current and projected software requirements are based on standards and processes that originated in the defense and aerospace industries, and many others have been developed by national and international standards-writing bodies.

"The key elements of FDA's design control requirements are no different from those in the defense industry," says George Brower, deputy director at Analex Corp. (Littleton, CO), an independent verification and validation consulting firm. "The only difference is that FDA doesn't use a standard format like the Pentagon uses."

The similarities among the software standards for defense contracts and medical devices have invited more than a few comparisons between the two industries. In the aftermath of budget cutbacks that reduced or eliminated many military contracts, Defense Department vendors have been eager to ride the wave of technology transfer into the device industry. Some of what they've encountered has not met their expectations.

"In the device industry, testing is generally not as rigorous as in the defense industry," notes Brower. "Often device software is tested without the system in mind, rather than as a part of a whole system. Device companies also tend to be weak in testing the backup modes and failure conditions related to their devices."

To some, however, such differences have been advantageous. "Rigid military and aerospace approaches to medical device software haven't been very influential or successful so far," says Rubenacker. "In part, this is because FDA recognizes that medical device companies come in many sizes with many cultures, and that such a rigid approach might unnecessarily restrict them.

"Instead, FDA has for years adopted a flexible approach: its guidances are voluntary, its templates are regarded as merely examples, and it acknowledges standards compiled by IEEE and ANSI [American National Standards Institute] that are appropriate for the field. In some ways, in fact, IEEE and ANSI have had more of an impact on device software development than have the military or aerospace industries."

How long the device industry will enjoy such an advantage, however, is doubtful. "Device software technology is changing and becoming increasingly complex," says George. "As a result, a systems engineering approach is increasingly the method of choice. FDA is encouraging companies to take such an approach when appropriate, and standards in this area are being developed even at the international level."

Thus U.S. device companies may also face the challenge of meeting software standards for the international market. To make this simpler, FDA's stated policy is to harmonize its requirements with similar international standards such as ISO 9000-3, the software quality standard compiled by the International Organization for Standardization (ISO).

"At present there is still a gap between FDA's expectations and the requirements of ISO 9001-3, but that gap is narrowing," says George. "ISO 9001-3 is a voluntary standard and device companies approve of the latitude that it gives them."

Barley advises clients that expect to do business in international markets to make full use of the ISO certification process. "ISO certification requires a lot of time and paperwork. If a company is getting ISO certified, they should also make a point of using only ISO-certified vendors for their raw materials and components."

Wherever standards have originated, device software developers are now in the process of making them their own. "In the past, the most sophisticated processes--those with the greatest rigor and traceability--originated in the mission-critical military, aerospace, and defense sectors," says Mesa's Bousquet. "But now, many device companies are starting to realize that engineering process rigor can solve and even prevent lots of problems in complex software development projects."

LIABILITY DRIVES IMPROVEMENTS

One problem that is driving many of the improvements in the device software development process is product liability. In the past, device companies were sometimes satisfied with quick-and-dirty software development processes that enabled them to rapidly produce a prototype for testing, but those days are gone. "As recently as three years ago a company could have avoided performing software verification and validation because FDA's requirements weren't set in stone," says Barley. "But now, the expectations have been pretty well defined. Companies should know that the agency expects them to perform software V&V."

Kay goes a step further. "There's no question that FDA expects software design houses to implement design control, so nonperformance is not an option," he says. "Performance of design verification and validation is an automatic requirement of all our contracts."

The disadvantage of such a requirement is that it increases companies' upfront development costs. But software experts agree that more work in the early stages of development pays off in the long run. "The costs associated with repairing a released software defect are exponentially greater than the costs of designing the software right the first time," says Bousquet. "The financial liability involved in cutting corners is just too great; with some of these systems, lives are at stake."

In the future, it may become even more critical that companies meet FDA requirements to the letter. The device industry is now awaiting the Supreme Court's decision in the case of Medtronic v. Lohr, which could determine whether FDA approval can be used as part of a legal defense against charges of negligence or lack of oversight. If that defense is upheld, failure to meet FDA requirements could by implication open firms up to even greater product liability.

To address all of these issues, software experts uniformly advise that companies follow best-practice guidelines in their development processes. "Companies should conduct criticality analyses for their own design processes, and should also see that their software vendors do the same," says Brower. "The device company should know and be prepared to show that its software vendor has an adequate software development process. Off-the-shelf software is available to assist in this task."

TECHNOLOGICAL ADVANCES

Over the past decade, advances in microelectronics have enabled designers to dramatically reduce the size of many medical devices. Meanwhile, new software technologies have made it possible for these ever-smaller hardware packages to perform a greater variety of tasks than ever before.

Market pressures have a lot to do with the character of a medical device and its software. Many companies have begun to experience a shrinking market share as a result of the pressures of managed care. In response, some have turned to software as an inexpensive way to enhance product performance and thereby recapture their market. At the same time, says Kay, "investors like bells and whistles. They perceive high value in the ability of a device to perform multiple tasks."

"The direction of technological advance is toward greater sophistication in finished devices, with software as the driver that enables a device's capabilities to be expanded," agrees Brower. "New generations of software are bringing new flexibility--but with this flexibility comes responsibility.

"People need to be reined in, kept within the established specs of the project," he adds. "Software developers are prone to add more bells and whistles, to say 'for just a little more money we can add such-and-such feature.' But some enhancements shouldn't be made."

The next wave of advances could make matters even more complicated for software developers. "The next big area of emphasis will be in information sharing," predicts Noblitt & Rueland's Rubenacker. "In the near future, virtually every device that uses software will also include some kind of communications capacity."

Driven in part by the emergence of unified data collection systems--a result of ongoing consolidation among health-care organizations--many designers are now looking at devices not as stand-alone units, but as parts of a much larger system. Even many simple devices are now being set up for data collection and integration into a larger system. Moreover, these communications hookups are not exclusively one-way; some devices are being set up for bidirectional communications.

"Connectivity is definitely happening," agrees RELA's Wood, "but that also means that localized software problems now have the potential for becoming systemwide problems. A small problem that starts in one place will be able to migrate throughout an entire network, and pretty soon a whole health-care organization will find itself in big trouble."

One factor that's sure to cause headaches for software developers is the absence of communications standards for use in such health-care data collection systems. "Without universally accepted standards, it will be difficult to get medical devices from different manufacturers to communicate without incident," says George. "Even simple communications can become complex."

The multiplicity of languages currently in use for device software is one of the major culprits causing such communications problems. Nevertheless, even more languages are being pressed into use, and in some cases a single device may use more than one. Wood cites the example of a hemodialysis device in which Visual Basic (VB) was used to create the user interface, while the device drivers were written in C++. VB uses onscreen icons and visual metaphors to simplify the construction of user interfaces. "VB is often used as a prototyping language because it can draw quickly on a visual tool set and offers fast user feedback," he notes. "And those same features make it a good choice as the driver for a user interface." Wood believes that the increasing variety of programming languages being used in medical devices obliges companies to have a staff of programmers with multilingual capabilities.

George notes that the Digital Imaging and Communications in Medicine (DICOM) standard being developed by the American College of Radiology and the National Electrical Manufacturers Association is a positive step toward resolving such compatibility problems. When a DICOM transmission reaches the receiving end, the system is designed to automatically answer concerns about the quality of the data. However, she adds, "the DICOM standard introduces some compatibility and testing problems of its own."

Whichever solution is adopted, it is clear that it will have to come soon. "Telemedicine is rapidly approaching," says Rubenacker. "And manufacturers who want to capture a share of that market are eager to get on with developing their products. Communications problems represent a stumbling block that can't be allowed to exist for too much longer."

CONCLUSION

With a new FDA policy on the way, device manufacturers may soon have more information about how best to respond to FDA requirements. But other trends in the marketplace will continue to make software development a complex activity. According to industry experts, companies should seek to match their software development process to the risk associated with the project, and this matching should involve consideration of user and patient risk as well as business risk. New technologies don't automatically require high-level software development processes.

Although companies should seek to do only the minimal amount of documentation necessary, they should write a quality plan and audit their development process according to it. That is also what FDA will do, and companies should be prepared to show the agency their software development plan and to demonstrate that they have followed it.

Steven Halasey is executive editor of MD&DI.

Biostatistics and the Analysis of Clinical Data

Medical Device & Diagnostic Industry Magazine | MDDI Article Index

Richard P. Chiacchierini

Glossary of statistical terms

Bibliography

Analysis of the data from a medical device clinical trial or study is one of many critical steps along the path to FDA approval and, ultimately, to the marketplace. It is the culmination of all prior planning and execution of the study protocol. In the course of a proper analysis, underlying assumptions are verified, study populations and sites are checked for comparability, and all primary and secondary study variables are evaluated.

Clinical data can arise from a controlled clinical trial or from other clinical studies that reveal information about the performance of a medical device. The term clinical study encompasses a broad spectrum of situations in which data are gathered in a clinical setting. A clinical trial is a very specific type of clinical study.

Depending on the way a study is conducted, statistical analysis of its data can be variously affected by such design considerations as sample size, comparison groups, masking, or randomization. The particular type of analysis conducted on clinical study data is dictated by the way the study was actually conducted--which may or may not be the same as originally designed. Changes in the protocol during the course of a study will also require changes in the methods of analysis to be used. This article presents the basic framework for a proper statistical analysis of data arising from the conduct of a medical device clinical trial or study. The methods discussed here are extremely powerful, but their effectiveness depends critically on the quality of the data to which they are applied. No statistical method, regardless of its sophistication, can overcome major data weaknesses that arise from seriously flawed study design or conduct.

Starting Points. The device manufacturer should recognize at the outset that analyzing the data from a clinical study is a painstaking and expensive proposition. Despite the common misconception that data analysis is "simple and straightforward, requiring little time, effort, or expense," statisticians know that "careful analysis requires a major investment of all three" (Friedman, et al., p. 241; see bibliography, p. 56). In recent years, the common misconception has been amplified by the growing number of user-friendly computer software packages that seemingly promise to make data analysis effortless. But giving the analysis of clinical data less effort than it requires often leads to incorrect or inappropriate analyses that cause major delays in FDA's product review process. Agency reviewers are skeptical of statements made by a sponsor that are not supported by a proper and appropriate analysis.

A good analysis should start with an analytical strategy. The strategy should be sketched in rough form at the time the protocol is written and refined as the study or trial proceeds to completion. It should describe in general terms:

  • The anticipated analysis procedures.
  • The basis for the sample size.
  • The primary and secondary variables.
  • The subgroups, if any, that will be investigated by hypothesis tests.
  • The influencing variables (covariates) that are important, and why they are important.

Although refinement of the analytical strategy should not be taken to include wholesale changes that drastically alter the intention of the original study, it may include the addition of greater detail that moves the initial strategy from generality to specificity. The original strategy document should provide a skeleton for the analytical scheme and the refinements should provide the meat.

At first glance, many analytical methods may appear suited to the data, but only a few are likely to have underlying assumptions that are truly consistent with the data. To determine the correct analytical technique to be used, the manufacturer needs to know the answers to a number of critical questions:

  • Why were the data gathered?
  • How were the data gathered?
  • From whom were the data gathered?
  • When and for how long were the data gathered?
  • Where were the data gathered?

A database with rows representing patients and columns representing variables can yield summary data tables that might appear capable of analysis by a number of different methods. In actuality, however, there are likely to be a very limited number of methods (possibly one or two) for which the analytical assumptions are satisfied. Use of methods whose analytical assumptions are not satisfied is inappropriate, and their results are considered unreliable.

Although the term statistical analysis embraces an ever-increasing number of methods that might be used by a medical device sponsor, all such analytical methods can be classified into two main groups: hypothesis testing and estimation. In hypothesis testing, the researcher usually compares the occurrence of one or more features of interest in two or more groups of patients. Most hypothesis testing in medical device clinical trials compares the mean, proportion, or other features of the device-treated group to the same features in the control group. Features could involve such measures as the mean time to healing or hemostasis, or the proportion of patients who showed a preselected degree of improvement.

In estimation, the researcher's interest is to determine the relative value of a characteristic of interest in a group under study. The estimated value is usually accompanied by a statement about its certainty, or confidence interval, which is expressed as a percentage. Estimation is a necessary part of hypothesis testing, but it is not the culmination of the method. Estimation is also important in the analysis of safety variables. For example, in a clinical study of a "me-too" device, where effectiveness is not an issue, FDA and the sponsor may be interested in estimating the proportion of patients that might experience a particular complication. To ensure that the estimate has a high probability of being accurate, the researchers would also need to determine the confidence interval for it.
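
To make the estimation step concrete, the following minimal sketch (in Python, with hypothetical counts, assuming SciPy is available) computes a complication rate and a two-sided 95% confidence interval using a normal approximation to the binomial:

```python
# A minimal sketch of estimating a complication rate with a 95% confidence
# interval via a normal approximation to the binomial; counts are hypothetical.
import math
from scipy.stats import norm

complications = 12   # hypothetical number of patients with the complication
n = 200              # hypothetical number of patients studied

p_hat = complications / n
z = norm.ppf(0.975)  # critical value for two-sided 95% confidence
half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)

print(f"estimated rate: {p_hat:.3f}")
print(f"95% CI: ({p_hat - half_width:.3f}, {p_hat + half_width:.3f})")
```

For small samples or rare complications, an exact (Clopper-Pearson) or Wilson interval would be a better choice than the normal approximation.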

No single presentation on the statistical analysis of medical device clinical data can be sufficiently comprehensive to cover all aspects of this complicated and diverse methodology. Although this article is not intended to provide new or provocative material, it will cover the basic tenets that form the foundation for a proper analysis of clinical study data. These tenets are divided into three main sections: preliminary analysis, comprehensive analysis, and analytical interpretation.

PRELIMINARY ANALYSIS

Authors of textbooks about statistical data analysis rarely discuss the need to match the analytical method to the character of the data. Often, they simply assume that the reader is sophisticated enough to investigate whether the variance of the groups being compared is sufficiently similar, or whether the distribution of the data is suitable for the analytical method being proposed. This is clearly a leap of faith that is not supported by experience.

In the evaluation of any set of data, from whatever source, it is essential to begin with an investigation of the data's basic character.

  • What is the nature of the distribution of the primary, secondary, and influencing variables?
  • Is the distribution of variables consistent with normal (Gaussian) or another well-known distribution?
  • If the data are not normally distributed, can they be changed by a function (a transformation) that preserves their order, but brings them into conformity with well-known assumptions about their distribution?
  • Is the sample of adequate size such that normality of the means can be assumed even if the data are not normally distributed?
  • Are the variances of the subgroups to be compared equal?

These questions are the realm of descriptive statistics. They can be answered by applying simple, well-known tests or by inspecting rudimentary data plots such as histograms or box plots. Such questions are essential for enabling the statistician to validate the assumptions that underlie the data, and to select the most appropriate analytical method consistent with the data.
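
As an illustration of these preliminary checks, the sketch below (Python with SciPy, on hypothetical treatment and control measurements) applies the Shapiro-Wilk test for consistency with a normal distribution and Levene's test for equality of variances:

```python
# A minimal sketch of the preliminary checks described above, on hypothetical
# treatment and control measurements. Shapiro-Wilk tests consistency with a
# normal distribution; Levene's test compares the group variances.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
treated = rng.normal(loc=5.0, scale=1.0, size=30)   # hypothetical data
control = rng.normal(loc=4.5, scale=1.1, size=30)   # hypothetical data

for name, sample in [("treated", treated), ("control", control)]:
    w, p = stats.shapiro(sample)
    print(f"{name}: Shapiro-Wilk p = {p:.3f}")      # small p suggests non-normality

stat, p = stats.levene(treated, control)
print(f"Levene equal-variance p = {p:.3f}")         # small p suggests unequal variance
```

Histograms or box plots of the same samples would complete the picture, revealing skewness or outliers that the formal tests may miss.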

Basic Character of the Data. Clinical data are similar to other forms of data in that there are two types of variables, quantitative and qualitative. Quantitative variables are numbers that can have any value within some acceptable range. For example, a person's weight in pounds could be 125.73. Qualitative variables, however, must conform to discrete classes, and are usually characterized numerically by whole numbers. For instance, a patient who is disease-free could be characterized by a zero, and a patient who has the disease could be classified as a one. The analytical procedures appropriate for these two types of variables are diverse. While there have recently been tremendous advances in the analysis of qualitative data, the techniques for analyzing quantitative variables remain more powerful because there is more numerical information in a number like 125.73 than there is in a zero or a one.

The distribution of variables in a sample is a critical factor in determining what method of analysis can be used. The normal, or Gaussian, distribution resembles the symmetrical bell-shaped curve by which most students are graded throughout their scholastic careers. It is fully characterized by two features: the mean, a measure of the location of the distribution, and the variance, a measure of its spread. Many well-known statistical methods for analyzing means or averages--such as the t-test or the paired t-test--are based on the normal distribution. Such methods rely on normality to ensure that the mean represents a measure of the center of the distribution.

Because statistical theory holds that the means of large samples are approximately normally distributed, an assumption of normality becomes less important as sample sizes increase. However, when sample sizes are small, as they are likely to be in most medical device clinical studies, it is crucial to determine whether the data to be analyzed are consistent with a normal distribution or with another well-characterized distribution.

Most common statistical tests of quantitative variables, including the t-tests and analysis of variance (ANOVA), are tests of the equality of the measures of location belonging to two or more subgroups that are assumed to have equal variance. A measure of location, such as a mean or median, is a single number that best describes the placement of the distribution (usually its center) on a number line. Because equal variance provides the basis of nearly all tests that involve measures of location, in such cases an assumption of equal variance is more critical than an assumption of normality--even when the tests do not rely on any specific distribution of the data (called nonparametric tests). If the variances are not equal among the subgroups being compared, it is frequently possible to find a formula or function (a transformation) that preserves order and results in variables that do have equal variance.
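
The following small illustration, on hypothetical right-skewed data, shows how an order-preserving transformation (here, the logarithm) can bring unequal variances into line before a location test is applied:

```python
# A small illustration, on hypothetical right-skewed data, of an
# order-preserving transformation (the logarithm) that often equalizes
# group variances before a test of location is applied.
import numpy as np

rng = np.random.default_rng(1)
group_a = rng.lognormal(mean=1.0, sigma=0.4, size=25)   # hypothetical data
group_b = rng.lognormal(mean=1.2, sigma=0.4, size=25)   # hypothetical data

print("raw-scale variances:",
      round(np.var(group_a, ddof=1), 3), round(np.var(group_b, ddof=1), 3))
print("log-scale variances:",
      round(np.var(np.log(group_a), ddof=1), 3),
      round(np.var(np.log(group_b), ddof=1), 3))
```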

When considering the distribution of data, it is also important to look at a picture of them. Data can be plotted for each group under consideration to determine whether the distribution is shifted toward higher or lower values (skewed). The presence of one or more values that are much higher or lower than the main body of data indicates possible outliers. Data plots can also help to locate other data peculiarities. Common, statistically sound adjustment methods can be used to correct for many types of data problems.

Baseline Variable Evaluation. Once the character of the variables of interest has been established, the analysis can test for comparability between the treatment and control groups. Comparability is established by performing statistical tests to compare demographic factors, such as age at the time of the study, age at the time of disease onset, or gender, or prognostic factors measured at baseline, such as disease severity, concomitant medication, or prior therapies. Biased results can occur when the comparison groups show discrepancies or imbalances in variables that are known or suspected to affect primary or secondary outcome measures. For instance, when a group includes a large proportion of patients whose disease is less advanced than in those of the comparison group, the final analysis will usually favor the outcomes for the former group, even without an effect that is due to the device.
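
A minimal sketch of such baseline comparability testing, on hypothetical data, might pair a two-sample t-test for a quantitative factor (age) with a chi-square test for a qualitative one (gender):

```python
# A minimal sketch of baseline comparability testing on hypothetical data:
# a two-sample t-test for age and a chi-square test for gender balance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
age_treated = rng.normal(58, 10, size=40)   # hypothetical ages, years
age_control = rng.normal(60, 10, size=40)

t, p_age = stats.ttest_ind(age_treated, age_control)
print(f"age: t = {t:.2f}, p = {p_age:.3f}")

# gender counts: rows = group, columns = male/female (hypothetical)
table = np.array([[22, 18],
                  [17, 23]])
chi2, p_gender, dof, _ = stats.chi2_contingency(table)
print(f"gender: chi2 = {chi2:.2f}, p = {p_gender:.3f}")
```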

About 30 years ago, another example of this effect occurred in a study that was comparing the effectiveness of surgery and iodine-131 for treatment of hyperthyroidism. The investigators found the seemingly inconsistent result that patients who received the supposedly less-traumatic radiation therapy had a much higher frequency of illness and death than those who underwent surgery. An investigation of the baseline characteristics of the two groups revealed that the patients selected for the surgery group were younger and in better general health than those selected for the iodine treatment. The inclusion criteria for the surgery group were more stringent than those for the iodine group because the patients had to be able to survive the surgery. In this example, noncomparability resulted in an inconsistent finding that was resolved only through investigation.

It is desirable to perform comparability tests using as many demographic or prognostic variables simultaneously as the method of analysis will allow. The reason for using this approach is that the influence of a single demographic or prognostic characteristic on the outcome variable may be strongly amplified or diminished by the simultaneous consideration of a second characteristic. However, the size of most medical device clinical studies is rarely sufficient to allow the simultaneous consideration of more than two variables. More commonly, the sample size of the trial will allow the investigator to consider only one variable at a time.

As part of their comparability testing, one characteristic that manufacturers must always evaluate is the study site. Such an analysis should include not only the demographic and prognostic factors, but also the outcome variables. This evaluation is important because it provides the major basis for pooling the data from various clinical sites, which is very often essential to meeting the study sample size requirement.

Imbalances detected in comparability testing do not necessarily invalidate study results. By knowing that such differences exist, however, the analyst can account for their presence when comparing the outcomes data from the treatment and control groups. Many statistical procedures can be used to adjust for imbalances either before or during the comprehensive analysis, but such adjustments are usually restricted to instances where the extent of the difference is not great. Large differences in variables that affect data outcomes among comparison groups can rarely be adjusted adequately to make the comparison groups comparable.

COMPREHENSIVE ANALYSIS

The methods used for comprehensive analysis of clinical data vary according to the nature of the data, but also according to whether the analysis focuses on the effectiveness or the safety of the device. Selection of an appropriate method must also take into account the nature of the device under study. The following sections outline some of the statistical methods available for comprehensive analysis of effectiveness data for in vitro diagnostic products and therapeutic devices, and for assessing safety-related data.

Effectiveness Analyses for Diagnostic Devices. In vitro diagnostic devices require statistical techniques that are quite specialized. Usually the analysis is based on a specimen, such as a vial of blood, collected from a patient. The same specimen is analyzed by two or more laboratory methods to detect an analyte that is related to the presence of a condition or disease. Thus, each specimen results in a pair of measurements that are related to one another. In the case of a new method devised to detect the amount of serum cholesterol, for example, each blood sample would be used to produce two measures of serum cholesterol, one from the conventional method and one from the new method.

The statistical treatment of such related (or correlated) data is very different from that of unrelated (or uncorrelated) data because both measurements are attempting to measure exactly the same thing in the same individual. Generally, if both laboratory measurements result in a quantitative variable, the first analysis attempts to measure the degree of relationship between the measurements. The usual practice is to perform a simple linear regression analysis that assumes that the pairs of values resulting from the laboratory tests are related in a linear way.

In linear regression analysis, a best-fit line through the data is found statistically, and the slope is tested to determine whether it is statistically different from zero. A finding that the slope differs from zero indicates that the two variables are related, and careful attention should be paid to the correlation coefficient, a measure of the closeness of the points to the best-fit line. A correlation coefficient with a high value, either positive or negative, indicates a strong linear relationship between the two variables being compared. However, this correlation is an imperfect measure of the degree of relationship between the two measurements (i.e., although a good correlation with a coefficient near one may not indicate good agreement between the two measurements, a low correlation is almost surely indicative of poor agreement).

Although correlation can indicate whether there is a linear relationship between two laboratory measurements, it does not provide good information concerning their degree of equivalence. Perfect equivalence would be shown if the correlation were very near one, the slope very near one, and the intercept very near zero. It is possible to have a very good relationship between the two measures, but still have a slope that is statistically very different from one and an intercept that is very different from zero. Such a situation usually suggests that one of the two measurements is biased relative to the other.
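
The sketch below illustrates this regression logic on hypothetical paired measurements (it assumes a recent version of SciPy, whose linregress result exposes the standard errors): the slope is tested against zero for the existence of a relationship, and the slope and intercept are tested against one and zero for equivalence:

```python
# A minimal sketch of the method-comparison regression described above, on
# hypothetical paired measurements of the same analyte by two methods.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
conventional = rng.uniform(150, 300, size=40)                # hypothetical values
new_method = 5 + 1.02 * conventional + rng.normal(0, 8, 40)  # hypothetical values

res = stats.linregress(conventional, new_method)
n = len(conventional)
print(f"slope = {res.slope:.3f}, intercept = {res.intercept:.1f}, "
      f"r = {res.rvalue:.3f}")
print(f"p (slope = 0): {res.pvalue:.4f}")       # relationship exists?

# t-tests of slope = 1 and intercept = 0, each on n - 2 degrees of freedom
t_slope = (res.slope - 1.0) / res.stderr
t_inter = res.intercept / res.intercept_stderr
for label, t in [("slope = 1", t_slope), ("intercept = 0", t_inter)]:
    p = 2 * stats.t.sf(abs(t), df=n - 2)
    print(f"p ({label}): {p:.4f}")              # equivalence plausible?
```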

If the conventional method used in the testing is a true "gold standard" or reference method, it may be possible to adjust the chemical or electronic measurement system of the device being evaluated to make the slope one and the intercept zero. If the conventional method is not a reference method or gold standard, then the sponsor is faced with the possibility that the new method under test may be better than the one to which it is being compared. In such a situation, tinkering with the device to force equivalence may be inadvisable.

When the conventional method is not a reference method or gold standard, the degree of agreement can be assessed by another method that goes beyond regression analysis. Recognizing that the absence of a gold standard means that the conventional method is imperfect, Bland and Altman devised a technique that compares the difference between the two measurements plotted against their mean (see bibliography, p. 56). The analyst establishes a confidence interval for the difference between the two measurements and assesses the number of differences falling within the interval. If the number is similar to that predicted by theory, and the width of the interval is small enough to be clinically acceptable, then the new measurement system is considered to be in good agreement with the conventional method. However, the determination that an interval's width is clinically acceptable cannot be established by statistical techniques and must involve the judgment of a health professional.
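
A minimal sketch of the Bland-Altman calculation, on hypothetical paired measurements, computes the mean difference and its 95% limits of agreement and checks what fraction of the observed differences fall within them; whether the interval's width is clinically acceptable remains a medical judgment:

```python
# A minimal, plot-free sketch of a Bland-Altman agreement analysis on
# hypothetical paired measurements (new method vs. conventional method).
import numpy as np

rng = np.random.default_rng(4)
conventional = rng.uniform(150, 300, size=40)              # hypothetical values
new_method = conventional + rng.normal(2.0, 6.0, size=40)  # hypothetical values

diff = new_method - conventional
mean_diff = diff.mean()
sd_diff = diff.std(ddof=1)
lower, upper = mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff

inside = np.mean((diff >= lower) & (diff <= upper))
print(f"mean difference: {mean_diff:.2f}")
print(f"95% limits of agreement: ({lower:.2f}, {upper:.2f})")
print(f"differences inside the limits: {inside:.1%}")  # roughly 95% expected
```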

Establishing agreement between the quantitative measures is only the first step in the analysis of an in vitro diagnostic device. Because such devices--whether they give quantitative or qualitative results--are diagnostic, the analyst must also assess the ability of the device to detect the condition. Such an assessment requires that a value (a cutoff value) that specifies the disease state or condition has been identified for each measurement system. It is critical that this value be established on a different set of data from the measurements currently under analysis; it is unacceptable to use a value that characterizes a disease state by reference to its own data set.

The next step is to classify the patients into two groups, those with the condition and those without it. This is performed for both the new method and the conventional method by reference to a qualitative outcome or by use of the cutoff value. The result is a two-by-two table in which the four cells represent the number of patients found negative for the disease or condition by both measurement methods, the number found positive by the conventional method but negative by the new method, the number found negative by the conventional method but positive by the new method, and the number found positive by both methods. From this table it is possible to estimate the sensitivity, specificity, predictive value positive, and predictive value negative, along with their respective confidence intervals. These values are usually compared with those for other classification systems for the disease or condition under test to determine whether they are close to those known values.
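
From such a two-by-two table, the four performance measures are simple ratios, as the following sketch with hypothetical cell counts shows:

```python
# A minimal sketch of the two-by-two classification analysis described above,
# with hypothetical cell counts (conventional method treated as the reference).
tp = 45   # positive by both methods
fn = 5    # positive by conventional method, negative by new method
fp = 8    # negative by conventional method, positive by new method
tn = 142  # negative by both methods

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)   # predictive value positive
npv = tn / (tn + fn)   # predictive value negative

print(f"sensitivity = {sensitivity:.3f}, specificity = {specificity:.3f}")
print(f"PPV = {ppv:.3f}, NPV = {npv:.3f}")
```

Confidence intervals for each of these proportions can be computed with the binomial methods sketched earlier.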

The next step in the analysis of diagnostic devices involves either a relative risk assessment or a receiver operating characteristic (ROC) analysis. There is software available to perform either of these analyses. The relative risk is a ratio of the risk of the disease among patients with a positive test value to the risk of disease among patients with a negative test value. The relative risk analysis is particularly effective and can be done by use of either a logistic regression or a Cox regression depending on whether the patients have constant or variable follow-up, respectively. ROC analysis provides a measure of the robustness of the cutoff value as a function of sensitivity and specificity.
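
The following sketch traces an ROC curve on hypothetical analyte values by sweeping the cutoff across the observed data and integrating sensitivity against the false-positive rate; it is illustrative only, and the packaged routines mentioned above would normally be used:

```python
# A minimal sketch of an ROC analysis on hypothetical analyte values:
# sweep the cutoff, record sensitivity and false-positive rate, and
# integrate the curve (trapezoid rule) to obtain the area under it.
import numpy as np

rng = np.random.default_rng(5)
diseased = rng.normal(6.0, 1.0, size=50)    # hypothetical analyte values
healthy = rng.normal(4.5, 1.0, size=150)    # hypothetical analyte values

values = np.concatenate(
    [[-np.inf], np.sort(np.concatenate([diseased, healthy])), [np.inf]])
sens = np.array([(diseased >= c).mean() for c in values])
fpr = np.array([(healthy >= c).mean() for c in values])

order = np.argsort(fpr)                     # order points by increasing FPR
x, y = fpr[order], sens[order]
auc = float(np.sum(np.diff(x) * (y[1:] + y[:-1]) / 2))
print(f"AUC = {auc:.3f}")   # 0.5 = uninformative, 1.0 = perfect separation
```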

These techniques, described more fully below, allow the analysis of the measurement method along with any potential influencing variable. If the final model, fit to the data, contains a statistically significant contribution that is attributable to the sponsor's measurement system--whether or not there are significant effects attributable to other covariates--the test method provides an independent means of assessing the disease or condition. The reason for this powerful interpretation is that the test resulting from these methods is based on a statistic that has been adjusted for the presence of other significant covariates.

Finally, if the device is diagnostic for a condition that takes a relatively long time to develop (such as cancer), the analyst should evaluate the lead time afforded by the device. Sometimes this evaluation is a simple mean with a corresponding confidence interval. For these types of devices to be effective, the interval should not include zero. In addition, the farther away the lower limit of the interval is from zero, the better.

Effectiveness Analysis for Therapeutic Devices. In-depth analysis of a therapeutic device usually involves hypothesis testing to determine whether the device maintains or improves the health of patients. In some cases, FDA may permit a sponsor to compare a test treatment against a particular device operating performance characteristic (OPC). Even in such cases, however, the result will be a test of the hypothesis that the treatment is better than or equal to a constant, the OPC. Selection of an appropriate method for in-depth analysis of data from such trials or studies depends on many factors, such as:

  • Is the primary variable quantitative or qualitative?
  • Was the primary variable measured only once or on several occasions?
  • What other variables could affect the measurement under evaluation?
  • Are those other variables qualitative (ordered or not) or quantitative?

Quantitative Primary Variables. If the primary variable under evaluation is quantitative, selection of an appropriate method of analysis will depend on how many times that variable was measured and on the nature of any other variables that need to be considered. If there is only a single measurement for each variable, and there are no differences among the potential covariates belonging to the treated and control groups, the appropriate method of analysis may be a parametric or nonparametric ANOVA or t-test. For example, a study of a new cardiovascular stent that is expected to offer better protection against restenosis, with all other things being equal, could compare the six-month luminal diameter by this method.
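
For the stent example, a minimal sketch of this single-measurement comparison (Python with SciPy, hypothetical six-month luminal diameters in millimeters) would be:

```python
# A minimal sketch of the single-measurement comparison described above:
# a two-sample t-test on hypothetical six-month luminal diameters (mm).
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
stent = rng.normal(2.9, 0.4, size=35)     # hypothetical diameters, mm
control = rng.normal(2.6, 0.4, size=35)   # hypothetical diameters, mm

t, p = stats.ttest_ind(stent, control)    # assumes roughly equal variances
print(f"t = {t:.2f}, p = {p:.4f}")

# a nonparametric alternative if normality is doubtful
u, p_mw = stats.mannwhitneyu(stent, control, alternative="two-sided")
print(f"Mann-Whitney p = {p_mw:.4f}")
```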

The choice of an appropriate analytical method changes if the covariates belonging to the two comparison groups differ and are measured qualitatively. Such cases may require use of a more complex analysis of variance or an analysis of covariance (ANCOVA). The ANCOVA method is particularly suited to analyzing variables that are measured before and after treatment, assuming that the two measurements are related in a linear or approximately linear manner. Using ANCOVA, the statistician first adjusts the posttreatment measure for its relationship with the pretreatment measure, and then performs an analysis of variance. Using the example of the cardiovascular stent, ANCOVA would be a suitable method of analysis if the amount of improvement in the six-month luminal diameter of the artery treated by the stent depended on the original luminal diameter of the artery.
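
A minimal ANCOVA sketch for the stent example, using the statsmodels package on hypothetical data, adjusts the six-month diameter for its baseline value before testing the group effect:

```python
# A minimal ANCOVA sketch on hypothetical data: six-month luminal diameter
# adjusted for baseline diameter, with a treatment-group term.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 40
pre = rng.normal(2.0, 0.3, size=2 * n)                      # baseline, mm
group = np.repeat(["stent", "control"], n)
effect = np.where(group == "stent", 0.35, 0.0)              # hypothetical effect
post = 0.8 * pre + effect + rng.normal(0, 0.2, size=2 * n)  # six-month, mm

df = pd.DataFrame({"pre": pre, "post": post, "group": group})
model = smf.ols("post ~ pre + C(group)", data=df).fit()
print(model.summary().tables[1])  # the group coefficient is the adjusted effect
```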

In medical device studies, outcome variables are often measured more than once for each study subject. Although there are very powerful methods of statistical analysis that can be applied to such situations, they require what statisticians call balance; for example, every time a variable is measured it must be measured for every patient. A balanced repeated measures ANOVA can be performed with or without covariates. With covariates, this method reveals the effect of each patient's covariate value on the outcome variable, the effect of time for each patient, and whether the effect of time for each patient is changed by different values of the covariate. Continuing with the stent example, a repeated measures ANOVA could be applied to evaluate measurements of luminal diameter before implantation and at 3, 6, 9, and 12 months after implantation, and of the location of coronary lesions. In this case, the primary outcome variable is luminal diameter, and the covariate is the location of the lesions.
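
The sketch below runs a balanced repeated-measures ANOVA on hypothetical data using statsmodels' AnovaRM, which enforces exactly the balance discussed above (every patient measured at every visit); for simplicity it omits the lesion-location covariate:

```python
# A minimal sketch of a balanced repeated-measures ANOVA on hypothetical
# data: luminal diameter measured for every patient at every visit.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(8)
patients = np.arange(20)
visits = ["0", "3", "6", "9", "12"]   # months after implantation

rows = []
for p in patients:
    base = rng.normal(2.8, 0.3)       # hypothetical per-patient baseline
    for i, v in enumerate(visits):
        rows.append({"patient": p, "visit": v,
                     "diameter": base - 0.05 * i + rng.normal(0, 0.1)})
df = pd.DataFrame(rows)

res = AnovaRM(df, depvar="diameter", subject="patient", within=["visit"]).fit()
print(res.anova_table)                # F-test for the effect of time
```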

A repeated measures ANOVA also can be used if a few patients missed one or possibly two measurements. However, doing so requires the statistician to use sophisticated statistical algorithms in order to estimate the missing outcome measures, and these can present problems. To find solutions, it is sometimes necessary to restrict the data or make other assumptions that may weaken the resulting statistical conclusions.

Some studies result in a quantitative outcome variable and one or more quantitative covariates. In this situation, multiple regression methods are useful in evaluating outcome variables (called dependent variables), especially if the study involves several levels or doses of treatment as well as other factors (independent variables). Regression is a powerful analytical technique that enables the statistician to simultaneously assess the primary variables as well as any covariates.

The regression model is an equation in which the primary outcome variable is represented as a function of the covariates and other independent variables. The importance of each independent variable is assessed by determining whether its corresponding coefficient is significantly different from zero. If the coefficient is statistically greater than zero, then that independent variable is considered to have an effect on the dependent variable and is kept in the model; otherwise, it is discarded. The final model includes only those variables found to be statistically related to the dependent variable. The model enables the statistician to determine the strength of each independent variable relative to the others as well as to the device treatment. In the stent example, a multiple regression analysis would be appropriate for data where the luminal diameter was measured twice (say, at baseline and at 6 months), and the length of patient lesions was measured as an independent variable.
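
Continuing the stent example, a minimal multiple regression sketch (statsmodels, hypothetical data) represents the six-month diameter as a function of baseline diameter, lesion length, and treatment, and then inspects which coefficients differ significantly from zero:

```python
# A minimal multiple regression sketch on hypothetical data: six-month
# luminal diameter modeled on baseline diameter, lesion length, and
# treatment; nonsignificant terms would be dropped from the final model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
n = 80
baseline = rng.normal(2.0, 0.3, size=n)        # mm, hypothetical
lesion_len = rng.normal(12, 3, size=n)         # mm, hypothetical
treated = rng.integers(0, 2, size=n)           # 1 = stent, 0 = control
diameter6 = (0.7 * baseline - 0.02 * lesion_len + 0.3 * treated
             + rng.normal(0, 0.2, size=n))

df = pd.DataFrame({"diameter6": diameter6, "baseline": baseline,
                   "lesion_len": lesion_len, "treated": treated})
fit = smf.ols("diameter6 ~ baseline + lesion_len + treated", data=df).fit()
print(fit.params)
print(fit.pvalues)  # retain terms whose coefficients differ from zero
```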

BIOCOMPATIBILITY: Manufacturer Use of ODE's Blue Book Memorandum on Biocompatibility Testing

Medical Device & Diagnostic Industry Magazine | MDDI Article Index

Originally published June 1996

Brenda Seidman

Last July, the Office of Device Evaluation (ODE) in FDA's Center for Devices and Radiological Health (CDRH) officially replaced its longstanding use of the 1987 "Tripartite Biocompatibility Guidance for Medical Devices" with a new policy designed to bring the agency's requirements into accord with newer international standards for evaluating the biocompatibility of medical devices.1 In most particulars, ODE's new policy adopts the practices recommended by the International Organization for Standardization (ISO) in its standard on the selection of biocompatibility testing, ISO 10993-1.2 However, the agency's blue book memorandum initiating the policy change also includes a fair amount of information about how the agency intends to modify the ISO standard for use in the United States.3 To make the best use of FDA's new policy, manufacturers should familiarize themselves with the details of ODE's memo, and especially with those areas in which the agency's practice differs from the international standard.

Altogether, ODE's memorandum consists of three parts: a guidance letter from ODE director Susan Alpert, a flowchart to help reviewers determine when testing is necessary as well as what kinds of test data are required, and a two-part matrix of recommended testing categories. Alpert's guidance letter provides general background about FDA's previous use of the Tripartite Guidance and the agency's reasons for adopting the ISO 10993-1 biocompatibility standard. She notes that the ISO standard is intended to be a flexible guidance, and that the agency, consistent with the standard, may recommend several tests that are not included in the ISO document. She emphasizes, however, that FDA's use of the ISO standard is also intended to be flexible. "Although several tests were added to the matrix," Alpert writes, "reviewers should note that some tests are commonly requested while other tests are to be considered and only asked for on a case-by-case basis. Thus, the modified matrix is only a framework for the selection of tests and not a checklist of every required test. Reviewers should avoid proscriptive interpretation of the matrix. If a reviewer is uncertain about the applicability of a specific type of test for a specific device, the reviewer should consult toxicologists in ODE."3

Alpert's letter also announces that the agency intends to develop device-specific "toxicology profiles" that will assist reviewers in determining appropriate toxicology tests for specific devices. The memorandum provides little information about what FDA expects to be included in these new reviewer guidance materials, or how they might be made available to manufacturers. FDA has been preparing the toxicology profiles since the blue book memorandum was issued last May, and reportedly expects to issue several profiles at once. It is anticipated that the first profiles will be released in early 1997.

As discussed below, the flowchart functions as a decision tree, enabling reviewers of a particular device submission to determine whether it includes sufficient biocompatibility information to meet the agency's requirements. The matrix, which is divided into two tables, provides guidance about what types of testing the agency expects for particular categories of devices. Although it is based on the matrix of ISO 10993, the agency's version incorporates a number of modifications that may change the amount or type of testing required for a particular device. These differences are discussed below.

THE FLOWCHART

As its title implies, the "Biocompatibility Flowchart for the Selection of Toxicity Tests for 510(k)s" is intended to apply primarily to devices undergoing 510(k) review (see Figure 1). For this reason, many of the points considered in the flowchart are designed to enable reviewers to compare the biocompatibility data for a device under review to similar data for its predicate device. FDA's memorandum suggests that the chart may also be applicable to some devices undergoing premarket approval (PMA) review. However, it is not clear from any of the materials included with the memorandum how the agency expects reviewers to apply the flowchart to PMA devices.

In general, the purpose of the flowchart is to enable ODE reviewers to determine whether existing data are sufficient to meet biocompatibility requirements for the device under review or, if they are not sufficient, what additional test data should be submitted by the manufacturer. The chart prompts reviewers of 510(k)s to ask specific questions about a device's materials, manufacturing process, chemical composition, body contact, and sterilization method. Along the way, the chart also directs reviewers to consult device-specific guidance documents, to refer to toxicology profiles, and to seek the assistance of an ODE toxicologist if necessary.

In its final steps, the flowchart prompts reviewers to consider alternative sources of toxicology data (such as device master files and data from previous testing), which may enable a device submission to meet the biocompatibility requirements. Reviewers are also directed to consider whether the submission has included adequate justifications or risk-assessment data for not conducting certain tests. If these sources of information or justifications sufficiently resolve all outstanding questions, the chart advises reviewers to seek concurrence from an ODE toxicologist before definitively concluding that the submission has met its biocompatibility requirements. If the information or justifications do not resolve all questions, the chart advises reviewers that additional toxicology testing is required. Such testing can include some or all of the test categories identified in the FDA matrix, in toxicology profiles, or by toxicologists.

Since this chart was intended for use by ODE reviewers, it may be confusing to many manufacturers. In the absence of a similar guidance document designed specifically for industry, however, this internal document can be a great help to manufacturers. In effect, the flowchart informs manufacturers about ODE's newly adopted policy on biocompatibility and indirectly suggests a method that manufacturers can use for considering biocompatibility issues related to their product submissions. Manufacturers should use the flowchart to help them determine what testing they need to perform on a device or its materials.

To manufacturers, perhaps one of the most useful aspects of the flowchart is the guidance it provides on how to satisfy ODE's biocompatibility requirements without necessarily performing extensive biocompatibility testing. The flowchart presents a variety of methods for demonstrating biocompatibility, not all of which involve testing. For example, the chart leads its user into a series of questions about the similarity of the device and its materials to its predicate:

* Are the materials the same?

* Are the manufacturing processes the same?

* Are the materials' compositions chemically the same?

* Do the devices' materials have the same body contacts?

* Are the devices' sterilization methods the same?

Answering yes to all of these questions is tantamount to meeting biocompatibility requirements. A negative response to any of these questions leads the chart user to further consider material differences, reasons why the differences might not be meaningful, and bench testing data on the device's materials. If significant material differences are identified and appear meaningful, the chart assists the user in identifying the categories of biocompatibility testing that should be addressed.
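
The predicate-comparison logic of the flowchart can be summarized in pseudocode form; the sketch below is an editorial paraphrase, not FDA's actual chart:

```python
# A minimal sketch, not FDA's actual flowchart, of the predicate-comparison
# logic described above: if the device matches its predicate on all five
# points, existing data suffice; otherwise the differences must be shown
# to be non-meaningful, justified in writing, or addressed by testing.
def predicate_comparison(same_materials, same_process, same_chemistry,
                         same_body_contact, same_sterilization):
    answers = [same_materials, same_process, same_chemistry,
               same_body_contact, same_sterilization]
    if all(answers):
        return "biocompatibility requirements met by predicate comparison"
    return ("consider whether the differences are meaningful; justify in "
            "writing or select tests from the matrix or toxicology profile")

print(predicate_comparison(True, True, True, True, False))
```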

Manufacturers should study the flowchart carefully to determine areas in which they may legitimately avoid testing. With the assistance of a toxicologist, if necessary, they should then justify in writing why certain tests recommended in the matrix or toxicology profile are unnecessary.

ISO 10993 AND THE FDA MATRIX

FDA's new matrix is very similar to the ISO 10993-1 testing matrix and reflects the agency's desire to harmonize its testing recommendations with international standards (see Tables I and II). FDA has adopted the body-contact and contact-duration categories used in ISO 10993-1 and includes every test that ISO recommends. However, the similarity of the FDA matrix to that of ISO 10993-1 should not be interpreted as an unqualified adoption of the ISO standard: for certain device categories, FDA has identified additional tests that it recommends manufacturers consider "if applicable."

With regard to the test-selection matrix, FDA's memorandum observes that the agency's recommendations conform to the spirit of the ISO standard. Under the heading of "guidance on selection of biological evaluation tests," ISO 10993-1 states that "Due to the diversity of medical devices, it is recognized that not all tests identified in a category will be necessary or practical for any given device. It is indispensable for testing that each device shall be considered on its own merits: additional tests not indicated in the table may be necessary." By adopting this strategy, FDA in effect acknowledges that, relative to the ISO matrix, additional tests may be necessary for some devices while fewer tests may suffice for others.

Although FDA's memorandum and its attachments do not explicitly recommend protocols, the agency's unwritten policy has been, and continues to be, that manufacturers should use such widely accepted methods as those identified in ISO 10993 and the United States Pharmacopoeia. ODE still expects manufacturers to carefully consider testing methodology and to ensure that testing is designed to evaluate the potential toxicity related to the specific device and its materials.

The FDA matrix uses Xs and Os to distinguish testing recommendations that are identical to ISO's (Xs) from the additional tests that FDA considers possibly applicable (Os). Despite what may appear to be a broadening of ISO's core recommendations, FDA's new matrix has eliminated many tests previously recommended by the Tripartite Guidance. Hemocompatibility (including hemolysis), mutagenicity, and implantation testing are no longer recommended for many contact-type and duration categories. Even counting the "if applicable" recommendations along with the ISO core recommendations, the greatest potential reduction in testing is for surface devices in the mucosal membrane subcategory. As might be expected, the least-affected category is implant devices involving blood: ISO's testing recommendations for these devices were already extensive and in general agreement with the Tripartite Guidance.
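
One way to picture the X/O scheme is as a lookup table keyed by body-contact category and contact duration. The fragment below is a hypothetical, heavily abridged encoding; the entries shown are illustrative and should not be read as a transcription of Tables I and II.

```python
# Hypothetical, abridged encoding of the FDA-modified matrix.
# "X" marks tests shared with ISO 10993-1; "O" marks FDA's additional
# "if applicable" tests. Entries are illustrative, not a transcription.

MATRIX = {
    # (contact category, duration): {test name: "X" or "O"}
    ("surface/skin", "limited"): {
        "cytotoxicity": "X", "sensitization": "X", "irritation": "X",
    },
    ("external/circulating blood", "prolonged"): {
        "cytotoxicity": "X", "hemocompatibility": "X",
        "genotoxicity": "X", "subchronic toxicity": "O",
    },
}

def recommended_tests(category: str, duration: str) -> dict:
    """Look up core (X) and 'if applicable' (O) tests for a device."""
    return MATRIX.get((category, duration), {})

core = {test for test, mark in
        recommended_tests("external/circulating blood", "prolonged").items()
        if mark == "X"}
print(sorted(core))
```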

Most manufacturers are by now familiar with the testing matrix and recommendations of ISO 10993-1, and are aware that the ISO standard includes pyrogenicity testing in the systemic toxicity category (unlike the Tripartite Guidance). They also know whether the ISO standard classifies their devices differently from the way they were classified under the Tripartite Guidance. Therefore, the following overview of the FDA-modified version of ISO 10993-1 considers only the most salient differences between it and the Tripartite Guidance and, more importantly, between the ISO and FDA-modified matrices.

Surface Devices. ISO 10993-1 divides this category into three subcategories of devices according to the nature of their body contact: skin, mucosal membranes, and breached or compromised surfaces. For the skin subcategory, FDA's modified matrix introduces no additional testing recommendations, and has in fact struck acute systemic toxicity from the battery of tests previously recommended by the Tripartite Guidance.

In keeping with the ISO categorization scheme, FDA has added a subcategory for mucosal membrane devices, which the Tripartite Guidance termed "externally communicating, intact natural channels" devices. Relative to the Tripartite Guidance, the FDA-modified matrix significantly reduces the testing load for such devices, particularly for those with limited contact duration (≤24 hours). However, FDA recommends that manufacturers consider performing additional tests, if applicable, for devices with prolonged exposure (24 hours to 30 days) and permanent exposure (>30 days). Manufacturers should be especially thoughtful in considering these extra tests. Alpert's guidance letter explicitly mentions extra testing for surface devices with permanent exposure to mucosal membranes, suggesting that FDA is especially interested in additional safety information for these devices.

For devices in contact with breached or compromised surfaces, regardless of their contact duration, FDA may require additional testing beyond that listed in ISO 10993-1.

If it is unclear whether additional testing is expected, manufacturers would be well-advised to contact the relevant ODE office before planning testing or gathering other safety data for these or any other device categories.

Externally Communicating Devices. The ISO 10993-1 matrix divides this category into the three subcategories of blood path, indirect devices; tissue/bone/dentin communicating devices; and circulating blood devices. Adopting this structure has resulted in some changes in the way FDA categorizes devices. For instance, some devices previously considered in the Tripartite Guidance's "intact natural channels" exposure subcategory, such as laparoscopes, are now categorized as tissue/bone/dentin communicating devices. Other devices formerly in the Tripartite Guidance's "internal device, tissue and tissue fluids" subcategory--particularly those in contact with tissue fluids and subcutaneous spaces--are also now included in the tissue/bone/dentin communicating device subcategory.

For blood path, indirect devices with limited contact duration, the FDA-modified matrix introduces no testing recommendations beyond those of ISO 10993-1. For such devices with prolonged or permanent contact durations, however, FDA recommends that manufacturers consider some additional testing, if applicable. For the subcategory of tissue/bone/dentin devices, FDA recommends that manufacturers consider additional testing for all products regardless of their contact duration, but particularly for devices with prolonged or permanent contact. Alpert's guidance letter suggests that ODE is more than casually interested in reviewing testing or other documentation to address these contact durations and that such testing might be considered "applicable." Again, this does not mean that original testing need be performed; however, manufacturers should contact ODE to determine whether it indeed considers such testing necessary.

Manufacturers of circulating blood devices may need to perform additional tests for prolonged and permanent exposure durations. Among limited exposure devices, a genotoxicity test should be considered only for those used in extracorporeal circuits.

Implant Devices. As noted above, some devices from the Tripartite Guidance's subcategories "internal devices, bone" and "internal devices, tissue and tissue fluids" have been redistributed into the new subcategories of the FDA-modified matrix. For the devices that remain classified as implants in contact with tissue, tissue fluids, or bone, the FDA matrix recommends additional testing regardless of contact duration.

The testing recommendations of ISO 10993-1 for implant devices in contact with blood are quite extensive, especially for devices in the prolonged and permanent exposure categories. For this reason, FDA identified only one additional test (a subchronic toxicity test) for devices with prolonged contact durations.
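
The walkthrough above can be condensed into a simple table of the contact durations for which the FDA-modified matrix adds "if applicable" tests beyond ISO 10993-1. The encoding below is a sketch drawn from the preceding paragraphs; the subcategory labels are shorthand, not official FDA nomenclature.

```python
# Durations for which the FDA-modified matrix adds "if applicable" tests
# beyond ISO 10993-1, per the discussion above. Illustrative encoding only.

FDA_ADDITIONS = {
    "surface/skin":                    [],  # no tests added
    "surface/mucosal membrane":        ["prolonged", "permanent"],
    "surface/breached or compromised": ["limited", "prolonged", "permanent"],
    "external/blood path, indirect":   ["prolonged", "permanent"],
    "external/tissue-bone-dentin":     ["limited", "prolonged", "permanent"],
    # Circulating blood: plus a genotoxicity test for limited-exposure
    # devices used in extracorporeal circuits.
    "external/circulating blood":      ["prolonged", "permanent"],
    "implant/tissue-bone":             ["limited", "prolonged", "permanent"],
    "implant/blood":                   ["prolonged"],  # subchronic toxicity only
}

def fda_adds_tests(subcategory: str, duration: str) -> bool:
    """True if FDA recommends considering extra tests for this combination."""
    return duration in FDA_ADDITIONS.get(subcategory, [])

print(fda_adds_tests("surface/skin", "limited"))        # False
print(fda_adds_tests("implant/tissue-bone", "limited"))  # True
```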

ELIMINATING UNNECESSARY TESTING

Consistent with ISO 10993-1, specialized testing not identified in FDA's new matrix may also be expected, depending on the device and its materials. ODE has indicated that it intends to use the planned series of device-specific toxicology profiles to identify additional tests that it considers relevant, which may include immunological and neurotoxicological evaluations for certain devices. According to Alpert's guidance letter, the toxicology profiles will be generated first for devices in prolonged and permanent contact duration categories and for those that represent a large share of all submissions. In the interim--and for devices for which toxicology profiles will probably not be generated--ODE expects manufacturers to consult with appropriate individuals in the relevant review division or to obtain expert advice from other knowledgeable individuals. Such consultations can be invaluable in helping manufacturers determine what testing they must perform, and especially in enabling them to avoid expensive, long-term testing whenever possible.

Even if identified among the ISO core testing recommendations, certain areas do not necessarily need to be addressed with testing. The FDA memorandum advises both reviewers and manufacturers to avoid using the matrix as an inflexible checklist and to seek assistance in identifying and justifying testing that is unnecessary for a device or material. Taken together, the three parts of the blue book memorandum suggest several instances when testing might not be needed:

* For well-characterized materials with long histories of safe use.

* For materials acceptably evaluated and documented in device master files.

* For materials and devices for which appropriate risk assessments have been developed.

* When other acceptable justifications are provided for not performing tests on a specific material or device.

Such exceptions are not new: depending on the device and its perceived risk--and on the reviewing division or individual reviewer--FDA has often accepted well-developed justifications for not performing testing, or for testing materials rather than the finished device.
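
Read as a checklist, the memorandum's exceptions amount to this: if any one basis holds and is well documented, a written justification may substitute for testing. The fragment below is a hypothetical restatement of that logic, not an FDA decision rule.

```python
# Hypothetical checklist of the memorandum's testing exceptions.
# Any one well-documented basis may support a written justification.

EXCEPTION_BASES = (
    "well-characterized material with a long history of safe use",
    "material acceptably evaluated and documented in a device master file",
    "appropriate risk assessment developed for the material or device",
    "other acceptable justification for not performing the test",
)

def justification_possible(documented_bases: set) -> bool:
    """True if at least one recognized basis is documented."""
    return any(basis in documented_bases for basis in EXCEPTION_BASES)

# Example: a master-file reference alone may support skipping a test,
# provided the written justification is scientifically sound.
print(justification_possible({EXCEPTION_BASES[1]}))  # True
```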

CONCLUSION

Although ODE has many competent individuals who review biocompatibility documentation, few are toxicologists. PMA reviews almost always involve one or more toxicologists, but 510(k) reviews may not. Nontoxicologist reviewers of 510(k)s may defer to ODE toxicologists when they have questions, but some may genuinely not recognize when a toxicologist's input is necessary. Should a reviewer's request for additional testing appear inappropriate, the submitter should not hesitate to ask about the basis of the request.

FDA is attempting to pursue a more informed and rational approach to its biocompatibility reviews. Manufacturers should treat this as an opportunity to avoid unnecessary testing. To take full advantage of it, however, they must seek appropriate advice from knowledgeable individuals and approach their evaluation of testing requirements in a sophisticated manner. It is always to a manufacturer's advantage to develop a well-written, scientifically sound justification for any decision not to conduct testing recommended in FDA's matrix.

REFERENCES

1. Toxicology Subgroup, Tripartite Subcommittee on Medical Devices, "Tripartite Biocompatibility Guidance for Medical Devices," Rockville, MD, FDA, Center for Devices and Radiological Health (CDRH), 1987.

2. "Biological Evaluation of Medical Devices, Part 1: Evaluation and Testing," ISO 10993-1, Geneva, Switzerland, International Organization for Standardization, 1994.

3. "Use of International Standard ISO-10993, 'Biological Evaluation of Medical Devices, Part 1: Evaluation and Testing'," Blue Book Memorandum G95-1, Rockville, MD, FDA, CDRH, Office of Device Evaluation, 1 May 1995.

Brenda Seidman is president of Seidman Toxicology, a consulting firm located in Falls Church, VA.

Copies of ODE's "Use of International Standard ISO-10993, 'Biological Evaluation of Medical Devices, Part 1: Evaluation and Testing'," including the flowchart and matrix, are available from CDRH's automated "Facts on Demand" system by calling 800/899-0381 and requesting shelf number 164.