What Happens to Medical Device Reports Once They Reach FDA?

Jeffrey K. Shapiro

December 22, 2010

In October 2009, the HHS Office of Inspector General (OIG) issued a report titled "Adverse Event Reporting for Medical Devices."1 This report demonstrates that the current system of medical device adverse-event reporting, particularly with regard to malfunction reports, could largely be considered a waste of industry and FDA resources.

Under FDA’s medical device reporting (MDR) regulation, manufacturers must report a serious injury or death that their device has or may have caused or contributed to.2 Manufacturers also must report any device malfunction that would be likely to cause a serious injury or death if it were to recur. These MDRs typically must be filed within 30 calendar days, except in cases for which remedial action is necessary to prevent an unreasonable risk of substantial harm to the public health. The latter reports must be filed within five working days.
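The deadline logic just described can be sketched as a simple decision rule. This is a hypothetical illustration only; the function and field names are not taken from the regulation, which contains definitions and exceptions beyond this sketch.

```python
from dataclasses import dataclass

@dataclass
class Complaint:
    """A hypothetical device complaint record (illustrative only)."""
    caused_or_contributed_to_death_or_serious_injury: bool
    malfunction_likely_to_cause_death_or_serious_injury: bool
    remedial_action_needed_to_prevent_unreasonable_risk: bool

def mdr_deadline(c: Complaint) -> str:
    """Rough sketch of the MDR filing deadlines described above."""
    reportable = (c.caused_or_contributed_to_death_or_serious_injury
                  or c.malfunction_likely_to_cause_death_or_serious_injury)
    if not reportable:
        return "not reportable"
    # Remedial-action cases get the accelerated five-day deadline.
    if c.remedial_action_needed_to_prevent_unreasonable_risk:
        return "file within 5 working days"
    return "file within 30 calendar days"
```

As the sketch makes plain, the malfunction prong turns on a prediction ("would be likely to cause... if it were to recur"), a point the article returns to below.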

The OIG examined manufacturer reports from 2003 through 2007 (pp. 9 and 13 of the OIG report). The OIG found that the number of 30-day reports more than doubled, from 64,784 in 2003 to 140,698 in 2007. The five-day reports were far fewer, declining over the same period from 432 in 2003 to 54 in 2007 (the OIG could not determine the reason for the decline).

The OIG took a careful look at how FDA used these data and found that the agency “does not use adverse-event reports in a systematic manner to detect and address safety concerns about medical devices” (p. 13 of the report). No qualification or mitigation was offered to soften this conclusion. Simply put, the OIG found that between 2003 and 2007, FDA did not make use of the MDR data to improve device safety.

As the OIG notes, MDR regulation is intended to enable FDA “to take corrective action on problem devices and to prevent injury and death by alerting the public when potentially hazardous devices are discovered” and “to detect unanticipated events and user errors” (p. 1 of the report). Thus, the OIG’s finding essentially means that FDA’s implementation of the MDR regulation failed to meet the basic purpose of the regulation.

Some of the OIG’s subsequent findings were equally disheartening. For instance, the OIG found that FDA’s review of MDRs was generally untimely. Of the malfunction reports assigned to an FDA analyst for review, fewer than one-third were read within 30 days, and fewer than half were read within 60 days. Even the relatively small number of five-day reports were not read in a timely manner. From 2003 to 2006, FDA analysts read fewer than 1% of these reports within five days of receipt. In 2007, that figure rose to only 6%. Yet, the five-day reports are those that likely represent the most serious risk to public health. These reports can include what FDA calls “Code Blue” reports of pediatric deaths, multiple deaths, exsanguinations, explosions, fires, burns, electrocutions, and anaphylaxis (pp. 15–16).

The metrics noted in the previous paragraph apply only to reports actually assigned to an FDA analyst for review. Buried within the OIG report is the astonishing fact that FDA “assigns only 10% of malfunction reports to FDA’s analysts for review” (p. 15 of OIG report). Thus, it appears that FDA routinely ignores 90% of all malfunction reports received each year. Although these reports are potentially available to FDA for trending, FDA did not provide the OIG with evidence that the agency makes effective use of trend data.

Based on this information, it is difficult to escape the conclusion that manufacturers have spent millions of dollars over the years to collect, analyze, and report adverse-event data for little purpose. Likewise, Congress has appropriated substantial sums of tax dollars for FDA to review, analyze, and manage the data without a measurable public health benefit.

For this reason, it may further be said that the MDR system cannot be credited with the substantial improvement in medical device safety that has taken place during the past 26 years (i.e., since the MDR regulations were put in place). These improvements have taken place in the absence of meaningful FDA oversight based on MDRs. One wonders why we needed an expensive system of mandatory reporting to populate a data warehouse that FDA has rarely visited, much less used in the manner envisioned by the system’s proponents.

Effects of the Report

The OIG investigation was limited to the period from 2003 to 2007. It seems unlikely, however, that FDA operated more effectively prior to 2003 or achieved a sudden radical improvement after 2007. FDA did not publicly dispute the OIG’s findings or suggest that they had become obsolete, even though the report was not issued until October 2009, two years after the review period ended. FDA has not publicly announced any improvements intended to address the OIG report. Finally, FDA does not publish any metrics that would demonstrate its systematic use of MDR data to improve device safety.

The OIG carefully notes that the conclusion about FDA’s use of adverse-event reports rested on a lack of documentation. So, it is theoretically possible that FDA has made effective use of the data. But again, there is no evidence that the agency has done so. Sound public policy is not built on mere speculation and anecdotes. It is FDA’s responsibility to demonstrate that it systematically uses MDRs to benefit the public health. According to the OIG, the proof is completely lacking.

What to Do About MDRs

It is a wasted opportunity if the OIG’s examination of the problems with the MDR system does not result in useful reform. There can be no justification for continuing to operate such an expensive and time-consuming mandatory reporting system on the basis of good intentions rather than actual results.

The OIG does make some recommendations, primarily that FDA should document follow-up on adverse events and should ensure and document that CDRH does a better job of meeting its existing guidelines for reviewing five-day and Code Blue adverse-event reports. But these recommendations seem fairly minimalist and unlikely to get to the heart of the matter.

It is not easy for an outside observer to suggest how FDA’s use of MDR data might be improved because there is almost no disclosure as to what FDA currently does with the data. The Office of Surveillance and Biometrics (OSB) claims responsibility “for ensuring the continued safety and effectiveness of medical devices after they have reached the marketplace.”3 Yet, OSB’s short, five-paragraph Web page has only the vaguest description of its activities, with no metrics, no hard data, and very few specifics. The OSB’s “transparency dashboard” does not provide any information or metrics on the use of MDR data.4 Even the fiscal year 2010 strategic planning document for CDRH (the most recent one posted on the Web site) proposes only technical improvements in MDR event and product coding, and unspecified “improvements to CDRH’s adverse-event reporting data systems.”5 There are no other goals that appear to be related to improving FDA’s use of MDR data. Finally, the division of postmarket surveillance within OSB has direct responsibility for MDR analysis. Its Web page too has only a general description of how it uses adverse-event reports, with no metrics or other hard information provided.

It seems likely that part of the problem arises from the nature of the information that FDA receives via MDRs. Complaints about medical devices can allege all kinds of malfunctions, with many variations and idiosyncrasies. Many of these reports are really noise; even if FDA actually analyzed them, they would obscure more than they illuminate about the safety of medical devices in use today. Yet, it is these reports that make up the largest volume of MDRs that manufacturers must file.

The most useful immediate reform would be to eliminate the requirement for malfunction reporting. As the OIG found, FDA analysts have read only a tiny percentage of malfunction reports, so no one can argue that these reports provide essential information that has allowed FDA to systematically address safety problems. Rather, the evidence shows that FDA has not been reviewing these MDRs in a timely fashion or making systematic use of trend data to improve device safety. Furthermore, the variation and detail involved in many of these complaints make them difficult to trend, even for companies that have a much better understanding of their own devices than FDA’s staff does. It seems unlikely that malfunction reports could ever be made useful to the agency. If FDA disagrees with this conclusion, the agency should provide data to prove that it can make use of malfunction MDRs to improve device safety. To date, it has not done so.

Malfunction reports also create complexities for manufacturers, which must wrestle with determining whether a complaint is a malfunction (not all are), and whether a malfunction is reportable because it would be “likely” to cause injury or death if it were to recur. In many cases, this latter determination involves making a difficult and subjective probability prediction. On the flip side, FDA currently wastes valuable inspectional resources in determining whether malfunction MDRs should have been submitted. There is something peculiar about a system in which FDA spends compliance resources to inspect files for nonsubmission of malfunction MDRs that are then submitted, at great cost and effort, only to go unused by FDA in any meaningful or systematic fashion.

Conclusion

Malfunction reports have generated much of the expense and complexity of the current system, but have provided little or no benefit. By contrast, an MDR system that focused solely on the possible contribution of devices to serious injuries or deaths would be easier for FDA to administer, and would more reliably alert FDA to true safety problems. The smaller resulting data set might also improve the odds that FDA could review and trend all of the reports it receives in a timely manner and take action when appropriate.

In theory, it might have been a good idea for FDA to be given malfunction information to help it anticipate device safety problems. But in reality, there is no evidence that the system actually operates according to this theory, and the OIG report provides ample evidence that it does not. Those who would defend the current malfunction reporting requirements bear the burden of proving that these reports can be used to improve medical device safety. FDA should also publish metrics quarterly or annually that demonstrate the agency’s use of malfunction MDR data to improve device safety. 

Industry should reject the notion that it is acceptable for burdensome regulation to continue on autopilot for decades just because it has a worthy purpose and seems like a good idea. FDA should provide a data-driven demonstration that the MDR system is worth the resources being spent. Based on the OIG’s report, the opposite appears to be true.

References

1.    Daniel Levinson, “Adverse Event Reporting for Medical Devices,” October 2009: available from Internet [pdf]: oig.hhs.gov/oei/reports/oei-01-08-00110.pdf.
2.    21 CFR 803.50.
3.    “About FDA: Office of Surveillance and Biometrics,” available from Internet: www.fda.gov/AboutFDA/CentersOffices/CDRH/CDRHOffices/ucm116002.htm.
4.    “FDA-Track CDRH Office of Surveillance and Biometrics Database,” available from Internet: www.fda.gov/AboutFDA/Transparency/track/ucm203271.htm#KeyCenterDirector.
5.    “CDRH FY 2010 Strategic Priorities,” available from Internet: www.fda.gov/AboutFDA/CentersOffices/CDRH/CDRHVisionandMission/ucm197647.htm.

Jeffrey K. Shapiro, JD, is a partner at Hyman, Phelps & McNamara, PC (Washington, DC).  Contact him at [email protected].
