Who's Right in the FDA Numbers Game?


Medical Device & Diagnostic Industry Magazine

Originally published January 1996

Regulatory Affairs

The House Commerce Committee has joined the chorus of those questioning the review productivity of FDA's Center for Devices and Radiological Health (CDRH), but its interpretation of the numbers may give the casual observer pause. On October 30, 1995, the panel's Subcommittee on Oversight and Investigations released a new study conducted by the congressional auditing agency, the General Accounting Office (GAO). The study concludes that FDA took a median of 804 days, or more than two years, to grant premarket approval (PMA) for Class III devices in 1993--almost double, it said, the agency's clearance time in 1989. The 1989 median of 414 days was cause for great concern both in the agency and among industry members at the time, a House subcommittee staff document contended, since the Medical Device Amendments mandate FDA approval within 180 days of PMA submission.

"The increased delays occurred even though the FDA received only half as many applications in 1994 as it did in 1989, and despite the fact that the agency's medical device program was increased by $20 million during fiscal year 1994," asserted Congressman Joe Barton (R­TX), chairman of the subcommittee. "More than any other development so far, this report...confirms what our hearings have found all year--that the FDA is delaying, not improving, Americans' access to medical treatment."

Barton charged that similar trends are occurring with 510(k) clearance times. According to the GAO report, 510(k) approvals took a median of 152 days in 1994, up almost 100% from the 80-day median in 1989. Similarly, the mean time for approval of these applications increased to 166 days in 1994, up from 124 days in 1989. The congressman added that the mean for 1994 will rise when the remaining 1994 backlog of applications is cleared.

In contrast, CDRH head Bruce Burlington has cited statistics that show "we were able to modestly decrease the average time it took FDA to review a PMA application from 21.5 months in fiscal year 1994 to 20 months in fiscal year 1995." He added that "we also cut by more than 50% the average time it took us to reach a final decision on a PMA after we received an evaluation from one of our advisory panels."

CDRH has reduced the agency's average review time for 510(k)s by 24%, to 138 days in fiscal year 1995 from 182 days in fiscal year 1994, Burlington reported. Median total 510(k) time (from time of submission to final decision by the agency) similarly dropped in those two fiscal years, to 102 days in fiscal year 1995.

Obviously, the GAO is not counting days in the same fashion as FDA. So what accounts for the discrepancy, and who's right?

Two explanations emerge. First, the two studies reviewed different periods. Burlington used data from fiscal years 1994 and 1995, that is, from October 1, 1993, through September 30, 1995. The GAO, however, studied PMA applications submitted in fiscal years 1989 through 1993, and 510(k)s submitted through one additional year, 1994. Thus, for PMA applications, the periods covered by the two studies do not overlap at all; for 510(k) reviews, they overlap only in 1994.

However, CDRH data show that the median 510(k) review time climbed steadily through late 1992, surpassing 100 days in September of that year, and peaked at nearly 200 days in February 1993. The median time for these reviews then began to drop, and the decline continued throughout fiscal years 1994 and 1995.

From this comparison, it would seem that both studies could be correct--the GAO report is simply working from older data. But the second way in which the two studies differ is even more important in explaining the gap between their numbers. It has to do with how review time is counted, or as Barton put it, "how the clock is stopped."

According to the GAO report, it counted applications and days "according to the fiscal year in which the applications were submitted to FDA. By contrast, FDA commonly reports review time according to the fiscal year in which the review was completed." Both studies counted total elapsed time (from the time a company submits its paperwork to the time FDA actually clears the product for market), including both the time the agency was waiting for more information from the submitting company and the time it spent reviewing the application.
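
Why the two conventions diverge is easiest to see with a small worked example. The sketch below, written in Python with purely invented figures (none of these are FDA or GAO data), groups the same set of application records two ways: by the fiscal year of submission, as the GAO did, and by the fiscal year of decision, as FDA does. A long review that is submitted early but decided late is charged to the earlier year under the first convention and to the later year under the second, which by itself can pull the two medians apart.

```python
# A minimal, hypothetical sketch (invented numbers, not FDA or GAO data) of how
# the same review records produce different medians depending on whether they
# are grouped by submission year (GAO's approach) or decision year (FDA's).
from statistics import median

# Each record: (fiscal year submitted, fiscal year decided, total elapsed days).
applications = [
    (1993, 1993, 150),
    (1993, 1994, 310),
    (1993, 1995, 700),   # a long review decided two years after submission
    (1994, 1994, 120),
    (1994, 1995, 210),
    (1995, 1995, 95),
]

def median_by_submission_year(apps, fy):
    # GAO-style: every application submitted in fy, however long it later took.
    return median(days for submitted, decided, days in apps if submitted == fy)

def median_by_decision_year(apps, fy):
    # FDA-style: every application decided in fy, whenever it was submitted.
    return median(days for submitted, decided, days in apps if decided == fy)

print(median_by_submission_year(applications, 1993))  # 310: the 700-day review counts against 1993
print(median_by_decision_year(applications, 1995))    # 210: the same review counts in 1995 instead
```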

Although Barton charged FDA with "statistical sleight of hand" for basing review-time calculations on the year of completion, the GAO report itself treated both methodologies as perfectly valid ways of measuring FDA productivity: "using the date-of-decision [FDA's method] is useful when examining productivity and the management of resources," it said, while "using the date-of-submission [the GAO method] is useful when examining the impact of a change in FDA review policy." The discrepancy in methods is not, as the House subcommittee staff charged, a way of "manipulating the clock" in the agency's favor. Indeed, FDA's response--that the GAO report does not recognize the effect of review policy changes during the period studied--appears to be borne out by the differences in the two studies' data.

The House subcommittee's charges notwithstanding, the apparent slide in CDRH review productivity is not due to some flaw in the center's structure or power (both of which the new Republican congressional leadership has proposed to scale back during the past year). Both studies generally agree that the climbing median review times occurred in the early 1990s--the period during which the agency was dealing with the extraordinary workload Congress handed it courtesy of the Safe Medical Devices Act of 1990. Moreover, the GAO study hints that review times began to decline during the latter part of the period it studied, and it does not report on fiscal years 1994 and 1995, when FDA's reported review times improved even further. Indeed, Barton's charge that the $20 million had not improved productivity appears false in light of the FDA report's numbers, for those are the very years in which the extra money was actually spent.

The congressman is on firmer ground, however, when he discusses how the number of submissions from industry has fallen over the past several years. Indeed, both the FDA and GAO studies agree on that score, and if Congressman Barton is looking for a cause of this trend, perhaps he need look no further than the major changes to device law enacted in 1990.--Cliff Henke
