Originally published January 1996
Statistics: I have come to believe that I could spend several hours a day for the next year trying to make sense of FDA review-time numbers and be no wiser for it next January. I don't just mean FDA's own figures--I mean anyone's statistics about agency performance.
This realization first began to overtake me last November, when Congressman Joe Barton's press office sent out a press release decrying what he sees as FDA's misleading use of statistics. What clicked for me wasn't Barton's critique, but Barton's numbers. They didn't appear any more useful, informative, or consistent than FDA's. In fact, they were worse.
It was either Disraeli or Mark Twain (I can never remember which) who said, "There are three kinds of lies: lies, damned lies, and statistics." Now, statistics have honest uses, or concepts like statistical process control and design of experiments wouldn't work--which they demonstrably do. But statistics will serve any master. Even those who cite numbers and percentages with the best intentions can be led astray by their own unwitting biases.
With this situation in mind, I decided that MD&DI should take an objective view of the numbers, if possible. The result is the News & Analysis story, "Who's Right in the FDA Numbers Game?" on page 14 of this issue.
If we came up with any clear answer to that question, it's that no one is right, and no one wins.
Barton's press release charges FDA with cynical manipulation of statistics. Maybe so. But Barton's own use of numbers seems to be on the same level. His press release presents numbers from 1993 as current, makes no comparison to FDA's figures, and offers no acknowledgment of the improving trend since 1993.
By comparison, I have to judge FDA's use of numbers to be moderately more complete and consistent. In the November 1 "CEO Letter" from device center director Bruce Burlington, the numbers he presents are clearly meaningful and useful. He offers not just one measure, but three: median review time (the number of days required to decide on half of all applications), average (mean) review time, and the 95th percentile (how long it takes to clear 95% of all applications).
Barton and others have criticized FDA for its use of median rather than mean numbers at various times in the past. But means are easily thrown off by a few outlier applications that take an inordinate amount of time to review. If I want to bash FDA, I'll go with the mean. But if I want a representative view of its performance, give me the median.
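The point about outliers is easy to demonstrate. The sketch below uses hypothetical review times (not actual FDA data) to show how a single slow application inflates the mean while leaving the median nearly untouched, and how the 95th-percentile figure captures the worst cases:

```python
import math
import statistics

# Hypothetical 510(k) review times in days (illustrative only, not FDA data):
# most applications clear in roughly three months, but one drags on.
review_times = [70, 75, 80, 85, 90, 95, 100, 105, 110, 400]

mean_days = statistics.mean(review_times)      # pulled upward by the 400-day outlier
median_days = statistics.median(review_times)  # half of applications decided by this point

# 95th percentile: the time within which 95% of applications were cleared
ranked = sorted(review_times)
p95_days = ranked[math.ceil(0.95 * len(ranked)) - 1]

print(mean_days, median_days, p95_days)  # → 121.0 92.5 400
```

Nine of the ten hypothetical applications cleared within 110 days, yet the mean sits at 121 days: exactly the distortion that makes the mean the statistic of choice for bashing and the median the better summary of typical performance.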
For all the clarity and comprehensiveness of Burlington's handling of statistics, he seems to engage in manipulation as well. Although he supplies total review times in one table appended to his letter, his text discusses only FDA review times. In other words, the figures he cites don't really tell how long it takes from receipt of an application to the decision, because they don't include "the time the document was on hold awaiting additional information from the manufacturer."
Come now. Who outside of FDA really cares about FDA time? What really matters is how long it takes to get a product on the market. The figures FDA uses should be based on this simple reality.
So I propose a truce in the statistics war. Let FDA, industry, and Congress get together and settle on one uniform measure for judging FDA performance. The benchmarks for decision times are clear: 90 days for a 510(k) and 180 days for a PMA--total days, not FDA days--as specified by law. All that needs to be determined is the methodology.
Failing this unlikely event, all parties would be wise to restrain their claims about the meaning of the numbers. A statistic, after all, may well be a Class III lie.