
Using Receipt and Decision Cohort Data to Monitor Premarket Review Activities

Medical Device & Diagnostic Industry Magazine

An MD&DI November 1998 Column

TRACKING FDA PERFORMANCE

Following passage of the FDA Modernization Act of 1997, which modified time frames for premarket reviews, FDA is using receipt cohort analyses to track its performance.

New medical devices intended for marketing in the United States are subject to rigorous premarket review by the FDA Center for Devices and Radiological Health (CDRH), which is charged by statute with ensuring the safety and effectiveness of such products. The three major types of premarket submission used to accomplish this task are the investigational device exemption (IDE) application, which the sponsor of a medical device that represents a significant risk must file before beginning clinical trials; the premarket approval (PMA) application, which is required for certain specified new devices that generally pose a significant potential risk; and the premarket notification, more commonly called the 510(k), which is used when a device is believed to be "substantially equivalent" to a legally marketed device not subject to premarket approval.

The processing of these submissions for the conduct of clinical trials or the marketing of medical devices is closely monitored by CDRH. Among the data that are maintained in the center's information retrieval system are the identity of the sponsor, the class to which the device belongs, decisions made on the application, any amendments or supplements pertaining to the application, and other information or milestones tracked by the agency. Maintenance of this information enables CDRH to monitor the progress of a submission from receipt to final decision, to respond to requests from sponsors concerning the status of their submissions, and to measure and report the center's performance.

The Federal Food, Drug, and Cosmetic Act and such subsequent legislation as the FDA Modernization Act of 1997 prescribe certain time frames for the various premarket review activities. To help ensure that the agency meets its statutory time frames, FDA employs certain conventions for monitoring and reporting its centers' performance; among these are groupings of submissions into receipt and decision cohorts to monitor the review of medical device applications.

RECEIPT AND DECISION COHORTS

Descriptive statistics relating to application-processing time frames are derived from groupings of submissions. As mentioned above, the two main groupings used by CDRH and other FDA centers are receipt cohorts and decision cohorts. A receipt cohort is a group of applications received by a center over a specified time frame, while a decision cohort is a group of applications upon which a decision was made within a specified time frame. In other words, the distinguishing characteristic of a receipt cohort is that the basic unit of analysis is the receipt of an application or document, whereas for a decision cohort the basic unit of analysis is an action or decision taken on, or completion of work on, an application or document.

The differences between the two types of groupings can be understood by considering the following examples. Receipt cohort data can include the number of applications received during a fiscal year, the number of applications received during a fiscal year on which an initial action was taken within a specified number of days, the percentage of applications received during a fiscal year and completed within a specified number of days, and the average total elapsed time between receipt and completion for applications received during a fiscal year. Examples of decision cohort data are the number of application decisions made during a fiscal year, the number of applications approved or found to be "substantially equivalent" during a fiscal year, the percentage of applications approved during a fiscal year that were completed within the applicable regulatory time frame, the average total elapsed time between receipt and completion for applications approved during a fiscal year, and the number of applications pending or overdue at the end of a fiscal year.
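The distinction can be sketched in code. The following is a minimal illustration with entirely hypothetical submission records; fiscal-year 1997 boundaries are assumed, and total elapsed calendar days stand in for FDA review days, which the center's actual tracking system counts separately.

```python
from datetime import date

# Hypothetical records: (receipt_date, final_decision_date or None)
submissions = [
    (date(1997, 1, 15), date(1997, 4, 1)),   # received and decided in FY 1997
    (date(1997, 3, 2),  date(1997, 8, 20)),  # received and decided in FY 1997
    (date(1996, 5, 5),  date(1997, 2, 10)),  # received in FY 1996, decided in FY 1997
    (date(1997, 6, 30), None),               # received in FY 1997, still pending
]

FY97_START, FY97_END = date(1996, 10, 1), date(1997, 9, 30)

# Receipt cohort: grouped by when the application was received.
receipt_cohort = [s for s in submissions if FY97_START <= s[0] <= FY97_END]

# Decision cohort: grouped by when a final decision was made.
decision_cohort = [s for s in submissions
                   if s[1] is not None and FY97_START <= s[1] <= FY97_END]

# A typical receipt cohort measure: percent completed within 90 elapsed days.
completed_90 = sum(1 for r, d in receipt_cohort
                   if d is not None and (d - r).days <= 90)
pct_90 = 100.0 * completed_90 / len(receipt_cohort)
```

Note that the third submission belongs to the FY 1997 decision cohort but not to the FY 1997 receipt cohort, and the pending fourth submission belongs only to the receipt cohort; this differing membership is why statistics from the two groupings can diverge.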

CDRH has traditionally used decision cohorts as a basis for measuring program performance. The statistics are easy to compute and are easily understood by outside parties and center staff. Center publications such as the annual report of the Office of Device Evaluation present summary data based upon the numbers of premarket review events that occurred during a fiscal year, regardless of when the submissions were received. These data include the numbers of decisions made (approvals, substantially equivalent determinations, and so forth), average review times, and percentages of decisions made within a set number of days. With passage of the FDA Modernization Act of 1997, however, CDRH is now facing a challenge similar to that faced by other FDA entities following passage of the Prescription Drug User Fee Act of 1992 (PDUFA). Of the 29 FDA performance goals established in that act, 18 set time frames for review and action on drug product applications based upon receipt cohorts. That is, each referred to submissions received during a specified period. Both PDUFA and the Modernization Act require accountability for processing new applications, and decision cohort measures do not distinguish newer applications from older ones. Thus, CDRH is beginning to use both decision and receipt cohort measures to report performance.

RECEIPT COHORT ISSUES

Decision cohort statistics such as counts, percent distributions, means, and medians can be calculated following the end of each fiscal year (FY) and generally do not change. However, a fiscal year decision cohort can include data for applications received during prior fiscal years as well as for those submitted in the current year. For example, of the 48 PMA applications approved in FY 1997, 16 were received in that year, 16 in FY 1996, 6 in FY 1995, 6 in FY 1994, 1 in FY 1993, and 3 in FY 1992. Because the processing of some members of a cohort may have begun when former policies were in effect at CDRH, while those received later were processed under more recent guidelines, decision cohort analyses may not be sufficiently time sensitive for measuring how well new goals are being met.

Receipt cohort analyses are more time sensitive in that the submissions that make up the cohort were received by CDRH within the same time period. This commonality in receipt dates is important when one wishes to assess the impact of policy changes implemented at a particular point in time. But receipt cohort statistics also have their drawbacks. Sometimes statistical data cannot be calculated until action on all members of the cohort is completed, or the data are subject to change as the review of applications in the cohort continues. The examples discussed below illustrate four issues involving the use of receipt cohort data.

Performance Measures from Completely Processed Cohorts. Once CDRH has taken final action on all members of a cohort, statistics such as the number and types of actions taken generally do not change. Of the 40 original PMA submissions received during FY 1993, for instance, 15 have now been approved, 21 withdrawn, and 4 completed by some other final action. Typically, the time from receipt to completion is longer for PMA applications than for other submissions, and the wait for a completely processed cohort can be considerable. The last final action for members of the FY 1993 PMA receipt cohort did not occur until February 7, 1997, for example.

Performance Measures from Incompletely Processed Cohorts. Statistics obtained before all members of a cohort have been completely processed are subject to change. Of the 40 PMA submissions received during FY 1995, for example, 20 had been approved as of March 1998 and other final actions had been taken on 13. The remaining 7 were still awaiting final action. Although it is now known with certainty that 50% (20/40) of the submissions in the cohort have been approved, whether or by how much this percentage will increase cannot be determined until final actions are taken on the remaining applications; it could rise as high as 67.5% (27/40) if all 7 were approved.

There are many such occasions when CDRH needs to track the progress of a receipt cohort, rather than the end point of processing. It is necessary in such instances to remember that descriptive statistics such as counts and percentages, averages, and medians may change as the processing of cohort members continues.

Data Stability. CDRH, by convention, does not calculate a cohort's median value until final actions have been taken on 50% of the submissions. Consider a hypothetical cohort of 101 submissions. The median number of FDA days from receipt to final action would be that for the submission occupying the 51st position in an ascending array of FDA days. However, this median value will probably change as processing on the remainder of the cohort is completed. Similar considerations apply to the calculation of other percentile values.
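One plausible reading of this convention can be sketched as follows. The rank rule used here (the ceiling of pct percent of the cohort size, which gives the 51st position for the median of 101 submissions) is an assumption inferred from the example above, not a documented CDRH formula, and the FDA-day figures are invented.

```python
import math

def cohort_percentile(completed_days, cohort_size, pct):
    """pct-th percentile of FDA days for a receipt cohort, or None if
    fewer than pct percent of the cohort has reached final action
    (the CDRH convention described in the text)."""
    if 100.0 * len(completed_days) / cohort_size < pct:
        return None
    rank = math.ceil(pct / 100.0 * cohort_size)  # 1-indexed ascending rank
    return sorted(completed_days)[rank - 1]

# Hypothetical cohort of 101 submissions, 60 of which are complete.
completed = list(range(40, 100))                # FDA days: 40, 41, ..., 99
median = cohort_percentile(completed, 101, 50)  # 51st smallest -> 90
q75 = cohort_percentile(completed, 101, 75)     # only 59.4% complete -> None

# A late completion with few FDA days (long sponsor holds can stop the
# review clock) slots in below the 51st position and pulls the median down.
median_after = cohort_percentile(completed + [10], 101, 50)  # -> 89
```

The last line illustrates why a cohort's median can keep shifting until processing is complete: a submission finishing late in calendar time may still have accrued relatively few FDA days.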

Number of Days after    Percent of                       Percentile Value
End of Cohort Year      Applications Completed    25th    50th    75th(a)    90th(a)
        31                     72.2                56      86
        61                     78.5                55      84       172
        92                     83.2                55      83       149
       123                     86.9                54      82       134
       151                     90.3                53      81       124        265
       182                     93.0                52      80       115        183

(a) CDRH, by convention, does not define the nth percentile until n percent of the total cohort receipts to date have been processed to completion.


Table I. Percentile values for the number of FDA days from receipt to final action for the FY 1997 510(k) notification receipt cohort.

How soon, following receipt of all submissions in a cohort, do the percentile statistical data for that cohort become stable? Table I shows the 25th, 50th (median), 75th, and 90th percentile values for the number of FDA days from receipt to final action for the FY 1997 510(k) receipt cohort for each one-month period following the end of the cohort year. These data show a progressive decline in median (and other percentile) values over time. Even with 93% of the cohort completely processed by the end of the sixth month following the close of the cohort year, it is too early to know with certainty when the median and other percentile values will stabilize. Examination of comparable cohort data for previous years suggests that about one-half to two-thirds of the total decline in the median occurs within three months after the close of the cohort year. For 510(k) first actions, the median number of FDA days from receipt to final action stabilizes in about 60 days (not shown in the table). Percentile values lower than the median stabilize sooner, while those greater than the median stabilize over longer time frames.

PMA applications have longer processing time frames and smaller cohorts than 510(k)s do, and CDRH has found that the median number of FDA days from receipt of such submissions to final action is unstable until processing of all members of the cohort has been completed. For PMA first actions, the corresponding median stabilizes 12 months after the last member of the cohort is received. It should be noted, however, that these observations are based upon recent cohorts and do not apply to those from FY 1994 or earlier.

Cutoff Dates. Because receipt cohort data are not static, a database query regarding percentile data should contain a cutoff date in addition to the beginning and ending dates (e.g., fiscal year) of the cohort. Representing some specified time after the end of the cohort year, a cutoff date is the relative or absolute date beyond which any further application activity will not be taken into account in calculating the data requested.
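A query of this kind might be sketched as follows. The record layout and figures are hypothetical; FDA days are carried as a separate field because the review clock stops while a submission is on hold with the sponsor, so they need not match elapsed calendar time.

```python
from datetime import date, timedelta

# Hypothetical records: (receipt_date, final_action_date or None, fda_days)
records = [
    (date(1997, 1, 10), date(1997, 3, 1),  45),   # completed quickly
    (date(1997, 7, 2),  date(1998, 2, 15), 80),   # 80 FDA days, long sponsor holds
    (date(1997, 5, 5),  date(1997, 10, 1), 120),  # over 90 FDA days
    (date(1996, 8, 1),  date(1997, 1, 15), 60),   # received in FY 1996
]

def pct_final_within_90(records, fy_start, fy_end, cutoff):
    """Percent of the receipt cohort with a final action in <= 90 FDA days,
    counting only actions taken on or before the cutoff date."""
    cohort = [rec for rec in records if fy_start <= rec[0] <= fy_end]
    done = sum(1 for _, action, fda_days in cohort
               if action is not None and action <= cutoff and fda_days <= 90)
    return 100.0 * done / len(cohort)

FY97_START, FY97_END = date(1996, 10, 1), date(1997, 9, 30)
relative_cutoff = FY97_END + timedelta(days=90)   # December 29, 1997
absolute_cutoff = date(1998, 4, 23)

pct_relative = pct_final_within_90(records, FY97_START, FY97_END, relative_cutoff)
pct_absolute = pct_final_within_90(records, FY97_START, FY97_END, absolute_cutoff)
```

As in Figure 2, the later absolute cutoff credits the second submission, whose action within 90 FDA days was not taken until after the relative cutoff date, so the reported percentage rises.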

Figure 1. Percentages of first and final actions taken within 90 FDA days for 510(k) notification receipt cohorts. The relative cutoff dates are 90 days after each cohort year, FDA days for final actions are cumulative, and percents are based upon total receipts for each fiscal year.

Examples of the effect of relative and absolute cutoff dates are shown in Figures 1 and 2, respectively. Figure 1 shows the percentages of first and final actions completed within 90 FDA days for 510(k) submissions received during FYs 1990–1997. The cutoff date for each cohort is 90 days after the end of the cohort year. That is, the cutoff date for 510(k) submissions received during each fiscal year, which ends on September 30, is December 29 of that year. Use of such a relative cutoff date enables comparisons among cohorts to be made at the same relative point in time.

Figure 2. Percentages of final actions taken within 90 FDA days for 510(k) notification receipt cohorts. FDA days for final actions are cumulative, and percents are based upon total receipts for each fiscal year.

In Figure 2, the percentages of final actions completed within 90 FDA days for 510(k) submissions are displayed using both relative and absolute cutoff dates. Bars in the foreground use the same relative cutoff date as in Figure 1 (90 days after the end of each cohort year), while bars in the background use an absolute cutoff date of April 23, 1998. Using the April 23 cutoff date results in higher percentages of completions within 90 FDA days for every cohort. The two displays differ because none of the cohorts had been completely processed by December 29 of its cohort year; additional final actions taken in under 90 FDA days between the relative cutoff date and April 23, 1998, raised the percentages of completions meeting the criterion. Either a relative or an absolute cutoff date will yield valid statistical data; which to choose depends on the particular information desired.

INTERPRETATION OF COHORT DATA

Statistics derived from decision and receipt cohort data for a given fiscal year can differ because the population of submissions that makes up each type of cohort is generally not the same. The receipt cohort by definition consists of the submissions received during that year, whereas the submissions that make up the decision cohort could have been received not only during that year but also during prior years. Therefore, the average receipt date for the decision cohort will generally be earlier than that for the receipt cohort. An example of the effects of this difference is given in Figure 3, which shows the median number of cumulative FDA days from receipt of 510(k) notifications to final action for decision and receipt cohorts for fiscal years 1990–1997. While the receipt and decision cohort trends are similar, the rise in median days from FY 1990 to FY 1993 is steeper for the receipt cohorts than for the decision cohorts; the decline from FY 1993 to FY 1997 is also steeper for the receipt cohorts. In a first-in, first-out environment, during a period when FDA processing time is trending upward, submissions received earlier will be less affected by the increase in processing time than will those received later. Likewise, when FDA processing times are decreasing, submissions received earlier will be less affected by the decrease than will those received later.

Figure 3. Comparative median FDA days from receipt to final action for 510(k) notification receipt and decision cohorts, excluding withdrawals and deletes. FDA days for final actions are cumulative.

CONCLUSION

The collection and reporting of receipt cohort data by CDRH is an evolving process. Both receipt and decision cohort analyses are useful methods of quantifying the center's premarket review activities, and determining which technique is more appropriate depends upon the particular questions being asked. Descriptions of past performance, such as those found in annual reports, tend to report accomplishments based upon decision cohorts, whereas mandates of future performance goals (e.g., a commitment to complete processing of a given percentage of new submissions within a given number of days) may require receipt cohort data to determine whether the goals are met. Knowledge of how a statistic was derived and upon which type of cohort it was based will help members of the medical device industry—and interested others—to interpret the data correctly.

Reader comments or questions on this issue are welcome and should be sent to the author at FDA, Center for Devices and Radiological Health, 2098 Gaither Rd., HFZ-30, Rockville, MD 20850.

Gary E. Blanken is an operations research analyst, FDA Center for Devices and Radiological Health (Rockville, MD).


Copyright ©1998 Medical Device & Diagnostic Industry