Selecting Clinical Investigators: What Really Counts?

FDA relies on high-quality clinical data to support safety and effectiveness decisions. Recruiting informed clinical investigators means a greater chance of successful clinical studies.

Michael Marcarelli, William Vitale, and Sonali Gunawardhana

November 1, 2007


REGULATORY OUTLOOK


You have reached the point in your device development plan where you are planning clinical studies, and the time has come to recruit clinical investigators. Your first reaction is to recruit recognized experts in your device, product, or treatment area. That is certainly a good start, but what other elements contribute to the success of a device clinical investigator? Is prior device research experience an indicator of future successful performance? Is the probability of a successful study greater when the clinical investigator is located in an area with access to a large number of subjects?


The Division of Bioresearch Monitoring (BIMO) in the Office of Compliance, CDRH, FDA, looked at some common characteristics of successful device investigators. BIMO is responsible for the compliance oversight of device clinical trials, which includes the annual inspection of more than 300 device sponsors, clinical investigators, institutional review boards (IRBs), and nonclinical labs. These inspections are one way in which CDRH ensures the reliability of data in device research or marketing applications and ensures the adequate protection of human research subjects participating in associated clinical trials.


For this exercise, BIMO reviewed and analyzed establishment inspection reports from recent device clinical investigator inspections. The goal was to describe the elements of a successful device clinical study based on the comprehensive evaluation of compliant inspections. FDA generally classifies compliant inspections as “no action indicated.”

Each year, BIMO issues approximately 200 assignments to inspect device clinical investigators. FDA field inspectors located at one of the 19 district offices conduct these inspections. Upon completion of an inspection, the FDA field inspector writes an establishment inspection report and attaches all documents collected during the inspection. After internal district office review, the completed establishment inspection report and all attachments are sent to BIMO. Once BIMO receives the establishment inspection report, it conducts a comprehensive compliance assessment of the inspection and classifies it into one of the following three categories: official action indicated, voluntary action indicated, or no action indicated.

An official action indicated classification means that the inspection uncovered evidence of significant objectionable practices that could affect data reliability or compromise human subject protection. This classification generally results in the issuance of a warning letter or some other high-level compliance action such as a disqualification proceeding.

A voluntary action indicated classification signifies that objectionable practices were uncovered during the inspection but did not reach the significance level of an official action indicated classification. This classification may result in a compliance action such as an untitled letter or information letter that reinforces the FDA field inspector's observations, but generally does not require the site to take any additional actions.

Approximately 40% of the time, a device clinical inspection results in a no action indicated classification. A no action indicated classification means that the FDA field inspector did not identify objectionable practices during the inspection at the clinical site, or the inspector identified only minor objectionable practices that did not justify further action. In almost all cases, the FDA field inspector will not issue an FDA-483, Notice of Inspectional Observations. This is perceived as a positive and welcome outcome for device clinical investigators, their research staff, and their sponsors.

Methodology

BIMO selected 39 no action indicated clinical investigator establishment inspection reports with inspection dates from June 2005 to March 2006. Four of these inspections were conducted outside the United States and were not included in the analysis because FDA does not enforce investigational device exemption provisions (21 CFR Part 812) outside the United States or its territories. BIMO excluded six domestic clinical investigator inspections from the analysis because they were abbreviated inspections completed specifically to verify corrections or corrective actions and did not fall within our criteria for evaluation.

As a result, 29 device clinical investigator inspections met the criteria. We then queried internal FDA databases to determine factors such as clinical investigator inspectional history, device type, and relevant geographic and demographic factors. Common links among these 29 device clinical investigators are discussed in this article.
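The sample-selection step above can be sketched as a simple filter. This is an illustrative reconstruction only: the record structure and field names are assumptions for demonstration, not FDA's actual database schema.

```python
# Hypothetical reconstruction of the sample-selection step: 39 reports,
# minus 4 foreign inspections and 6 abbreviated follow-up inspections,
# leaves 29 eligible reports. Record fields are illustrative assumptions.

reports = (
    [{"site": f"US-{i}", "domestic": True, "abbreviated": False} for i in range(29)]
    + [{"site": f"OUS-{i}", "domestic": False, "abbreviated": False} for i in range(4)]
    + [{"site": f"ABBR-{i}", "domestic": True, "abbreviated": True} for i in range(6)]
)

# Exclude foreign inspections (21 CFR Part 812 is not enforced outside
# the United States) and abbreviated corrective-action inspections.
eligible = [r for r in reports if r["domestic"] and not r["abbreviated"]]

print(len(reports), len(eligible))  # 39 29
```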

Results

Table I. Investigator sites by population. More than one inspection occurred in each of the named cities.

First, we queried the population of each clinical investigator site to see whether there were any major differences. For large metropolitan areas such as New York, Chicago, and Los Angeles (the only areas in the sample with more than 2 million people), as well as for smaller clinical investigator sites, we included all zip codes within the city. (Population was determined using the online U.S. Census Bureau 2000 population query by city and zip code.) We expected to find a high percentage of successful clinical investigators in highly populated cities. The rationale behind this assumption was that major metropolitan areas generally have more state-of-the-art hospitals and research centers, larger and more varied research populations, and sophisticated IRBs. In addition, they often serve as magnets for academics and research. To our surprise, however, we found that 62% of these clinical investigator sites were in cities with a population of less than 500,000, with more than half of those located in cities of less than 100,000 (see Table I).
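Grouping sites into population bands can be sketched as below. The exact band boundaries used in Table I are assumptions inferred from the thresholds mentioned in the text (100,000; 500,000; 2 million), and the example populations are hypothetical.

```python
# Minimal sketch of bucketing a site's city population into the bands
# discussed above. Band boundaries are assumptions inferred from the
# article, not the study's published table definitions.

def population_band(population):
    """Assign a city population to a population band."""
    if population < 100_000:
        return "under 100,000"
    elif population < 500_000:
        return "100,000 to 500,000"
    elif population <= 2_000_000:
        return "500,000 to 2 million"
    return "over 2 million"

# Hypothetical site populations, not the study's actual data.
sites = {"Site A": 45_000, "Site B": 320_000, "Site C": 8_000_000}
bands = {name: population_band(pop) for name, pop in sites.items()}
print(bands["Site A"])  # under 100,000
```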

Table II. One-third of FDA's successful sample did not have prior FDA inspection experience.

Second, we expected that successful clinical investigators might have undergone previous FDA inspections. In fact, two-thirds of the clinical investigators had prior FDA inspectional experience; conversely, this was the first FDA BIMO inspection for approximately one-third of the clinical investigators. These data suggest that prior FDA inspection experience and the possibility of future inspections sensitize clinical investigators to the importance of following good clinical practices. These practices include, but are not limited to, FDA or IRB approval prior to study initiation, proper consenting of all research subjects prior to enrollment in the study, protocol adherence with any deviation properly approved or documented, appropriate adverse event reporting, and adequate device accountability. At the same time, since one-third of our successful sample did not have prior FDA inspection experience, it is important for sponsors to understand that they should not rule out a potential clinical site based solely upon a lack of previous FDA inspection experience (see Table II).

Third, we expected higher compliance rates with clinical investigators who had extensive experience in conducting clinical studies. This assumption turned out to be correct in that 80% of those in this study reported prior experience in conducting FDA-regulated clinical studies. Some device research sponsors consider FDA research experience to be a critical factor when using a risk management approach in the selection of their clinical sites. Based on our data, this presumption may have some validity.

Fourth, we looked at the number of subjects enrolled in each of the study sites. The number of subjects enrolled in these studies ranged from a low of two subjects to a high of 80 subjects, with the average number of study subjects per site being 23.

Table III. The number of subjects enrolled in a study does not necessarily affect the chances of a successful FDA inspection outcome.

Of the sites we evaluated, 28% had in excess of 40 subjects. Although these numbers may seem small to companies that regularly handle drug studies, it is typical for device studies to have this type of study subject distribution. Our data suggest that the number of subjects enrolled in the study did not necessarily affect the chances of a successful FDA inspection outcome (see Table III).

Table IV. Clinical investigator supervision of an experienced study-site staff may be more critical to the outcome of the study than the total number of staff at a site. Percentages rounded to nearest whole number.

Fifth, we looked at the number of research staff at each site. We defined study staff as employees who did not function as a clinical investigator or subinvestigator. These employees included study coordinators and other administrative staff associated with data collection or coordination of the study. The majority of sites, 66%, had only one employee (study coordinator), and 90% had three employees or fewer. In fact, six of the highest-enrolling sites had only one employee (study coordinator) assisting the device clinical investigator. These data suggest that clinical investigator supervision of a well-trained and experienced study-site staff may be more critical to the outcome of the study than the total number of staff at a site (see Table IV).

Sixth, the establishment inspection reports generally reported monitoring in terms of the number of times a sponsor monitored the study prior to completion, or how many times the sponsor monitored the study prior to this FDA inspection. The duration of these studies varied widely: some studies were brief, while others went on for many years. Therefore, we gathered data to estimate the number of times a sponsor monitored a site annually.
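The annualization described above can be sketched as a simple extrapolation. This is an illustrative assumption about the calculation, and the visit counts and durations are hypothetical, not the study's data.

```python
# Illustrative sketch of extrapolating total monitoring visits to a
# per-year rate, assuming each report gives the total number of visits
# and the study duration. All values here are hypothetical.

def visits_per_year(total_visits, duration_days):
    """Extrapolate total monitoring visits to an annual rate."""
    return total_visits * 365.25 / duration_days

# A short study monitored 3 times in about 6 months and a long study
# monitored 8 times over 4 years yield very different annual rates.
print(round(visits_per_year(3, 182), 1))   # 6.0
print(round(visits_per_year(8, 1461), 1))  # 2.0
```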

Table V. The number of monitoring visits per site when extrapolated to represent monitoring visits per year for the 19 sites for which the establishment inspection report included monitoring frequency.

In this sample, one clinical investigator was also the sponsor, so this individual is not included in this data analysis. Six establishment inspection reports discussed monitoring, but did not include specific information regarding how frequently the sponsor monitored the site. Three establishment inspection reports did not indicate whether the site was monitored or not. Hence, 10 establishment inspection reports are not included in this data analysis. The following data analysis reflects the number of monitoring visits per site when extrapolated to represent monitoring visits per year for the 19 sites for which the establishment inspection report included monitoring information (see Table V).

The data suggest that sponsors used some type of risk-based decision making when determining how frequently to monitor a clinical site. For example, the frequency of monitoring may have been based upon the device technology or complexity, the user learning curve, the vulnerability of the subject population (e.g., pediatric), the rate of enrollment, the use of computerization in the clinical trial, or the experience of researchers and their staffs. Whatever criteria are used, monitoring provides an essential feedback loop that is a critical component of a well-controlled and well-managed study.

Other Data

We also identified several other aspects that were key to successful clinical investigator inspections.

Clinical Site Personnel Were Generally Qualified and Properly Trained Prior to Study Initiation. Among the successful sites, characteristics included the following:

  • Both clinical investigator and staff were generally trained for the specific study that was audited, and this training was usually provided by the sponsor.

  • Many sites also had prestudy qualification visits in addition to the training.

  • On-site versus off-site training did not appear to be a significant factor.

Table VI. All sites had some degree of sponsor monitoring. For 25 sites, monitoring was performed by the sponsor, a clinical research organization, or both. Percentages rounded to nearest whole number.

Site Monitoring Is Important. In the majority of cases, the sponsor either monitored the study, or there was combined monitoring by the sponsor and a clinical research organization or outside entity. Frequency of the monitoring is a risk-based decision. All sites had some degree of sponsor monitoring. Three inspection reports did not specify who monitored the site, and one clinical investigator was the sponsor. Of the remaining 25 clinical investigator sites, monitoring was performed by those shown in Table VI.

Proper Supervisory Oversight of the Device Study by the Lead Clinical Investigator Is Important Regardless of the Number of Subinvestigators. More than half of the clinical investigator sites in this study had no subinvestigators; one of these sites had 80 enrollees. At 31% of the sites, one subinvestigator was present; these sites had 21, 26, 41, 44, 46, and 56 enrollees, respectively.

Table VII. Number of subinvestigators for the 26 sites reporting.

In May 2007, FDA issued a draft guidance titled Protecting the Rights, Safety, and Welfare of Study Subjects—Supervisory Responsibilities of Investigators. This guidance clarifies FDA's expectations concerning the investigators' responsibility: to supervise a clinical study in which some study tasks are delegated to employees or colleagues of the investigator or other third parties, and to protect the rights, safety, and welfare of study subjects (see Table VII).

Staff Turnover Rate Is More Important than Staff Size. Nearly 75% of the clinical investigator sites reported no staff turnover during the course of the study, and most of these clinical investigator sites had one or two staff involved in research. The clinical investigator sites with one staff member had study enrollment of 44, 46, 47, 49, 60, and 80 subjects, respectively.

Complete and Accurate Study Documentation Leads to Successful Outcomes. The following good clinical practice elements were common inspectional findings:

  • IRB-approved protocol.

  • IRB-approved informed consent.

  • Signed investigator agreement with sponsor.

  • All informed consents completed, signed, and dated as required by protocol.

  • Case report forms complete and accurate and, when appropriate, could be verified by comparison to source documents.

  • Progress reports to the IRB, sponsor, and monitor at regular intervals and at least annually.

  • All protocol deviations properly reported to sponsor and IRB.

  • All unanticipated adverse events properly reported to sponsor and IRB.

  • Device accountability records complete and accurate.

Conclusion

This article outlines results gathered from FDA clinical investigator inspection reports, some expected and some unexpected. Sponsors in turn may draw on their own product development experiences to select clinical investigators properly and improve the chance of successful clinical studies.

One thing is certain: FDA relies on high-quality clinical data to support safety and effectiveness decisions for high-risk devices; therefore, it is critical for sponsors to recruit informed clinical investigators who understand the research process, possess skilled and knowledgeable research staff, and recognize FDA's reliance on accurate and complete study documentation.

Michael Marcarelli is director of the Division of Bioresearch Monitoring (BIMO) in the Office of Compliance at CDRH. William Vitale is a BIMO field investigator. He can be reached at [email protected]. Sonali Gunawardhana is regulatory counsel for BIMO. She can be reached at [email protected].

Copyright ©2007 Medical Device & Diagnostic Industry
