Kshitij Mohan and Harold E. Sargent
Throughout most of human history, the advance of medical practice has depended largely on accidental discoveries and observational data. Often, treatments or therapies with unmistakably positive effects on patients have been discovered by chance and subsequently adopted into medical practice. In other cases, conjectures or theories have been put forward by the leading authorities of the day and have found their way into medical practice without any sort of controlled experimentation. For instance, the practice of bloodletting persisted well into the 19th century before new understandings of human physiology drove it into disuse.
Until our own century, instances in which controlled experiments led to the discovery of medical benefits have been extremely rare. The experiments that led to the discovery that citrus juice could be used to prevent scurvy in sailors offer a notable exception to the rule. While other fields of human endeavor were benefiting from the scientific method that developed during the Enlightenment of the 18th century, application of that method to the field of medicine lagged far behind.
Nowhere is this more true than in the area of clinical trials---controlled experiments designed to evaluate the safety and efficacy of one or more medical treatments using human beings as the patients or subjects. While the modern clinical trial embodies the principles of the scientific method, development of techniques for conducting clinical trials came long after that method had begun to be applied to medicine in general. In fact, only since World War II has the controlled, randomized clinical trial come into widespread use in the medical field.
The first controlled clinical trials in the United States focused on providing evidence that the products being tested were safe for human use. It was not until 1970 that clinical trials were required to address both safety and efficacy. Today, the range of questions being addressed in clinical trials includes not only product-related issues, but also quality-of-life measurements and, most recently, economic analyses.1
The codification of clinical trial methodologies into laws and regulations has had a dual effect on the advancement of such methodologies. On the one hand, it has made more prevalent the use of clinical trials to evaluate medical products prior to their introduction into the marketplace; on the other, it has sometimes resulted in a ritualistic and inappropriate reliance on clinical trials for the answers to questions that they are not capable of addressing. These effects have become especially apparent in relation to the clinical trials used to examine medical devices or complex therapies that combine devices, drugs, and new medical procedures. Over the past several years, a variety of trends--including the increasing complexity of medical devices, ever-growing requirements for conducting clinical trials, and more-powerful demands for information related to cost-effectiveness and clinical efficacy--have combined to challenge the limits of the clinical trial as an objective method of scientific inquiry.
THE PROTECTION OF HUMAN SUBJECTS
It is ironic that the irrational use of clinical trial methodologies--a rational scientific tool intended for the human good--has made it necessary to establish social restraints in order to protect the human subjects who participate in such trials. Unfortunately, the 20th century has witnessed all too many instances in which the deliberate misuse or ignorant use of clinical trial methods has made such restraints necessary. The brutality displayed by the clinical researchers of the Third Reich in the concentration camps of World War II, and the callousness of those who studied syphilis in unknowing and untreated African American men in Tuskegee, AL, are only two of the better-known examples. Other examples include the alleged forcible use of prisoners or other institutionalized persons as trial subjects, particularly in totalitarian societies, and the careless or callous disregard for human subjects in radiation experiments conducted by the United States over a number of decades.
In reaction to such occurrences, several attempts have been made to define justifiable boundaries between the rights of the individual and the benefits that might accrue to society through the scientific information developed in clinical trials. An early example of such an effort is the Nuremberg Code, a set of criteria established in 1947 for judging the Nazi researchers who had experimented on human beings in concentration camps during World War II. But the Nuremberg Code is by no means the earliest of such works.
The Hippocratic Oath. The preeminent code of ethics for physicians, the Hippocratic Oath has existed since the fourth century BC. At its 1948 meeting in Geneva, the World Medical Association drafted a modern version of the oath, which is often referred to as the Declaration of Geneva. This version was further amended in 1968.
In its modern form, the Hippocratic Oath has two main parts. The first deals with the duties of the physician to his or her teachers and his or her role in transmitting medical knowledge to others. The second part is a summary of general principles of medical ethics as they relate to the treatment of patients. The scope of the oath is thus limited to clinical practice, and it does not directly touch on ethics as they might relate to research. Since clinical research cannot always be separated from the treatment of patients, however, the ethical guidelines present in the oath may also be considered to apply to the physician's treatment of research subjects. Hence, the oath's first dictum to "do no harm" is also a key principle of ethics as they relate to clinical research.
Declaration of Helsinki. In 1964, the World Medical Association drew up a code of ethics that was intended to address the unique circumstances surrounding human experimentation. Known as the Declaration of Helsinki, this document was subsequently revised by the 29th World Medical Assembly in Tokyo in 1975, and by the 35th World Medical Assembly in Venice in 1983.
In its current form, the document consists of three parts. The first outlines 12 basic principles that should guide all clinical research. These include the principles that clinical research should be conducted in conformity with currently accepted scientific principles; that the research objectives should be balanced against the inherent risk to the subject; and that research subjects are entitled to informed consent.
The second part lists six principles for medical research when combined with patient care. These include protection of the patient's right to participate or not participate in a study without impact on the physician-patient relationship; reiteration that the interests of the patient must come first; and the statement that combining medical research with patient care is justified only to the extent that the research offers potential diagnostic or therapeutic value to the patient.
The third part of the declaration outlines four principles for medical research being conducted with healthy human volunteers. These principles reiterate that the interests of the subject always take precedence over the interests of science.
The Belmont Report. The ethical paradigm that most closely defines current practices is contained in an elegant exposition known as the Belmont Report, which was published in 1978 by the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research.2 This commission took as its scope the protection of human subjects in research rather than in the normal practice of medicine or even in the experimental practice of medicine. The purpose of practice is solely the benefit of a patient; the purpose of research is to test a hypothesis and thereby enhance the general public good. To ensure that all subjects involved in such research activities would be protected, the commission intentionally defined its scope to cover any activity that included a research component. In the course of its work, this group commissioned a number of scholarly papers from which emerged three basic ethical principles that underlie many of today's seemingly bureaucratic requirements: respect for persons, beneficence, and justice.
Respect for persons relates to the right of a person to be the ultimate decision maker regarding his or her participation as a subject. To enable the individual to exercise such a right, it is necessary that the individual be informed about all the implications of his or her decision and that it be arrived at freely. The right to an informed and free decision can be compromised if the individual has diminished autonomy, that is, if he or she is not capable of being fully informed, as in the case of mental patients, children, or others with diminished capability to comprehend sufficiently the implications of their decisions. Diminished autonomy also applies to persons under duress or under other circumstances that restrict their ability to arrive at a decision freely. Such is the case with prisoners or subjects of totalitarian states. While complicated situations may sometimes require a judgment call, such judgments should be guided by the principle that the decision to participate in a clinical trial should be both free and informed, and special protection should be exercised on behalf of individuals for whom either of those capacities is diminished.
This principle of respect for persons has led to today's requirements and practices related to informed consent. The study subject must receive sufficient information, and in a manner that he or she can comprehend. Thus, not only does information about the study need to be in the subject's language, but it must be at a level that allows comprehension. Information on all risks and benefits must be adequate to enable a reasonable person to make a reasoned decision. Consent needs to be totally voluntary--without social, economic, or other forms of coercion. For those with inadequate ability to fully comprehend--such as children, the mentally handicapped, the terminally ill, or the comatose--a surrogate, such as a parent, a spouse, or someone authorized or recognized as acting in the subject's best interests, must be the recipient of the information and the grantor of the consent.
The Belmont Report defines the principle of beneficence as not doing harm, and maximizing possible benefits while minimizing possible harm. The report borrows the Hippocratic dictum "do no harm," and interprets this to mean that harm to even a single person cannot be justified--no matter how much good it does for others. Thus, in a clinical trial, there should be the promise of benefit to the subject as well. Preclinical studies, including animal studies where necessary, also help in minimizing any possible risk to the subject. At the same time, the trial should be designed in such a way that its overall benefits to society are maximized.
The principle of beneficence has led to today's requirements for assessing the risks and benefits involved in a study, minimizing the former and maximizing the latter. Thus, preclinical and animal studies are required, and the general tenet applies that if any scientific information can be gathered by a method other than using human subjects, that method should be applied before conducting clinical trials. Brutal or inhumane treatment of subjects or unnecessary risk are not allowed, and appropriate documentation of the risks and benefits, their appropriate peer review by an objective group such as an institutional review board (IRB), and the inclusion of the information in informed consent forms are required.
Another requirement derived from the concept of beneficence is that research should be conducted in a scientifically valid manner, using a protocol that meets the norms of good science, so that the probability of the research being successful is high and therefore the benefit to society is maximized. Sloppily conceived or conducted research does not justify the exposure of subjects to any risks. Also, with an appropriately conducted clinical trial, information regarding risk or benefit can emerge that could make it unethical to continue as originally planned. If there is clear evidence of inadequate safety or effectiveness, the trial would have to be aborted. On the other hand, if there is evidence of high effectiveness, it may be unethical to restrict the availability of the therapy only to research subjects.
The final principle discussed in the Belmont Report is that of justice. Justice demands that research be conducted fairly, and that both its burdens and benefits be shared evenly. Thus, exposure to the risks involved in clinical research should not be borne by any particular group, such as prisoners, or by groups that are economically or socially disadvantaged. If the benefits of the research flow to society in general, a fair cross section of society should also bear the risks.
In current practice, the concept of justice has led to the creation of specialized requirements related to the selection of research subjects. Generally speaking, special care and safeguards must be used for any study that seeks to include prisoners, minorities, or the institutionalized, or for any study that includes children in its early stages. On the other hand, justice may also require that some groups, such as women, not be excluded. This is to ensure that clinical research and the subsequent advancement of therapies are applicable to the greatest possible number of people.
Other Documents. The use of human subjects in clinical research is the subject of many other government documents, including a set of guidelines established by the U.S. Department of Health, Education, and Welfare in 1971, and made into regulations in 1974. Naturally, FDA's regulations and guidelines related to the clinical investigation of new drugs and devices are numerous. Dealing with their requirements has launched a whole profession of bioethicists and an entire industry of contract research organizations (CROs) whose purpose is to design and implement clinical trials in a socially responsible manner. Recent findings on bioethical lapses in the human radiation research funded by the U.S. government since World War II indicate that the last word on the protection of human subjects in research will continue to be rewritten.3