Human Factors Roundtable Part I: The Regulatory Imperative

Medical Device & Diagnostic Industry Magazine
MDDI Article Index
Originally Published January 2001


Human factors can be defined as knowledge regarding the characteristics and capabilities of human beings that is applicable to the design of systems and devices of all types. In the medical industry, there is increasing awareness of the importance of good human factors practices in the design of safe, effective, and commercially successful products—especially in the wake of FDA's adoption of the quality system regulation. In the special roundtable discussion that follows, MD&DI has brought together a varied group of human factors specialists—regulators, consultants, device industry experts, clinicians—to explore how companies can promote better product design and excel in the new product development environment.

The roundtable was organized with the assistance of MD&DI contributing editor Michael E. Wiklund, vice president and director of the American Institutes for Research (Concord, MA). Like the other participants, Wiklund has played a prominent role as a member of the Human Engineering Committee of the Association for the Advancement of Medical Instrumentation (AAMI), which has prepared a standard for medical device design that is expected to be approved in the summer of 2001. Joining Wiklund in the roundtable were Peter B. Carstensen, a systems engineer who is the human factors team leader at FDA's Center for Devices and Radiological Health (CDRH); Rodney A. Hasler, senior technical field response manager at Alaris Medical Systems (San Diego); Dick Sawyer, a human factors scientist at CDRH; and Matthew B. Weinger, MD, professor of anesthesiology at the University of California, San Diego and staff physician at the San Diego VA Medical Center, who is also co-chair of the AAMI committee.

This first part of the roundtable focuses on human factors and FDA regulations. The second part, on standards development and human factors implementation issues, will appear in MD&DI's February 2001 issue.

MD&DI: The first question is directed to our participants from FDA. Current GMPs make good human factors practice a regulatory imperative. Can you give a short history lesson on how we got to this stage?


Carstensen: I think it really had its beginnings back in 1974–75. I joined the agency in 1974, as the agency was anticipating the Medical Device Amendments of 1976, and got involved right out of the gate with an ANSI committee—designated Z79—that had been working on a standard for anesthesia gas machines for a number of years. At that point the standard had specified about 80% of the requirements and was beginning to deal with what were essentially human factors issues, and I introduced the committee to MIL-STD-1472, the military's human engineering design standard, which later served as the model for AAMI's human factors guidance. That gave rise to the organizing of the AAMI human factors committee. I managed to convince the future chairman of the Z79 committee to approach AAMI and get a human factors committee set up to write general standards and guidance for human factors in medical equipment.

And then around 1984 we had a major anesthesia incident, and the subsequent congressional oversight hearings revealed the significant extent to which human error contributed to such incidents. Jeff Cooper up at Harvard had done a 1984 study on critical anesthesia incidents in which he talked about as many as 20,000 avoidable anesthesia deaths every year, with 90% or more of those caused by or related to human error. That got FDA's attention, and we organized a human factors group—the agency's first identifiable human factors group—in 1993. The group comprised Dick Sawyer, myself, and a couple of other people, and the whole human factors program at FDA really grew out of that.

So it was in the wake of those congressional hearings that FDA first talked about adding design control to the good manufacturing practices regulation. A further impetus was the Lucian Leape study of human error in hospitals in New York state, which I believe came out in 1991. Leape later published an article in JAMA, called "Error in Medicine," in which he extrapolated the New York data across the country and talked about anywhere from 44,000 to 98,000 avoidable deaths every year in the United States. By the way, many people actually think those are very conservative numbers, as staggering as they are.

In 1995, we held an AAMI/FDA conference on human factors in which we really laid out our strategy and our new human factors program. Two years later, the National Patient Safety Foundation was created. And then this dynamite report came out from the National Academy of Sciences—the Institute of Medicine (IOM) report, "To Err Is Human"—which was really based on the earlier Leape study. So these were the crucial events driving our program.


Sawyer: In conjunction with what Pete's talking about, you may remember the recall study FDA carried out in the late 1980s—I believe it was completed in 1990—indicating that about 44% of manufacturer recalls were due to design problems. A case-by-case examination of those recalls indicated the prevalence of designed-in errors, or errors induced by bad design. So this really gave us the leverage to introduce design issues into the Safe Medical Devices Act of 1990. I think the center had to fight very hard to get the word design into that document—which then served as a basis for getting design into our GMP regulation as part of the design controls.

MD&DI: Did we really need new design controls in the GMP regulations, as opposed to allowing industry or the marketplace to provide the impetus for better human factors?


Wiklund: I think the gist of this question goes to political views regarding whether regulation is the way to effect change in an industry as opposed to letting change be driven by the marketplace.

Carstensen: I think you could make a case that it could be marketplace driven to some extent. There certainly are companies that do human factors for marketing reasons—perhaps in addition to liability concerns. Clearly, there are companies I know that invest a lot of resources to get a marketing advantage. But yes, I think that in our judgment a regulation was needed, if for no other reason than to get the attention of the industry and give companies the good news that it is in their self-interest to have a strong human factors program.

Sawyer: In most other critical industrial arenas—the military, air-traffic control, transportation, and so forth—there has been a need for some regulation to get things off the ground so that companies really start paying attention to human factors issues. There are clearly precedents in other sectors for regulation.

Carstensen: I would add one other thing. I think we still see plenty of evidence that companies aren't doing as good a job as they should. But we are convinced that it's more a result of ignorance than of any effort to evade their responsibilities. Getting the attention of companies through the regulation enables us to provide the education and guidance that can help them do what really is in their self-interest.

MD&DI: For readers unfamiliar with the discipline, would someone define human factors? How does the application of good human factors practice make medical devices better?

Wiklund: Today, more and more people are probably familiar with the term human factors because of the impact that good human factors practice is having in making things like consumer software applications or electronic devices more usable. Many companies in the commercial sector are promoting good human factors as equivalent to good design or good-quality consumer experiences. As far as defining the discipline, I consider human factors to be the application of what we know about human characteristics to the design of products and systems, including, of course, medical devices. Human factors considers people's mental and physical capabilities as well as their perceived needs or preferences, and tries to accommodate these in the development of good designs that will be safe, usable, efficient, and satisfying. Obviously, when you're talking about medical devices—which serve a life-critical function—there is an inherent justification for a very strong focus on human factors to help achieve important design objectives, especially safety.

Given the proper attention to human factors, one would expect that a medical device could be improved in myriad ways. For example, it would be more ergonomic, which means that it's better suited to the physical interactions of those exposed to it. If it's something you pick up, the handle will be properly shaped so it's comfortable, so that you don't accidentally drop it. When you design a display according to good human factors principles, the display is readable from the intended viewing distance and the information is organized in a fashion that is complementary to the task at hand. Controls will be laid out, shaped, and labeled in a manner that is as intuitive as possible, so that the threshold for learning how to use the device is lower and long-term usability is assured.


Hasler: I would agree with the definition we just heard. I also think it reflects the dichotomy of human factors. There are two distinctive components to human factors. The first is ergonomics, which applies human physical capabilities to the device. The second is the cognitive component, which applies the human thought process to the device design.

Weinger: The second part of the question asked how using good human factors processes makes devices better. Mike described the outcomes and how they can be improved through good design, but I think another key element is that a good process involves users—via testing and other techniques—throughout both the initial and iterative design stages. One could very well assemble a good human factors team in terms of knowledge or data and put them together with a bunch of talented engineers and they could design a device that from a theoretical standpoint should have good usability, but until you actually get users in there to use it, you don't know that your solutions are correct. I think that's a key element that needs to be part of that description.

Wiklund: That's a good point. Some of the work that the AAMI committee on human factors is doing hinges on trying to get companies to adopt a human factors process that includes early and continual involvement of the end-users, whether they be physicians, nurses, or patients using medical devices in their own homes. The objective is to get users involved in the process of coming up with good designs that meet their needs and preferences.


Weinger: You can get users involved, but if you simply do focus groups you may end up with a less-than-optimal outcome. And so another element of a successful human factors design process is applying not only the knowledge, but also the tools that will actually describe or verify usability and efficacy in all these critical elements—in other words, usability testing.

MD&DI: Do the people who use medical devices on a daily basis think that there's a usability problem? Do they recognize good human factors design from bad?

Weinger: At present, there is much greater recognition of human factors design than there was 5 or 10 years ago by clinicians across the board. However, those clinicians that have been interacting with medical devices in high-stress, rapid-tempo types of environments like the operating room or the emergency room have recognized problems for quite a few years, and in fact have played a key role in moving both the standards-making and regulatory processes forward. More generally, when clinicians use a device, they may not know about human factors, but most of us know how to cuss at a product when it doesn't make our job easier but rather makes it more difficult, or makes us more error-prone, or prevents us from doing what we want to do, or slows us down. As soon as you tell someone what human factors and usability are they say, "Oh that's the problem with this or that device!" So they may not know the word, but they certainly know what the problems are.

MD&DI: How would you gauge the magnitude of the problem? You spoke about cussing at devices, and those of us not in the clinical environment every day don't have a real good sense of whether this is an unusual or a very frequent event.

Weinger: The answer to that is rather interesting, in that towards the latter part of the 1980s things were actually somewhat better, and then in the last 10 years they've gotten worse again—and the reason for that is computers. Basically, mechanical devices like the old anesthesia machines had gone through 40 years of iterative design to make them more usable. It's only been in the last 10 or 15 years that we've progressively introduced microprocessor-based devices throughout healthcare, and the human-computer interface has now become a real problem—and not just in medicine. I'm frequently aggravated with my desktop computer when it crashes suddenly and I lose my work. And from a practical standpoint, there's actually been more time to develop the usability of consumer devices. In the operating room, you could imagine that if you're trying to take care of a patient and your monitor suddenly freezes up, that would be a very bad thing. In fact, I've personally seen it happen.

MD&DI: Is there a particular class of devices that are hard to use and vulnerable to user error?

Weinger: Although all types of devices pose a risk, the more complex the device, and the more microprocessor-based technology it includes, the greater the risk. The criticality of the device is also paramount. A device whose failure means that a patient could die—for example, a cardiopulmonary bypass machine—obviously carries tremendous risk. Generally, devices that incorporate both control and display pose a greater risk than ones that are simply used for displays, but it also depends on the condition of the patient. For example, intravenous infusion pumps have received a lot of negative press recently. I think that's partly because they're so widely used and because they have both control and display components and are employed in high-acuity situations. They are probably the one device that comes most readily to mind, but I don't think that pumps are an isolated phenomenon, by any means.

MD&DI: In terms of FDA's overarching view of all the medical device reports and so forth, which categories of devices does the agency point to as more generally problematic?

Sawyer: There's such a huge range of devices that it's hard to characterize. The problems that we commonly see are with devices such as infusion pumps, ventilators, and other intensively interactive kinds of devices. The more a user manipulates or responds to a device in addition to merely reading it, the more there is to go wrong and the more obvious any error. Conversely, if somebody misreads a monitor that is not interactive, FDA will probably never know about it, since it's unlikely to be reported.

Weinger: It's a more widespread problem than what FDA sees, because things that get reported to FDA are generally safety related. But, as Mike pointed out earlier, human factors encompasses more than just safety—it has to do with efficacy and efficiency and satisfaction. When you're sitting in your office working on your computer and the thing crashes, there's no safety issue involved, but your efficiency, efficacy, and satisfaction are all reduced. And many times in the medical environment, devices make our jobs more difficult rather than easier. This isn't going to get reported to FDA, but it probably adds to overall healthcare costs, both directly and indirectly.

Wiklund: Let me ask you a follow-up question, Matt. Let's assume that clinicians recognize that they are well served by devices in which there's a substantial investment in good human factors. Do manufacturers expect that, if they invest heavily in human factors, they'll actually see a tangible benefit in terms of the popularity of their device in the marketplace—how well it sells relative to a device that did not benefit from a comparable attention to human factors? In other words, do you think clinicians have a strong enough voice in getting their institutions to buy products that reflect good human factors design?

Weinger: A year or two ago, I would have been more hesitant to say yes. As everyone knows, the economic pressures in the healthcare marketplace are extremely powerful. A very well-designed device that has good usability but is more expensive than a competitive product might be more difficult to purchase, even if the clinicians want it. But the IOM report and the increased emphasis on safety have begun, I think, to turn the tide in favor of devices supported by the kind of clinician input that says, "This device is easier to use and, we believe, is going to be safer." Such consensus carries much more weight now than it might have even two years ago.

Hasler: I absolutely agree. As Matt mentioned, there are mitigating problems and institution-specific issues—which can include group purchasing contracts and things like that—but, given a level playing field, a product that is well designed from a human factors standpoint has a powerful sales advantage.

MD&DI: Matt, how would you respond to the statement that the ultimate responsibility for the proper use of medical devices—for avoiding user error—rests with the caregivers?

Weinger: The succinct response would be "yes, but," so let's talk about the "but." If a patient is injured during device use, the manufacturer is likely to be as liable as the clinician from a medical, legal, and regulatory standpoint—particularly if the clinician points out that the device contributed to the adverse event. Because there are many other impediments to safe practice besides the device, the clinician doesn't always have the opportunity or time to deal with a device that is poorly designed. The goal for both device manufacturers and clinicians is patient safety and good outcomes, and they should work together to those ends. It's not productive either to point a finger at manufacturers and say it's entirely their responsibility to produce the best possible device, or to target clinicians and insist that they bear the sole responsibility to make sure the device is used correctly. There needs to be a collaboration.

MD&DI: Does FDA find that medical device manufacturers are aware of the new regulations? Are manufacturers responding to FDA's satisfaction?

Carstensen: Well, they're probably not aware to the extent that we'd like. I think we've come to that conclusion, but it's difficult to measure the industry as a whole. We do see some encouraging signs showing that many companies are putting more effort into human factors. But we also see key indications that there are a lot of companies out there that still don't understand what's needed.

MD&DI: What are those encouraging signs?

Carstensen: We get to look at a limited number of the premarket applications, and I'm really basing my comment on that: what we've experienced in terms of looking at the device descriptions that come in as part of the premarket approval program.

Sawyer: People like Mike and other consultants or designers also have told us that they're seeing more and more business, getting more and more opportunities. In the year after the design control requirements went into effect, FDA did a sampling study by field investigators that indicated that somewhat more than half of the companies out there were doing human factors. How well, we don't know, but there were early indications that companies were actively looking at human factors issues. Again, how completely and how well is going to vary tremendously, no question about that.

MD&DI: Awareness among manufacturers is important. Could you point to a few of the things FDA has done to this point to maximize awareness? For companies that are just now discovering there's a human factors imperative, where can they turn to get up to speed quickly?

Sawyer: FDA is doing a number of things. Of course, we put out guidance documents, which were disseminated some years ago, on design controls. Another guidance, on device use safety, just came out recently. We have teleconferences on human factors; there's one coming up in the near future. We're putting out a video for field investigators that tries to get them to understand the linkage between design and errors, to have a feel for when human factors input is necessary in the design process. We do presentations at industry trade events such as MD&M or the ASQC meetings. More and more, we're actually getting involved in giving talks to practitioners or those in industry—to doctors, nurses, biomedical engineers. And of course we monitor the results of human factors practices in regulatory efforts such as premarket review.

Carstensen: For promoting human factors, the premarket review activities contribute in a very limited way. We reach many more people through conferences and articles or through the FDA Web site. We have a pretty robust human factors section up on our site. It includes a great deal of information for manufacturers, and we find that most manufacturers are well aware of the FDA site and have taken time to explore it. And you could also say that the AAMI human factors standard itself is an educational tool that FDA plans to promote. Once that standard is published under our standard-recognition program, it will be granted official FDA recognition, which I think will make manufacturers more inclined to pay attention to it. Most of what we've done that has been effective has really been educational in nature.

MD&DI: Moving beyond education, what is FDA's stance on enforcement and what are the consequences of noncompliance with the regulations?

Sawyer: That's a difficult one. Companies are obviously at risk if they don't comply with the design control requirements. FDA can act with regard to premarket approval if a company hasn't followed the design practices, produces an overtly bad design, and is unwilling to respond. What we really try to accomplish is to educate not only people in industry, but those at FDA, at CDRH—through presentations, device evaluations, and similar means. In terms of enforcement, however, it's a slow, progressive effort. We do find that most companies, when they know there's a real safety problem with a device, will try to do something. There are always exceptions, but most companies are responsive.

Carstensen: The odds of a company getting cited for failing to comply are difficult to quantify. You have to recognize that the field is understaffed and the premarket reviewers are not all up to speed on human factors issues. So there's probably not a high risk for a noncompliant company of being discovered and getting nailed, but that's going to change over time. As we educate more and more of the reviewers and get the field more up to speed, I think companies that ignore human factors will be putting themselves at increased risk.

Weinger: What is FDA's mechanism for responding to a situation in which a human factors or usability problem with a device manifests itself in the marketplace, through comments by users or in the literature?

Carstensen: It depends on the severity of the problem and on how much information is available.

Sawyer: We do get involved, and it's a very difficult area. First of all, most devices that get in trouble, so to speak, were designed prior to design controls. So very often it's hard in an inspection, for example, to follow up on a given postmarket problem—it's difficult to find a procedural violation of, say, design controls. Although we may get a lot of reports on a device, there's tremendous underreporting: we may hear that there's been one death, when in fact there may have been 10. We don't really know how much underreporting there is. In addition, the depth of the reports we receive is highly variable. Often, a report doesn't really isolate the problem for us, doesn't tell us precisely what the design problem is or specify the linkage of that problem to the error and the linkage of the error to the injury or death.

Nonetheless, we do pursue postmarket problems when there are injuries, or potential injuries. Often, we'll get together with manufacturers; if the manufacturer recognizes that there's a safety problem, it's likely that they will try to do something about it. I don't know if "gentle persuasion" is the right term to use, but especially with older devices for which design controls were not involved in the original design or modifications, it's kind of an iterative process trying to persuade the company to correct a problem.

Carstensen: Postmarket enforcement is probably the least effective way for FDA to encourage the industry to address human factors. Once you get into a postmarket action, the stakes are so high for the company and the difficulty so significant for FDA that huge amounts of resources are consumed on both sides just dealing with the situation. It's really an object lesson for everybody, I think, that one needs to prevent these kinds of incidents that are so devastating to a company and so resource-intensive to deal with for FDA. It's just not worth it, so we need to be putting the right stuff in at the front end, getting the job done correctly the first time. Companies need to have good design controls and validation before they start marketing a device.


Copyright ©2001 Medical Device & Diagnostic Industry
