May 1, 1997

Designing Medical Devices to Minimize Human Error

Medical Device & Diagnostic Industry Magazine

An MD&DI May 1997 Column

HUMAN FACTORS

Medical device designers should be familiar with the causes of end-user error and use safety analysis techniques to reduce its likelihood.

Error is universal: everyone commits errors every day. Most are trivial, but some can combine with other events to cause an accident. In the medical domain, errors can lead to serious injuries and even death.1

Research and common experience both demonstrate that human error can be blamed for most accidents. In his book, Set Phasers on Stun, Steve Casey recounts the prominent role of human error in many well-known accidents, including the fatal radiation-overdose accidents involving the Therac-25 radiation therapy device.2 Human error is estimated to cause or contribute to up to 90% of accidents both generally and in medical devices.3,4

Even though there are unique products and usage circumstances in the medical industry, the types of errors that lead to accidents are the same as those that occur in other industries.5 One human error expert summed it up by saying, "There are few or no medical errors; there are many errors that occur in medical settings."6 Medical device manufacturers, therefore, should study what is known about human error and use such information to reduce its occurrence in the use of medical devices.

DEFINING HUMAN ERROR

One of the simplest definitions of human error is that it is any action or omission that causes results that users neither foresee nor intend. Most errors have benign consequences, but they sometimes spread through a system or combine with other circumstances to cause accidents. Both errors and accidents are unintended.

Some human error is random, occurring because of the natural variation in human performance. However, the probability of its occurrence can be reduced through design, personnel selection, training, procedures, and organizational policies. There are typically two opposing views on how human error relates to accidents. On the one hand, it is argued that the product user, because of his or her error, contributed to the event that caused loss or injury. On the other hand, it is argued that either defective design or other circumstances made the error unavoidable (or, at least, very likely and foreseeable).

HUMAN ERROR

There are three basic tenets regarding human error. First, everyone commits errors. There is no such thing as error-free human performance over a long period of time. Second, human error is generally the result of circumstances that are beyond the conscious control of those committing the errors. Third, products or systems that depend on perfect human performance are fatally flawed.

Errors are usually distinguishable from careless behavior. An example of carelessness is a hospital laboratory worker's failure to wear protective gloves or goggles while preparing blood samples for testing. An error occurs when that same worker accidentally drops the goggles.

Slips and Mistakes. Human factors researchers distinguish between errors of omission and commission. An error of omission is leaving some required action out of a sequence. An error of commission is doing something that is incorrect. While these terms might be useful for categorizing specific actions, they are not helpful in determining the mechanisms involved in error sequences. A more general way of talking about errors is to distinguish between slips and mistakes (Figure 1).

Figure 1. The difference between a "slip" and a "mistake."

A person slips when he or she tries to perform the correct action but somehow does it incorrectly. A simple example of a slip is a doctor, nurse, or patient entering an incorrect dose into an infusion pump despite knowing the correct dosage.

A mistake is an error that occurs when a person embarks on the wrong action. That action might be completed perfectly, but it is not the action that should have been taken. An example of a mistake is when a physician prescribes a medication to which the patient is allergic. Regardless of how well the user executes every step after this point, the patient will receive incorrect, and potentially fatal, treatment.

The distinction between slips and mistakes is important because the two types of errors are mediated by different human processes. The cause of mistakes lies in the decision-making process, while that of slips lies in performance.

Violations. Violations are intentional actions that are not strictly proper or legal, such as casual speeding. These are not errors, because they are intentional. However, people generally intend only to commit the violation, not to suffer its potential consequences. Violations commonly occur in the workplace. There may be formal rules that describe job functions, but workers often take shortcuts either to raise productivity or to complete a task not adequately accounted for in the work rules. Typically, violations are tolerated, and sometimes even encouraged, by management.

Active and Latent Errors. Another useful distinction is between active and latent errors. Active errors have immediate consequences and typically cause some sort of major catastrophe. Examples include the mix-up of two sets of results from a diagnostic test and the administration of an incorrect and fatal medication. In most accidents, an active error can be identified as one of the immediate causes of loss or injury.

Latent errors set the stage for later accidents. They are often separated by time and space from the loss and injuries they cause and are more difficult to identify and evaluate than active errors. Typical latent errors involve design, organizational decisions, policies and procedures, environmental elements, and other factors that affect human performance in subtle and pervasive ways. A common latent error in medicine involves misleading or confusing labeling, which ultimately causes someone to commit an error. One researcher called latent errors "resident pathogens," because they lie dormant until the conditions are right for them to cause problems.

In law, there is a concept known as the "learned intermediary," which distinguishes between errors committed by people who have no training in a particular domain and those committed by people who have extensive training and experience.7 It holds trained and experienced individuals to a higher standard of knowledge and performance than those without such training and experience. For example, a nurse or physician might be held legally liable for setting incorrect limits on a respiration or heart rate monitor, whereas a patient would not likely be blamed for incorrectly reading a glucose-monitoring device.

Human factors experts design products differently for naïve users than for highly trained ones. A designer would most likely engineer a different user interface for a respiratory therapist than for a volunteer hospital worker. The terminology used in a medical interface designed for physicians differs from that used in an interface for patients. But this concept doesn't apply to human error. Research studies and empirical field reports demonstrate that poor design and other factors can cause even the most knowledgeable and conscientious users to err.

In fact, during a study of anesthesiology errors, 24% of the anesthesiologists surveyed admitted to committing an error with fatal results.8 These physicians are obviously highly trained, conscientious, and motivated. But poor design, training, procedures, environmental conditions, and common human frailties, such as fatigue, can cause well-trained and well-intentioned people to err. Often the design of a product, label, or other user interface lays a trap for product users, whether these users are learned or not.

Training. Human error is difficult to control, despite all the dollars and effort spent to reduce it. For example, in many industries, even medicine, automation has been implemented to reduce errors. Nevertheless, researchers have discovered that automation eliminates some errors while magnifying the severity of others.9 Given such evidence, it is difficult to argue that automation or any other technique has sufficiently reduced the rate of injury-causing clinical errors.

The most common approach to eliminating human error is to determine who committed a particular error and then train that person (and his or her coworkers) in the correct behavior. This strategy just fixes the blame--not the problem. Unfortunately, training alone is seldom effective in reducing errors.

Properly designed and delivered training is very effective in producing certain types of knowledge, skills, and behavior. The reason it is ineffective in reducing errors, however, is that training affects intentional, or planned, behavior. Errors and accidents are, by definition, unintentional. People don't set out to err. Training cannot alter an intent that never existed. But the use of training shouldn't be abandoned. Procedures, proper tool use, resource management techniques, and other error-reducing strategies should still be taught.

REDUCING ERROR THROUGH DESIGN

Designers can recognize the potential for human error and drastically reduce its likelihood. FDA has taken a positive step in this direction by developing a human factors design document for medical devices.10 The Association for the Advancement of Medical Instrumentation (AAMI) has published a similar document.11 In addition to these, many general human factors design guides are available.

But despite the availability of these guidelines, there is no sure way to eliminate errors through design. Errors tend to have more than one cause. Most medical applications are fairly complex, and therefore a number of factors contribute to human errors and subsequent accidents. Because errors are the result of many interrelated and interacting factors, an effective error-reduction strategy also requires a number of elements.

One easy way to evaluate the potential for error in human-machine systems is to use the PEAR model, an acronym for the four major components of such systems: people, environment, actions, and resources. To reduce or eliminate human errors, it is necessary to consider the people who will use the device, the environment in which it will be used, the actions (or tasks) those people will perform with the device, and the other resources that will be available in the job environment.

People. Errors are rarely committed because people are stupid. Generally, people are very clever and goal oriented. In human factors studies, researchers focus on the people who will use the product being designed. To make reasonable and technically sound judgments about error potential, researchers must know who will be using the product. They need to know, at least approximately, the distribution of the users' age, gender, cultural background, experience, and training. For example, the labeling that is used for a target population of anesthesiologists will probably not be appropriate for novice in-home users. A product used by young adults might not present the same risk as the identical product used only by people over 65 years of age.

Environment. Once researchers know the end-users, they must evaluate the environment in which the product will be used. The term environment refers to the physical setting as well as to procedural, staffing, organizational, psychological, and temporal elements. Typical physical elements include temperature, humidity, lighting, indoor or outdoor locations, noise, and work-surface height. Other environmental elements include individual work or teamwork, time of day, use of written procedures, and time stress.

Actions. In addition to determining the end-users and environment, researchers need to know what users will be doing with the product. Several human factors methods, such as task analysis, can be used to identify task elements.12 Researchers want to find out how people will really use the product--something that is often very different from how the researchers think people will use it or how people ought to use it. People tend to find novel ways to use a product--some of which might introduce risks not obvious to designers or marketers.

Resources. In many instances, errors are controlled before they lead to accidents because of other resources available to the user. Examples are written procedures, instruction manuals, coworkers, telephone help lines, tools, and other job aids. For instance, potential errors associated with in-home medical devices can be reduced by giving users easy access to telephone or Internet advice.
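
Pulling these four components together, a concrete checklist can help. The following is a minimal sketch, in Python, of what a PEAR-style review of a hypothetical in-home infusion pump might record; the specific entries are illustrative assumptions, not findings from an actual study.

```python
# A minimal sketch of a PEAR-style review checklist for a hypothetical
# in-home infusion pump. The field names map to the PEAR components; the
# example entries are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class PearReview:
    people: list = field(default_factory=list)       # who will use the device
    environment: list = field(default_factory=list)  # where it will be used
    actions: list = field(default_factory=list)      # what users will do with it
    resources: list = field(default_factory=list)    # aids available to users

review = PearReview(
    people=["elderly patients with no clinical training", "visiting nurses"],
    environment=["home use, dim lighting", "frequent interruptions"],
    actions=["program dose", "replace reservoir", "respond to occlusion alarm"],
    resources=["quick-reference card", "24-hour telephone help line"],
)

# Print the review so open questions in any component stand out.
for component, items in vars(review).items():
    print(f"{component.upper()}:")
    for item in items:
        print(f"  - {item}")
```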

SAFETY ANALYSIS GOALS

Safety analysis is poorly understood even within certain technical communities. Despite the fancy names given to various techniques, safety analysis is really just an exercise in what-if planning. The primary goal of safety analysis is to identify the hazards inherent in using a product.13 Researchers also want to quantify (if possible) the risk of injury or property damage associated with each hazard. Finally, they want to eliminate or reduce that risk. Designers might be able to do that by eliminating the hazard itself. For example, the use of pulse oximeters eliminates the hazards associated with needle sticks required for certain blood tests.

If a hazard can't simply be eliminated, the next choice is to isolate it from users. A common example of hazard isolation is the use of sharps guards on glucose test devices. If the hazard can't be eliminated or isolated, then the last choice is to warn users. Warning is the least technically desirable option for dealing with product hazards. For this reason, many technical people consider certain warnings to be Band-Aids used by manufacturers who are too lazy to fix the underlying cause of the hazard.
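
As a rough illustration of how hazards might be ranked and matched to the eliminate-isolate-warn hierarchy, the sketch below scores a few hypothetical hazards with a simple severity-times-likelihood product, one common way to quantify risk. The hazards, scales, and scores are assumptions chosen for illustration only.

```python
# Illustrative only: rank hypothetical hazards by severity x likelihood and
# record the preferred control in the eliminate > isolate > warn order.
# The 1-5 scales and the hazard entries are assumptions, not from the article.
hazards = [
    {"hazard": "needle stick during blood draw", "severity": 3, "likelihood": 4,
     "control": "eliminate (use a noninvasive sensor where possible)"},
    {"hazard": "exposed lancet on glucose test device", "severity": 2, "likelihood": 3,
     "control": "isolate (sharps guard)"},
    {"hazard": "drug interaction", "severity": 4, "likelihood": 2,
     "control": "warn (interaction warning in labeling)"},
]

for h in sorted(hazards, key=lambda h: h["severity"] * h["likelihood"], reverse=True):
    risk = h["severity"] * h["likelihood"]
    print(f"risk={risk:2d}  {h['hazard']}: {h['control']}")
```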

Nevertheless, warnings are often chosen in order to deal with hazards because other methods are inordinately expensive. After all, cost is a factor in any business decision. Changing a design to fix a hazard or eliminate an error source is most costly after the product has been built. There is a rule of thumb, often called the "rule of 10," used by developers to show how costs can climb as development progresses. This rule, attributed to Genichi Taguchi, the quality engineering guru, states that if it costs $1 to fix a problem during design, it will cost $10 to fix the same problem during production development, and $100 to fix the problem after the product is on the market.
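
Worked out numerically, the rule of 10 is simply a geometric progression in cost, as the short sketch below shows; the $1 base cost is the article's own example figure.

```python
# The "rule of 10" as a geometric progression: each later phase multiplies
# the cost of the same fix by 10. Phase names follow the article.
base_cost = 1.0  # dollars to fix the problem during design
phases = ["design", "production development", "on the market"]

for i, phase in enumerate(phases):
    print(f"Fix during {phase}: ${base_cost * 10**i:,.0f}")
# -> $1, $10, $100
```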

SAFETY ANALYSIS TECHNIQUES

A number of techniques can be used to perform safety analysis. All the methods described here can be performed in a relatively short time--usually anywhere from a few hours to a week. The elements that determine the analysis time are the number of components in the product and the number of end-user tasks or behaviors. Documenting the analysis results will require additional time.

Failure Mode and Effects Analysis (FMEA). In this technique, researchers diagram the product to show the logical and functional connections among components, then postulate how each component might fail and what effects such a failure might cause. The user is considered to be one of the components of the system, and particular types of human error are treated as failure modes. Once researchers have examined each component in isolation, they examine combinations of components. This is a bottom-up method because researchers begin with individual components and then logically determine the effects of their failure.

Figure 2. Example of a failure mode and effects analysis (FMEA) diagram for the metering apparatus on an infusion pump.

An example of FMEA, shown in Figure 2, is the examination of the parts of a drug-infusion pump, such as its drug-metering apparatus. If it is presumed that the metering device can fail in the fully open position, then researchers can determine what effects this might have on the user.
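
For readers who want to see the bookkeeping, here is a minimal sketch of an FMEA-style worksheet for the metering-apparatus example. The failure modes, effects, and 1-5 ratings are illustrative assumptions, and the risk priority number used to rank the rows is a common FMEA convention rather than part of the example above.

```python
# A minimal FMEA-style worksheet for the infusion pump's metering apparatus.
# Rows: component, failure mode, effect, severity, occurrence, detectability
# (all entries and ratings are illustrative assumptions). Rows are ranked by
# the conventional risk priority number, RPN = severity x occurrence x detectability.
rows = [
    ("metering valve",  "fails fully open",   "uncontrolled free flow, overdose", 5, 2, 3),
    ("metering valve",  "fails fully closed", "therapy interrupted, underdose",   4, 2, 2),
    ("user (operator)", "enters dose with misplaced decimal", "tenfold overdose", 5, 3, 3),
]

for component, mode, effect, sev, occ, det in sorted(
        rows, key=lambda r: r[3] * r[4] * r[5], reverse=True):
    rpn = sev * occ * det
    print(f"RPN={rpn:3d}  {component}: {mode} -> {effect}")
```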

Fault-Tree Analysis. Fault tree is the generic name for a number of diagrammatic analysis methods. Strictly speaking, FMEA is a fault-tree technique. Most fault-tree methods, however, are top-down rather than bottom-up analyses. That is, researchers begin by postulating a general undesired result and then try to figure out what could cause it. Returning to the example of an infusion pump, one result to consider is the situation in which a patient receives more than the maximum dosage limit. Once the result is specified, all the components and combinations of components that could cause it are examined.
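
A tiny top-down sketch of that fault tree might look like the following. The basic events, their probabilities, and the gate structure are assumptions chosen for illustration, and probabilities are combined with the usual independence approximation.

```python
# Top event: "patient receives more than the maximum dose."
# Basic events and probabilities below are illustrative assumptions;
# OR gates use 1 - prod(1 - p), AND gates use prod(p), assuming independence.
from math import prod

def gate_or(*p):   # any one cause is enough
    return 1 - prod(1 - x for x in p)

def gate_and(*p):  # all causes must occur together
    return prod(p)

p_valve_fails_open  = 1e-4   # metering valve sticks fully open
p_dose_entry_error  = 1e-3   # operator enters an excessive dose
p_limit_check_fails = 1e-2   # software dose-limit check misses the entry

p_overdose = gate_or(
    p_valve_fails_open,
    gate_and(p_dose_entry_error, p_limit_check_fails),
)
print(f"P(overdose) ~ {p_overdose:.2e}")
```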

Barrier Analysis. Adapted from certain accident investigation methods, barrier analysis is based on the fact that a product has (or causes) various types of energy that can damage property and cause injuries. Examples of energy are heat, mechanical impact, and pharmaceutical reactions. This method attempts to identify product-related energies and the barriers that prevent the energies from reaching property or people. This method is a bit more abstract than the previous ones, but it can often result in identifying serious hazards quickly. If the product has no designed-in barrier for a particular energy, designers must try to supply one or more. Barriers can be physical, procedural, and behavioral. Protective gloves are an example of a physical barrier against bloodborne pathogens. Drug-interaction warnings are an example of a behavioral barrier, because manufacturers use them to influence a user's behavior.
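
In practice, barrier analysis often amounts to a simple ledger of energies and the barriers that stand between them and people or property, with any unguarded energy flagged for attention. The entries in the sketch below are illustrative assumptions.

```python
# A barrier-analysis ledger: each product-related energy is mapped to the
# physical, procedural, or behavioral barriers that contain it. Energies
# with no barrier are flagged. All entries are illustrative assumptions.
energies = {
    "bloodborne pathogens":        ["protective gloves (physical)"],
    "pharmaceutical reaction":     ["drug-interaction warning (behavioral)"],
    "mechanical impact (sharps)":  ["sharps guard (physical)", "disposal procedure (procedural)"],
    "electrical shock":            [],   # no barrier identified yet
}

for energy, barriers in energies.items():
    if barriers:
        print(f"{energy}: {', '.join(barriers)}")
    else:
        print(f"{energy}: NO BARRIER -- design one in, or isolate/warn")
```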

Force Field Analysis (FFA). In FFA, a technique that comes from total quality management (TQM) studies, researchers identify the desirable outcomes of product usage and then try to identify the forces that may push users toward and away from those outcomes. These forces can be real physical forces or virtual ones, such as time stress, professionalism, and procedures. While not a very quantitative method, FFA stimulates analytical thinking.
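
Although FFA is not very quantitative, keeping the bookkeeping explicit still helps. The sketch below tallies hypothetical driving and restraining forces, with rough weights, for one desired outcome; the forces and weights are assumptions for illustration, and the point is the accounting rather than the numbers.

```python
# A rough force field analysis for the desired outcome "dose is double-checked
# before starting the pump." Forces and weights are illustrative assumptions.
driving = {"written double-check procedure": 3, "professional norms": 2}
restraining = {"time stress at shift change": 3, "single-nurse staffing at night": 2}

net = sum(driving.values()) - sum(restraining.values())
print("Driving:    ", driving)
print("Restraining:", restraining)
print("Net push toward desired outcome:", net)
```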

Critical Incident Technique. The methods listed above are most often conducted in analytical settings. That is, researchers convene a group of people and perform an analysis on paper. If a product is already being sold and used, researchers can use the critical incident technique. In this method, users typically are asked whether they have observed or been involved in near-accidents or injuries related to the product. Critical incidents are valuable indicators that a component or usage scenario might be hazardous and thus needs further examination.

Pareto Analysis. The Pareto technique also comes from the field of TQM. To perform a Pareto analysis, researchers analyze injury and accident records related to the product to determine the frequency of each type of component failure or accident scenario. The ability to do a Pareto analysis implies that there have been injuries or accidents associated with the product or with similar ones.
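
The mechanics are straightforward: sort the recorded categories by frequency and track the cumulative share, so attention goes first to the few categories that account for most of the incidents. The incident counts in the sketch below are hypothetical.

```python
# A Pareto analysis over hypothetical incident records: sort categories by
# frequency and report the cumulative share of all incidents.
from collections import Counter

incidents = Counter({
    "dose entry error": 42,
    "line occlusion not noticed": 17,
    "wrong drug loaded": 9,
    "alarm silenced and forgotten": 7,
    "battery depleted in transport": 3,
})

total = sum(incidents.values())
cumulative = 0
for category, count in incidents.most_common():
    cumulative += count
    print(f"{category:32s} {count:3d}  {cumulative / total:6.1%} cumulative")
```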

CONCLUSION

The best way to control errors is to stop them before they occur. Using the techniques described above, many potential errors and error sources can be identified and eliminated during the design process. While designers can't know everything about a design before it goes into commercial usage, they do have a large knowledge base about many types of products and systems. Simply using what they know about similar products is an effective way to reduce human error.

Regardless of how conscientiously it is handled, however, design is not likely to eliminate all the potential human errors related to a medical product. In fact, poor design can introduce more errors than it eliminates. Therefore, the second element in an effective error-reduction strategy is identifying and managing the errors that do occur. An error that is caught before it causes loss, damage, or injury is benign. Managing errors generally requires teamwork, the willingness to question constructively the actions of other workers, and knowledge of proper work procedures.

Finally, reducing errors means controlling both active and latent errors. As evidence demonstrates, product design and resource management can control many active errors. Latent errors, however, require periodic environmental monitoring to identify the elements that produce them. The critical incident technique is a good way to identify latent error situations. Various informal methods can also be used to identify and eliminate accidents waiting to happen.

It is presumptuous to argue that all human errors are caused by circumstances beyond human control. There are certainly situations when a reasonable person should take certain precautions to avoid injury. Nevertheless, the errors that are often of the most technical (and legal) interest involve highly trained and motivated people performing actions that are part of their everyday jobs. In these situations, the vast majority of errors are the result of poorly designed equipment, labels, communications, procedures, work environments, and organizational structures. In essence, end-users have been set up to commit errors. Upon close examination, the real question may not be why errors occur, but why more don't.

REFERENCES

1. Bogner MS (ed), Human Error in Medicine, Hillsdale, NJ, Lawrence Erlbaum, 1994.

2. Casey S, Set Phasers on Stun: And Other True Tales of Design Technology and Human Error, Santa Barbara, CA, Aegean, 1993.

3. Bogner MS, "Medical Devices and Human Error," in Human Performance in Automated Systems: Current Research and Trends, Mouloua M, and Parasuraman R (eds), Hillsdale, NJ, Lawrence Erlbaum, pp 64-67, 1994.

4. Nobel JL, "Medical Device Failures and Adverse Effects," Pediat Emerg Care, 7:120-123, 1991.

5. Reason J, and Maddox ME, "Human Error," in Human Factors Guide for Aviation Maintenance, Maddox ME (ed), Washington, DC, U.S. Government Printing Office, chap 14.

6. Senders JW, "Medical Devices, Medical Errors, and Medical Accidents," in Human Error in Medicine, Bogner MS (ed), Hillsdale, NJ, Lawrence Erlbaum, chap 9, 1995.

7. Allee JS, Product Liability, New York, Law Journal Seminars Press, 1995.

8. McDonald JS, and Peterson S, "Lethal Errors in Anesthesiology," Anesthesiol, 63:A497, 1985.

9. Wiener EL, and Curry RE, "Flight Deck Automation: Promises and Problems," Ergonomics, 23:955-1011, 1980.

10. Do It By Design: An Introduction to Human Factors in Medical Devices (draft), Rockville, MD, FDA, Center for Devices and Radiological Health, 1996.

11. Human Factors Engineering Guidelines and Preferred Practices for the Design of Medical Devices, ANSI/AAMI HE48, Arlington, VA, Association for the Advancement of Medical Instrumentation, 1993.

12. Rutter BG, "Task Analysis: Understanding How People Think and Behave," Med Dev Diag Indust, 19(1):66, 1997.

13. Cushman WH, and Rosenberg DJ, Human Factors in Product Design, Advances in Human Factors/Ergonomics, vol 14, Amsterdam, Elsevier, 1991.

Michael E. Maddox is principal scientist at Sisyphus Associates (Madison, NC), a firm that provides analysis, design, evaluation, and training services.

Copyright ©1997 Medical Device & Diagnostic Industry
