Fitting Human Factors in the Product Development Process

Medical Device & Diagnostic Industry Magazine

Originally Published MDDI January 2006

HUMAN FACTORS

For medical devices, human factors analysis needs to be performed early and often during the design process.

By Patricia A. Patterson and Robert A. North


Self-correcting mechanisms should be incorporated into design in the earliest stages, as was done with Sensys Medical's glucose monitor shown here.

The Association for the Advancement of Medical Instrumentation (AAMI) sponsored a human factors conference in Washington, DC, on June 28–30, 2005. During the conference, FDA systems engineer Peter Carstensen stated that more than one-third of medical device incidents involve use error, and more than half of device recalls for design problems involve the user interface. According to Carstensen, these statistics prompted FDA to strengthen its initiative requiring device manufacturers to “conduct appropriate human factors studies, analyses, and tests.”

FDA has placed a greater share of the responsibility for use-related incidents on manufacturers by replacing the term user error with use error. Blaming injury or death on incompetent users of devices is no longer an option. Manufacturers seeking FDA approval for new devices must submit evidence of systematic human factors analysis of use errors and how they will be controlled throughout the product development process.

Besides complying with regulatory requirements and ensuring safety, conducting human factors analysis on medical devices is good business. Why? Consumers are well informed. They conduct their own research on competing devices via Web sites, and they perform their own hands-on evaluations. Given a choice in the marketplace, customers base purchase decisions on ease of use. The insulin pump industry, for example, has a patient forum on the Internet (www.insulin-pumpers.org), where the usability of competing pumps is often a hot topic.

At worst, releasing a product without due consideration for human factors could lead to the injury or death of a consumer. At best, poor attention to human factors could drive consumers away, which hurts a company's bottom line.

Accordingly, to mitigate risks, it's important to define and understand the theory and practice of human factors analysis, where it fits in the design process, and the steps manufacturers new to this type of analysis should take.

Sidebar: Case Study: Sensys Medical, Chandler, AZ

Why Human Factors Analysis?

FDA has begun to use the term use error when referring to problems or adverse events caused by faulty interaction between humans and medical devices. A use error occurs when a device's design fails to support the end-user. It is important to remember that the end-user is most likely not an engineer or scientist; the end-user may be elderly or may have a physical impairment. Even the most brilliant engineering design will fail if the human element is left unsupported. In Medical Device Use-Safety: Incorporating Human Factors Engineering into Risk Management, Ron Kaye and Jay Crowley state the following:

Medical device designers are interested in developing highly reliable devices. To do this, they consider the possibilities of hazards arising from failures of the device and its components, identified through conventional reliability analyses. Designers need a more complete and accurate understanding of device use that includes consideration of unique limitations and failure modes of users. At present, designers consider only the most apparent or well-known instances of use problems such as fire or explosive hazards. This limitation during device design increases the likelihood of unexpected use errors and use-related hazards for users and patients.1

FDA looks at how manufacturers analyze or observe instances of foreseeable misuse or use errors and how they attempt to control for those instances.

Human factors analysis addresses how humans think, react to stimuli, and process information. With regard to medical devices, it is a way of researching how, by whom, and in what environment a product will be used. Its goals are twofold: to design a product that matches human capabilities and limitations, and to validate the design with usability testing.

Although there is quite a bit of artistry involved, human factors is fundamentally a rigorous, systematic, data-based science. One core human factors exercise is task analysis: a step-by-step description of human interaction with a device. The analysis describes what the user is expected to perceive (see, hear, feel, etc.), understand, and physically manipulate (press, move, etc.). These parameters, and how the product supports the required human functions, are the basic elements of a rudimentary task analysis. The most important result of a task analysis is the accompanying use-error analysis, which identifies all the ways the human interaction may fail (e.g., failure to see information, to understand it, or to execute an action).
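To make this structure concrete, here is a minimal sketch (in Python, and not drawn from the article) of how one task step and its use-error entries might be recorded. The device, step wording, and error entries are hypothetical illustrations; the point is only that each step pairs what the user must perceive, understand, and do with the ways that interaction can fail and how each failure would be mitigated.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class UseError:
    """One way the human interaction at this step may fail."""
    description: str      # e.g., user does not notice or misreads a display element
    potential_harm: str   # consequence if the error occurs
    mitigation: str       # redesign, labeling, or training response

@dataclass
class TaskStep:
    """One step of user interaction: what is perceived, understood, and done."""
    perceive: str                       # what the user must see, hear, or feel
    understand: str                     # what the user must interpret correctly
    act: str                            # what the user must physically do
    use_errors: List[UseError] = field(default_factory=list)

# Hypothetical step for an illustrative handheld monitor
step = TaskStep(
    perceive="Low-battery icon appears on the display",
    understand="Icon means the battery must be charged before the next reading",
    act="Connect the charger",
    use_errors=[
        UseError(
            description="User does not recognize the icon as a low-battery warning",
            potential_harm="Device shuts down before a scheduled reading",
            mitigation="Add a text label next to the icon; verify in usability testing",
        )
    ],
)

for err in step.use_errors:
    print(f"Use error: {err.description} -> mitigation: {err.mitigation}")
```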

Task analysis is not the same as the block diagrams that systems engineers create that show the inputs and outputs of various system components. Those diagrams seldom account for human interaction because they concentrate only on what the device is intended to do. Similarly, failure modes and effects analysis (FMEA), although an accepted best practice in risk management, completely disregards the potential for use errors. The discipline of human factors engineering focuses on the combination of user- and device-related problems.

Implementing the Process

Ideally, human factors is applied throughout the development process, starting with concept development activities. However, even if a project is in later stages, it is better to include analysis at that time, rather than not at all. Even after the device is built, performing task analysis will help isolate and categorize use-error opportunities. Finding those circumstances can enable mitigation through training or labeling. It also allows the company to monitor such events in the field.

Regardless of the status of a design, creating a task analysis and accompanying use-error analysis imparts greater understanding of the human component and benefits the design. For example, a device's screen may display an icon that the designer assumes users will identify as indicating low battery charge. There is a potential for human failure: what if users do not interpret the icon as intended? This question should be answered in early usability evaluations. In this scenario, let's assume that 18 out of 20 subjects understood the icon (achieving 20 out of 20 is nearly impossible). The manufacturer can actually learn more from the two who didn't.
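As a hedged aside that is not part of the article's argument, one way to see why 18 of 20 still deserves scrutiny is to put a confidence interval around the observed comprehension rate; with so few subjects, the plausible range is wide. The sketch below uses the Wilson score interval and the hypothetical numbers from the scenario above.

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96):
    """Wilson score interval for a binomial proportion (approximate 95% CI by default)."""
    p_hat = successes / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half_width = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
    return center - half_width, center + half_width

# 18 of 20 participants correctly interpreted the low-battery icon
low, high = wilson_interval(18, 20)
print(f"Observed comprehension: {18 / 20:.0%}")
print(f"Approximate 95% CI: {low:.0%} to {high:.0%}")  # roughly 70% to 97%
```

Even with a 90% observed success rate, the interval reaches down to roughly 70%, which is exactly why the two failures merit follow-up.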

The purpose of usability testing is to discover likely opportunities for use error. If errors are caught, the firm can respond to them rationally and cost-effectively. Armed with this use-error information, a manufacturer can decide to change the product's design (early response), provide additional labeling or user training, or perhaps simply alert its users that this error can occur (late-stage responses).

Manufacturers are not expected to completely error-proof their products. However, they are expected to aggressively seek out potential errors and apply a reasonable effort to mitigate the potential hazard. Any product development cycle can present endless opportunities to find and mitigate use-related design flaws. It makes sense then to conduct these evaluations iteratively—when fixes are generally easier, faster, and less expensive than at the end of the project.

There are multiple steps to assessing use errors. Defining the user groups and their special considerations is a critical first step before conducting task analysis and use-error analysis. Task and use-error analyses should be followed by early prototype usability evaluations to get users to identify possible misunderstandings of instructions or displayed information. One such evaluation is called walk-through-talk-through. It is conducted with a small sample of end-users. Participants are asked to verbalize their thoughts as they carry out various prescribed tasks with the device.

This is also the time to check whether users can read displays, hear warning sounds, or push buttons in various situations such as in poor lighting, or in high-noise environments. As a design matures, more-formal usability tests and measurements should be conducted with the product in as close to real-world settings as possible.

When a product is already on the market, human factors analysis takes the form of a postmarket usability study or survey designed to yield information on human interface design flaws that can be corrected for the next product revision. Postlaunch observations and customer service reports are a rich source of usability information and design feedback. Manufacturers should see these as guides to user-interface corrections. Often, they reveal needs that were not met or that could not have been anticipated. It is difficult to simulate all real-world uses.

Postmarket usability analysis not only reveals important design input for the next version of a product, but also for the design process itself. That is, the analysis can serve as a process improvement tool. For example, if customers are consistently complaining about difficulty reading the device displays in the dark, it may help to add specific tests that measure human capability to see and read information across lighting conditions. If users report that on-screen instructions are unclear for a certain user action, then more walk-through-talk-through evaluations should be added to reveal errors in interpreting the language of the device. If the complaints involve physical actions, such as difficulty in pushing buttons or moving a screen cursor, test those activities on the prototype in as many environments as possible. A case study by R.A. North and M.K. Peterson explores using postlaunch usability evaluation techniques to eliminate usability issues in product revisions.2

Although late is better than never, human factors analysis becomes less cost-effective when applied at the end of the development process. At that point, discovery of a major usability flaw may result in monumental and costly rework. Regardless of the stage at which usability data are collected, aggressively look for problems related to use error or potential device misuse. The more that potential use errors can be identified early on, the less complex and expensive it will be to correct them. Any delay in applying human factors principles to the design process may mean increasing the cost to fix design flaws by tenfold. Delays can also throw off numerous other contingent product development plans.

FDA Expectations—A Blueprint for Human Factors Documentation

Although any amount of human factors analysis is beneficial, FDA looks for documented evidence of a systematic process. Evidence usually includes:

• A thorough task analysis that expresses user needs when interacting with the product.

• A thorough use-error analysis to identify the risks when a user does not perform an action correctly.

• Human factors evaluations that can reveal potential use errors and erroneous use instances. Formal usability tests or informal evaluations must be conducted throughout the design control process.

• A plan to mitigate or control anticipated or observed use errors. Mitigation usually takes the form of product redesign, additional labeling and warnings, or emphasis in product training.

Human factors provides maximum benefit when integrated into the existing product development process as a parallel activity along with other product development functions.

Program Management and Project Engineering. Management must be a strong advocate for human factors. It must be willing to commit the financial and technical resources necessary for rigorous human factors testing during all stages of development.

Systems Engineering. Engineers often need help understanding how a user could fail to associate a certain icon with a certain function. They may also fail to notice that the graphic representation of the device in the manual does not match the device the consumer is holding. Such misunderstandings can result in life-threatening situations. For example, a medical device report was recently submitted regarding a diabetic patient who accidentally gave herself an insulin overdose. When she read the glucose meter's seven-segment LCD, she did not realize she was viewing it upside down: instead of reading 225, she read 522.

Regulatory. Regulatory staff, with their focus on risk management, should be advocates of human factors analysis. As part of the FDA product approval process, they should document how use errors are anticipated and controlled throughout design and development.

Marketing. Marketing analysis is sometimes confused with human factors analysis. Marketing analysis focuses on feelings and attitudes: how people will respond to the product's look and feel, how it compares with the competition, its price point, and perhaps certain ergonomic considerations. By contrast, human factors focuses on safe and accurate performance in the hands of the user under realistic use conditions. It is the only discipline that aggressively asks objective questions and observes users performing tasks while expressing their experiences or frustrations with the effort. It specifically searches for use errors and safety hazards.

Human factors is an iterative process of design, user testing, and redesign. The number of iterations depends on the complexity of the product. Users are the most important part of design. As product specifications, components, and suppliers change, each adjustment could lead to problems for users. The idea is not to slow the process, but to refine it with objective data about human perception, cognitive processes, and physical capabilities. It pays to measure twice and cut once to save time and energy and to lower development costs.

Look for the ROI

So, how much should device manufacturers spend on human factors? In the software industry, roughly 10% of a development budget is typically devoted to usability engineering.3 That is a sizable chunk, but allocating that 10% to human factors can avert redesign costs that are orders of magnitude larger than the investment. Software engineers also estimate that about 50% of development effort goes into the user interface (the part of the software that the user sees). What return on investment (ROI) might be expected from similar investments of money and time in the medical device and diagnostic industry? In an August 2005 MD&DI article, Michael Wiklund estimates that the return on a human factors investment of $100,000 to $300,000 could range from 12:1 to 35:1. When the time value of money is taken into account, his estimates decrease to 9:1 and 26:1, respectively, which he rounds to an ROI of at least 10:1. Wiklund notes that the estimated value of reduced product liability alone yields an ROI of at least 5:1.4
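To illustrate how discounting future benefits trims a nominal ROI, the short sketch below converts an assumed stream of annual savings to present value before dividing by the investment. The dollar amounts, five-year horizon, and 10% discount rate are hypothetical assumptions for illustration only; they are not Wiklund's figures.

```python
def discounted_roi(investment: float, annual_benefit: float,
                   years: int, discount_rate: float) -> float:
    """ROI ratio after discounting a stream of equal annual benefits to present value."""
    present_value = sum(annual_benefit / (1 + discount_rate) ** t
                        for t in range(1, years + 1))
    return present_value / investment

# Hypothetical figures (not from the article): a $200,000 human factors effort
# that avoids $600,000 per year in redesign, liability, and support costs for 5 years.
nominal = (600_000 * 5) / 200_000                                   # 15:1 before discounting
adjusted = discounted_roi(200_000, 600_000, years=5, discount_rate=0.10)
print(f"Nominal ROI: {nominal:.0f}:1")
print(f"Discounted ROI (10% rate): {adjusted:.1f}:1")               # about 11.4:1
```

Even under these made-up numbers, the ratio drops from 15:1 to roughly 11:1 once the timing of the savings is considered, which mirrors the direction of Wiklund's adjustment.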

Considering that as many as 98,000 deaths annually are attributed to medical errors, with a significant percentage due to device failures or use errors, human factors analysis is well worthwhile in the design and development of medical devices.5 Human factors is like quality: it is most conspicuous when absent. It is neither an afterthought nor an optional nice-to-have, but a necessity for meeting FDA's stringent requirements, keeping consumers safe, and keeping a company profitable.

References

1. Ron Kaye and Jay Crowley, Medical Device Use-Safety: Incorporating Human Factors Engineering into Risk Management [online] (Rockville, MD: FDA, CDRH, 2000); available from Internet: www.fda.gov/cdrh/humfac/1497.html.

2. R.A. North and M.K. Peterson, “Improving Usability Engineering Through Post Market Usability Analysis,” in the 11th International Conference on Human-Computer Interaction (Las Vegas: HCI International, 2005).

3. Jakob Nielsen, Usability Engineering (San Diego: Academic Press, 1993).

4. Michael Wiklund, “Return on Investment in Human Factors,” Medical Device & Diagnostic Industry 27, no. 8 (2005): 48–55.

5. L.T. Kohn, J.M. Corrigan, and M.S. Donaldson, eds., To Err Is Human: Building a Safer Health System [online] (Washington, DC: Institute of Medicine, Committee on Quality of Health Care in America, 1999); available from Internet: www.iom.edu/Object.File/Master/4/117/0.pdf.

Patricia A. Patterson is president of Agilis Consulting Group (Cave Creek, AZ). E-mail her at [email protected]. Robert A. North is a member of AAMI's Human Factors Standards Committee. Contact him at [email protected] or at 719/598-3196.

Copyright ©2006 Medical Device & Diagnostic Industry
