By following best practices for labeling, companies can also take greater advantage of the flexibility offered by electronic labeling.
More than a decade ago, FDA held meetings to explore why medical device user instructions were generally found to be ineffective. When the agency issued “Applying Human Factors and Usability Engineering to Optimize Medical Device Design” in June 2011 with its strong encouragement to validate user instructions, FDA essentially set the expectation that the purpose of effective labeling is to control risk in medical device use. Label design should be considered as much a part of risk mitigation as the device itself.
Although FDA has long considered labeling as part of the device user interface, testing its effectiveness was not really an expectation. However, many manufacturers continue to claim the problem rests with users who either do not bother to read the instructions or fail to follow instructions after they’ve read them. FDA is clearly unsympathetic to that argument and has placed more responsibility on manufacturers to ensure clarity and comprehension in their labeling. FDA’s message is that labeling has to do more than just read well. It has to measurably support safe and accurate user performance.
Labeling Problems, Deficiencies, and FDA
Simulated use validation studies have revealed important labeling deficiencies, as follows:
- Instructions are not based on good user profiles or task and use-error analysis.
- Instructions are not written at a level of detail to guide user performance.
- The first step is either missing or buried in an introductory paragraph likely to be overlooked by the user.
- Instructions are open to interpretation: two people reading the same step perform it differently.
- Warning and caution statements are misplaced relative to the corresponding step.
- Illustrations are either not included or are incorrectly displayed.
- Instructions are not available where and when the user needs them.
FDA initially expressed its concern about labeling in 2001. At that time, the agency's research made it clear that labeling was viewed as little more than a late-stage writing project that often occurred prior to submission with no consideration given to human factors or performance analysis. Manufacturers often did not even consider performance testing for labeling that would verify whether the home or professional user clearly understood how to use a device.
AAMI’s HE75, “Human Factors Engineering, Design of Medical Devices,” was a landmark standard developed by applying best practice guidance to medical device human factors engineering. Included in its section on user documentation was a strong recommendation for observational testing of materials and training guides to verify that the user, particularly the lay person, understood and could correctly follow this information. Another notable result of HE75 was an emphasis that directed manufacturers to refrain from citing the common phrase “user error” as the explanation for device usage issues. AAMI recognized that applying good human factors methods, including improved labeling, would help ensure better user performance. FDA has since recognized HE75 as a best practice.
The View from an FDA Specialist
Molly Story, PhD, human factors and accessible medical technology specialist with FDA, does not hesitate when asked about the most common labeling error the agency encounters: failure to test labeling with real users. “Too often companies write labeling for themselves and they presume that the user has the same base of understanding as they do,” Story says. “They don’t necessarily understand the assumptions they’re making until they put labeling and the device in the hands of someone unfamiliar with its proper use.” Story says the assumptions tend to omit “critical information that can lead to user error.”
Information provided to FDA from market research can also be a fundamental problem with labeling. Market research often only collects users’ opinions or asks them how much they “agree” with carefully worded positive statements about the device.
“Too often companies provide marketing information that does not provide the evidence we need that the device is safe and effective,” Story says, pointing out that marketing research does not explain the details of user interaction to FDA’s satisfaction. She noted that such research fails to adequately prove that users understand and appropriately respond to the labeling despite what they tell researchers. “What users say they do is not the same as what users think or actually do,” Story says. “User opinions do not provide evidence of safety and effectiveness.”
Story says the AAMI Medical Devices and Systems in Home Care Applications Committee is preparing a Technical Information Report (TIR) that should be very useful to the industry. The committee’s TIR is expected to be a guideline on more effective validation of labeling than is currently available. The FDA specialist strongly recommends that device manufacturers validate labeling and training before they validate the device. “If training and labeling have to be changed (after device validation), then you have to go back and revalidate the device,” Story says. “It’s much more efficient to validate in the proper sequence.”
Some manufacturers have discovered that their best efforts at device innovation have gone for naught. In these cases, FDA found that labeling was unclear about device function and proper use. The resulting user confusion often surfaces with the introduction of a component that demands skills and knowledge users may not possess. In one situation, the manufacturer of a blood glucose meter had to withdraw an advanced component that it believed would be a market differentiator because users did not understand some basic concepts (e.g., how to dose insulin based on meals versus blood glucose test results). Although the team knew there was a knowledge problem, the expectation was that labeling would take care of it. Human factors testing uncovered a profound lack of understanding of what the manufacturer considered basic concepts, a gap beyond the scope of basic use instructions. As a result, the much-anticipated component had to be dropped for the device to clear FDA. The situation caused significant delays in the device’s launch and might have been avoided had early-stage testing focused more on users’ demonstrated skills and knowledge instead of other aspects of the user interface.
Initiating Best Practices
Examples like this show an all-too-frequent disconnect between written instructions and user perception. The time to bridge this obvious communication gap is not during device validation but at the beginning with an instructional design process.
Companies are best served by initiating the human factors process early in the design stage and sharing that process with those who will be responsible for labeling. The process should begin by resolving two pertinent questions:
- Are the users lay people without professional medical training, healthcare professionals, or both?
- Where and under what conditions will the user interact with the device? “Where” could mean a hospital or a rescue vehicle. Conditions may involve everyday use or an emergency.
Companies should start by determining the users’ performance needs for instructional information to support safe and accurate device use. The perceptual, cognitive, and manual action model in FDA’s human factors draft guidance is immensely useful in capturing this information. Manufacturers should ask whether a written or paper-based guide is sufficient or whether additional training is required, as is often the case for users of home dialysis, for example. Answers can be found through the user and task analysis. That process is the best way to eliminate erroneous assumptions through examination and analysis of the real environment of the everyday user. User and task analysis can facilitate decision-making about the type of training that labeling may require.
Training decisions should also be based on the complexity of the task that the user is required to perform. Numerous steps, a large number of tasks, and long-term memory retention are obvious signs that written guides alone will be insufficient. The more complex the interaction, the more likely training will be required.
Testing user performance should begin with small, informal assessments during the early stages of the process, as the information is refined. The process mirrors the testing used for the device itself. Limiting labeling comprehension testing to employees, without including user performance testing, is unacceptable for the most obvious of reasons: human factors have been excluded. To be most effective and understandable, labeling instruction has to take general user limitations and characteristics into account. Only after this analysis has been completed should there be the validation testing that will eventually be submitted to FDA.
Electronic medical device labeling offers flexibility that is nearly impossible with paper-based labeling, but it also needs to be held to many of the same best practices. Electronic labeling, sometimes called electronic performance support systems (EPSS), can offer users detailed, just-in-time instructions. Too often, electronic ‘help’ features simply give the user access to content when what they are really searching for is ‘how do I apply this content to what I’m doing now?’ A well-designed EPSS can provide the type of information users want and, very importantly, adjust the level of detail (task versus step versus sub-step) that the user requires.
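The task/step/sub-step idea above can be pictured as a small hierarchical data structure. The sketch below is purely illustrative, assuming a hypothetical `Task`/`Step` model and a `render` function; it is not any particular product's EPSS API, only a way to show how one content source can serve users at different levels of detail.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    text: str
    substeps: list = field(default_factory=list)  # finer-grained guidance for novices

@dataclass
class Task:
    name: str
    steps: list

def render(task, level="task"):
    """Return instruction lines at the requested level of detail:
    'task' for a one-line reminder, 'step' for numbered steps,
    'substep' for full novice-level guidance."""
    lines = [task.name]
    if level == "task":
        return lines
    for i, step in enumerate(task.steps, 1):
        lines.append(f"{i}. {step.text}")
        if level == "substep":
            lines.extend(f"   - {sub}" for sub in step.substeps)
    return lines

# Hypothetical example content for an insulin-pump task.
prime = Task(
    "Prime the infusion set",
    [
        Step("Fill the cartridge",
             ["Draw insulin into the syringe", "Remove air bubbles"]),
        Step("Load the cartridge",
             ["Insert the cartridge", "Confirm it clicks into place"]),
    ],
)

print("\n".join(render(prime, "task")))     # experienced user: one-line reminder
print("\n".join(render(prime, "substep")))  # first-time user: full detail
```

The design point is that the same validated content backs every view; the EPSS only varies how much of the hierarchy it surfaces, so an experienced user gets a reminder while a first-time user gets every sub-step.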
Lessons from a Successful Labeling Experience
While focused on creating labeling for its new t:slim Insulin Delivery System, Tandem Diabetes Care Inc., a San Diego-based provider of technologies for managing diabetes, found that “In the majority of our studies, at least 70 percent of our users did not refer to the labeling,” says Linda Parks, Tandem’s national director of clinical education. “People don’t really go back and read the instructions unless they have a question or problem.”
Parks described the initial labeling, which had no input from Tandem’s human factors staff, as “very technical text” that could be confusing to users. “That’s when we made a labeling transition by talking less about the device and more about how the users interact with it,” Parks says.
Tandem used human factors engineering to conduct a usability study and a day of training that included labeling comprehension. The next day, the users were given a group of 10 tasks to perform and labeling had to be understood for all of them. “The labeling passed with flying colors,” Parks says. She credits the company’s good human factors methods as the drivers behind its successful labeling effort. In November 2011, the device became the first insulin pump with a touch screen approved by FDA.
Good labeling and training can never be a substitute for a well-designed device. That is a demanding standard, but it must be met given HE75 and FDA’s draft guidance. Companies need to understand who their users are. Cautions and warnings on labels are insufficient without the vital data that only testing and eventual validation can provide. It behooves manufacturers to recognize that users are much too varied to be categorized by marketing demographics. Human factors engineering, specific user profiles, task analysis, and performance testing apply to labeling as much as they do to the device.
Patricia A. Patterson is President of Agilis Consulting Group, LLC and an FDA assigned expert consultant. She is a contributor to HE75: 2010, Human Factors Design for Medical Devices, and a member of the AAMI/HA Medical Devices and Systems in Home Care Applications Committee. For further information, e-mail firstname.lastname@example.org, visit www.agilisconsulting.com, or call (480) 614-0486.