
January 26, 2017

Root Cause Analysis: Adventures in Medical Device Usability

During usability testing, finding the root cause of errors will enable developers to fix problematic device design characteristics.

Stephen B. Wilcox, PhD, FIDSA and Peter Sneeringer, MS

The most important data generated when conducting usability testing with prototype medical devices are use errors. Potential users operate the prototype devices in simulated use scenarios, and they make errors. Those errors, in turn, have causes. Identifying the causes is the key to eliminating the errors.

This is what Root Cause Analysis is all about--finding the causes of errors made in usability testing. The term "root cause" is used to emphasize that the goal is to find the "deep" or "underlying" causes, as opposed to the superficial causes. The root causes lead to things that can be fixed, whereas superficial causes usually don't. 


An example may help. When someone performs an injection, it's typically important to leave the needle in place for a period of time to ensure that the full dose has been delivered. A common error is failing to leave the needle in long enough. Superficial causes might be:

  • Users are lazy and/or careless.

  • Users fail to read the instructions.

  • Users do not read the instructions carefully enough to properly comprehend them.

Note that each of these may well be true, but none of them leads to a fix. Some people are always going to be lazy and fail to read instructions properly, so identifying causes like these leaves us trying to fix human nature--a hopeless task.

This is where the root cause comes in. We need to ask, in turn, why the users did what they did--why did they act lazily, or why did they fail to read the instructions properly? If at all possible, the root cause should lead back to design characteristics of the device, because those are what we can actually fix.

Finding the Root Cause

One way is to ask participants during usability testing why they made the errors--how they got misled, what they failed to understand, and so on.

  • In other words, probe the participants about why they did what they did. Participants may or may not be accurate in their assessments of their own actions, but their answers can often be illuminating. For example, there is a difference between a person who insists that he or she did leave the needle in long enough and a person who is surprised that it was supposed to be left in for a given time period. The former is likely to be poor at judging time, so the solution might be to provide some way of signaling the end of the injection to improve accuracy. The latter likely didn't read the relevant instruction, so the solution might be to make that instruction more prominent or ensure it is perceived as more important. Note that the first person didn't report being a poor judge of time, and the second didn't admit to failing to read the instructions; they may not understand the cause themselves. However, we can infer the root causes from their answers.

  • A good heuristic for root cause analysis is not to be satisfied until the root cause is something that can be altered and tested empirically to see if the errors can be eliminated. For example, "laziness" doesn't meet this criterion. This usually means that the burden is on the moderator to get beyond the perspective of the typical participant, who tends to blame him or herself for a given error--"I just wasn't thinking," "I forgot," etc.

Another method is to use eye trackers during usability testing.

  • It can be quite illuminating to have a record of where people look. As one example of how such data can lead to a root cause, we can look to see if a participant looked at a given instruction for an adequate length of time or not. The person who didn't rest his or her gaze on the instruction didn't read it (so the instruction needs to be made more prominent, as discussed above). The person who did look at the instruction for a period of time was likely to have misinterpreted it, so it needs to be reworded or supported by better graphics, rather than made more prominent.
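As a rough illustration of how such gaze data might be screened, the Python sketch below sums fixation durations that fall on a hypothetical "instruction" area of interest and triages the likely root cause. The record fields, the bounding box, and the one-second dwell threshold are assumptions made for illustration; they are not taken from any particular eye-tracking system.

```python
# Minimal sketch (assumptions: fixation records with x, y, duration_ms fields;
# a rectangular area of interest covering the printed instruction; a 1000 ms
# dwell threshold chosen only for illustration).

from dataclasses import dataclass

@dataclass
class Fixation:
    x: float          # gaze position in page/screen coordinates
    y: float
    duration_ms: int  # how long the gaze rested at this point

# Hypothetical bounding box for the instruction of interest (left, top, right, bottom).
INSTRUCTION_AOI = (100, 400, 500, 460)
DWELL_THRESHOLD_MS = 1000  # assumed minimum time needed to actually read it

def dwell_time_on_aoi(fixations, aoi):
    """Sum fixation durations that fall inside the area of interest."""
    left, top, right, bottom = aoi
    return sum(f.duration_ms for f in fixations
               if left <= f.x <= right and top <= f.y <= bottom)

def classify_root_cause(fixations):
    """Rough triage: never looked vs. glanced vs. read but still erred."""
    dwell = dwell_time_on_aoi(fixations, INSTRUCTION_AOI)
    if dwell == 0:
        return "never fixated instruction -> consider making it more prominent"
    if dwell < DWELL_THRESHOLD_MS:
        return "glanced only -> instruction may not look important enough"
    return "read but still erred -> consider rewording or adding graphics"

# Example usage with made-up fixation data for one participant:
sample = [Fixation(120, 80, 300), Fixation(250, 420, 450), Fixation(300, 430, 700)]
print(classify_root_cause(sample))
```

In practice, the threshold would come from pilot data on how long the instruction actually takes to read, and the triage would only flag cases for the moderator to probe, not replace that probing.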

The study can also be designed to elicit other participant behaviors that can be illuminating.

  • One trick is to put people together to perform a given task, particularly if the device is typically operated by multiple people. Inevitably, they discuss any problems they encounter with each other--discussions that should be recorded and analyzed for what they reveal about why errors were made.

  • Another trick is to create instructions that require the participant to turn to a given page to review a given instruction--providing objective data regarding whether they turned to the relevant page.

  • One other technique is to encourage participants to ask questions of the moderator when they find anything confusing. The questions, then, provide additional data. A caveat, though, is that the moderator must not actually give answers to the questions in order to avoid biasing the results.

Some Additional Tips 

  • It can be helpful to create a flow chart of probing questions beforehand so that the moderator knows where to start questioning, based on the various expected types of errors (a simple sketch of one way to capture such a flow chart appears after this list). Such flow charts can be updated, if necessary, as the testing continues.

  • It is always helpful to identify the types of use errors prior to the initiation of a study to prepare for the analysis of root causes. Brainstorming can be helpful with a knowledgeable team, and task analyses and pilot studies can be used for this purpose.

  • It can be helpful to categorize the causes of errors as cognitive, perceptual, or behavioral. However, the root causes have to go beyond these participant-centric causes to device causes--e.g., poor instructions for cognitive errors, poor typography for perceptual errors, or inadequate control design for behavioral errors.

  • Root causes are often far from obvious. It's sometimes necessary to watch the video record of usability testing several times to determine what, exactly, took place. It's crucial to ask probing questions during the session to get as much participant feedback as possible, but not every error will be identified in real time, making careful video analysis important.

  • Participants often fail to recognize that they've made an error, which requires the moderator to point out and explain the errors in later questioning. Such discussions require tact, because offending participants can cause them to "clam up."

  • The timing for probing questions can be a challenge. It's important to ask questions as soon as possible after errors are made, to promote accurate recall. However, this goal has to be balanced against the potential of altering subsequent behavior by information that the participant may infer from questioning.
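To make the first tip above concrete, here is a minimal sketch of how a flow chart of probing questions might be captured as a simple branching structure keyed to expected use errors. The error types and question wording are invented for illustration; a real protocol would be derived from the task analysis and pilot studies.

```python
# Minimal sketch of a probing-question flow chart keyed to expected use errors.
# Error categories and question text are illustrative assumptions, not a
# validated protocol.

PROBE_FLOW = {
    "needle_removed_early": {
        "start": "Walk me through the last step of the injection.",
        "if_claims_correct_timing": "How did you judge when the injection was done?",
        "if_surprised_by_requirement": "Where, if anywhere, did you see the hold-time instruction?",
    },
    "wrong_dose_selected": {
        "start": "How did you decide which dose to set?",
        "if_misread_display": "What did the display show at that point?",
        "if_skipped_confirmation": "What did you expect to happen after setting the dose?",
    },
}

def next_probe(error_type, branch="start"):
    """Return the next scripted probe, or None if the path isn't defined."""
    return PROBE_FLOW.get(error_type, {}).get(branch)

# Example usage during a debrief:
print(next_probe("needle_removed_early"))
print(next_probe("needle_removed_early", "if_surprised_by_requirement"))
```

Keeping the script in a structure like this also makes it easy to update between sessions as new error types emerge.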

The Role of Root Cause Analysis in Formative vs. Summative Testing

During formative testing, identified root causes guide ongoing device design--they have implications for what designers need to fix and what types of fixes are likely to work. However, in summative testing (i.e., testing to establish that a device is safe and effective rather than to help to make it so), finding the root causes of use errors helps to answer the central question: Is the device safe and effective? Root Cause Analysis is a key part of the determination concerning whether the inevitable "residual errors" can be tolerated, or not--i.e., whether the benefits outweigh the risks and whether the errors can, in fact, be feasibly eliminated.

It is rare to eliminate all errors, so there's nearly always a need to understand the root causes of the errors that can be expected when the device is introduced. These root causes help inform what to prepare for and what to emphasize in training.

Stephen B. Wilcox, PhD, FIDSA is founder and principal of Design Science in Philadelphia. Read all "Adventures in Medical Device Usability" posts by Wilcox.

Peter Sneeringer, MS is director of Human Factors at Design Science.


