Conscious Awareness is Highly Overrated: Adventures in Medical Device Usability

Read all Adventures in Medical Device Usability by
Steve Wilcox

I have an article coming out soon in the ACM journal Interactions (Jan./Feb. 2013 issue), titled "The Problem with Transparency Is It's Not Conspicuous Enough." I thought I might summarize what I say there as it relates to the usability of medical devices. My focus is that the usability of a tool is closely associated with its transparency. That is, when a tool is really easy to use, it, in effect, disappears. The user focuses on the task, not the tool. A usable scalpel is one that allows the surgeon to focus on transecting tissue, not on the scalpel itself—that allows the surgeon, so to speak, to see through the scalpel to the tissue.

Thus, awareness of a tool is usually a bad thing, not a good one, which raises a number of interesting design challenges. The obvious implication is that a key job of the device designer is to design things so that they will disappear. But what does this mean? What does it mean to design for transparency? This isn't so simple. I don't think that industrial designers and design engineers take courses in making things disappear (although maybe they should). However, it's even more complicated than that, because a good device that's transparent in use should be anything but transparent at other times.

Take the example of an MRI system. In use, the system should disappear, in the sense that the tech who uses it should be thinking about getting a good image, not about how the system is operated. But, when the system is presented at a trade show, it should be anything but transparent. It should stand out among competitive units as particularly functional, elegant, safe, etc. It should be conspicuous as a thing of quality, even of beauty. Likewise, an MRI system should be anything but transparent as an object in the room; we don’t want people accidentally running into it.

It follows that one of the reasons device design is so hard is that the task of the designer (or, more typically, the largish, interdisciplinary group known as the design team) is to create things that disappear and reappear—disappear when they're supposed to be transparent and reappear when they're supposed to be visible. A great device jumps out at the intended customer and demands to be acquired, then promptly disappears when in use, then reappears again when not noticing it would compromise safety. It provides transparent access to things beyond it, but it is also conspicuous as an example of the latest technology, as a device that's safe, as something that will last for many years, and so on.

It follows (as I said about product design, in general, in the Interactions article) that great device design involves the mastery of a dynamic transparency, or, alternatively, that you can’t be a great device designer unless you can control transparency with assurance.

What I'm claiming, in other words, is that the mastery of dynamic transparency is a central skill of the device designer. The problem, though, is that the mastering of dynamic transparency is itself transparent—it's a crucial but largely tacit skill, not one, as I mentioned, that is explicitly taught to, or particularly focused on by, the designers and engineers who design medical devices.

Perhaps this should change. In other words, perhaps we need to make transparency less transparent. It seems to me that we ought to make transparent use an explicit design goal. Although I admit that I have more questions than answers about how to do this, here are some initial thoughts about the implications of this line of thought:

We ought to be able to measure transparency.

One way to think about the issue is to imagine two devices that generate similar user performances, one that requires concentration on the device and one that doesn’t. It may be that, when using only the devices themselves, performance is equivalent. However, performance on other tasks (that the user would also have to perform under real-world circumstances) would be selectively undermined by the “cognitive load” associated with the less transparent device. If this reasoning is correct, then traditional measures of cognitive load, such as performance on a simultaneous additional task, should provide a measure of transparency. In other words, perhaps we can think of transparency as simply the inverse of high cognitive load. That probably is part of it.

The practical implication is that it may be useful to add tasks that measure cognitive load to our usability testing.
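To make the dual-task idea concrete, here is a minimal sketch of how a transparency score might be computed from usability-test data: performance on a concurrent secondary task with the device in use, relative to baseline performance on that task alone. The function name, the scoring formula, and all numbers are hypothetical illustrations, not a validated metric or real study data.

```python
# Hypothetical sketch: treating "transparency" as the inverse of the
# cognitive load a device imposes, measured with a classic dual-task
# paradigm. All names and numbers are illustrative assumptions.

def transparency_score(secondary_baseline: float,
                       secondary_with_device: float) -> float:
    """Return a 0..1 score. 1.0 means the device costs no attention
    (secondary-task performance is unchanged); lower values mean the
    device pulls conscious focus away from the concurrent task."""
    if secondary_baseline <= 0:
        raise ValueError("baseline performance must be positive")
    return min(1.0, secondary_with_device / secondary_baseline)

# Two hypothetical device designs, equal on the primary task, but one
# degrades a concurrent monitoring task more than the other.
design_a = transparency_score(secondary_baseline=0.90,
                              secondary_with_device=0.85)
design_b = transparency_score(secondary_baseline=0.90,
                              secondary_with_device=0.60)

print(f"Design A transparency: {design_a:.2f}")  # ~0.94
print(f"Design B transparency: {design_b:.2f}")  # ~0.67
```

On this (assumed) scoring, the two designs would look identical if we tested them in isolation; the difference only appears when the user has something else to attend to, which is exactly the real-world condition the paragraph above describes.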

Because a transparent device is one that is used unconsciously (rather than consciously), interviews may not be the best tool for understanding what constitutes a usable device.

I would argue that we already know this. That's why usability testing is largely behavioral rather than interview-based. This notion of transparency, though, reinforces my skepticism about relying too much on what people say, rather than what they do, when it comes to usability (let alone safety). Framing the problem as designing a device to be used unconsciously, though, helps convey the difficulty that the device designer faces.

Transparency increases as a device user goes through the learning curve.

Except in rare cases, even a really usable device isn’t transparent in the beginning, but becomes so as the user goes through a learning curve. Thus, transparency is dynamic in the sense that it changes with experience. Perhaps we should use measures of transparency as criteria for the success of training.

The notion of transparency may be useful in understanding designing for people with disabilities.

Understanding how to design for people with a variety of disabilities is increasingly important as more devices move out of the clinical environment and into the home, and as the population of healthcare professionals trends toward an older demographic (along with the rest of the population). One interesting point is that transparency varies with one's abilities. A device that's hard to hold for the person who struggles to achieve a firm grip will be far from transparent when that person picks it up, but it may be completely transparent for those who can grasp and hold it with ease. Likewise, a display screen that's transparent for the person with good vision may not be for the person with a visual deficit.

A logical design goal for a device would be to increase the percentage of the population for which the device remains transparent, as opposed to requiring special concentration and focus.

Well, as I said, I don’t claim to have a lot of answers regarding what this notion of transparency means. I’d like to suggest, though, that thinking about how a good tool becomes transparent to the user can provide insight into how we can design good medical devices.


Stephen B. Wilcox, is a principal and the founder of Design Science (Philadelphia), a 25-person firm that specializes in optimizing the human interface of products—particularly medical devices. Wilcox is a member of the Industrial Designers Society of America’s (IDSA) Academy of Fellows. He has served as a vice president and member of the IDSA Board of Directors, and for several years was chair of the IDSA Human Factors Professional Interest Section. He also serves on the human engineering committee of the Association for the Advancement of Medical Instrumentation (AAMI), which has produced the HE 74 and HE 75 Human Factors standards for medical devices.