Usability, Gilbert Ryle, and the Wizard of Oz: Adventures in Medical Device Usability

To my surprise, a previous post of mine (about the importance of integrating human factors and industrial design) generated some quite spirited objections to my use of the term usability. See what you learn by doing a blog? I never knew that anyone in the human factors community objected to the term usability.

April 30, 2012

Since quite a few people who don’t have a behavioral-science background are having to struggle with human factors issues—to do validation research, for example—I thought I would use a discussion of word choices to communicate the complexity of some of the issues that arise when doing research with people.

The central (although not necessarily explicit) objection to the term usability is that use of such an everyday term devalues the discipline of human factors by failing to convey the technical, scientific nature of human factors engineering, which is “so much more than usability.”

I remain unconvinced by the objection.

I certainly agree that there are (at least) two ways in which the tasks of a human factors professional can be too narrowly defined:

1. Focusing exclusively on ordinary device use rather than cradle-to-grave issues, which include setup, maintenance, disposal, and so on. It is unmistakably true that there can be devastating human factors flaws in those aspects of a device that have to do with maintenance, just as there can be such flaws with how a device is used in everyday clinical practice.

2. Focusing on ease of use without including safety, effectiveness, efficiency, etc.

However, I don’t see what’s wrong with including all of this under the umbrella of usability. Can’t we talk about the usability of maintenance functions or setup functions? And if ordinary use of a device entails errors that endanger people, isn’t it fair to say that the device isn’t adequately usable? Can we honestly say that a device has good usability but isn’t safe? It strikes me that we might call such a device “falsely usable”—it appears to be easy to use but actually isn’t, because users make hidden errors that endanger patients.

Frankly, even the discussion about usability strikes me as a psychological Wizard-of-Oz argument. By Wizard of Oz, I mean the attempt to dazzle by exaggerating the power and majesty of the expertise involved. In the case of psychology, it takes the form of playing to the common misconception that the psychologist can “look deeply into the human mind.” It goes back to Freud, who characterized the mind as having all sorts of surging forces in the “unconscious.” “Cognitive Science” has replaced Freud’s Id and Superego with “unconscious cognitive processes,” but it shares the idea that there are hidden phenomena that the expert can find by digging deeply enough into the mind. This Wizard-of-Oz approach is aided and abetted by replacing ordinary terms like “performing” and “doing” with technical-sounding terms like “emitting behaviors.” People don’t see, they “engage in perceptual processing.” They don’t recall past events, they “retrieve schemas from long-term storage.”

Now, don’t get me wrong. I can list, right off the top of my head, lots of things that we human factors professionals know that most people don’t—details of body dimensions and how they vary in the population, what types of errors are most likely under various circumstances, how to tell whether observed differences are real or just statistical artifacts, which IFU layouts are demonstrably more effective, how peripheral vision changes when you panic, and so on. We know how to conduct a valid usability test and many of us know how to conduct a valid hazard analysis.
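To make one of those items concrete (telling whether observed differences are real or just statistical artifacts), here is a minimal sketch in Python. The error counts are invented for illustration, and Fisher’s exact test is just one defensible choice for small simulated-use samples.

```python
# Hypothetical comparison: is a difference in observed use-error rates
# between two design candidates real, or plausibly a statistical artifact?
from scipy.stats import fisher_exact

# Made-up counts from a simulated-use study: [use errors, error-free trials]
design_a = [9, 11]   # 9 errors in 20 attempts (45%)
design_b = [3, 17]   # 3 errors in 20 attempts (15%)

_, p_value = fisher_exact([design_a, design_b])

# The two-sided p comes out around 0.08 for these made-up counts, so a
# 45% vs. 15% error rate could still be noise at this sample size.
print(f"p = {p_value:.3f}")
```

For larger samples a chi-square test would do the same job; the point is simply that the question has a standard, teachable answer.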

However, little of this practical stuff actually requires the razzle-dazzle of a particularly esoteric vocabulary.

Of course, human factors, like any technical discipline, does need some technical terms to achieve adequate precision. We talk about “pronation and supination,” for example, rather than “turning the forearm inward and outward” because the former leaves less room for ambiguity. However, there are some powerful arguments for being very cautious when creating a technical vocabulary for mental terms. I’m going to go out on a limb here and try to explain one of these arguments—a line of reasoning presented by Gilbert Ryle in his book, The Concept of Mind. Ryle first published the book in 1949, but, in my opinion, it’s as fresh and enlightening (not to mention entertaining) today as it was then.

Ryle asks the simple question of what mental terms like perceive mean. Let me focus on that particular word to make the point. He concludes that perceive is an “achievement word” rather than a “feeling word” or a “process word.” What he means is this: to say that we perceive something is to say that we’ve achieved the ability to act appropriately toward it—the word doesn’t exactly name the behavior; it names the achievement of a capability (what he calls a “disposition”) that has behavioral implications. If we perceive something, we don’t bump into it; we can accurately answer questions about it; we can pick it up; if it’s a chair, we can walk up to it and sit in it, etc. There is often a feeling associated with perceiving a chair (I’m using feeling here to mean something like conscious awareness), but there may not be. For example, we may be carrying on a conversation and have no conscious awareness of the chair that we plop into, but we would still say that we perceived the chair because we acted appropriately toward it.

This notion of perceive as the name for an achievement contrasts with the view that’s embedded in modern psychology (which, in turn, comes from traditional British philosophers like John Locke) that terms like perceive refer to a feeling or a process—an “internal” mental phenomenon—a “feeling” if you think it’s conscious, a “process” if you think it’s unconscious.

Here’s Ryle’s argument for why perception is about achievements rather than conscious awareness: if a person has the feeling of perceiving a chair but can’t keep from bumping into it, can’t describe it, etc., we would generally agree that the person doesn’t, in fact, perceive the chair; at best we might characterize the person as “thinking incorrectly” that he or she perceives the chair. So we can perceive without a feeling, and we can have the feeling without perceiving. It follows that perceiving doesn’t have to do with feelings at all (although feelings often accompany perceiving), but rather with a certain type of achievement. The idea that mental language names feelings is what Ryle called a category mistake, like the mistake made by the prospective student visiting the college campus who says “I’ve seen the classrooms and playing fields, but I haven’t yet seen the college.”

Now, cognitive psychologists who recognize this dilemma—that words like perceive don’t seem to name feelings—try to get around it by claiming that mental words name processes, unconscious cognitive processes—you have an experience, just not a conscious experience. If you ask me, though, as soon as you go down that path, you’re getting awfully close to angels dancing on the head of a pin. You have mental processes that you aren’t aware of. Yes, the psychologists often provide behavioral evidence for their cognitive processes, but why should you believe that behavior x demonstrates the existence of cognitive process y? In fact, you have to essentially accept the premises on faith in order to accept the psychologist’s interpretation of the data.

In a nutshell, then, Ryle’s point is that those who study the mind have a long history of getting it wrong when they try to be scientific or analytical about the object of their study. As he points out, the irony is that we don’t seem to have any trouble knowing what we’re talking about when using mental terms in everyday life. However, we get ourselves tied up in knots when we try to step back and analyze what we mean by ordinary terms. So, from Ryle’s point of view, when we use terms like unconscious cognitive process, we quite literally don’t know what we’re talking about. And don’t say that you don’t care what ordinary mental terms mean, that we can simply replace them with better, more precise technical terms. As Ryle argues, the only tool we have for building a technical vocabulary is ordinary language. In other words, any technical language we create has to be communicated by ordinary language, so it rests on a foundation of ordinary language; if the latter is flawed, so is the former.

My conclusion: when we’re dealing with human phenomena, it’s safer to stick with terms that have proven useful in everyday life, or at least terms that are only slight extensions of ordinary ones, terms like usability, that allow us to know what we’re talking about even if we’re not so good at understanding what we mean when we step back and try to analyze what we’re talking about. Of course, it behooves us to be clear and to explain what’s included and excluded by a particular term, but this doesn’t mean that we need to make up a new technical vocabulary. Because, when we replace our everyday vocabulary with a technical vocabulary, we may be building in fundamental errors, errors like thinking that perception is a cognitive process, when it isn’t. Indeed, if Ryle is right—that the whole notion of cognitive processes is, as he puts it, “a barking up of the wrong tree”—it would explain why there seems to be such a shortage of predictive usefulness from cognitive science, which defines itself as the study of cognitive processes.

There are certainly lots of smart people from the discipline of cognitive science who do great human factors work (mostly, in my opinion, by unlearning much of what they were taught), but try to find the link between the content (not the methods) of cognitive science and actual real-world prediction. You can find a link between effective human factors work and the rigorous experimental methodology one learns in cognitive science, certainly. But try to name one real-world phenomenon that you can now predict once you learn everything there is to know in cognitive science. I’ve repeatedly offered this challenge without receiving any convincing replies.

So, in sum, we human factors professionals are doing a lot of good work. The academic community has created a powerful body of knowledge by direct empirical study of things that we practitioners are interested in—what types of warnings people actually follow, how electronic systems of all kinds can be made easier to learn, how much force people can exert, how long it takes to react under various circumstances, what sample size it takes to find a given type of use error, etc., etc.
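On the sample-size point specifically, here is a short sketch of the usual back-of-the-envelope arithmetic. The 1 − (1 − p)^n detection model is the familiar one from the usability-testing literature; the particular p values, the 95% target, and the participants_needed helper are illustrative assumptions, not a substitute for the more careful methods the standards describe.

```python
# Back-of-the-envelope sample-size arithmetic for usability testing, using
# the commonly cited detection model P(seen at least once) = 1 - (1 - p)^n,
# where p is the per-participant probability that a given use error occurs
# and is observed. The model and the 95% target are simplifying assumptions.
import math

def participants_needed(p: float, target: float = 0.95) -> int:
    """Smallest n such that 1 - (1 - p)**n >= target."""
    return math.ceil(math.log(1 - target) / math.log(1 - p))

for p in (0.50, 0.25, 0.10):
    print(f"error affecting {p:.0%} of users -> "
          f"{participants_needed(p)} participants for a 95% chance "
          f"of seeing it at least once")
```

Under this model, an error that affects half of users shows up with only about five participants, but a 10% error takes nearly thirty, which is one reason rare-but-dangerous use errors are so hard to catch in small tests.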

But doing good work in human factors doesn’t generally require us to play the Great and Powerful Oz by using a hyper-technical vocabulary when ordinary language works just fine.

+++++++

Stephen B. Wilcox is a principal and the founder of Design Science (Philadelphia), a 25-person firm that specializes in optimizing the human interface of products—particularly medical devices. Wilcox is a member of the Industrial Designers Society of America’s (IDSA) Academy of Fellows. He has served as a vice president and member of the IDSA Board of Directors, and for several years was chair of the IDSA Human Factors Professional Interest Section. He also serves on the human engineering committee of the Association for the Advancement of Medical Instrumentation (AAMI), which has produced the HE 74 and HE 75 Human Factors standards for medical devices.


 
