Ethnographic Methods for New Product Development


Originally Published MDDI September 2001


Product developers who observe end-users' behavior in the actual environment of use generate tangible, workable information about a device—and the requirements of the people who use it.

Stephen B. Wilcox and William J. Reese

A multitude of devices confront clinicians on a daily basis.

In new product development, one of the central questions—if not the central question—is how do manufacturers determine what people want and need? Without a clear picture of what users need, it is impossible to channel corporate resources to yield results. There is nothing more counterproductive and frustrating than coming up with solutions to the wrong problems—but such scenarios are difficult to avoid without good, solid information about what product users want and need.

How, then, does one go about this? In practice, companies often rely on what people say. Verbal behavior is the foundation of the overwhelming bulk of customer research. Marketers make much of the difference between qualitative and quantitative research, but these terms describe variations on the same theme: both are based solely on what people say—either in groups or individually, in person or over the phone, on paper or on the Internet—rather than what they do and how they do it.

When manufacturers and marketers rely solely on what users say, they fall victim to the fact that people often have difficulty describing what type of product they need or want. End-users typically cannot express their precise problem with a given device only by talking about it. This phenomenon occurs for several reasons.

One problem is that many important user issues are not particularly part of conscious awareness. If a device works correctly, operators don't necessarily notice how they hold the instrument or what exactly takes place when they use it. It is only when something goes wrong that they notice it. Indeed, as people become more expert with a task, they are less conscious, not more, of what they do. When learning a new procedure, a surgeon is likely to be highly aware of each hand movement, but after countless procedures, he or she is more likely to carry on a conversation about a recent golf game than to focus on the intricacies of a particular device.

When answering questions about the past, people are subject to various errors. They conflate events that happened at different times; they remember events in such a way that they conform to what they expect; they are highly influenced by the way questions are asked.

Another problem is that people have all sorts of agendas that interfere with strict accuracy. They develop opinions about what interviewers want to hear and consciously or unconsciously tailor their statements to fit these opinions. They see themselves as possessing qualities that they do not really possess, so they answer questions for their "ideal" selves rather than their real selves. They are embarrassed to admit that they have problems with products because they fear those problems might reflect poorly on their abilities.

Also, to avoid sounding ignorant, people sometimes give answers to questions even when they have no idea what the correct answers actually are. Another issue is that people differ enormously in their ability to express themselves. Some people can paint an accurate picture with words, but many people cannot.

These commonplace examples of fallibility have been well established by empirical research. For example, several years ago, A. W. Wicker reviewed 48 studies that compared actual behavior to people's allegedly objective descriptions of that behavior.1 He found a weak correlation between speech and behavior. Elizabeth Loftus, in her book, Eyewitness Testimony, showed in study after study that people's descriptions of past events are highly fallible.2 New information can be introduced into a person's memory by a later conversation, for example, or parts of an event can be transposed into another event.

If there are inherent dangers in trusting what people say, why do product developers rely so heavily on exactly that? This article explores another option: ethnographic research.


Ethnographic research can augment—even supplant entirely—the traditional means of interviewing end-users about a device. As a methodology it stems not from psychology, but from cultural anthropology, and so involves the study of people's behavior in the actual "environments of use" to generate insights about their needs. It means going to wherever people routinely use a given product, from operating theaters and radiology suites to waiting rooms, cafeterias, and even patients' homes.

If the product is not yet available, researchers make every attempt to observe users working with something like it—a prototype, for instance, or a competitor's product.

It is vital to observe behavior in the environments of use because each such environment is unique. Differences in use environments make it impossible to know without seeing firsthand what conditions most crucially shape the success or failure of a device.

If surgeons will be using a device in the operating room, researchers need to observe them using the product there, because nobody could reproduce artificially the sounds, distractions, and relationships that characterize that highly specific milieu.

Video ethnographic research in the cardiac catheterization lab.

The story is the same regardless of the environment. If people will use a device in their homes, then it is vital to learn how patients with the medical condition that the device treats actually behave with the product in their own homes. Their usual careful performance with log books, touch screen displays, and hypodermic needles may be compromised when they come home after a long day at work, or when they wake up in the middle of the night.

Firsthand Knowledge. Observing users in their natural habitat helps new-product developers understand the product's strengths and limitations from the user's point of view. But there are many additional benefits. By watching end-users interact with the device, researchers can see behaviors that reveal product performance attributes even when the users don't say a word.

Video Documentation. One way to make certain that the analysis is detailed and comprehensive is to document everything on videotape. Video documentation helps analysts take apart complex events and interactions that happen in rapid succession—such as special hand movements, gestures or facial expressions, offhand verbal statements—and look at each one in turn.

After viewing videotapes of user after user having difficulty aligning two key components of a device (despite their insistence to interviewers that this was not a problem in the procedure), designers might decide not to include those components in the development program. Developers could also come away from the discovery process with a more realistic assessment of some common user complaints, and a better understanding of which issues need attention as opposed to those that don't reflect an underlying problem in product quality.

Richer Information. When compared with focus groups, surveys, and even one-to-one interviews, the discovery process of ethnographic research yields information that is typically more rich, vivid, and concrete. This is because working in a naturalistic context enables researchers to ask people questions while they use the product (or, at the very least, immediately before and after).

Why is this important? Timing and context have a huge impact on the responses to questions. In the case of design research, there are at least two big advantages to conducting interviews when people are using the product. The first is that they literally remember more. The second is they tend to give more specific and useful answers.

People remember more while they interact with the device because the activity of using the product itself becomes a mnemonic system—it gives users a host of real-world, real-time cues to help them answer questions. Through the practice of "ask-and-watch," the interviewee is more likely to recall long-forgotten or disregarded concerns about the product than if he or she were asked about it outside the context of use.

By combining observational research with interviewing, researchers frequently hear comments such as, "Oh, I forgot to tell you about this problem," and "I guess I am having a little difficulty here. I never thought about it before—you just become so used to doing things."

The second advantage of on-site interviewing is that the observations people make tend to be more specific and useful than they would be if they had to think of them out of context—on the phone, say, or in a focus-group setting. For example, if researching a new type of self-administered therapy, product developers who visited patients' homes would discover a wealth of information. When asked out of context to discuss the kinds of mistakes they make when using a particular device, patients might volunteer very little information, at first. When they are observed actually using the device, however, the patients tend to become more vocal and list the different errors they often make. Patients might point out tasks they were performing out of sequence, buttons they pushed by accident, and tubing connections they failed to secure when they were tired or ill.

Questions about the device might get the patient thinking, but actually using the device is what gets them talking.


Conducting ethnographic field research is, of course, not quite as simple as just going into the field. If all that product developers needed to do was watch and take notes, everybody would be doing it. According to a study of successful new products over a decade, almost two-thirds of senior executives were disappointed with the results of their firms' new-product programs.3

To be successful, fieldwork depends on adherence to a methodology. The approach under discussion relies upon the framework provided by ethnography. Ethnography literally means "writing about culture"; the term describes the attempt to understand the beliefs, customs, and rituals of people in different types of societies.

The main principle of successful ethnography is this: understand the subjects' worldview. Doing so is not the same thing as merely listening to their opinions. Understanding their worldview means discovering their values and norms, identifying their goals and expectations, and determining some of the basic ways they divide space and fill up their days.

Casting the net broadly in this way does not provide quick-and-dirty answers. But it can offer a measure of insurance that developers won't miss anything critical that could make or break the new product.

Consider the modern pacemaker-defibrillator, a device that neatly straddles the professional worlds of the electrical engineer and the cardiologist, and whose widespread use has helped give rise to the hybrid discipline of electrophysiology. Electrical engineers have had a difficult time for years trying to develop a user interface for pacemaker programming that cardiologists will find easy to use. Why? Because to do it, the device manufacturers essentially need to learn how physicians think. They need to know, for instance, what day-to-day professional language cardiologists and electrophysiologists use, which diagnostic categories are most critical to them, and how they arrive at specific treatment plans.

To develop a user interface that physicians would find intuitive, and which would therefore greatly enhance its value to them, would require understanding their professional worldview in some detail. And yet the world of cardiologists and electrophysiologists is something to which few people, apart from the doctors themselves and others they work with closely, are privy.

Rapport. The question then becomes, How does one gain access to such a unique and specialized world? The short answer is: by shadowing the people who live in it, by talking with them and observing them, and then by trying to understand their perceptions of the product.

This objective introduces a slew of new challenges, of course. First and foremost is the simple but often underestimated fact that people cannot be forced to tell researchers what they want to know. Short of some pressing utilitarian reason—such as the outright failure of a product—people simply will not share their real feelings, at least not to the level needed to provide a clear understanding of their personal situation, unless they have some authentic, interpersonal basis for interacting with the researcher.

Thus, the main ingredient in successful ethnographic work is rapport—developing relationships with people. These needn't be long-term relationships, of course, but short-term relationships characterized by basic expressions of trust and mutual goodwill.

Often, it is enough for informants to know and believe that researchers have a genuine interest in their personal situation, and in the difficulties and successes they experience with the product. But it also helps simply to spend time with them.

Though it sounds easy enough, many people who conduct traditional market research simply don't follow this principle. A common mistake made while interviewing, for instance, is attempting to extract answers to "bad" questions—questions that the emerging data show to be on the wrong track. This is often a function of the need to generate statistics. But it makes no sense to amass answers that don't represent the end-users' opinions. When subjects bend their answers to fit bad questions, researchers learn little of value to the product's development.

Instead, an ethnographer should take a more flexible view of the interview guide or moderator's protocol, encouraging the subject to talk about the product in his or her own words. Rather than doggedly following a predefined route, which can exhaust interviewees or put them on the defensive, the ethnographer should talk around the subject, letting bits and pieces of relevant information gradually fall into place. In this way, the key questions nearly always get answered. In the course of informal give-and-take, some preplanned questions simply become obsolete, while other questions, inevitably, become better and more precise with an ever-improving knowledge of how informants think, speak, and act.

Why do so many market research practitioners ignore this natural trajectory? There are many reasons, but all boil down to essentially one theme: the unfair expectation that the interviewee is responsible for answering the researcher's analytical questions.

In fact, informants can answer only questions that are put to them for purposes of gathering data. It is never their responsibility to provide a systematic analysis of their own behavior, nor, of course, to design products for manufacturers. Rather, their insights, comments, behaviors, and ideas are useful only as data—to be compared with other data and applied in an analysis based on a far wider sample of behavior and comments, and which is informed by a broader experience of problems associated with the product than individual informants are in a position to command.


Ethnography relies on a combination of smaller sample sizes and longer interaction times with informants than is typical in market research. Spending time with informants is important, and it is the key factor that differentiates ethnography from other methodologies, such as surveys.

Time is critical as well; understanding what someone needs is a complex and time-consuming process. Just think how dimly most people understand their own needs with regard to many of the consumer products they use every day, from cell phones and tape recorders to dishwashers and vacuum cleaners. Understanding what others need is considerably more difficult.

So it often takes some time for informants in a research program to really understand what the researchers are after, reflect on their experience, and generate helpful information. The best research takes place over a day, or even over several days—working with a team of nurses, for example, or shadowing a cardiologist during rounds.

For one thing, time helps establish good rapport. People tend to trust researchers more the more time they spend with them. As a result, informants begin to reveal more, both intentionally and inadvertently.

For another, people need time to go beyond ideological responses. People who have just met tend to relate to each other in terms of hackneyed stories and canned speeches about themselves, to which they seldom attach much value. They speak less as individuals, and more as types.

A similar phenomenon is evident with professionals in positions of high responsibility, such as physicians, chief nurses, and hospital administrators. In many situations, they tend to speak from their role as members of a group, and with the group's ideology and interests in mind, rather than from their viewpoint as individuals with their own idiosyncratic opinions and feelings.

This is acceptable when all researchers want is official opinions—the kind of thing one might read in a medical journal or in a hospital newsletter. But it is not acceptable when researchers want practitioners to express their own opinions; if they don't, their responses do not count as good field data. And worst of all, if encouraged, group ideology will stifle the flow of authentic information.

For example, if an orthopedic surgeon in a focus group (or even in an anonymous survey) is questioned about the rate of infection at his hospital, he or she will likely provide the same numbers that could be read in a medical journal. But if a researcher asks that surgeon the same question at the end of a long day in which the researcher has participated in rounds, watched several surgeries, and heard the surgeon's explanations of surgical procedures, the odds are much better that the response will become clearer and more helpful—if not more honest.

So a good researcher will spend enough time to be able to recognize the interlocutor's ideology, and help him or her steer clear of it. The researcher will also be open to opportunities available in situations and spaces that offer less formality, such as the car on the way to the hospital, the office after the tape recorder is turned off, or the hallway during the minute or two after the interview. Relevant information often emerges from the interstices of a formal discussion, not from its official center.

Given time, not only may informants open up more, but they may bring to the surface of their own mind more of what they truly think and feel about a device. Everyone has had the feeling of some word or phrase being "just on the tip of the tongue." Some time later, the word or phrase surfaces out of nowhere. This is a common example of an important principle: the unconscious mind continues to work on a problem even when the conscious mind has abandoned it. In the same way, the quality of information tends to improve the longer an informant is permitted to ponder the question, meanwhile resuming his or her normal tasks.


What do the users of a product talk about when they are using it? What terms and expressions do they employ? What names for things (and persons) have they invented?

Researchers who make the effort to learn about these things are well on their way to forming a map of their subjects' worldviews, and, correspondingly, a knowledge of key mental categories that shape their assessment of the product.

Consider the following scenario: in a study of cardiac catheter labs in the United States, a manufacturer was on the verge of producing an innovation that could move in one or more of several directions—improving patient safety, supporting medical record-keeping, or decreasing costs all around. But to choose the optimal direction, the manufacturer needed to understand the dominant values in the cardiac cath lab.

Interviews suggested that the dominant concern in clinicians' minds was to improve patient safety. During procedure after procedure, however, researchers found that documentation needs drove much of the activity in the lab—even the performance times of the procedures themselves. After observing numerous procedures, researchers determined that documentation was a much more crucial factor than they had initially believed.

But to make this determination, they first had to understand the culture of the cath lab. That meant developing a knowledge of the division of labor (e.g., nurse as computer "driver"), of the unique pace (how long is too long to complete a given part of a procedure?), and, especially, of the relevant language.

The terms that clinicians used were sometimes medical jargon ("RV/LV pressures"), sometimes slang (such as "pulling settings" from a hemodynamics monitor), and sometimes a combination or contraction of medical and slang terms ("thermals" for thermodilutions).

The researchers observed procedures in a live cath lab to gain a clear understanding of how things worked. In the lab, the technologist, out of the sterile field in a viewing room, preserved the record of each step on her computer, meanwhile keeping an eye on some of the patient's hemodynamic information, such as the heart rate and blood pressure readings.

Examples like this drove home the importance of real-time recordkeeping during the cath lab procedures. Indeed, full documentation was important enough to force the doctor to actually halt his procedure, momentarily, to allow the technologist time to clear a paper jam and to enter patient data.

The language of the cath lab reflected this social reality: the technician was in a number of important ways the driver of the procedure. Chiefly, it was she who controlled the step-by-step documentation of each therapeutic maneuver.


Ethnographic research is holistic. It requires researchers to look beyond the immediate answers to their initial questions and see the new product as part of the larger context of personnel, tasks, and incentives that its users confront on a daily basis.

In a study of behavior around office photocopiers, anthropologist J. Blomberg discovered that office workers tended to define the term mechanical breakdown simply as any time the machine was unusable, not as when the mechanical components were physically damaged.4 As one might imagine, machines that happened to be in offices with a helper, someone available to clear paper jams and handle other minor problems efficiently, were perceived as more reliable than machines of the same type in offices lacking such an employee.

The example is a good one of how context defines meaning. Had researchers been content merely to ask (as in a phone survey) how many times the machines in the office "broke down," they would have received a misleading impression.

From an anthropological perspective, such contextual definitions of device problems are wholly welcome and expected. Of course, product users often define error, problem, and difficulty differently than product developers—they live with the device every day.


Ethnographic field research is a crucial tool for the development of medical products. It is certainly not the only tool for determining what product users want and need, but it yields a depth and accuracy of results that are not available from any other approach. Of course, conducting ethnographic research can be more difficult in various ways than conducting more-conventional research. But the results are worth the trouble.


1. AW Wicker, "Attitudes versus Actions: The Relationship of Verbal and Overt Behavioral Responses to Attitude Objects," Journal of Social Issues 25, no. 4 (1969): 41–78.
2. E Loftus and G Wells, Eyewitness Testimony: Psychological Perspectives (New York: Cambridge University Press, 1984).
3. RG Cooper, New Products: The Key Factors in Success (Chicago: American Marketing Association, 1990).
4. J Blomberg, "Social Interaction and Office Communication: Effects on User Evaluation of New Technologies," in Technology and the Transformation of White Collar Work, ed. R Kraut (Hillsdale, NJ: Lawrence Erlbaum Associates, 1987), 195–210.

Copyright ©2001 Medical Device & Diagnostic Industry
