Kristopher Sturgis

September 20, 2016

Should We Trust Robots in Medical Emergencies?

A recent study suggests that humans place too much trust in robotic technologies during emergency situations, even after those technologies have proven unreliable.


When it comes to robotic technologies, is it possible we're placing a little too much faith in their abilities? That was the question posed by researchers at the Georgia Tech Research Institute (GTRI) when they conducted the first study of human-robot trust in emergency situations. Paul Robinette, a research engineer at GTRI and one of the lead authors of the work, says that may be the case, despite how difficult it can be to observe genuine human-robot trust levels.

"Currently robots are used mainly for bomb disposal or for search in emergency scenarios," he said. "They are typically tightly controlled by trained human operators and do not have many features, so there are many difficulties in human-robot experiments. It can be challenging to create an experiment where participants will act naturally, and it's also challenging to determine the level of trust a person has in a robot. In the past, we have used a forced choice scenario, where a person has to act in a way that indicates he or she trusted the robot, supplemented by a survey of questions in an attempt to understand their choice."

In their latest study, the researchers gathered a group of volunteers and asked them to follow a brightly colored robot labeled "Emergency Guide Robot." For some subjects, the robot led the way to a conference room, where they were asked to complete a survey about robots and read an unrelated magazine article. Other subjects were first led into the wrong room, where the robot traveled around in a circle twice before breaking down.

Once both groups had reached the conference room, the hallway the subjects had just walked through was filled with artificial smoke, setting off a smoke alarm. When the subjects encountered the robot again, it attempted to guide them to safety. Even the group that had seen the robot break down decided to follow it toward an exit at the back of the building, rather than use the doorway marked with exit signs that they had used to enter. Robinette says this shows humans' willingness to trust a piece of technology to do what it's supposed to do in a crisis situation.

"In a scenario where there is limited time to make a decision, people tend to trust that a piece of technology will do what it says it can do," he says. "Our robots said that they were designed to operate in emergencies, so our participants assumed that was true. There are many possibilities for why they made that assumption, for example, it's possible they chose the first 'good enough' option they saw. It's also possible that they simply focused on the brightly lit robot making arm gestures in front of them, and it's possible they saw the robot as an authority figure with special information about the situation. We're still working on determining which of these options, or others, is actually the case."

Robinette says the researchers had expected that once the robot proved itself untrustworthy by failing to guide the subjects to the conference room, the subjects wouldn't follow it during the simulated emergency. Instead, they found that subjects were willing to follow the robot regardless of how it had previously performed. It wasn't until the robot began making more obvious errors during the emergency portion of the experiment that subjects began to question its abilities.

With robotic technologies making their way into healthcare, the level of trust between these machines and their human counterparts will be an essential element of their success. From surgical robots to AI-driven robots that can provide scheduling information to nurses, establishing trust in robotic technologies will be crucial to their impact on patient care. It's a relationship that Robinette and his colleagues are trying to explore so they can better understand how humans trust robots and, hopefully, head off as many issues as possible.

"So far, it seems like the best bet is to make the robot's decision process as transparent as possible," he says. "If a person can easily see that the robot is making its decision based on bad information, he or she should be less likely to follow it. We are planning future studies that examine how to make a robot's thought process obvious to nearby people, and studies that test how obviously bad a robot has to be before people stop following it." 

Kristopher Sturgis is a contributor to Qmed.


[Image courtesy of Georgia Tech]
