
October 1, 2003


Originally Published MDDI October 2003

R&D DIGEST

Cognitive Machines May One Day Strengthen Capabilities of Medical Devices

Gregg Nighswonger

A Sandia software developer operates a simulation trainer with a cognitive model of the software. The model can detect operator errors and provide an alert.

A new type of “smart” machine that could fundamentally change how people interact with computers is now in development at the U.S. Department of Energy's Sandia National Laboratories (Albuquerque). Although the concept is not currently being developed for medical applications, Sandia researchers believe it could one day have a significant impact on device technology by enhancing decision-making processes.

Over the past five years, a team led by Sandia cognitive psychologist Chris Forsythe, PhD, has been developing cognitive machines that can accurately infer user intent, remember experiences with users, and allow users to call upon simulated experts to aid in situation analysis and critical decision making. Forsythe notes that the group is not currently working on any medical applications, but that the field offers great potential. 

“One discussion we have had,” he says, “is to capture the expertise of various specialists in cognitive models that can be made available to first responders who must make critical decisions in the field. Another example would involve capturing an individual's experience in such a way that others working in the same or similar specialties may query those experiences to help understand current episodes. In training, a trainee's cognitive model may be compared with that of one or more experts to identify potential gaps in the trainee's knowledge. Similarly, a trainee may compare the knowledge of different experts to understand what is commonly understood as opposed to what is partly a matter of opinion. The technology could also be applied to enhance the interfaces to various medical equipment, particularly equipment that requires some degree of interpretation of the results. These are just examples that come to mind.”

The initial goal of the work was to create a “synthetic human”—that is, a software program/computer system capable of thinking like a person. According to Forsythe, “We had the massive computers that could compute the large amounts of data, but software that could realistically model how people think and make decisions was missing.”

According to the researchers, there were two significant problems with existing cognitive modeling software. First, the software did not reflect how people actually make decisions: it followed strictly logical processes, which people do not necessarily do. People make decisions based, in part, on experience and associative knowledge. Second, software models of human cognition did not take into account organic factors such as emotions, stress, and fatigue, which are vital to realistically simulating human thought processes.
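To make the distinction concrete, the sketch below is a hypothetical illustration in Python, not Sandia's software; all names, data, and weightings are assumptions. It shows one way a model could score decisions by associative recall of past episodes and then let organic factors such as fatigue and stress pull the result toward the most familiar response rather than the best-matching one.

```python
# Hypothetical sketch only -- not Sandia's software. It illustrates a decision
# model that scores options by associative recall of past episodes and lets
# "organic factors" (fatigue, stress) shift the weighting toward familiarity.

from __future__ import annotations

from dataclasses import dataclass


@dataclass
class Episode:
    cues: frozenset    # features present when the episode occurred
    choice: str        # what the person did
    outcome: float     # how well it worked, on a 0..1 scale


def decide(episodes: list[Episode], cues: frozenset,
           fatigue: float = 0.0, stress: float = 0.0) -> str | None:
    """Return the choice best supported by similar past episodes."""
    if not episodes:
        return None

    match_score: dict[str, float] = {}
    familiarity: dict[str, float] = {}
    for ep in episodes:
        overlap = len(cues & ep.cues) / max(len(ep.cues), 1)
        match_score[ep.choice] = match_score.get(ep.choice, 0.0) + overlap * ep.outcome
        familiarity[ep.choice] = familiarity.get(ep.choice, 0.0) + 1.0

    # Under fatigue or stress, weight shifts from cue matching toward sheer
    # familiarity -- a crude stand-in for the organic factors the article says
    # early models ignored.
    load = min(1.0, (fatigue + stress) / 2)
    return max(match_score,
               key=lambda c: (1 - load) * match_score[c] + load * familiarity[c])


# Illustrative use: a rested model picks the best cue match; a fatigued,
# stressed one drifts toward the habitual choice.
history = [Episode(frozenset({"chest pain", "sweating"}), "call cardiology", 0.9),
           Episode(frozenset({"headache"}), "give analgesic", 0.7),
           Episode(frozenset({"headache"}), "give analgesic", 0.6)]
print(decide(history, frozenset({"chest pain"})))                            # -> call cardiology
print(decide(history, frozenset({"chest pain"}), fatigue=1.0, stress=1.0))   # -> give analgesic
```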

Says Forsythe, “The most difficult problem has involved being able to capture and model the unique knowledge of a specific individual. We have found that individuals with seemingly similar levels of expertise have quite different knowledge structures. Similarly, the biggest problem to overcome is the development of tools to automate, or at least semiautomate, the knowledge-capture process. Currently, we are very good at developing a model of a specific individual, but it is a very time-consuming and laborious process. It is essential that we work toward automation of this process.”

Work on cognitive machines expanded in 2002 with funding from the Defense Advanced Research Projects Agency (DARPA) to develop a real-time machine capable of inferring an operator's cognitive processes. This capability provides the potential for systems that augment the cognitive capacities of an operator through “discrepancy detection.” Using discrepancy detection, the machine applies a cognitive model of the operator while monitoring its own state. When there is evidence of a mismatch between the machine's actual state and the operator's perceptions or behavior, the system can signal a discrepancy.
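As a rough illustration of that idea, the hypothetical sketch below (illustrative Python, not the Sandia system; all names and state variables are assumptions) compares the machine's actual state with the state the operator's recent actions imply they believe, and flags any mismatch.

```python
# Hypothetical sketch of discrepancy detection as described above -- not code
# from the Sandia system. The machine tracks its actual state alongside a
# model of what the operator appears to believe, and flags any mismatch.

from dataclasses import dataclass, field


@dataclass
class OperatorModel:
    """What the operator seems to believe, inferred from their recent actions."""
    believed_state: dict = field(default_factory=dict)

    def observe_action(self, action: str, implied_beliefs: dict) -> None:
        # Each action implies beliefs about the machine, e.g. silencing an
        # alarm implies the operator believes the alarm has been acknowledged.
        self.believed_state.update(implied_beliefs)


def detect_discrepancies(actual_state: dict, operator: OperatorModel) -> list:
    """Return the state variables where machine and operator disagree."""
    return [key for key, value in actual_state.items()
            if key in operator.believed_state
            and operator.believed_state[key] != value]


# Example: the operator acts as though an alarm was acknowledged, but the
# machine's actual state says it was not -- evidence of a discrepancy.
operator = OperatorModel()
operator.observe_action("silence_alarm", {"alarm_acknowledged": True})
print(detect_discrepancies({"alarm_acknowledged": False}, operator))
# -> ['alarm_acknowledged']
```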

The research is based in part upon a simple concept, the Sandia team explains. When people interact with one another, they modify what they say and don't say with regard to such things as what each person knows or doesn't know, shared experiences, and known sensitivities. The goal is to give machines highly realistic models of the same cognitive processes so that human-machine interactions have essential characteristics of human-human interactions.

“It's entirely possible that these cognitive machines could be incorporated into most computer systems produced within 10 years,” Forsythe says. “Our biggest investment over the next couple of years will focus on automating the knowledge-capture process.”

Copyright ©2003 Medical Device & Diagnostic Industry
