Ex Machina: What Is the Risk of AI in Medtech?

Brian Buntz

September 23, 2015


The recent film Ex Machina could serve as a warning for the unforeseen consequences associated with future medical technology.


Are we beginning to create a cloud-based healthcare monitoring network reminiscent of the NSA's warrantless surveillance program? A popular vision for the future of healthcare monitoring paints a picture of a vast network of iPatients who play an active role in monitoring their own health metrics (or use technology that automates the process) and allow their smartphones to send their data into the cloud.

The main difference between this scenario and the NSA's surveillance program is that the data is used to detect and monitor worrisome health conditions rather than potentially threatening security situations. In this case, doctors take on a role reminiscent of an NSA agent--leveraging cloud-based analytics to identify suspicious-looking health data patterns. A single doctor could even pull up a dashboard showing near real-time health data for a range of patients and help identify which ones should be admitted to a hospital and which can rest at home.

But the doctor doesn't have to be alone in this task. Such a system could help facilitate the diagnosis of all manner of urgent health problems. Certain processes, such as calling for an ambulance or readying a cath lab, could be partly or entirely automated. Doctors could use mobile monitoring technology not only to identify high-risk patients but also to contact them, recommending that they come in for further evaluation.

In the field of diagnosis, IBM is hoping its Watson platform will use its artificial intelligence capabilities to boost doctors' ability to, say, find out what is wrong with a patient admitted to the ER or identify a tumor barely visible on an MRI. Cloud-based computing could scour immense troves of data to identify the sickest patients, suggest possible treatments, and help monitor them once therapy has begun.

A Warning from Ex Machina

While this vision may be exactly what our disjointed and inefficient healthcare system needs, it also opens up new risks. Such an intelligent health monitoring system may help us meet the needs of 21st-century healthcare, but it could have unintended consequences as well.

What if, through a software glitch of some sort, an artificial intelligence system recommends against following a doctor's wise counsel? Or what about the security ramifications of having so much personal health data correlated to specific patients in the cloud?

The recent sci-fi flick Ex Machina serves as a warning of what can go wrong when artificial intelligence systems begin to surpass human intellect.

The thesis of the film, directed by Alex Garland, is that technology developed to help humanity can backfire without the necessary safeguards. While the film focuses on artificial intelligence, it hints at the security risks facing a range of fields, including medical monitoring, the smart home, and self-driving cars.

In the film, a technology titan named Nathan has created not only the world's most successful search engine but also artificially intelligent robots that can interact with humans naturally. His most recent robot, Ava, eventually becomes capable of manipulating both its creator and his unassuming employee, Caleb, who interacts with the robot to see if it can pass a modified version of the Turing Test.

Nathan's AI system learned to imitate human behavior by scouring vast troves of search engine data matched with data captured by smartphone sensors. For instance, by studying people's expressions while they spoke on the phone, it was able to identify the patterns of conversation and intonation most likely to elicit a smile or a grimace. The robot then imitated those patterns when interacting with humans.

The film also points to the threats of the smart home, which some have touted for its potential to revolutionize the monitoring of the elderly. In Ex Machina, Ava learns to manipulate this system to suit her own purposes.

Which Is Riskier? AI or Hackers?

Even if such an outcome seems far-fetched, the notion of limiting human control of technology is risky. And linking vast amounts of patient data in the cloud carries security risks of its own. Hackers are becoming more interested in personal health data (including identifying information such as names and Social Security numbers) because they can use that information for identity fraud, including obtaining prescription drugs. Consequently, they are placing less value on data such as credit card numbers, because the banks that issue the cards are getting better at detecting fraud and quickly shutting the cards down.

The notion that artificial intelligence represents a real risk if not implemented correctly also seems to be gaining ground. Recently, PayPal and Tesla founder Elon Musk donated $10 million to an organization with the mission of lowering the potential risks posed by "human-level artificial intelligence." Stephen Hawking and Bill Gates have joined him in warning about such risks.
