Machine learning and artificial intelligence (AI) have long been heralded as the future of transformative technologies. From diagnostic and imaging technologies to therapeutic applications and robotics, the potential for machine learning and AI technologies reaches almost every corner of the medtech world. So, what does that mean for the development and application of next-gen medical devices?
Dave Saunders is the chief technology officer of Galen Robotics, an emerging surgical robotics company that specializes in a new line of robotic technologies that provide a cooperatively controlled surgical platform. The company aims to provide robot-assisted technologies that can extend increased precision and unprecedented tool stabilization to microsurgery procedures.
Saunders has personally overseen the evolution of more than 40 different internet-based products from inception to market since 1991 and has led product development programs for virtual machine clustering and computer-vision-guided surgical tools. He’ll also be speaking at MD&M East in June, where he’ll cover the topic of “How Artificial Intelligence Has Changed Everything for Medical Devices.”
Saunders recently sat down to speak with MD+DI about how the current development and application of diagnostic and therapeutic devices is poised to explode once true AI arrives. He also discusses some of the challenges that new AI and machine learning technologies pose to device developers and explores the immediate impact that some of these new technologies will have on the market.
MD+DI: For starters, AI technology has been touted as a truly transformative technology for many years. Do you think we’re on the verge of machine learning and AI technologies finally having a real impact on the medical device market? How soon will we see any kind of significant impact?
Saunders: We’re already seeing AI and machine learning being applied to diagnostics and other areas, so inroads are being made. AI and machine learning for surgical devices might be a bit further out though. Currently we don’t have a clear path from the FDA for approval in this area, and training for AI and machine learning is also a bit more difficult for something like a surgical robot than, say, facial recognition or other more prominent uses of AI. We’re getting there, but a lot more work is needed.
MD+DI: In your experience, how has the development of machine learning and AI technologies changed the process of medical device development? Has it made things easier, or more complex?
Saunders: I think it’s opening a lot of doors and making complex analysis more possible in a wide range of applications. A great rule of thumb from Dr. Andrew Ng of Stanford is that anything a human can “think through” in a second or less is a possible candidate for AI or machine learning. This rule of thumb isn’t perfect, but it does provide a short list that can help technologists and product managers see where devices might benefit from applied AI and machine learning.
MD+DI: As the chief technology officer at Galen Robotics, what role do you see AI technologies and advanced machine learning having on the development of next-gen robotic technologies?
Saunders: My preference is to see AI and machine learning applied in a way that acts as a super assistant to a surgeon or practitioner to give them “super human” perception, dexterity, and information with which to make better decisions. It doesn’t have to be all or nothing either. Take something like medical imaging. For example, if an AI or machine learning system can give you a cancer/not-cancer determination — with 100% certainty — for 50% of all breast scans, you’ve just made a huge impact on unburdening the humans who can now focus on the remaining 50%.
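The triage idea Saunders describes maps onto what machine learning practitioners call selective prediction: the system decides automatically only when it is highly confident, and routes everything else to a human. A minimal sketch, where the function name, scan IDs, and threshold are all illustrative:

```python
# Sketch of confidence-threshold triage for an imaging classifier.
# Scans the model is highly confident about are auto-labeled; the
# rest go to a human reader. Names and thresholds are illustrative.

def triage(scores, threshold=0.99):
    """Split scans into auto-decided and human-review queues.

    scores: list of (scan_id, p_cancer) pairs from a trained model.
    A scan is auto-decided only when the model is confident either way.
    """
    auto, review = [], []
    for scan_id, p in scores:
        if p >= threshold:
            auto.append((scan_id, "cancer"))
        elif p <= 1 - threshold:
            auto.append((scan_id, "not-cancer"))
        else:
            review.append(scan_id)  # human radiologist decides
    return auto, review

scores = [("s1", 0.999), ("s2", 0.0003), ("s3", 0.62), ("s4", 0.95)]
auto, review = triage(scores)
# s1 and s2 are confidently auto-decided; s3 and s4 go to a human.
```

In practice the threshold would be set from validated model calibration data, not picked by hand, and a real deployment would never claim literal 100% certainty.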
In surgical robotics, it’s a similar situation. You could use AI and machine learning and combine it with augmented reality to highlight a tumor during surgery, then let the robot do something like close up at the end. You could also develop integrated sensors to see through thin layers of tissue to stop the surgeon’s hand if they might be coming close to something they should avoid, or to align a pedicle screw as the surgeon is co-aligning it based on their own training and expertise. I love the idea of these big “moon shot” robots that could do fully autonomous surgery, but I don’t think the tech is there yet. Meanwhile, there are countless applications for AI and machine learning technologies that could help right now, and that’s where we should all be focused.
Incidentally, we also need to make sure humans are incentivized to remain involved with their areas of expertise. If an AI or machine learning system reduces the load of medical imaging analysis by 50%, we still need humans to work through the rest. If people get scared that “robots will take their jobs,” we could see talent shortages in those areas. The best application of these technologies, in my opinion, is to enhance human capabilities and reduce the load of the “easy stuff” — but we still need to make sure humans are around because we can analyze things that AI and machine learning systems can’t.
MD+DI: What are some of the biggest challenges that some of these new AI technologies present for device makers, and how do you think some of these challenges can be addressed?
Saunders: How we train AI and machine learning is very important. The group of people who establish the training data for an AI or machine learning system are teaching it a worldview, even if it’s narrow in focus. Because of that, diversity among these teams is critical. AI and machine learning systems need to be taught to recognize the differences between people and to make decisions that work for each individual, not for a homogenized composite that may represent only a subgroup of humanity.
For example, imagine orthopedic planning software for knee replacement whose AI or machine learning model was trained only on scans from people with valgus knees. When it’s applied to people with straighter or varus knees, it could get really confused. A diverse training team has the collective perspective to make sure gaps like that are filled in during training, so reliability in the field is optimal.
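The kind of training gap described in this example can be caught before a model is ever trained with a simple coverage audit of the dataset labels. A minimal sketch, where the category names and minimum count are assumptions for illustration:

```python
from collections import Counter

# Sketch of a training-set coverage audit for knee-alignment
# categories. Category names and the minimum count are illustrative;
# real datasets would carry richer metadata than a single label.

REQUIRED = {"valgus", "neutral", "varus"}

def coverage_gaps(labels, min_count=50):
    """Return alignment categories that are missing or underrepresented."""
    counts = Counter(labels)  # Counter returns 0 for absent categories
    return sorted(c for c in REQUIRED if counts[c] < min_count)

labels = ["valgus"] * 400 + ["neutral"] * 12  # no varus scans at all
print(coverage_gaps(labels))  # -> ['neutral', 'varus']
```

A check like this is no substitute for a diverse team deciding which categories matter in the first place, but it turns the question "what did we forget?" into something the build pipeline can flag automatically.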
MD+DI: What are some of the most important issues to consider when device makers want to integrate AI or machine learning into a new medical device?
Saunders: Where is the data processing going to be done? A lot of AI and machine learning systems are designed to work in the cloud. Is that the right approach for a hospital? Can a high-volume system get enough access to a processing cloud to work on demand? Do you want a surgical robot that needs an active internet connection to work correctly? What happens if that connection gets interrupted? An active connection is also a potential opening for “man-in-the-middle” attacks from hackers. At the same time, setting up an edge computing cluster at each hospital could be enormously expensive, and then you have to deal with the logistics of updates, new data, and so on. Reading a medical image with an occasional five-minute delay is probably manageable, but what if you get that “spinning ball” while you’re in the middle of surgery? That’s just not going to cut it.
The computing resources, storage, and training data for AI and machine learning systems used with surgical robotics need to be planned to be robust, fault tolerant, and cost efficient. Otherwise, we may never see them in the field.
MD+DI: What kind of an impact do you think AI technologies will have on regulatory and human relations issues?
Saunders: When you have a robot make a decision based on an algorithm, it’s fairly straightforward to validate; you can do the equivalent of a mathematical proof and show that you know how it works. With AI and machine learning, you’re training it to make the kinds of decisions you want, and it’s pretty amazing. Type “cat” into Google Images and look at all the different cat pictures you get. No one wrote an algorithm to describe what a cat is to a computer. AI and machine learning systems are trained to recognize it, but how do you prove to a regulatory body that it really knows what a cat looks like?
Now consider a robot that can take out your appendix, based on AI or machine learning. How are you going to prove to the examiner that the robot really knows where the appendix is, and what will it do if it is not 100% confident for a specific patient? What if the patient has situs inversus and the appendix is on the other side? Who, or what, is responsible for each step of the procedure? What are the safety protections for the patient, and who is responsible if something does go wrong? I think there’s a lot of ground to cover here before we ever see AI-driven surgical robotics on any broad scale.
As for human relations, I think we’re ready to interact with machines as peers, and maybe too soon at that. There are stories about elder care robots being immediately accepted by people, or children developing “relationships” with the personal assistants on phones. I actually say “thank you” to Alexa myself. My only concern is that because of how AI and machine learning are depicted on TV and in movies, people may be predisposed to trust computers to operate independently in areas where they’re not actually capable, and to distrust them in areas where they are.
MD+DI: Finally, in a broad sense, where do you see AI technologies having the most immediate impact when it comes to medical device development? How soon do you think patients will begin to benefit from some of these new technologies?
Saunders: Today, I think the best applications for AI and machine learning systems are in non-real-time systems: for example, diagnostics, medical image analysis, gene sequencing, drug interaction analysis, and pre-surgical planning. If there’s some wiggle room in how long you can wait for an answer, we’re well positioned. These are areas that can take advantage of current infrastructure, like cloud-based computing and variable internet bandwidth, and they can take a huge load off the hospital system without having to “do it all.” Even a load reduction of 20% could have a big impact on costs, turnaround, and quality of care.
Some of these things are being applied now, so patients could already be benefitting from AI and machine learning systems and not even know it. A bit down the road could be smart-vitals monitoring with wearable devices. Based on readings, medications could be adjusted in real time, or emergency services could be alerted if a combination of vitals shows that something is wrong. You may not want your phone to automatically dial 911 if your heart rate is elevated, but there could be a combination of readings which do indicate an urgent problem. These are the kinds of things that AI and machine learning systems could help us better understand about the human body as more data is gathered and analyzed by the wearables we already have today.
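The point about combinations of readings can be sketched as a rule that fires only when several signals agree. A hypothetical example with illustrative, non-clinical thresholds (a real system would learn these patterns from data rather than hard-code them):

```python
# Sketch of a combined-vitals alert rule, echoing the point that no
# single reading should trigger an emergency call. Thresholds are
# illustrative assumptions, not clinical guidance.

def urgent(heart_rate, spo2, moving):
    """Flag only when several readings point the same way.

    A high heart rate alone (e.g., during exercise) is not urgent;
    a high heart rate with low blood oxygen while at rest may be.
    """
    tachycardic = heart_rate > 120
    hypoxic = spo2 < 90  # percent oxygen saturation
    return tachycardic and hypoxic and not moving

print(urgent(150, 97, True))   # elevated HR while exercising -> False
print(urgent(150, 86, False))  # elevated HR, low SpO2, at rest -> True
```

The interesting part, as Saunders suggests, is that with enough wearable data an AI or machine learning system could discover which combinations actually predict trouble, instead of relying on hand-written rules like this one.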
We’re in an interesting age where data storage, processing power, and connectivity are the stuff of science fiction: if you went back just 10 years and described the technology you have such casual access to today, and its low cost, people simply wouldn’t believe you. We can now generate health data from wearables, affordable imaging, and other diagnostics that would also have seemed like science fiction mere decades ago. That’s so cool, but there’s no way any human being can make sense of it all. Without AI and machine learning, we will not be able to process all of that data and combine it in ways that could revolutionize our understanding of the human body and medical technology. The possibilities may even be beyond our own imagining, but as we continue to chip away at the problem, what’s possible 10 years from now may very well blow our minds.