Bernhard Kappe, CEO of Orthogonal, began his Virtual Engineering Week presentation, “Using AI & Machine Learning to Improve Medical Device Design,” by quoting Marc Andreessen, co-founder of Netscape. “Software is eating the world, in all sectors,” Andreessen wrote in a Wall Street Journal article. “In the future every company will be a software company.”
Kappe said he thought that statement has aged well. He shared another Wall Street Journal story, this time a review of the Apple AirPods Pro, one year after the initial review. “This is not a review of new AirPods Pro,” he explained. “This is a new review of the same AirPods Pro one year later, and it's a much better review.” The difference, he said, came down to better software: Apple kept collecting data and feedback over that year to improve the product.
This is where AI and real-world data come in, Kappe said. “From usage and scale, from other data about users, their environment, the world that they live in, and what they do, AI can discover new patterns and correlations that humans can't and turn those into algorithms much faster than humans can and continue to improve them while humans are sleeping.”
Those same patterns are also happening in medical devices, Kappe said. He gave an example of his client, Quidel, which makes point-of-care diagnostics. The company’s Project Sniffles includes a small diagnostic device that can read a standard Sofia fluorescent immunoassay cassette; images of the cassette are captured on the Sniffles device and transmitted via Bluetooth to a mobile app. The result is then interpreted using a proprietary AI software model that is downloaded as part of that mobile application. “This has a number of advantages compared to the previous versions,” Kappe noted. “The first is affordability. The manufacturing costs are less than 20% of the cost for their current Sofia platform. Second is mobility and connectivity. Cellular connectivity and Cloud integration enables testing in new markets beyond the traditional point of care.”
The third advantage is rapid manufacturing, Kappe continued, because the devices are much simpler to produce. “And fourth, and just as important, is the ability of the algorithm and the device to continue improving. The AI algorithm can be improved over time through continuous offline learning.”
Kappe shared some things that his audience should consider as part of their journeys into using AI. “The first thing is to really think about where AI might add value to your medical device in general,” he stated, adding that companies should also think about where they can take existing algorithms and improve them and continue to optimize them. “Often you're combining both of these things: new trends with algorithm optimization,” he said.
“Second, you need to understand how complex your problem is, both in terms of the domain and the inputs,” Kappe continued. “The more complex these things are, the more data you're going to need to train and verify your models.”
He cited the example of Quidel's Sniffles, which uses a standard assay and a camera at a fixed distance under fixed lighting conditions. “Now contrast this with using a smartphone camera to take pictures of skin to identify skin cancer lesions,” Kappe said. “In that case, you don't have a standard assay, you don't have one camera with a fixed distance. You don't have standard lighting conditions. You're going to need a lot more data,” he stressed, adding that planning around that need, and understanding it from the start, will help in assessing feasibility and what the design will require.
The third thing he urged companies to think about is the availability of a reference standard for an AI model. He gave the example of continuous glucose monitoring (CGM). One of the things needed to assess the performance of a CGM system is to generate a mean absolute relative difference (MARD) score. MARD is the average of the absolute error between all CGM values and matched reference values, which are obtained from blood tests. “Getting those blood tests has a cost, and it's not part of everyday practice,” Kappe said. “So you need to pay for those when you update your algorithm,” he continued. “That makes it much, much more expensive to evaluate and validate the new algorithms. So one of the things you really need to take into consideration is, how much is it going to cost to get that reference standard? How easy is it to administer that because that's really going to impact your ability to continue to improve your algorithms, and demonstrate that,” Kappe explained.
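The MARD calculation Kappe describes can be sketched in a few lines: for each CGM reading and its matched blood-test reference, take the absolute difference as a fraction of the reference value, then average. The numbers below are illustrative, not real clinical data.

```python
# Sketch of a MARD (mean absolute relative difference) score for a CGM
# system: the average absolute error between each CGM reading and its
# matched blood-glucose reference, expressed as a percent of the reference.

def mard(cgm_values, reference_values):
    """Mean absolute relative difference, in percent."""
    if len(cgm_values) != len(reference_values):
        raise ValueError("each CGM value needs a matched reference value")
    relative_errors = [
        abs(cgm - ref) / ref
        for cgm, ref in zip(cgm_values, reference_values)
    ]
    return 100 * sum(relative_errors) / len(relative_errors)

# Matched pairs: CGM readings vs. blood-test references (mg/dL, illustrative)
cgm = [110, 150, 95, 200]
ref = [100, 160, 100, 210]
print(f"MARD: {mard(cgm, ref):.1f}%")  # prints "MARD: 6.5%"
```

The cost Kappe highlights sits in the `ref` list: every matched reference value is a paid blood test, so each algorithm update that must be re-validated multiplies that expense.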
Another thing to consider is data availability and quality, Kappe said, including a company’s own data. “How are you making sure that it's good data, that you can trace where it came from and isolate bad data? Make sure that training data is separate from verification data, etc.,” he said. There may also be third-party data, which might not always be medical-grade, and Kappe urged his audience to think about how they would deal with potential issues with that, such as bias.
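One common way to keep training data strictly separate from verification data, while preserving traceability, is a deterministic split keyed on a stable sample ID, so a record never silently migrates between sets as the dataset grows. The field names (`sample_id`, `source`) and the 20% hold-out fraction below are illustrative assumptions, not details from the talk.

```python
import hashlib

def assign_split(sample_id, verification_fraction=0.2):
    """Deterministically assign a record to 'training' or 'verification'.

    Hashing the stable ID makes the assignment reproducible and auditable:
    the same sample always lands in the same set, run after run.
    """
    digest = hashlib.sha256(sample_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return "verification" if bucket < verification_fraction * 100 else "training"

# Illustrative records, each carrying provenance so bad data can be isolated
records = [{"sample_id": f"S{i:04d}", "source": "site-A"} for i in range(1000)]
splits = {r["sample_id"]: assign_split(r["sample_id"]) for r in records}
n_verif = sum(1 for s in splits.values() if s == "verification")
print(f"{n_verif} of {len(records)} records held out for verification")
```

Keeping a `source` field on every record is one simple answer to Kappe's traceability question: if a data source later proves unreliable, its records can be located and excluded without rebuilding the dataset.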
“Now, of course, it would not be a medical device if you didn't have to consider patient risk,” Kappe said. Patient risk is going to affect how much testing needs to be done, how much risk analysis and risk mitigation needs to be done, and how much scrutiny an algorithm will undergo by regulatory bodies, he said.
The last thing Kappe mentioned was how and where algorithms will be deployed. Deploying them in the cloud is in many ways the easiest, he said, because you have a lot of processing power, can scale easily, and can define the specific GPU or FPGA [field-programmable gate array] access that you're using.
However, he said devices typically don't have the same processing power. “If you've engineered your apps for modularity, it can be pretty easy to update the algorithms in your apps, but there's a lot more work in translating and testing your algorithms. Different smartphones have different floating point precision in their GPUs, often much less than what you would be able to get in the cloud where you trained this, and so you might need to do a lot more tuning of your algorithms and testing those for specific smartphones,” he said.
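The precision gap Kappe describes can be made concrete by round-tripping a value through narrower IEEE 754 formats. Many mobile GPUs favor 16- or 32-bit floats, while cloud training typically uses 64-bit. The specific weight value below is an illustrative assumption, chosen to show where half precision starts dropping information.

```python
import struct

def quantize(value, fmt):
    """Round-trip a Python float (64-bit) through a narrower IEEE 754 format.

    fmt is a struct format code: 'f' for 32-bit single precision,
    'e' for 16-bit half precision (common on mobile GPUs).
    """
    return struct.unpack(fmt, struct.pack(fmt, value))[0]

weight = 1000.1  # an illustrative model parameter stored in float64
print(quantize(weight, "f"))  # float32 stays close: ~1000.0999755859375
print(quantize(weight, "e"))  # float16 rounds to 1000.0 (spacing near 1000 is 0.5)
```

This is why Kappe's per-device tuning and testing matters: a model that behaves well at training precision can drift once its arithmetic runs at the narrower precision of a particular phone's GPU.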
Kappe wrapped up his presentation by leaving his audience with some best practices. “First, whether you're using AI or not, you should design for data capture from the start,” he said.
The second is to design processes for continuous improvement. “So you need to think about how to design for efficient updates. There are a lot of best practices out there for automation and software as a medical device that you can leverage here,” he said.
“With future machines, I describe the human role as being shepherds,” Kappe concluded. “You just have to nudge the flock of intelligent algorithms. Just basically push them in one direction or another and they will do the rest of the job. You put the right machine in the right space to do the right tasks.”