
Medtech Leaders Discuss Snakebite App and Other AI Opportunities

Is AI really ready for prime time in medtech? Panelists at AdvaMed's Digital Medtech Conference discussed the challenges that remain and the opportunities already at work.

At AdvaMed's recent Digital Medtech Conference, held at a conference center nestled inside San Francisco's growing UCSF Mission Bay campus, AdvaMed Chief Strategy Officer Andrew Fish asked for a show of hands: how many of your companies develop products that use artificial intelligence?

About 75% of the crowd raised their hands.

For years, many have looked to AI and machine learning as keys to transforming healthcare, medical devices included. Today, we're seeing real ground broken, especially in diagnostics and software as a medical device (SaMD).

Is AI really ready for prime time in medtech? Panelists in a session called "Demystifying AI" weighed the challenges that remain and the opportunities already at work.

Adjust Expectations

In 2017, Stanford researchers developed an algorithm that can recognize skin cancers with an accuracy rate comparable to dermatologists. In some cases, the machine outperformed humans. Should we expect even greater accuracy for medical devices?

The exactness needed for a medtech algorithm depends on intended use, said Sam Surette, regulatory affairs and quality assurance manager for Bay Labs, a San Francisco-based medical technology company that applies AI to cardiovascular imaging. Pat Baird, senior regulatory specialist and head of global software standards for Philips, added that developers shouldn't burn out their engineers in a quest for 100% perfection. Shoot for human-level accuracy first, then aim for slightly better.

Surette said to also watch for false positives and false negatives and to bring in human backup to confirm or deny the machine's numbers. "Something very unique to AI is seeing where humans can act as risk mitigation for the algorithm," he said in a phone interview after the conference. "Is the output explainable? Can the software explain why it's making this prediction?"
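The pattern Surette describes — tracking false positives and false negatives while routing borderline outputs to a human — can be sketched in a few lines. This is a minimal illustration, not any vendor's actual pipeline; the sample labels, scores, and the 0.35–0.65 review band are made-up values:

```python
# Hypothetical sketch: false-positive/false-negative rates for a binary
# classifier, plus flagging of low-confidence scores for human review.

def confusion_counts(y_true, y_pred):
    """Return (true positives, true negatives, false positives, false negatives)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def needs_human_review(score, low=0.35, high=0.65):
    """Route borderline scores to a clinician instead of auto-reporting."""
    return low <= score <= high

# Illustrative ground truth and model scores (threshold at 0.5).
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
scores = [0.9, 0.2, 0.55, 0.8, 0.4, 0.1, 0.7, 0.6]
y_pred = [1 if s >= 0.5 else 0 for s in scores]

tp, tn, fp, fn = confusion_counts(y_true, y_pred)
print("false positive rate:", fp / (fp + tn))
print("false negative rate:", fn / (fn + tp))
print("flagged for review:", [i for i, s in enumerate(scores) if needs_human_review(s)])
```

The point of the review band is exactly the safeguard the panel described: the machine reports confidently where it can, and a human acts as risk mitigation where it can't.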

Bay Labs's EchoMD AutoEF software, which received FDA 510(k) clearance in June 2018, calculates left ventricular ejection fraction, but it also provides clips used to make those calculations, Surette said. "In that setting, clinicians don't just get a number. They get the number in the context of video clips, which allows human intelligence to act as a safeguard on AI."

Train to Overcome Bias

AI-based software isn't inherently biased. But the data used to train the device might be. Many types of bias exist: confirmation bias, where the physician gives more weight to evidence that supports a presumed diagnosis; automation bias, where the clinician trusts technology to the exclusion of other evidence; and sample selection bias, which stems from data that doesn't represent the total population, to name a few.

Generally, the panelists recommended developers accommodate all variabilities when feeding data into a device. Mitigate any inadvertent human biases by including a range of genders, ages, ethnicities, shapes, and sizes. Surette describes it as training the algorithm on a sufficient number of low, medium, normal, and high disease states.
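One concrete way to act on that advice is to audit a training set for coverage across disease-severity strata before fitting a model. The sketch below is an illustrative assumption — the severity labels and the minimum-count threshold are invented for the example, not drawn from Bay Labs' actual process:

```python
# Hypothetical sketch: check that training data contains enough samples in
# each disease-severity stratum, flagging under-represented strata that
# could introduce sample selection bias.
from collections import Counter

STRATA = ["low", "medium", "normal", "high"]

def coverage_report(records, min_per_stratum=50):
    """Return each stratum whose sample count falls below the required minimum."""
    counts = Counter(r["severity"] for r in records)
    return {s: counts.get(s, 0) for s in STRATA if counts.get(s, 0) < min_per_stratum}

records = (
    [{"severity": "low"}] * 120
    + [{"severity": "medium"}] * 80
    + [{"severity": "normal"}] * 200
    + [{"severity": "high"}] * 12   # under-represented stratum
)

print(coverage_report(records))  # flags "high" with only 12 samples
```

The same kind of check extends naturally to demographic attributes — gender, age, ethnicity — so that inadvertent gaps surface before training rather than after deployment.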

The Opportunities Ahead

Dave Saunders, chief technology officer of Galen Robotics, told MD+DI last year that non-real-time applications such as diagnostics, medical image analysis, gene sequencing, drug interaction analysis, and pre-surgical planning have the greatest potential right now.

The panel echoed that statement, noting two products authorized by FDA: IDx-DR, which uses AI to detect eye disease; and Viz.ai, a decision support software that detects suspected strokes.

Panel moderator Yarmela Pavlovic, a partner at Hogan Lovells who focuses her practice on medical devices and diagnostics, cited remote monitoring and at-home monitoring devices as promising opportunities, as they have the potential to keep patients out of the hospital, thus lowering our country's exorbitant healthcare costs.

The panel also discussed AI's potential to bring specialists to rural areas and developing countries, where experts aren't readily available. The African Alliance of Digital Health Networks (African Alliance), for example, works to develop "digitally literate" healthcare workers in Africa. With training and education, more African developers can learn how to use AI to develop technology that will improve public health.

Even technologies such as IDx-DR bring specialist services to people who may not otherwise have access. Designed and tested for use in primary care, IDx-DR can detect signs of diabetic retinopathy. If it detects more than mild eye disease, the physician can then refer patients to an ophthalmologist.

John Daley, vice president of quality assurance, privacy, and security for IBM Watson Health, mentioned an interesting concept for a serious situation—snakebites. About five million snakebites occur globally each year, causing 125,000 deaths. Administering the correct antivenom saves lives, but you have to correctly identify the snake.

Developers are working on an app that would use AI to identify the offending snake from a photo. However, this means a bitten person would have to somehow take a picture of the snake, which may not go well. An alternative method discussed involved an app that tracks the victim using their phone's GPS and then facilitates the arrival of the antivenom via drone. WeRobotics has been experimenting with delivering antivenom by drone to small villages in the Amazon Rainforest.

FDA's Vision for Good Machine Learning Practices

Recognizing the potential for AI to transform healthcare, FDA is clarifying its vision for good machine learning practices (GMLP). It mentions GMLP briefly in its AI-focused discussion paper, published last month, and hopes to elaborate on the theme.

Matthew Diamond, MD, PhD, medical officer of FDA's new digital health division, who spoke at the Digital Medtech Conference, described GMLP as similar to good manufacturing practices but specifically focused on the challenges presented in machine learning.

Formal GMLP don't exist yet, but Diamond hopes the industry will share its thoughts on what they should entail. FDA's discussion paper ends with 20 questions for medical device manufacturers and software developers, a few of them centered on GMLP. Diamond encouraged the industry to submit responses to those questions.

FDA has received 41 responses so far. If you'd like to contribute, you'll have to hurry. The deadline for submitting comments is Monday, June 3.
