Healthcare is just as prone to hype and irrational exuberance as any other complex industry. And the more revolutionary the promise, the more outrageous the overstatements can be.
Artificial intelligence has certainly been one of those "next big things" for some time in healthcare. Whether branded as "big data and analytics" or "automated clinical decision support," the results of technology-assisted care, especially in non-clinical and non-emergent settings, have been uneven at best. But a new report indicates AI's time in healthcare is nigh, and technology and policy pioneers are doing their best to ensure the hopes aren't all hype.
The report, written by the JASON advisory group, was commissioned by the Office of the National Coordinator for Health IT (ONC) and the Robert Wood Johnson Foundation (RWJF). RWJF senior program officer Michael Painter said the report might serve as a bellwether for AI deployment in medicine.
"One of the points they're making is 'Yes, we've been talking about AI for decades and there's been a lot of hype that has evaporated,'" Painter said. "The initial sort of benchmark finding in this report is that this time is different; they're saying 'Pay attention, there are legitimate things that have happened in terms of deep learning, computer processing power, and data sufficiency.' We have moved to a point where there are significant implications for health and healthcare."
One of the biggest potential payoffs of AI-enabled medicine, of course, is the promised feedback loop between individuals, clinicians, and the delivery system at large via remote device connections. Whether dedicated to a particular condition, as are devices such as glucose meters, or running as a general fitness app on a smartphone, the potential to analyze data far more voluminous and complex than those gathered in clinical studies has epochal ramifications. The JASON researchers recognized this and dedicated seven pages of their report to mobile AI in health, in particular breaking out their observations on how to avoid what they called "snake oil" platforms that merely claim to use AI, and how to assure equitable access for users of all economic circumstances.
Making AI Affordable
The CEOs of three mobile AI platforms, two of which were mentioned in the JASON report, said they realize their obligation to provide thoroughly vetted technology to the widest possible market.
In terms of affordability, Salim Madjd, CEO and co-founder of asthmaMD, a free asthma management app mentioned in the report, said he and his colleagues opted to make the app available to both iOS and Android users: "It's easier to launch in iOS only because the Android market is very fragmented, but Android devices can be obtained more cheaply than iOS devices," Madjd said. "Another thing we are working on is a web version. You lose some benefits of the mobile app, but the thought is if you can't afford a mobile device, maybe you have access to a library."
Julia Hu, CEO and co-founder of Mountain View, Calif.-based mobile AI chronic condition coaching and nursing platform Lark, said the company's performance-only billing model and affordability give payers and providers an incentive to deploy the text messaging service, and covered users an incentive to use it: "Not everyone owns a computer, but almost everyone has some sort of smartphone now," she said. "We work to make sure all the low-power sensors on these phones are completely integrated with our AI nurse, so that the Lark coach can pull data from sensors on devices ranging from $10 phones to the iPhone, and provide coaching based on that."
KardiaMobile, developed by AliveCor (also in Mountain View), was also mentioned in the JASON report. The base technology's $99 list price might seem to place it outside the "affordable" category for a mobile platform (the company also has a premium subscription service featuring unlimited EKG history, storage, and summarization for $99 a year). However, company CEO Vic Gundotra said the FDA-cleared technology, which gives users the ability to take and transmit clinical-quality EKGs with their cell phones and Apple Watches, has the potential to save individual users thousands or tens of thousands of dollars, and insurance carriers and health systems many millions of dollars, through early detection of irregularities such as atrial fibrillation that often lead to strokes. Great Britain's National Health Service recently decided to cover the technology.
The American Stroke Association estimates that 15% of all strokes are caused by untreated a-fib, and Gundotra said the potential individual and system cost savings in preventing those strokes are immense: "It can cost upwards of $200,000 to treat a stroke patient," he said. "So that $99 investment to catch it and treat it preventively is a tiny amount of money."
Building a Legit AI Loop
Madjd said the AI feedback loop that combines large volumes of individual users' data with established care standards is still in its infancy, but building that kind of loop remains an objective of asthmaMD.
"Although the space is very polluted with a lot of noise, to build legitimacy you have to start working with some solid partners," he said. To that end, he said the company has formed collaborations with AstraZeneca and Thermo Fisher Scientific, and fine-tunes the platform's recommendations with guidelines from the Global Initiative for Asthma (GINA). Additionally, he said, the asthmaMD platform was studied by UCLA researchers, who discovered close correlations between users' entries about the severity of their asthma and weather conditions.
Hu said Lark relies on the latest guidelines from clinical experts as well as chief medical officers of customer health plans and its own internal review committee of 15 academic and industry experts.
"At the end of the day, we are doing daily lifestyle management, and the basics don't change," Hu said. "Chronic condition management is about eating healthier for your disease, exercise management, weight loss, and medication adherence. We feel these things are pretty core and non-changing."
Gundotra said many emerging AI technologies will indeed be complex, and may lead many people to distrust them due to the "black box" nature of many of their algorithms; however, he said, those products that prove their rigor through regulatory and empirical evidence should prove to be reliable.
"The reality is, how a doctor makes a decision is also often a black box, but you trust his or her reputation and their experience," he said. "We trust doctors' black box thinking all the time because we know the doctor's credentials. The same thing will happen here. FDA clearance and peer review are very high bars. We may not be able to understand some of the internal workings just as we don't understand how a doctor may arrive at a particular conclusion. But if there is enough credentialing, over time we will build up trust."