AI in Medtech: Myth & Reality for 2024 & Beyond

A look at how artificial intelligence can be properly applied in the medtech industry.

Mitch Maiman

February 15, 2024

7 Min Read
Image Credit: Maciej Frolow/Getty Images

At a Glance

  • How medtech products use AI today.
  • What's on the horizon for AI in medtech products.
  • The near-term challenges facing broader use of AI in medtech.

Artificial Intelligence (AI) is the latest technology area generating buzz in both the business and private sectors. There is already widespread availability and application of free-use public AI engines such as ChatGPT from OpenAI and Bard from Google (among a growing group of others). These engines need to be used with care to validate that their results are accurate. That may be fine for writing an article, but what about applications in the medical domain?

Where does this leave AI in medtech products for 2024, and what are some of the near-term expectations for the technology?

Medtech Products & AI Today

First, one must differentiate between closed-loop, rules-based processing, which takes input data and generates a result in accordance with pre-programmed rules, and the implementation of a true, broad intelligence embodied in a generalized AI. Current technology marketed as AI is only acting on known, human-defined rules. The intelligence in today's machine learning lies in applying known rules more efficiently and usefully, not in uncovering an unknown set of rules.

There are already many applications, in medtech and other fields, where data from a variety of sensors and other sources is analyzed and processed according to rules to generate a “smart result.” Computing platforms that perform analytics on digitized data according to pre-programmed rules are not new. As an example, in the medtech world, virtually any modern blood glucose meter can take a sample of blood, characterize the blood sugar level, and produce both a numeric estimated average glucose (eAG) value and a qualitative assessment of whether the level is high, normal, or low. Newer devices can also look at trends and make predictive suggestions of where the eAG is heading over time. This represents a low level of intelligence of the type that has been available since the creation of “smart connected devices.” It does not represent the high level of intelligence most of us envision for AI.
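To make the distinction concrete, here is a minimal sketch of the kind of pre-programmed, rules-based logic described above. The thresholds and the naive linear trend estimate are illustrative assumptions, not any particular device's algorithm:

```python
# Minimal sketch of rules-based "smart device" logic: classify a glucose
# reading against fixed thresholds and extrapolate a simple trend.
# Thresholds and the 5-minute sampling interval are illustrative assumptions.

from statistics import mean

def classify_reading(mg_dl: float) -> str:
    """Apply pre-programmed rules to label a glucose reading."""
    if mg_dl < 70:
        return "low"
    if mg_dl > 180:
        return "high"
    return "normal"

def project_trend(readings: list[float], minutes_ahead: float = 30) -> float:
    """Naive linear extrapolation from readings taken 5 minutes apart."""
    if len(readings) < 2:
        return readings[-1]
    deltas = [b - a for a, b in zip(readings, readings[1:])]
    return readings[-1] + mean(deltas) * (minutes_ahead / 5)

history = [110, 118, 127, 140]           # mg/dL, most recent last
print(classify_reading(history[-1]))     # "normal"
print(round(project_trend(history), 1))  # projected value 30 minutes out
```

Every rule here was written by a human; the device simply applies those rules faster and more consistently than a person could, which is exactly the distinction drawn above.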


A more sophisticated use of AI in medtech draws on a broader and larger set of data inputs to generate insights not readily available to most humans, such as the software used to help healthcare professionals manage the prescription medications and non-prescription supplements used by patients. For example, most doctors in the U.S. access an electronic medical record (EMR) system that tracks a variety of conditions, treatments, and test results for patients. These systems also track medications and supplements in a patient’s profile. For patients with routine medical conditions, doctors assess the common medications and supplements the patient uses when making a new prescription recommendation. For patients with complex health conditions, sometimes involving potentially conflicting diagnoses and treatments, exotic new medications, or off-label medication use, the interactions between these medications and treatments may not be easily uncovered by a primary care doctor or even a specialist.



Even in simple cases, doctors are human and can make mistakes in prescribing medication. EMR systems can be very helpful today in applying intelligence that looks across a broad range of factors, in both simple and complex situations, to make medication recommendations to healthcare professionals. This has the hugely beneficial effect of minimizing errors and providing guidance that would be difficult for a single healthcare professional to uncover. Even with these tools, it is still up to the doctor to interpret the results and recommendations, applying human judgment; a doctor may override the EMR’s recommendations, for example, to prescribe a medication for off-label use. These examples sit at the first level of AI, where machine intelligence exceeds human capacity, albeit in a very narrow focus.
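As a rough illustration of the kind of rules-based check an EMR might run over a patient's medication list, consider the sketch below. The interaction table, drug names, and function names are hypothetical examples for illustration, not any vendor's API or clinical data:

```python
# A sketch of a rules-based interaction check over a patient's medication
# list. The interaction table and drug names are hypothetical illustrations.

from itertools import combinations

# Known interacting pairs (hypothetical), stored order-independently.
INTERACTIONS = {
    frozenset({"warfarin", "ibuprofen"}): "increased bleeding risk",
    frozenset({"simvastatin", "clarithromycin"}): "elevated statin levels",
}

def check_interactions(medications: list[str]) -> list[str]:
    """Flag every pair of listed medications found in the interaction table."""
    warnings = []
    for a, b in combinations(medications, 2):
        note = INTERACTIONS.get(frozenset({a.lower(), b.lower()}))
        if note:
            warnings.append(f"{a} + {b}: {note}")
    return warnings

patient_meds = ["Warfarin", "Ibuprofen", "Metformin"]
for warning in check_interactions(patient_meds):
    print(warning)  # the clinician still decides whether to act on the flag
```

A real EMR check draws on a far larger, curated interaction database and on the patient's full profile, but the principle is the same: the system applies known rules across more combinations than a busy clinician can hold in mind, and the clinician retains the final judgment.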


What’s Next for AI in Medtech Products

There are still many challenges before AI can apply more generalized machine intelligence that discovers its own rules by processing large amounts of disparate data. In such a case, the AI engine can develop insights that could not be envisioned by human-programmed algorithms. For this to happen, AI engines need access to vast amounts of dependable data, and that access is constrained for several reasons:

  • Manufacturers of drugs do not openly share all the data related to their medications and selectively release only the test results needed to secure FDA approvals.

  • Patient data is not universally available to an AI engine. There are limitations on the sharing of data between EMR systems, and in the near term, EMR data is unlikely to be available to outside developers wanting to build AI engines. This restricts the amount of data accessible to an AI system.

  • Even under the best circumstances, EMR systems may not have access to all relevant patient data. For example, one specialist a patient sees may use EMR “system A,” with prescriptions sent to Walgreens, while another specialist uses EMR “system B,” with prescriptions sent to CVS. The primary care doctor may never see the composite of all medications sent to the different pharmacies. The patient may forget to inform the primary care doctor of all prescriptions from all the doctors they are seeing, and may also neglect to mention non-prescription supplements or dietary and other lifestyle factors that do not seem relevant. To a doctor evaluating the entire patient profile, or to an AI trying to do the same thing, such seemingly unimportant, forgotten, or erroneous information may be crucial in detecting or treating a condition or assessing a patient’s health trajectory.

  • Research data is proprietary and may not be accessible to an AI system. Universities and pharmaceutical companies may not make their research data available for all to see, at least not until such data has intellectual property protection.

  • Research data may or may not have integrity. Thankfully, most research data is produced with an eye toward truth and honesty, but every year there are cases of published research that is not peer-reviewed, or that is reviewed but contains undiscovered errors or outright falsifications.

  • Data may be published for medications or procedures that have proven successful, but data may not be published for treatments that failed. To an AI trying to invent new forms of treatment, knowing what doesn’t work, and any related side effects, may be as valuable as knowing what does.

  • AI systems do not have access to the non-quantitative data that informs what may be acceptable or unacceptable to an individual. These “human factors” are not captured in the data sets. Part of the physician’s role is not only to assess what might be an effective form of treatment but also to understand things like the patient’s lifestyle or pain tolerance in order to make an appropriate recommendation for that individual.

Besides challenges with the availability of comprehensive data, in the near term FDA is struggling with how to certify the use of AI in products. One of the issues is the requirement for the processing to be transparent: a regulatory submission needs to be clear and traceable as to how data is analyzed and conclusions are derived. With smarter AI engines, much of the value lies in the machine learning inside the engine, since medtech products are looking to gain new insights from it. However, it is not always possible, or easy, to determine how an AI engine arrived at a new insight or result. In the near term, FDA is proceeding with caution, as it could be catastrophic if an AI engine were released that generated incorrect results (the current public AI systems are notorious for always generating an answer, which may or may not be correct).

Even with these limitations, AI systems will continue to evolve using whatever datasets are available as inputs. There are already products with limited or moderately intelligent technology improving quality of life and outcomes, and AI systems are already being employed in areas such as imaging, prescription validation, and machine-assisted surgery. Such systems will continue to evolve in the near future. The great hope is that AI systems will keep becoming smarter and more capable, providing insights into medical treatments and procedures that could not be envisioned by healthcare professionals today.

About the Author(s)

Mitch Maiman

Mitch is the co-founder of Intelligent Product Solutions (IPS), a leading product design and development firm. He honed his deep knowledge of product design on the strength of a 30-year career with companies that manufacture commercially successful products for the medical, consumer, and industrial markets. Always espousing a hands-on approach to design, he holds a portfolio of numerous United States and international patents.  He can be reached at [email protected].

