Medical imaging saves millions of lives each year, helping doctors detect and diagnose a wide range of diseases, from cancer and appendicitis to stroke and heart disease. Because non-invasive early disease detection saves so many lives, scientific investment continues to increase. Artificial intelligence (AI) has the potential to revolutionize the medical imaging industry by sifting through mountains of scans quickly and offering providers and patients life-changing insights into a variety of diseases, injuries, and conditions that may be hard to detect without the supplemental technology.
Images are the largest source of data in healthcare and, at the same time, one of the most challenging sources to analyze. Clinicians today must rely mainly on medical image analysis performed by overworked radiologists, and sometimes analyze scans themselves. Most interpretation of medical images is still performed by a human expert, and human interpretation is inherently limited by subjectivity, the complexity of the image, the extensive variation that exists across different interpreters, and fatigue.
Despite constant advances in the medical imaging space, almost one in four patients experiences false positives on image readings. This can lead to unnecessary invasive procedures and follow-up scans that add cost and stress for patients. And while false negatives happen less often, the impact can be catastrophic. The surprisingly high rate of false positives is due in part to concerns among radiologists about missing a diagnosis. Late detection of disease significantly drives up treatment costs and reduces survival rates.
This is a situation set to change, though, as pioneers in medical technology apply AI to image analysis. The latest deep-learning algorithms are already enabling automated analysis to provide accurate results that are delivered immeasurably faster than the manual process can achieve. As these automated systems become pervasive in the healthcare industry, they may bring about radical changes in the way radiologists, clinicians, and even patients use imaging technology to monitor treatment and improve outcomes.
AI applications for radiology use deep-learning algorithms and analytics to assess images for tumors or suspicious lesions systematically and to provide detailed reports on their findings instantly. These systems are trained on labeled data to identify anomalies. When a new image is submitted, the algorithm applies its training to differentiate normal from abnormal structures (e.g., benign versus malignant). As these tools become more sensitive, they will also potentially enable earlier diagnosis of disease because they will be able to identify small variances in an image that are not easily spotted by the human eye. They can also be used to track treatment progress, recording changes in the size and density of tumors over time that can inform treatment, and to verify progress in clinical studies.
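The train-then-classify step described above can be illustrated with a deliberately tiny sketch. This is not a production model: it is a minimal nearest-centroid classifier, and the three-number "feature vectors" are made up, standing in for the rich features a deep network would extract from an actual scan.

```python
import math

def centroid(vectors):
    """Average a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def train(labeled_examples):
    """Build one centroid per label from (features, label) pairs."""
    by_label = {}
    for features, label in labeled_examples:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(vecs) for label, vecs in by_label.items()}

def classify(model, features):
    """Assign the label whose centroid is closest to the new image's features."""
    return min(model, key=lambda label: distance(model[label], features))

# Toy, invented feature vectors (imagine lesion size, density, border irregularity)
training_data = [
    ([0.1, 0.2, 0.1], "normal"),
    ([0.2, 0.1, 0.2], "normal"),
    ([0.9, 0.8, 0.7], "abnormal"),
    ([0.8, 0.9, 0.9], "abnormal"),
]
model = train(training_data)
print(classify(model, [0.85, 0.8, 0.8]))  # abnormal
```

Real systems learn far subtler boundaries from millions of labeled images, but the core loop is the same: summarize labeled examples, then place each new case on the side of the boundary it most resembles.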
The latest machine-learning, deep-learning, and workflow automation technology can accelerate interpretation, improve accuracy, and reduce repetition for radiologists and other specialties. The truth is that most departmental picture archiving and communication systems (PACS) still don't provide the underlying infrastructure that enables these technologies to thrive. Interpreting and analyzing images effectively requires easy access to imaging data and the free flow of images between systems. However, studies are still often buried on CDs, file servers, or multiple hard-to-search locations, putting them out of reach of the latest processing algorithms. It’s just one of the reasons why organizations are focused on consolidating and integrating imaging into one archive—to turn it into a strategic asset.
Recent studies show that artificial intelligence algorithms can help radiologists improve the speed and accuracy of interpreting X-rays, CT scans, and other types of diagnostic images. Putting the technology into everyday clinical use, however, is challenging because of the complexities of development, testing, and obtaining regulatory approval.
Most radiology algorithms focus narrowly on a single finding on images from a single imaging modality, for example, lung nodules on a chest CT scan. While this may be useful in improving diagnostic speed and accuracy in specific cases, the bottom line is that an algorithm can only answer one question at a time. Because there are many types of images and thousands of potential findings and diagnoses, each would require a purpose-built algorithm. In contrast, a radiologist considers myriad questions and conclusions at once for every imaging exam, as well as incidental findings unrelated to the original reason for the exam, which are quite common.
Accordingly, to fully support just the diagnostic part of radiologists’ work, developers would need to create, train, test, seek FDA clearance for, distribute, support, and update thousands of algorithms. And healthcare organizations and doctors would need to find, evaluate, purchase, and deploy numerous algorithms from many developers, then incorporate them into existing workflows. Compounding the challenge is deep-learning models’ voracious demand for data. Most models have been developed in controlled settings using available, and often narrow, data sets—and the results that algorithms produce are only as robust as the data used to create them. AI models can be brittle, working well with data from the environment in which they were developed but faltering when applied to data generated at other locations with different patient populations, imaging machines, and techniques.
Beyond fostering widespread adoption of AI in radiology, AI marketplaces also have the potential to help alleviate radiologist burnout by augmenting and assisting radiologists in two ways. The first, through the iterative development process, is by facilitating the design of algorithms that integrate seamlessly into radiologists’ workflows and simplify them. The second is by improving the speed and quality of radiology reporting. These algorithms can automate repetitive tasks and act as virtual residents, pre-processing images to highlight potentially essential findings, making measurements and comparisons, and automatically adding data and clinical intelligence to the report for the radiologist’s review.
By taking over routine tasks, adding quality checks, and enhancing diagnostic accuracy, AI algorithms can be expected to improve clinical outcomes. For example, an FDA-cleared model automatically assesses breast density on digital mammograms, as dense breast tissue has been associated with an increased risk of breast cancer. By handling and standardizing that routine but essential task, the algorithm helps direct radiologists’ attention to the patients at highest risk. AI algorithms have also proven equal to, and in some cases better than, an average radiologist at identifying breast cancer on screening mammograms.
As the population ages, the need for diagnostic radiology will surely increase. Meanwhile, radiology residency programs in the United States have only recently begun to reverse a multi-year decline in enrollments, raising the specter of a shortage of radiologists as the need for them grows. The recent emergence of AI marketplaces can accelerate the adoption of AI algorithms, helping to manage increasing workloads while providing doctors with tools to improve diagnoses, treatments, and, ultimately, patient outcomes.
Machine learning and AI technology are gaining ground in medical imaging. For many health IT leaders, machine learning is a welcome tool to help manage the growing volume of digital images, reduce diagnostic errors, and enhance patient care. Despite its benefits, some radiologists are concerned that this technology will diminish their role, as algorithms start to take a more active part in the image interpretation process while ingesting volumes of data far beyond what any human can do.
How Machine Learning Works
In traditional predictive modeling, researchers develop a hypothesis about how distinct inputs predict some particular outcome, and then they test their theories against data. In contrast, machine learning is the process of algorithmically turning raw data into new knowledge without being explicitly programmed. Machine-learning tools can analyze an immense amount of data to discover relationships and combinations of variables to propose a predictive model back to the researcher. These tools draw out rules from repositories of past knowledge to build an algorithmic foundation that can then analyze, and continually learn from, real-time data. These algorithms mimic how humans learn complex concepts. Machine learning is associated with computer-aided detection (CAD), and as a technique, it can be used to develop more powerful CAD algorithms.
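The contrast between explicit programming and learning from data can be shown with a toy sketch (all numbers below are hypothetical): rather than hand-coding a decision cutoff, the program derives one from labeled examples.

```python
def learn_threshold(samples):
    """Learn a cutoff on a single measurement from labeled data, instead of
    hard-coding one: try the midpoints between adjacent observed values and
    keep the candidate that best separates the training labels."""
    values = sorted(v for v, _ in samples)
    candidates = [(a + b) / 2 for a, b in zip(values, values[1:])]

    def accuracy(t):
        # Fraction of training examples the rule "value > t means positive" gets right
        return sum((v > t) == label for v, label in samples) / len(samples)

    return max(candidates, key=accuracy)

# Hypothetical measurements (say, nodule diameter in mm) with expert labels
train = [(2, False), (3, False), (4, False), (9, True), (11, True), (12, True)]
cutoff = learn_threshold(train)
print(cutoff)  # 6.5 — discovered from the data, never written into the program
```

The hand-coded version of this rule would bake in a fixed number chosen by the researcher; the learned version adapts whenever the training data changes, which is the essence of "turning raw data into new knowledge without being explicitly programmed."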
Machine-learning tools can collect data across various IT systems, such as electronic health records (EHRs), laboratory information systems, and radiology and cardiology PACS. Other forms of data can be unstructured, including text in books, guidelines, or publications.
When it comes to medical imaging, there are ways to characterize and extract textures, shapes, and colors associated with various types of disease. After analyzing a database of existing images—which can reach billions in volume—a machine-learning algorithm can start to recognize patterns (while minimizing false positives) and automatically flag abnormalities within new images for more informed decision making.
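A heavily simplified sketch of that idea, using made-up pixel grids and baseline statistics rather than a real image database: summarize each image with simple texture features, then flag images whose features deviate from what "normal" looks like.

```python
def texture_features(image):
    """Summarize a 2-D grid of pixel intensities with two simple texture
    statistics: mean brightness and variance (a rough contrast measure)."""
    pixels = [p for row in image for p in row]
    mean = sum(pixels) / len(pixels)
    var = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    return mean, var

def flag_abnormal(image, baseline_mean, baseline_var, tolerance=2.0):
    """Flag an image whose features fall outside baseline statistics that
    would, in practice, be learned from a large database of prior scans
    (the baseline values used below are invented)."""
    mean, var = texture_features(image)
    return abs(mean - baseline_mean) > tolerance or var > baseline_var * tolerance

uniform = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]   # homogeneous tissue
speckled = [[0, 9, 0], [9, 0, 9], [0, 9, 0]]  # high-contrast region
print(flag_abnormal(uniform, baseline_mean=5, baseline_var=1))   # False
print(flag_abnormal(speckled, baseline_mean=5, baseline_var=1))  # True
```

Production systems learn thousands of such features (and the tolerances around them) automatically, which is also how they push down the false-positive rate: the "normal" envelope is estimated from vast numbers of prior images rather than set by hand.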
Algorithms for image analysis and decision support have been developed for decades, but most of them have not found their way into clinical practice. Nevertheless, many IT vendors and healthcare providers have made strides in the imaging space.
Benefits of Machine Learning
Machine learning—and CAD applications in general—show promise, and radiologists have much to gain from incorporating this technology into their operations given the following:
- AI can evaluate an enormous number of imaging variables much faster, and more consistently, than a radiologist.
- Algorithms facilitate decision making and education for inexperienced radiologists.
- CAD can automate mundane reading and measurement tasks, freeing radiologists to focus on patient interaction, research, and complex higher-order thinking.
- Machine learning can automate radiologist workflow, placing more time-sensitive cases higher on the radiologist’s worklist.
- Machines have the potential to improve diagnostic accuracy dramatically, prevent medical errors, and reduce the overuse of testing.
- Machine learning can act as a next-generation clinical decision support tool for radiologists, offering segmentation, classification, and pattern recognition that can be used to propose statistically significant guidance for image analysis.
- Analyzing images can be highly subjective; machines can reduce subjectivity and reader variability by supplying quantitative measurements that can improve patient outcomes.
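One of the workflow benefits above, automated worklist prioritization, is simple to sketch: given urgency scores from an upstream detection algorithm (the scores and study names below are invented), a priority queue puts the most time-sensitive cases at the top of the radiologist's list.

```python
import heapq

def build_worklist(studies):
    """Order studies so the most time-sensitive cases are read first.
    Each study carries an urgency score from an upstream algorithm;
    negating the score turns Python's min-heap into a max-heap."""
    heap = [(-urgency, name) for name, urgency in studies]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

incoming = [
    ("routine chest X-ray", 0.10),
    ("suspected intracranial hemorrhage CT", 0.95),
    ("follow-up abdominal MRI", 0.40),
]
print(build_worklist(incoming)[0])  # suspected intracranial hemorrhage CT
```

In a real deployment the scores would come from a cleared triage algorithm and the reordering would happen inside the PACS worklist, but the data structure doing the work is exactly this kind of priority queue.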
Challenges of Machine Learning
Despite the potential benefits that machine learning brings to medical imaging, these challenges need to be addressed before widespread adoption occurs:
- Many radiologists worry that the increased use of machine learning will lead to fewer jobs or a diminished role, leading some to resist the technology.
- Devices that perform diagnostic interpretation are classified as Class III devices by the U.S. FDA, which makes gaining approval for clinical use challenging and time-consuming. Class II devices avoid rendering a diagnosis and offer only measurement features (e.g., raising a red flag on an image), a more straightforward pathway to FDA approval.
- Healthcare organizations that rely on machine learning open the door for potential legal trouble if an algorithm leads to misdiagnosis or medical error.
- Building machine-learning algorithms is complicated and requires massive inputs of clinical and peer-reviewed data to learn the rules for evaluating new images.
- Most CAD algorithms address specific tasks or conditions. It is challenging to develop generalized algorithms that apply to broad sets of scenarios.
- Although image analysis and decision-support projects have been around for years, many do not advance past a piloting phase.
- The "black box" effect: algorithms can identify an image object as abnormal but cannot explain why that determination was made or give more granular details to the radiologist.
The radiology community has had mixed feelings about the use of AI: some portray the technology as a boon to medical imaging, while others believe that AI is many years (if not decades) away from replicating the work of radiologists.
A popular topic of discussion is whether machine learning will displace much of the work of radiologists (and of other groups, such as anatomical pathologists). Proponents of this view claim that organizations waste time and resources having humans interpret diagnostic images when algorithms can process higher volumes at a lower cost. Some stakeholders advocate the use of algorithms because they believe it improves patient safety, since algorithms are not burdened by stress or exhaustion.
On the other hand, other stakeholders do not see machines “taking over” the field, but rather working in a supplemental role. They argue that a machine's role is not to replace the radiologist but to enhance a radiologist’s ability to identify and correctly diagnose any problems that appear on diagnostic images. Machine learning gives radiologists a way to manage the exponential growth in imaging volumes, while occasionally highlighting features that may have been overlooked. Having access to this “virtual consultant” can also bolster strategic partnerships with referring physicians, as radiologists will have greater insight for interpreting images. As far as risk goes, healthcare organizations that are risk-averse will always have aspects of medical image analysis that require manual review to mitigate ethical or legal concerns.
Timing is another significant factor. Skeptics point out that there have been thousands of machine-learning algorithms developed, but rarely do they advance from the research floor to clinical application. Furthermore, even if an algorithm is created that outperforms radiologists at all tasks across pilot stages, there is no clear timeline for how long it would take to verify those findings or get FDA approval to use it for diagnosis.
We will likely see continued incremental changes and specialized applications of machine-learning algorithms in the short term. Many AI vendors and their healthcare provider partners claim their technology will be ready to use in the next year or two for relatively well-characterized images (such as X-rays). Whether bullish or skeptical about the technology, most industry experts agree that over the next five to ten years, machine learning will become a powerful tool in radiology as it branches out to most other types of imaging modalities, including CT studies, MRI exams, and ultrasound.
Here are a few considerations for current and future machine-learning implementations:
- Engage all stakeholders in the planning process. Machine learning has the potential to revolutionize medical imaging. Radiologists can use this technology to make volumes of data actionable, streamline workflow, and ultimately improve patient outcomes. However, machine-learning initiatives can fail if healthcare organizations do not address existing cultural resistance to new IT systems or quell the fear that AI will make the radiologist role obsolete.
- Be mindful of your scope of application and implementation timeline. Many machine-learning algorithms are narrow in their application, working across select modalities to inform decisions on specific diseases. Although compelling cases exist in imaging, many machine-learning tools are still under development and may take years before they are available for clinical use.
- Incorporate machine learning as a complement to the radiology staff. Even when algorithms are accurate, radiologists still need to apply their judgment, using the algorithm as a secondary support system to optimize care. Researchers have shown that highly accurate algorithms can still be outperformed in diagnostic performance by experienced radiologists. On the other hand, inexperienced or non-specialist radiologists are more susceptible to mistakes and may fail to consider all variables systematically when reading images.
The views expressed in this article are solely those of the authors and not those of the company (IBM) they represent.