How Google Is Bringing AI to Healthcare

Maureen Kingsley

September 2, 2016


Google DeepMind has been forging research partnerships with the National Health Service in the U.K. 


[Image: UCLH Google DeepMind scanner]

The concept of artificial intelligence put forth by movie studios over the past few decades centers on creepy, vacant-eyed robots that resemble humans and tend to "go bad" and destroy actual human lives. As with many Hollywood portrayals, however, the truth is far less fantastical and violent, yet just as fascinating: In real life, artificial intelligence, or AI, is finding useful applications in healthcare--via such programs as IBM's Watson Health--for processing patient data, suggesting potential diagnoses, making individualized treatment recommendations, and assisting physicians with planning and executing procedures, among other things.

Google DeepMind, a U.K.-based AI company co-founded by Mustafa Suleyman and acquired by Google in 2014, has been getting in on the act, too. DeepMind in late August announced a research partnership with the Radiotherapy department at University College London Hospitals (UCLH) NHS Foundation Trust, a provider of cancer treatment.

Google DeepMind and UCLH's radiotherapy clinicians are examining whether machine-learning methods can potentially reduce the amount of time it takes to plan radiotherapy treatment for head and neck cancers, which affect more than 11,000 individuals in the U.K. alone each year. (These cancers include oral cancer, oral-cavity cancer, and cancer of the sinuses, among others.)

Radiotherapy treatment for these cancers has improved survival rates, but because of all the delicate, intricate anatomical structures located in this area of the body, clinicians must plan and map out treatment extremely carefully to ensure no vital nerves or organs are damaged. This detailed planning-and-mapping process is called "segmentation," and it involves drawing around different parts of the patient's anatomy and entering this information into a radiotherapy machine, which then uses it to target cancers and leave healthy tissue unharmed.

For some of these cancers, segmentation can take about four hours. Google DeepMind believes that through its collaboration with UCLH, it can "carefully analyze" up to 700 UCLH scans (wiped of their associated patients' identifying information) to determine the potential for machine learning to make segmentation faster and more efficient, according to a recent DeepMind press release.

"Clinicians will remain responsible for deciding radiotherapy treatment plans," the release states, "but it is hoped that the segmentation process could be reduced from up to four hours to around an hour." That would indeed be a significant time savings.

The Google DeepMind team says its goals for this work are two-fold: The first is freeing up clinicians' planning time to focus more on patient care, education, and research. The second is developing a radiotherapy-segmentation algorithm that can potentially be applied to other areas of the body.

Google DeepMind's first (and currently ongoing) collaboration with an NHS trust in the U.K. was with Moorfields Eye Hospital, a 200-year-old institution, to explore how machine learning can potentially aid in faster detection, diagnosis, and treatment of two specific conditions that cause loss of vision: diabetic retinopathy and age-related macular degeneration.

Eye-care professionals currently use digital scans of the fundus (the back of the eye) along with optical coherence tomography scans to diagnose and determine treatment for these eye conditions. These scans are highly complex and require a long analysis time, which subsequently delays diagnosis and treatment. Google DeepMind and Moorfields are investigating how machine learning could "help analyze these scans more efficiently and effectively, leading to earlier detection and intervention for patients and reducing the number of cases of patient deterioration," states Google DeepMind in a press release.

IBM Watson for Health: What's New

While Google's AI team expands its relationship with the U.K.'s National Health Service, the humans behind IBM's AI-focused supercomputer, Watson, continue to leverage Watson's capabilities for healthcare purposes. Earlier this summer, Qmed reported on a smart tablet app under development that would use the Watson EMR Analyzer to allow radiologists to quickly view images along with patients' medical issues, clinical notes, lab results, medications, and more. Murthy Devarakonda, an IBM research scientist who is principal investigator for the Watson EMR Analyzer, presented this information at MD&M East in June of this year.

Other IBM Watson projects centered on what IBM refers to as "cognitive healthcare" include Medtronic insulin pumps that use Watson to predict dangerous changes in blood-glucose levels and the Talkspace app's use of Watson's Personality Insights API to match mental-health patients to online therapists.

Maureen Kingsley is a contributor to Qmed. 


[Image courtesy of Google DeepMind]
