Google DeepMind Wants to Save Eyesight with Artificial Intelligence

The Alphabet subsidiary is partnering with a UK hospital to test whether its algorithm can help diagnose diabetic retinopathy and age-related macular degeneration sooner.

Jamie Hartford

Diabetic retinopathy and age-related macular degeneration are serious conditions that can lead to loss of eyesight, and they affect more than 100 million people across the globe. The good news is that early detection can minimize the damage, but the bad news is that diagnosis can often take time.

Now, Google DeepMind, a subsidiary of Google parent company Alphabet Inc., hopes to speed up the process by applying artificial intelligence.


Optical coherence tomography, a noninvasive imaging technique that produces 3-D scans of the eye, and digital scans of the back of the eye can both be used to diagnose diabetic retinopathy and age-related macular degeneration. The problem is that having professionals analyze these complex images is time-consuming, so patients’ eyesight can deteriorate while they wait for a diagnosis, according to a press release from Google DeepMind.

Through a five-year partnership with Moorfields Eye Hospital NHS Foundation Trust, a leading provider of eye health services and a center for ophthalmic research and education in the UK, the company will gain access to 1 million anonymized historical eye scans and associated information about patients’ conditions and disease management. The research will explore whether Google DeepMind’s machine learning algorithms can analyze eye scans and determine whether a patient has a condition that needs immediate treatment or whether waiting would not be harmful.
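The triage idea described above can be sketched in a few lines of code. The function name, condition labels, and threshold below are illustrative assumptions for this article, not details of DeepMind's actual system; in practice the probabilities would come from a trained image-analysis model.

```python
# Hypothetical sketch of scan triage: a model scores each condition, and the
# scan is flagged urgent if any score crosses a referral threshold.
# All names and the 0.5 threshold are assumptions, not DeepMind's method.

def triage(probabilities, urgent_threshold=0.5):
    """Return 'urgent' if any condition's predicted probability meets or
    exceeds the threshold, otherwise 'routine'.

    probabilities: dict mapping condition name -> model score in [0, 1].
    """
    if any(p >= urgent_threshold for p in probabilities.values()):
        return "urgent"
    return "routine"

# Example: scores a hypothetical classifier might assign to one scan.
scores = {"diabetic_retinopathy": 0.82, "macular_degeneration": 0.10}
print(triage(scores))  # -> urgent
```

The point of such a system is exactly the bottleneck the article describes: routing scans that need a specialist's attention to the front of the queue, rather than replacing the specialist's diagnosis.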

“Our research with DeepMind has the potential to revolutionise [sic] the way professionals carry out eye tests and could lead to earlier detection and treatment of common eye diseases such as age-related macular degeneration,” Professor Sir Peng Tee Khaw, director of the National Institute for Health Research Specialist Biomedical Research Centre in Ophthalmology at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, said in a statement on Google DeepMind’s website. “With sight loss predicted to double by the year 2050 it is vital we explore the use of cutting-edge technology to prevent eye disease.”

Moorfields Eye Hospital is the oldest eye hospital in the world and sees more than 600,000 patients each year. Although this research will not be used to immediately direct care for patients, the project could lead to actionable insights in the future.  

"We are really excited about this collaboration and the potential of machine learning to analyse [sic] the thousands of retinal scans taken each week in the NHS allowing eye health professionals to make faster, more accurate diagnoses and more timely treatments thus preventing sight loss,” Dolores Conroy, director of research at Fight for Sight, a nonprofit organization that funds vision research, said in a statement. “In the longer term this technology could provide important insights into disease mechanisms in wet [age-related macular degeneration] and diabetic retinopathy."

Google DeepMind’s research protocol has been submitted for open peer review, and the company has also promised to subject any results from this research to scrutiny by peer-reviewed journals.

“It’s early days for this work, but we’re optimistic about the long-term potential for machine learning technology to help eye health professionals diagnose and treat other diseases that, like macular degeneration, affect the lives of millions of people across the world,” the company said in a statement.

London-based Google DeepMind, which was founded in 2010 as DeepMind Technologies and acquired by Google in 2014, isn’t the only player bringing artificial intelligence to healthcare—or even to the field of ophthalmology.

Last year IBM announced its acquisition of Merge Healthcare, maker of a medical imaging management platform used in more than 7,500 healthcare sites in the United States. Pairing medical images with IBM’s Watson AI platform “could then help healthcare providers in fields including radiology, cardiology, orthopedics and ophthalmology to pursue more personalized approaches to diagnosis, treatment and monitoring of patients,” the company said in a press release.

Also in 2015, Ben Graham, then an associate professor at the University of Warwick in the UK, won a competition put on by the California Health Care Foundation to find the best algorithm to detect diabetic retinopathy from digital images.

Jamie Hartford is MD+DI's editor-in-chief and serves as director of medical content for UBM's Advanced Manufacturing Group. Reach her at jamie.hartford@ubm.com or on Twitter @MedTechJamie.
