
Is AI the Key to Diving Deeper into Images and Pathology?

Two separate studies show how researchers are using machine learning to train computers to spot and classify cells just by analyzing medical images.


In a tale of two studies, it appears artificial intelligence is helping researchers analyze cells in ways that weren't possible before.

In one study, published this week in Nature Methods, scientists at the Allen Institute in Seattle, WA, used machine learning to train computers to see parts of the cell that the human eye cannot easily distinguish. Using 3D images of fluorescently labeled cells, the team taught computers to find structures inside living cells without fluorescent labels, using only the black-and-white images generated by an inexpensive technique known as brightfield microscopy.

Fluorescence microscopy, which uses glowing molecular labels to pinpoint specific parts of cells, is very precise but only allows scientists to see a few structures in the cell at a time, the researchers explained. Human cells have upwards of 20,000 different proteins that, if viewed together, could reveal important information about both healthy and diseased cells.

"This technology lets us view a larger set of those structures than was possible before," said Greg Johnson, PhD, a scientist at the Allen Institute for Cell Science, a division of the Allen Institute, and senior author on the study. "This means that we can explore the organization of the cell in ways that nobody has been able to do, especially in live cells."

According to Rick Horwitz, PhD, executive director of the Allen Institute for Cell Science, the prediction tool could also help scientists understand what goes wrong in cells during disease. Cancer researchers could potentially apply the technique to archived tumor biopsy samples to better understand how cellular structures change as cancers progress or respond to treatment, he said. The algorithm could also aid regenerative medicine by uncovering how cells change in real time as scientists attempt to grow organs or other new body structures in the lab.

"This technique has huge potential ramifications for these and related fields," Horwitz said. "You can watch processes live as they are taking place — it's almost like magic. This method allows us, in the most non-invasive way that we have so far, to obtain information about human cells that we were previously unable to get."

The combination of the freely available prediction toolset and brightfield microscopy could lower research costs if used in place of fluorescence microscopy, which requires expensive equipment and trained operators, the team noted. Fluorescent tags are also subject to fading, and the light itself can damage living cells, limiting the technique's utility to study live cells and their dynamics. The machine learning approach would allow scientists to track precise changes in cells over long periods of time, potentially shedding light on events such as early development or disease progression.

To the human eye, cells viewed in a brightfield microscope are sacs rendered in shades of gray. A trained scientist can find the edges of a cell and the nucleus, the cell's DNA-storage compartment, but not much else. The research team used an existing machine learning technique, known as a convolutional neural network, to train computers to recognize finer details in these images, such as the mitochondria, cells' powerhouses. They tested 12 different cellular structures and the model generated predicted images that matched the fluorescently labeled images for most of those structures, the researchers said.
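
For readers curious what such a model looks like in practice, the sketch below shows the general shape of the technique: a small 3D convolutional network, written here in PyTorch, that learns to map a brightfield z-stack to a predicted fluorescence image. The architecture, patch sizes, and training loop are simplified assumptions for illustration, not the Allen Institute's published model.

```python
# A minimal sketch of the label-free idea: train a small 3D convolutional
# network to map a single-channel brightfield z-stack to a predicted
# fluorescence channel. Architecture, patch shapes, and training loop are
# simplified assumptions, not the Allen Institute's published model.
import torch
import torch.nn as nn

class BrightfieldToFluorescence(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: downsample the z-stack while widening feature channels.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: upsample back to input resolution; one output channel for
        # the predicted structure (e.g. mitochondria).
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(32, 16, kernel_size=2, stride=2), nn.ReLU(),
            nn.Conv3d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = BrightfieldToFluorescence()
loss_fn = nn.MSELoss()  # pixel-wise regression against the real fluorescence image
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One hypothetical training step on a (batch, channel, z, y, x) patch pair;
# random tensors stand in for matched brightfield/fluorescence microscope data.
brightfield = torch.randn(2, 1, 16, 64, 64)
fluorescence = torch.randn(2, 1, 16, 64, 64)
optimizer.zero_grad()
loss = loss_fn(model(brightfield), fluorescence)
loss.backward()
optimizer.step()
```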

It also turned out that what the algorithm was able to capture surprised even the scientists who built the model.

"Going in, we had this idea that if our own eyes aren't able to see a certain structure, then the machine wouldn't be able to learn it," said Molly Maleckar, PhD, director of modeling at the Allen Institute for Cell Science and an author on the study. "Machines can see things we can't. They can learn things we can't. And they can do it much faster."

The technique can also predict precise structural information from images taken with an electron microscope. The computational approach here is the same, said Forrest Collman, PhD, an assistant investigator at the Allen Institute for Brain Science and an author on the study, but the applications are different. Collman is part of a team working to map connections between neurons in the mouse brain. They are using the method to line up images of the neurons taken with different types of microscopes, normally a challenging problem for a computer and a laborious task for a human.

"Our progress in tackling this problem was accelerated by having our colleagues from the Allen Institute for Cell Science working with us on the solution," Collman said.

Roger Brent, PhD, a member of the basic sciences division at Fred Hutchinson Cancer Research Center, is using the new approach as part of a research effort he is leading to improve the "seeing power" of microscopes for biologists studying yeast and mammalian cells.

"Replacing fluorescence microscopes with less light intensive microscopes would enable researchers to accelerate their work, make better measurements of cell and tissue function, and save some money in the process," Brent said. "By making these networks available, the Allen Institute is helping to democratize biological and medical research."

In a separate study, published this week in Nature Medicine, researchers at New York University School of Medicine explained how they trained a deep convolutional neural network (Google's Inception v3) on whole-slide images obtained from The Cancer Genome Atlas to accurately and automatically classify them into two prevalent subtypes of lung cancer, or normal lung tissue. Typically, pathologists use histopathology slides to assess the stage, type, and subtype of lung tumors.
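
As a rough illustration of that setup, the sketch below fine-tunes torchvision's Inception v3 on image tiles with three output classes: the two lung cancer subtypes, adenocarcinoma and squamous cell carcinoma, plus normal tissue. The tile handling, batch size, and hyperparameters are assumptions chosen for the example, not the authors' exact training recipe.

```python
# A hedged sketch of the setup described above: fine-tune torchvision's
# Inception v3 to classify histopathology tiles into three classes. Tile
# handling, batch size, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # adenocarcinoma, squamous cell carcinoma, normal tissue

model = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
# Replace both classification heads for the three-class task.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One hypothetical training step on a batch of 299x299 RGB tiles cut from
# whole-slide images (299x299 is Inception v3's expected input size).
tiles = torch.randn(8, 3, 299, 299)
labels = torch.randint(0, NUM_CLASSES, (8,))
model.train()
optimizer.zero_grad()
logits, aux_logits = model(tiles)  # Inception v3 returns two outputs in train mode
loss = criterion(logits, labels) + 0.4 * criterion(aux_logits, labels)
loss.backward()
optimizer.step()
```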

The NYU researchers said the performance of their method is comparable to that of pathologists, with an average area under the curve (AUC) of 0.97. The model was validated on independent datasets of frozen tissues, formalin-fixed paraffin-embedded tissues, and biopsies, they said. 
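
A slide-level AUC like that is typically computed by aggregating many per-tile predictions into one score per slide. The snippet below shows one common way to do it, averaging tile probabilities and scoring the result with scikit-learn; the aggregation rule and the toy numbers are assumptions for illustration, not the authors' exact evaluation pipeline.

```python
# One common way to get a slide-level AUC from a tile classifier: average the
# per-tile probabilities into a single score per slide, then score against
# the slide labels. The mean-aggregation rule and the toy numbers below are
# assumptions for illustration.
import numpy as np
from sklearn.metrics import roc_auc_score

def slide_score(tile_probs: np.ndarray) -> float:
    """Aggregate tile-level tumor probabilities into one slide-level score."""
    return float(tile_probs.mean())

# Hypothetical data: a binary label and per-tile probabilities for each slide.
slide_labels = np.array([1, 0, 1, 1, 0])
slide_tile_probs = [
    np.array([0.90, 0.80, 0.95]),
    np.array([0.10, 0.20]),
    np.array([0.70, 0.85, 0.60, 0.90]),
    np.array([0.80, 0.75]),
    np.array([0.30, 0.15, 0.20]),
]

scores = np.array([slide_score(p) for p in slide_tile_probs])
print("slide-level AUC:", roc_auc_score(slide_labels, scores))
```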

The researchers also trained the network to predict the 10 most commonly mutated genes in lung adenocarcinoma (LUAD) and found that six of them can be predicted from pathology images. They said the findings suggest that deep-learning models can assist pathologists in the detection of cancer subtype or gene mutations. The approach can be applied to any cancer type, they noted.
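
Predicting mutations is naturally a multi-label problem, since one tumor can carry several of the 10 mutations at once. The sketch below shows the standard construction for that kind of task: a per-gene sigmoid output trained with binary cross-entropy on shared image features. The feature dimension and the bare linear head are illustrative assumptions, not the paper's exact architecture.

```python
# Mutation prediction framed as a multi-label task: one sigmoid output per
# gene, trained with binary cross-entropy on shared image features. The
# feature dimension and bare linear head are illustrative assumptions.
import torch
import torch.nn as nn

NUM_GENES = 10  # the 10 most commonly mutated genes in LUAD

mutation_head = nn.Linear(2048, NUM_GENES)  # head on top of a CNN feature extractor
criterion = nn.BCEWithLogitsLoss()          # independent per-gene binary decisions
optimizer = torch.optim.Adam(mutation_head.parameters(), lr=1e-4)

# One hypothetical training step; random tensors stand in for tile features
# and per-gene mutation labels (0 = wild type, 1 = mutated).
features = torch.randn(8, 2048)
targets = torch.randint(0, 2, (8, NUM_GENES)).float()
optimizer.zero_grad()
logits = mutation_head(features)
loss = criterion(logits, targets)
loss.backward()
optimizer.step()

# At inference time, per-gene mutation probabilities come from a sigmoid.
probs = torch.sigmoid(logits)
```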
