Medical Deepfakes Are the Real Deal

Can deepfakes be beneficial in healthcare?

Greg Goth

September 27, 2022

5 Min Read
Image courtesy of Feng Yu / Alamy Stock Photo

In the popular imagination, synthetically generated digital images and videos, known colloquially as “deepfakes,” usually carry negative connotations. Fake videos of politicians saying things their real-world counterparts never said, created to engender voter outrage, or a celebrity’s face superimposed on a compromising video, are just some examples of the illegitimate uses of deepfakes.

Healthcare, however, has shown itself to be an area in which deepfakes can be beneficial. For instance, training a digital system to recognize tumors or other abnormalities in an image can be hindered by the fact that such abnormalities are relatively rare compared to benign samples. This scarcity of positive training images can skew an AI algorithm, resulting in low accuracy in more generalized deployments. Researchers from chipmaker Nvidia, the Mayo Clinic, and the MGH & BWH Center for Clinical Data Science found, in a 2018 paper posted to arXiv, that augmenting a small number of genuine images with GAN-generated synthetic images can greatly improve a system’s accuracy.
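In practice, that augmentation step can be as simple as topping up the under-represented class with synthetic samples before training. The sketch below is purely illustrative: the image lists, class sizes, and function name are placeholder assumptions, not the pipeline used in the paper cited above.

```python
# Hypothetical sketch: balancing a scarce "abnormal" class with GAN-generated
# images before training a classifier. Counts and inputs are placeholders.
import random

def build_balanced_set(real_benign, real_abnormal, synthetic_abnormal, per_class=5000):
    """Return (image, label) pairs with the rare class topped up synthetically."""
    benign = random.sample(real_benign, min(per_class, len(real_benign)))
    shortfall = max(0, per_class - len(real_abnormal))
    abnormal = list(real_abnormal) + random.sample(
        synthetic_abnormal, min(shortfall, len(synthetic_abnormal)))
    labeled = [(img, 0) for img in benign] + [(img, 1) for img in abnormal]
    random.shuffle(labeled)
    return labeled
```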

Likewise, data privacy laws can make it difficult to obtain a sufficient variety of genuine images while guaranteeing that patients cannot be identified from them; generating synthetic images is a promising way around that.

“Generating realistic synthetic data is an alternative solution to the privacy issue,” the authors of a 2021 study in Nature Scientific Reports, examining the utility of synthetic electrocardiograms, found. “Synthetic data should contain all the desired characteristics of a specific population, but without any sensitive content, making it impossible to identify individuals. Therefore, properly generated synthetic data is a solution to the privacy problem which enables data sharing between research groups.”

Beneficial deepfake technology can also go beyond clinical images, according to a recent study published in the Journal of Medical Internet Research by scholars at Taipei Medical University in Taiwan. Using an existing facial emotion recognition (FER) system, trained on more than 28,000 Asian faces and 95% accurate on a widely recognized facial expression database, the researchers morphed the facial features of 327 real patients to create videos intended to improve physicians’ empathy through interpreting facial expressions while also protecting the patients’ privacy. The system used the results of the emotion analysis to remind doctors to adjust their behavior to patients’ situations, so that patients felt the doctors understood their emotions and circumstances. Overall, the researchers found the FER system achieved a mean detection rate greater than 80% on real-world data.

“Our real-world clinical video database was originally developed to demonstrate how facial emotion recognition can be used as an evaluation tool on how doctors’ and patients’ emotion change during clinical interaction,” the study’s first author, Edward Yang, said. “However, future studies are needed to demonstrate how this system can demonstrate objective observation to study doctor reactions to patient expression or vice versa.”

Emerging into the market

The technological foundation of deepfakes and other synthetic images is the generative adversarial network, or GAN. A basic GAN consists of two deep neural networks, called a generator and a discriminator. In training, the generator creates images from random noise, and the discriminator is fed a mix of these generated images and genuine ones, classifying each as real or fake. As more training data is fed into the GAN, both networks become more accurate until an equilibrium is reached.
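For readers who want to see the generator/discriminator loop in code, the following is a minimal sketch in Python with PyTorch. The network sizes, learning rates, and the random stand-in “real” images are illustrative assumptions only; this is not the architecture used by any of the studies described in this article.

```python
# Minimal GAN sketch (PyTorch): a generator learns to fool a discriminator
# that is trained to separate real images from generated ones.
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 64, 28 * 28  # noise vector size, flattened image size

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),          # outputs a fake image
)
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),             # probability the input is real
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.rand(32, IMG_DIM) * 2 - 1       # placeholder for genuine images
    fake = generator(torch.randn(32, LATENT_DIM))

    # Discriminator step: label real images 1, generated images 0.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator label fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

In a real deployment, the placeholder random tensors would be replaced by batches of genuine images, and training would continue until the discriminator can no longer reliably tell the two apart.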

One of the latest GAN-generated technologies to emerge is already available to practicing clinicians, in this case dentists and periodontists. Retrace, a San Francisco-based AI startup focused on dental practice data, created a GAN-derived algorithm that helps dental practitioners predict bone levels in areas of the mouth just outside the borders of dental bitewing X-rays, which have a narrow field of view.

Vasant Kearney, Retrace’s chief technology officer, said the technology, the description of which was published in the August edition of the Journal of Dentistry, was an outgrowth of the company’s core practice administration platform.

“Dentistry is one of the few fields that has had imaging as a requirement for claims submission,” Kearney said. “Some payers require bitewings, and it’s easy in patients with more advanced periodontal disease, or who might have a different anatomical configuration of their mouth, to leave out important parts of the anatomy.”

As a result, Kearney said payers will sometimes reject claims because the portion of the anatomy they are interested in is not visible in the X-ray.

“So, our initial idea with filling in that missing anatomy was to help mainly with insurance claims,” he said. “It turns out to have much broader applications. But it would help both the AI algorithm and the observer gain an understanding of what’s just outside of the viewed anatomy.”

Kearney and his colleagues, who included researchers from Retrace and the University of California-San Francisco, developed a predictive algorithm that employs inpainting (the technique of filling in missing or damaged parts of an image, similar to what a photo-editing application does). In the study that evaluated the technology, which used more than 10,000 radiographs, the researchers found that the network’s predictive accuracy nearly matched the clinical standard of 1-millimeter increments in diagnosing oral bone and gum health.
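To illustrate the general idea of inpainting, the sketch below uses OpenCV’s classical cv2.inpaint() to reconstruct a blanked-out margin of an image. This is only a conceptual stand-in: Retrace’s published method is GAN-based, and the filename and synthetic fallback image here are placeholders.

```python
# Classical inpainting demo with OpenCV (not Retrace's GAN-based method).
import cv2
import numpy as np

img = cv2.imread("bitewing.png", cv2.IMREAD_GRAYSCALE)  # placeholder filename
if img is None:
    # Synthetic stand-in so the sketch runs without a real radiograph.
    img = np.random.randint(0, 256, (256, 384), dtype=np.uint8)

h, w = img.shape
mask = np.zeros((h, w), dtype=np.uint8)
mask[:, -w // 8:] = 255        # mark the right-hand margin as "missing"
img[mask == 255] = 0           # blank out the region to be reconstructed

# Telea's fast-marching method fills the masked region from its surroundings.
filled = cv2.inpaint(img, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
cv2.imwrite("bitewing_filled.png", filled)
```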

Kearney said the technology does not need FDA approval and is already available commercially in “specific use cases.” He also expects much wider adoption of GAN-based technology within a few years. Though real-world deployments are still rare and, he said, intended more as an adjunct technology to augment genuine images in a dataset, growing interest and more economically viable compute resources such as cloud systems will encourage much wider development and adoption.

“It will be an everyday occurrence,” he said. “When we think about healthcare, it won’t be that we think about deepfake, but we’ll be using it all the time.”


About the Author

Greg Goth

Greg Goth is a freelance technology writer.
