Musk Goes from Iron Man to Doctor Doom Faster Than Robert Downey Jr.

Elon Musk is asking people to upload their medical images to X's AI program. What could possibly go wrong?

Amanda Pedersen

November 25, 2024


For years, Elon Musk has been compared to the fictional superhero Iron Man, but his recent activities on X, the platform he owns, feel more like a page out of Doctor Doom's playbook.

Musk has been asking people to upload their medical images to X’s AI tool Grok, and many people have been quick to do so.

“This is still early stage, but it is already quite accurate and will become extremely good,” Musk said in an X post. “Let us know where Grok gets it right or needs work.”


In another post, Musk once again encouraged X users to share their medical images, saying that “Grok accurately diagnosed a friend of mine from his scans.”

So far, the results have been mixed. Some users, like Michael Trinh, have reported positive results.

“It’s good with MRI images,” Trinh said in a reply to Musk’s post.

Still, others say Grok missed the mark. X user Josh Sharp, for example, said Grok mistakenly diagnosed his broken clavicle as a dislocated shoulder.


These mixed results highlight a critical question: What are the legal and ethical implications of sharing medical data on a platform like X?

Do I believe Musk has evil intentions in asking people to upload their medical images to Grok? No, but I do think he is being incredibly irresponsible. And I’m not the only one who thinks so.


“Grok is not even close to being able to diagnose radiology images. ... A little bit of knowledge is a DANGEROUS thing. Scary situation would be a patient in denial of their diagnoses of cancer and then feeds Grok,” X user “CyberJoe” said in response to Musk’s post.

So, what’s the difference between a doctor’s office or hospital having your medical images and X having them? The most glaring difference is that the doctors and hospitals have to abide by the Health Insurance Portability and Accountability Act (HIPAA), a federal law protecting an individual’s medical data from being shared without their consent. Social media posts aren’t protected by HIPAA.

X says in its privacy policy that the company will not sell user data to a third party but that it does share the data with “related companies.” The policy also discourages users from uploading sensitive information, such as personal health data. That’s interesting given that Musk seems to be completely disregarding his own company’s privacy policy by encouraging users to share their medical images with Grok.

The New York Times’ Elizabeth Passarella explained it best in her article on the topic.

“It’s like telling your lawyer that you committed a crime versus telling your dog walker; one is bound by attorney-client privilege and the other can inform the whole neighborhood,” Passarella writes.

Bradley Malin, a professor of biomedical informatics at Vanderbilt University who has studied machine learning in healthcare, told The New York Times that uploading sensitive information like medical images to Grok is risky because you don’t know exactly what the AI is going to do with it.

I reached out to Malin to ask about the legal and ethical considerations of doctors uploading de-identified patient images to test Grok’s diagnostic capabilities.

For example, X user Gabe Wilson, MD, an emergency physician in East Texas, said he uploaded a de-identified EKG of a patient with “obvious acute inferior ST elevation myocardial infarction,” a type of heart attack. Grok called the EKG normal, he said in an X post.


Malin told me that while de-identified data avoids HIPAA restrictions, it could raise ethical concerns about patient expectations.

I’m not against the use of AI in healthcare, but I am against this unregulated, Wild West approach of using an early-stage AI chatbot like Grok to analyze medical images. Without oversight, Grok risks becoming a dangerous experiment conducted at the expense of public trust in AI-driven healthcare. That trust is exactly why healthcare AI needs regulation.

About the Author

Amanda Pedersen

Amanda Pedersen is a veteran journalist and award-winning columnist with a passion for helping medical device professionals connect the dots between the medtech news of the day and the bigger picture. She has been covering the medtech industry since 2006.
