Do Functional MRI Tests Have a Problem?

Nancy Crotti

July 19, 2016

3 Min Read

Were 40,000 test results wrong? Or just 3500? Time for more research.



A recent study of functional MRI test results stated that up to 40,000 such tests may have been muddied by software problems.

The research set off a firestorm of publicity, and one of the authors has since submitted an erratum to the Proceedings of the National Academy of Sciences (PNAS), which published the paper, saying the findings were misinterpreted. The more precise number, that author writes, may be closer to 3500.

Anders Eklund and Hans Knutsson of Linköping University in Sweden and Thomas Nichols of the University of Warwick in the U.K. found that three of the most commonly used fMRI software packages, relied upon in tests over many years, were flawed. Even after those flaws were detected, the software may not have applied corrections properly, writes New Scientist columnist Simon Oxenham. Further, Oxenham notes, "when the team looked at 241 recent fMRI papers, it found that the researchers did not even ask their software to apply any kind of correction in 40 per cent of them."

fMRI has been around for about 25 years. Unlike traditional MRI, which creates detailed images of the body using a powerful magnetic field and radio waves, fMRI detects blood flow to show how certain areas of the brain "light up" in response to various stimuli. Researchers use far more fMRI scans than clinicians. The methodology has even found favor with some marketers.

Its accuracy has come into question before. A 2009 study showed that some fMRI methods found statistically significant brain activity in a dead salmon.

In a 2012 study analyzing test results calculated using just one popular fMRI software package, SPM, the Swedish and British researchers found an alarmingly high percentage of false positives: 70%. They decided to study whether scans that employed two other commonly used fMRI software programs, FSL and AFNI, might yield similar results.

The trio conducted 3 million task-group analyses on resting-state brain data from 499 healthy control subjects downloaded from an international open fMRI data-sharing initiative called 1000 Functional Connectomes. While they expected to find 5% false positive results, they again found 70%.
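The gap between the expected 5% and the observed 70% is, at bottom, the multiple-comparisons problem: test enough brain regions on pure noise and some will cross the significance threshold by chance. The toy simulation below is only a sketch of that general idea, not the authors' actual cluster-based analysis; the study counts, voxel counts, and the simple Bonferroni fix are all illustrative assumptions.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

def two_sided_p(z):
    # Two-sided p-value for a z-statistic under the standard normal.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Illustrative sizes (not the paper's): 1000 "studies", each testing
# 100 independent "voxels" with 20 subjects of pure noise.
n_studies, n_voxels, n_subjects = 1000, 100, 20
alpha = 0.05

uncorrected_hits = 0
bonferroni_hits = 0
for _ in range(n_studies):
    data = rng.standard_normal((n_voxels, n_subjects))  # no real signal
    z = data.mean(axis=1) * sqrt(n_subjects)            # z-test, sigma = 1
    pvals = np.array([two_sided_p(v) for v in z])
    if (pvals < alpha).any():
        uncorrected_hits += 1            # study reports a false "activation"
    if (pvals < alpha / n_voxels).any():
        bonferroni_hits += 1             # same test, Bonferroni-corrected

print(f"uncorrected familywise rate: {uncorrected_hits / n_studies:.2f}")
print(f"Bonferroni familywise rate:  {bonferroni_hits / n_studies:.2f}")
```

Without correction, nearly every all-noise "study" reports at least one significant voxel; with a correction applied, the familywise rate falls back near the nominal 5%. Real fMRI packages use more sophisticated cluster-level corrections than Bonferroni, and the Eklund paper's point is precisely that those corrections can misbehave.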

Here's where it gets sticky. Mainstream media picked up on the paper's "significance" statement, which concluded, "These results question the validity of some 40,000 fMRI studies and may have a large impact on the interpretation of neuroimaging results." Nichols then backpedaled, substituting the following sentence: "These results question the validity of a number of fMRI studies and may have a large impact on the interpretation of weakly significant neuroimaging results."

In another section of the original paper, titled "The future of fMRI," the researchers wrote that redoing 40,000 fMRI studies would not be feasible, and that "lamentable archiving and data-sharing practices mean most could not be reanalyzed either." Nichols asked that PNAS change that to: "Due to lamentable archiving and data-sharing practices it is unlikely that problematic analyses can be redone."

Oxenham points out that "a large proportion of recent research probably contains the very same types of error highlighted by the dead fish study from seven years ago."

A spirited discussion continues, with cries of foul and calls for more research. Stay tuned.

Nancy Crotti is a contributor to Qmed.


[Functional MRI image courtesy of OpenStax - https://cnx.org/contents/[email protected]:fEI3C8Ot@10/Preface, CC BY 4.0]

About the Author

Nancy Crotti

Nancy Crotti is a frequent contributor to MD+DI. Reach her at [email protected].
