
Most Studies Don't Replicate... But Is That Bad for Science?

Kara Lofton

Last week, the Center for Open Science, a Charlottesville-based nonprofit technology organization, published a landmark analysis of scientific replication called the Reproducibility Project. As it turns out, original research, at least in the social sciences, is much harder to replicate than you might think. WMRA's Kara Lofton reports.

The question the study sought to answer is this: how reproducible are psychological studies? The answer is not as simple as you might think. Over the course of four years, the Center for Open Science recruited 270 researchers who attempted to replicate 100 psychological studies. These replicators worked in consultation with the psychologists who published the original studies to reproduce the originals as closely as possible. Even with that safeguard, only 39 of the 100 studies were successfully replicated.

JOHANNA COHOON: Research seems to be more nuanced than it might seem at first glance…

That was Johanna Cohoon, one of the project managers for the Reproducibility Project.

COHOON: Conducting these extra replications, looking further into the line of research, allows you to say, under these specific circumstances, this is true or this is false.

And it turns out that those specific circumstances can change the results. For example, in 2008, a researcher at Florida State University named E.J. Masicampo decided to test whether students at FSU would choose housing that was close to campus but small, or housing that was far from campus but large. It was supposed to be a difficult decision for the students, many of whom already drove to campus.

The reproduction was attempted at the University of Virginia. Mallory Kidwell, the other project manager, explains.

MALLORY KIDWELL: For University of Virginia students, where the replication was conducted, most of them do not drive.

COHOON: First-years are required not to have a car, and the rest of the students rarely drive to school, and using these materials, where they were offered apartments that were five or seven or I think eight miles away, that was just…

KIDWELL: Absurd

COHOON: Ridiculous, yeah, so of course they are going to choose five miles because of all these terrible options.

KIDWELL: Right, so for UVA students it was an incredibly easy decision, whereas when the original was conducted at Florida State, it was a decision that required some teasing out, some decision-making. So the replication was not considered successful, but I think the original author and the replicators both hypothesized that it could be because of that difference in population.

Kidwell and Cohoon are quick to point out that a failed replication does not necessarily mean the original study produced a false positive or that the data were fabricated, only that the results may hold at FSU but not elsewhere.

At this point, you might be thinking: why should I care how far students want to drive to campus and whether the answer is different from Florida to Virginia?

The answer lies not in the study itself, but in the inability to reproduce it.

BRIAN NOSEK: It’s really important to recognize that a key aspect of science is a process of uncertainty reduction. No one study is definitive. You hear today that coffee is good for you, you hear tomorrow that coffee is bad for you. Someday you hear coffee – well we don’t know! And that can feel very confusing. 

That was Brian Nosek, director of the Center for Open Science. He said the point of the study was to try to develop reproducibility guidelines for scientific journals to adopt and to determine what factors influence failure to replicate.

NOSEK: It started because we don't know what the reproducibility rate is, and reproducibility is central to how science works. A core assumption of science is that the work that is done, the evidence that is gained, is reproducible – that someone else could follow the same methods and obtain a similar result. But that's not guaranteed to be the case – especially when we are working at the boundaries of knowledge – we're investigating things we don't understand yet. So we're going to go down a lot of blind alleys, we are going to get a lot of initial evidence that doesn't turn out, and that's ordinary science. But because we don't know the frequency with which we are heading down alleys that are difficult to reproduce versus ones that are very robust, we just want to make sure we investigate it, and if we do find challenges, we find ways to address them.

The final data from the Reproducibility Project were published Thursday in the journal Science. The paper argues that, while both original studies and replications are important, the scientific community needs to shift from publishing only original research to also encouraging replication.

NOSEK: The key insight, or reminder, of this study is that each study that's done on a research question, on a topic, is one piece of evidence, and the real conclusion comes after accumulating evidence across many studies.

Kara Lofton is a photojournalist based in Harrisonburg, VA. She is a 2014 graduate of Eastern Mennonite University and has been published by EMU, Sojourners Magazine, and The Mennonite. Her reporting for WMRA is her radio debut.