A decade-long project that attempted to replicate experiments from several high-profile papers in the field of preclinical cancer biology has found that around half of the experiments couldn’t be replicated on most criteria.
‘This suggests the credibility of published findings in cancer biology is less certain than thought,’ said Brian Nosek, executive director of the Center for Open Science, which oversaw the project.
Launched in 2012, the Reproducibility Project: Cancer Biology set out to replicate 193 experiments from 53 high-impact papers published between 2010 and 2012, selected on the basis of citations and readership. But a lack of transparency about data, reagents and protocols by some authors of the original papers meant that in the end just 50 experiments from 23 papers were completed, said Tim Errington, director of research at the Center for Open Science and the project’s leader.
There is no standard way of determining whether a replication is successful. So the researchers used five criteria to evaluate their replications, such as whether the effect measured was in the same direction as the original and statistically significant, and whether the replication effect size fell within the original’s 95% confidence interval and vice versa. They found that just 18% of replications succeeded on all five criteria, while 20% failed on all five. Overall, 46% of effects succeeded on most criteria. A meta-analysis also found that, of 158 effects measured, the effect sizes in the replications were on average 85% smaller than in the original studies.1,2
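The mutual confidence-interval criterion mentioned above can be made concrete with a minimal sketch (our own illustration, not the project’s actual analysis code; the function names and numbers are hypothetical, and it assumes normally distributed effect estimates):

```python
# Illustrative check of one replication criterion: does each study's
# effect size fall within the other's 95% confidence interval?

def ci_95(effect, std_err):
    """Approximate 95% confidence interval under a normal approximation."""
    half_width = 1.96 * std_err
    return (effect - half_width, effect + half_width)

def mutual_ci_criterion(orig_effect, orig_se, rep_effect, rep_se):
    """True only if the replication effect lies inside the original's 95% CI
    and the original effect lies inside the replication's 95% CI."""
    orig_lo, orig_hi = ci_95(orig_effect, orig_se)
    rep_lo, rep_hi = ci_95(rep_effect, rep_se)
    return (orig_lo <= rep_effect <= orig_hi) and (rep_lo <= orig_effect <= rep_hi)

# Hypothetical numbers: a large original effect, a much smaller replication.
print(mutual_ci_criterion(1.2, 0.3, 0.2, 0.25))  # prints False
```

With a much smaller replication effect, as the project often observed, the criterion fails even though both estimates may be individually plausible.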
A failure to replicate does not necessarily mean the original results were wrong, said Nosek. While the original could be a false positive, it’s also possible that the replication was a false negative. Or both could be correct, with the discrepancy due to differences in the experimental conditions or design. But, he added, ‘we tried to minimise that with high statistical power, using the original materials as much as possible, and peer review [of the planned experiments] in advance’.
The bigger problem for science, said Errington, is the lack of transparency on how to properly perform the experiments. ‘The biggest barriers were reagent sharing and getting information on the experimental protocols – every one needed clarifications,’ he said.
In some cases those clarifications proved difficult to get. Authors of the original papers sometimes had to hunt down people who had left their lab years earlier to find the protocol data, demonstrating that researchers need to make better use of the tools available to help them share their work and data.
‘We need records that allow people to build on others’ work,’ said Elizabeth Iorns, chief executive of Science Exchange, a research services marketplace, who originally conceived of the cancer biology replication project. ‘Going back to the original authors is insanity.’
The problem, said Marcia McNutt, president of the US National Academy of Sciences, is that there is little incentive for scientists to cooperate with someone who is trying to replicate their work. ‘The best they can hope for is that their work is confirmed, but they already have a highly cited paper so there’s not a lot of upside,’ she said. ‘The downside can be truly devastating if their work is not confirmed.’
Alongside the grassroots open science movement that encourages researchers to make all their data and protocols freely available, more formal initiatives are underway to try to address the issues raised by replication efforts. The National Academies have created a new Strategic Council for Research Excellence, Integrity and Trust, which McNutt said will consider how to provide more incentives for cooperation with replications. The council met for the first time in October 2021.
The US National Institutes of Health is also bringing in a data management and sharing policy that will go into effect in January 2023, said Mike Lauer, NIH director of extramural research. The policy will make sharing of data the default for NIH-funded work, and help foster a culture of sharing, rigour and replicability, he said. ‘I can think of nothing that will make a scientist more motivated to make sure they’re doing things right than knowing that their data and code and methods are going to be shared more widely.’
Many journals now also require authors to make their data freely available, although McNutt says the existence of the policy does not always ensure there will be follow-through. The chemistry journal Organic Syntheses goes one step further, requiring all the experiments reported in an article to be successfully repeated by a member of the editorial board before publication. ‘People I know in chemistry say: everything in that, you can believe,’ said Errington.
Several other replication studies are underway in a variety of fields. The Center for Open Science is running one in the social and behavioural sciences, while the Repeat initiative at Brigham and Women’s Hospital and Harvard Medical School is working on one focused on longitudinal healthcare database research. But a lack of funding makes these kinds of long-running studies difficult, said Iorns. She approached all the major funders about the cancer biology project when she first had the idea, but none were interested. The project finally secured private philanthropic funding from the charity Arnold Ventures. ‘I’m not convinced it would be any different today,’ she said.
What might be different today, however, are the results of a new replication experiment on more recent papers. ‘The research culture has been evolving,’ said Nosek. ‘Some of the initiatives on improving reproducibility may start to have a measurable effect on increasing replicability rates.’
1 T M Errington et al, eLife, 2021, 10, e67995 (DOI: 10.7554/eLife.67995)
2 T M Errington et al, eLife, 2021, 10, e71601 (DOI: 10.7554/eLife.71601)