Efforts to get to grips with the problem have meant new ideas and technologies are now being brought to bear
Not a week passes without reproducibility in science – or the lack of it – hitting the headlines. Although much of the criticism is directed at the biomedical sciences or psychology, many of the same problems also pervade the chemical sciences.
A survey of over 1500 scientists conducted by Nature last year revealed that 70% of researchers think that science faces a reproducibility crisis. Over half, however, still have faith in published literature in their field – with chemists being amongst the most confident despite reporting the most difficulty replicating other researchers’ or their own work. Although this observation seems contradictory, an explanation might be that chemists are more often looking to repeat experiments exactly, says Rick Danheiser, a synthetic organic chemist at the Massachusetts Institute of Technology.
Chemical journal articles suffer from the inability of people to name compounds accurately
Anita Bandrowski, University of California, San Diego
Danheiser is the editor-in-chief of the unconventional journal Organic Syntheses, which has verified the experiments in every paper it has published since it launched in 1921. The journal does this by having the work replicated by independent chemists before publication – a practice that is almost unheard of in chemistry or any other research field, apart from a few brief instances in history. All experiments are checked for reproducibility in the lab of one of the journal’s board of editors, often by graduate students and postdoctoral researchers working under the supervision of the Organic Syntheses editor. Danheiser, who has written about reproducibility, tells Chemistry World that the journal usually aims to check the work in submitted papers within six months.
But Danheiser says that rolling out this replication process to other fields is challenging: biology often involves too many variables, while physics – and some areas of chemistry – requires specialised or costly equipment that may not be readily available in other labs. Between 2010 and 2016, the journal rejected 7.5% of submissions because the reported yield or selectivity could not be reproduced, Danheiser notes. ‘Most chemists would consider that to be frightening,’ he adds, since it implies that papers in conventional journals, which are not checked in this way, are even less likely to be reproducible.
Chemistry is to some extent in a ‘sweet spot’ when it comes to reproducing experiments, as the apparatus used is usually not too exotic, notes Derek Lowe, a medicinal chemist who pens the In the Pipeline blog. But he notes that in natural product chemistry, for instance, a multi-step total synthesis is almost never a practical way of obtaining the compound – in most cases the product is easier to obtain from its original natural source – so most of these syntheses are never completely reproduced in the first place, explains Lowe. What people do build on, Lowe says, are the novel techniques researchers develop when creating compounds.
Medicinal chemistry and drug discovery are among the more reproducible areas of the chemical literature, adds Lowe, as they’re largely the preserve of industry scientists, who have fewer incentives to publish fake or substandard results. Industry researchers also contribute more to the patent literature, Lowe points out, and patents lacking reproducibility and rigour could result in legal issues.
Chemistry’s reproducibility problem should be a ‘fairly easy thing to fix’
Phil Baran, Scripps Research Institute
Furthermore, Lowe believes irreproducibility is a bigger problem at high-profile journals publishing cutting-edge research, as well as at low-end journals. Most solid papers are likely to be in what Lowe calls middle journals – those that are respectable, but not glamorous.
This month, the Chemical Probes Portal saw its 275th compound added. The portal is part of a wider campaign to make science more reproducible and holds a database of many of the small molecules used in drug discovery and to investigate biological processes. The problems it addresses are twofold, says Amy Donner, its Boston-based director. The smaller problem, she says, is making sure appropriate reagents are used in experiments. The larger issue – the one behind the ‘tremendous amount of irreproducibility that we see in the literature’ – is misunderstanding how to use these highly potent chemical probes to tweak biological processes in living organisms.
Donner says the number of chemical probes out there runs into the tens of thousands, and that in itself is a problem. Biologists – who are the most likely to use probes in experiments – don’t necessarily have the expertise to carry out the chemical validation tests needed to determine which molecules would work best in their experiments, says Donner. This is where the Chemical Probes Portal comes into its own, offering advice on which probes are suitable for specific circumstances and which are best avoided. Researchers can search for their target protein and find a list of molecules that interact with it, Donner explains.
There’s currently no standard checklist of the information needed to characterise a probe – rather, says Donner, scientists aggregate information about probes as they go. Sharing that information on the portal should save others from duplicating that effort, as well as showing investigators how to design experiments that take a probe’s strengths and weaknesses into account.
But the portal is still in its infancy and researchers face a huge task logging tens of thousands more molecules, Donner says. At present, the priority is highlighting the best and worst probes. ‘In the short term, that’s how we can have the biggest impact,’ she adds.
The Research Resource Identifier (RRID) is another initiative aiming to improve research reproducibility, by helping to standardise how reagents, software, antibodies and model organisms are cited in the literature. Although over 200 journals have already started encouraging authors to use RRIDs to cite laboratory resources, chemistry journals are, on the whole, not yet on board, says Anita Bandrowski, a neuroscientist at the University of California, San Diego, who coordinates the RRID project.
According to Bandrowski, ‘chemical journal articles also suffer from the inability of people to name compounds accurately’. As it’s difficult to teach everyone to use appropriate nomenclature, says Bandrowski, attention has mostly focused on pushing researchers to at least use numerical identifiers such as the PubChem ID. Her team’s findings show that ‘authors are highly accurate when they provide this type of information’, she says.
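To see why numerical identifiers sidestep the naming problem, consider that one compound can go by several names. The sketch below – a hand-made illustration, not any journal’s or database’s actual workflow – resolves common synonyms to a single PubChem compound ID (CID 2244 is aspirin’s real identifier; the synonym table itself is hypothetical and tiny):

```python
# Illustrative sketch: many free-text names, one unambiguous identifier.
# The synonym table is hand-made for this example; a real system would
# query a database such as PubChem rather than a hard-coded dict.
SYNONYMS = {
    "aspirin": 2244,                 # PubChem CID 2244
    "acetylsalicylic acid": 2244,    # same compound, different name
    "2-acetoxybenzoic acid": 2244,   # systematic-style name
}

def to_cid(name):
    """Resolve a compound name to a PubChem CID, or None if unknown."""
    return SYNONYMS.get(name.strip().lower())

print(to_cid("Aspirin"))                # -> 2244
print(to_cid("Acetylsalicylic acid"))   # -> 2244, same CID
```

However a compound is named in a manuscript, citing the CID alongside it lets readers and machines agree on exactly which substance is meant.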
Electronic laboratory notebooks (ELNs) are another development that has taken industrial chemistry by storm. They store experimental data and procedures, allowing researchers to access data years after it was collected, and they let users search the molecular structures of reactants and products, making it easier to keep track of reactions. Within a research group or a company, ELNs can be set up to let scientists search their colleagues’ notebooks – useful for avoiding duplicated work and for assessing differences between repeat experiments. But ELNs have seen slower uptake in academia, probably because of their high cost.
There are many ELNs out there, with different ways of collecting and storing data, says Richard Whitby, an organic chemist at the University of Southampton, UK, who leads the Dial-a-Molecule network, which aims to speed up the synthesis of new molecules. One of the network’s goals is to promote a standard for ELN data to enable exchange and data mining. ‘ELNs which are designed for chemistry pay you back for using them,’ Whitby adds. Useful features such as stoichiometry calculations and experimental safety data, he notes, partly make up for the fact that an ELN takes more effort to use than a paper notebook.
Phil Baran, a synthetic organic chemist at the Scripps Research Institute in La Jolla, California, says that blogging at Open Flask has helped his group tackle issues like reproducibility and transparency around experiments. ‘We’ve gotten more use out of this than conventional peer review,’ Baran notes. He says chemistry’s reproducibility problem – compared with other fields – should be a ‘fairly easy thing to fix’. The solution, he says, is to put the onus on authors to make their supporting information clear and to encourage them to collaborate with other labs to get their work checked or field-tested. Journals should also make rules specific to different disciplines, and text-only supporting information should no longer be allowed, particularly for methodology papers whose primary purpose is for others to reproduce a protocol, Baran says.