Peer review must change if it is to serve the scientific community

Peer review is often touted as a cornerstone of science. A research paper that survives its rigours bears a kitemark for quality – or at least plausibility – because experts have given it the once-over, kicked the tyres and judged it roadworthy.

Will peer review become more automated and more diverse in the next 10 years?

Yet despite its gold-standard reputation, its problems are manifold. Peer review is slow, burdensome and undermined by bias. That does not mean we should abandon it, but it is certainly ripe for an upgrade.

So, what might peer review look like in 2030? That’s the question addressed by a new report from publisher BioMed Central and technology company Digital Science.1 The solutions it offers range from artificial intelligence to innovative models of transparent peer review. Above all, though, it’s clear that improving peer review will demand a collective effort involving researchers, universities, publishers and funders.

The need for speed

It might feel like your paper is taking longer than ever to get through peer review, but an analysis of thousands of journals last year suggested that the median time between submission and acceptance has stuck at roughly 100 days since the 1980s.2 However, review times have increased markedly at some high-profile journals, such as Nature and PLoS ONE. Some blame pickier referees who ask for ever more experiments and data to justify a paper’s conclusions; others point the finger at journal editors who fail to arbitrate between conflicting reviews.

Whatever the cause, these delays potentially hold up scientific progress or raise the risk of researchers being scooped, and they have helped to spawn a range of speedier options. For example, one could simply publish a pre-print: physicists have been using the arXiv repository to share manuscripts since 1991, and it now hosts around 10,000 new papers every month. Last year, the American Chemical Society announced that it would soon launch a similar ChemRxiv (garnering a somewhat lukewarm response from chemists).

Many pre-prints are eventually refereed at conventional journals, so alternative models are also coming to the fore. At F1000Research, referees are assigned to peer review papers only after they are published; at eLife, scientist-editors work with referees to come to a single verdict on a paper, thus avoiding the problems of contradictory reports. This proliferation of models is likely to continue through the coming decade: each variation offers benefits that particular researchers may value, and scientists seem more willing than ever to experiment with new systems.

Some hope that artificial intelligence will quicken the pace. In principle, it could be used to help identify the best reviewers for the job, spot plagiarism or faked data, or check a paper’s statistical analysis. But AI systems should be tools that assist editors rather than replace human decision-making: imagine receiving a rejection letter that amounts to little more than ‘computer says no’.

The next generation

Many of these innovations include efforts to make peer review more transparent, for example by publishing reviews alongside papers or even disclosing referees’ names. Advocates say that this helps to expose bias or carelessness in a review, and also acknowledges the reviewers’ contributions. There are legitimate concerns that such openness might dissuade junior researchers, fearing retribution, from honestly critiquing the work of more senior scientists. But a survey by Nature Communications last year found that 60% of its authors were happy for reviews of their papers to be published, and the journal now gives them that choice.

Transparency could help to solve another problem: the shortage of referees. Researchers often decline to peer review because it eats into their research or teaching time, but that might change if funders and universities gave them more credit for refereeing. A website called Publons makes that possible by recording peer reviewers’ contributions, which they can then show off in job applications. It’s a great idea – as long as journals embrace it and employers take it seriously.

Capturing the talents of early-career researchers would help to swell the referees’ ranks, and formal instruction in peer review could provide the skills and confidence needed to draw them in. Publons has just launched a free online peer review training programme, but journal publishers must do more to offer this kind of instruction as well.

Training might also address the lack of diversity among referees. In 2016, for example, half of the reviewers used by Nature were based in the US, while Chinese reviewers made up less than 1% of the total, despite being corresponding authors on 11% of submissions. Excluding potential peer reviewers, whether consciously or unconsciously, inevitably shrinks the talent pool, meaning that journals are not always getting the best reviewer for the job.

There is also evidence that women are underrepresented among referees. In January, an analysis of American Geophysical Union journals found that just 20% of referees from 2012 to 2015 were women;4 in comparison, 27% of first authors on published papers were female. This bias mostly resulted from male authors and editors suggesting women as reviewers less often, a finding echoed by a study of Frontiers journals in March,5 and journals must redouble their efforts to tackle the disparity.

Indeed, this sort of data-crunching can offer a clear evidence base to improve peer review, so journals should routinely collect and publish such data to reveal what works, and what does not. This kind of open, collaborative analysis, involving the whole scientific community, is ultimately the best way to deliver more effective forms of peer review for the next decade and beyond.