Behaviour model hints at how peer review can stop scientists adopting the wrong hypotheses


Part of the herd? The peer review process could push scientists towards the wrong answer

UK researchers have suggested that purely objective scientific peer review could fail to eliminate false theories. Marcus Munafò at the University of Bristol and colleagues show that the peer review process, in which anonymous independent scientists assess manuscripts before publication, could ‘herd’ researchers towards the wrong answer. But they also find that reviewers’ subjective knowledge can reverse herding. ‘Our results suggest that peer review may not perform as badly as sometimes thought,’ Munafò tells Chemistry World.

Science’s weaknesses are coming under increased scrutiny. Damning criticism came from John Ioannidis at Stanford University, US, who claims that most published research findings are false. One possible driver is the pressure to publish novel findings that generate high-profile papers. If early papers promote wrong theories, peer review can struggle to correct them.

Experimental psychologist Munafò and Bristol economists In-Uck Park and Mike Peacey modelled peer review behaviour in an attempt to scrutinise the process. In their model, scientists hold initial opinions on which of two hypotheses is more likely to be true. One scientist produces a manuscript advocating one hypothesis, which is sent to another scientist for peer review; the reviewer accepts or rejects it. The reviewer then prepares a paper of their own, deciding which hypothesis to advocate. As this chain continues, scientists shift their opinions of which hypothesis is true according to their experiences.

In one scenario, the modelled reviewers consider only objective criteria, such as study design and methodology. In another, they also subjectively consider how strongly they agree with a manuscript’s conclusion. Herding, where scientists submit manuscripts disagreeing with their initial opinion on an idea, was an inherent feature of the models because scientists’ behaviour is guided by what others have published.

Munafò and colleagues found the probability of publishing papers that came to incorrect conclusions fell as publication numbers grew, though herding meant the probability got stuck at a stable minimum in the purely objective model. Where subjectivity was allowed, but not excessive, the probability of backing a false hypothesis continued falling. However, if scientists stopped publishing on a topic too soon, the herding behaviour was not completely reversed.
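The herding dynamic resembles a classic information cascade. The toy simulation below is not the authors’ actual model; the chain length, signal accuracy, and simple vote-tally rule are all illustrative assumptions. It sketches how a chain of scientists, each weighing the published record against their own noisy private evidence, can lock onto one hypothesis once the public record tips far enough in its favour.

```python
import random

def run_chain(n_scientists=50, signal_accuracy=0.6, seed=0):
    """Toy information-cascade sketch (illustrative, not the Bristol model).
    Each scientist receives a noisy private signal about which of two
    hypotheses is correct (True = the genuinely correct one), combines it
    with the published record via a simple tally, and publishes a paper
    advocating whichever hypothesis the tally favours."""
    rng = random.Random(seed)
    true_hypothesis = True
    published = []  # public record of advocated hypotheses
    for _ in range(n_scientists):
        # private evidence: points the right way with probability signal_accuracy
        signal = true_hypothesis if rng.random() < signal_accuracy \
            else not true_hypothesis
        # each prior publication counts as one unit of public evidence
        public_score = sum(1 if p else -1 for p in published)
        private_score = 1 if signal else -1
        total = public_score + private_score
        # herd with the tally; on a tie, follow the private signal
        choice = (total > 0) if total != 0 else signal
        published.append(choice)
    return published
```

Once the public record leads by two or more publications, a single private signal can no longer outweigh it, so every subsequent scientist copies the crowd regardless of their own evidence: the cascade is self-reinforcing, and if it starts on the wrong hypothesis the purely ‘objective’ chain never corrects it.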

In reality complete objectivity is ‘essentially impossible’, notes Nicola Nugent, the Royal Society of Chemistry’s assistant manager for peer review. ‘This work supports what publishers and editors already know,’ she says. ‘A healthy balance of objectivity and subjectivity in the review process is crucial.’

David Colquhoun, a pharmacologist at University College London, UK, who has criticised the ‘publish or perish’ culture for eroding peer review, says the study’s ‘qualitative conclusions could well be right’. ‘But my guess would be that behaviour varies from one field to another,’ he adds. ‘Ioannidis is mostly writing about clinical trials and genome-wide association studies, which have huge potential for abuse by cherry picking from a large number of outcomes. I suspect that things are better in the harder sciences.’ Colquhoun thinks post-publication peer review might be the solution for problem areas.

That is one way Munafò says researchers could share the diffusely spread information contained in subjective commentary. ‘Finding ways to communicate knowledge is important,’ he explains. ‘Any mechanisms which improve information flow should be valuable. This could include post-publication peer review, or prediction markets to capture the “wisdom of crowds”.’