Sceptic communities should question their own biases toward peer review

When questioning others, take a close look at your own biases too

Scepticism has been associated with science from its earliest days. The noble phrase ‘nullius in verba’ (take nobody’s word for it) was chosen as the motto of the Royal Society in the 17th century, cited as ‘an expression of the determination of Fellows to withstand the domination of authority’. That struggle may be a fundamental feature of humanity, as it doesn’t seem to have changed 360 years later. Scientists must still be driven to refute politicians and anyone else who stands to gain from cherry-picking facts and applying logical fallacies. The motto’s import was driven home to me about six years ago when chatting with the late Harry Kroto, who memorably claimed ‘“take no one’s word for it” is fundamental not only for science, but really for enlightenment’.

Thanks to technology and perhaps the open science movement, there’s now a subculture I’m going to call ‘science bros’. These science bros, mostly non-scientists driven by the desire to always be right, make a hobby of seeking out arguments and attempting to win them with a haughty deference to peer review. If something’s peer reviewed, we are to understand, it attains a biblical level of unchallengeable veracity. I point out the non-scientist aspect not because scientists aren’t similarly susceptible to seeking self-superiority, but rather because experience with science (hopefully) makes them aware of the flaws in the scientific method.

It’s surprisingly common that a small section of a paper will be disprovable

When a paper is published after peer review, we scientists hold a reasonable expectation that the work was performed as the authors state, and that the conclusions are accurate. The process is quite tight – I certainly hear more complaints about paper rejections than about unwarranted acceptances! However, a panel of editors and reviewers may occasionally let a false positive pass.

It’s rare for an entire paper’s main conclusions to be debunked later, but surprisingly common for a small section of a paper to be disprovable. Synthesis papers increasingly include sections on mechanism and energy calculations: insightful and invaluable when performed well. But these areas, rather than the main body of the work, seem to be where most of the obvious issues lie.

It’s fairly unremarkable to look through a specialist journal and find a few proposed mechanisms with clear problems: naked carbocations floating around in a strongly basic mixture, for example, or strange transition states involving multiple species that each have low availability in solution. I’ve even seen the proposed catalyst conspicuously missing from its own cycle. Likewise, there are regular grumbles from computational chemists about amateurish, ‘useless’ calculations shoehorned into publications. Natural product misassignments are also not infrequent, arising both from mistakes in analysis and from inadvertent, undetected reactions during extraction and purification.

It’s gratifying to see the scientific process responding in the way it should

Most of us normally let these go with just a few mutters under our breath: we don’t expect extreme mechanistic rigour in quick reports of new curios. But when a high-profile case is unearthed, it grips the organic community, sometimes for years! It’s gratifying to see the scientific process responding in the way it should, rationally correcting inadvertent errors, and most of us emerge having learned something.

Despite ‘nullius in verba’, in an information-dense world we lack the time to assess every piece of data for reliability. Instead, we use the trustworthiness of the source as a proxy. For example, I fully believe that the Earth orbits the Sun, rather than subscribing to any alternative astronomical model. Despite never having run the calculations or assessed the entire body of evidence, I am convinced by the heliocentric model because I trust all the people who have told me about it (and distrust those sources who still push a geocentric alternative). I’d be happy to take a drug prescribed by an NHS doctor and approved by regulatory agencies, but alarmed if a random person in the street shook a bottle of miracle supplements in my face.

Being sceptical is also no guarantee of being rational. While sceptics use doubt to seek the truth, the complexity of any debated matter leaves space for logical flaws like subjectivism. Attempts at building sceptic communities in recent years have been dogged by strong bias (the very thing they seek to avoid), in-group favouritism, and gatekeeping. This could be another fundamental human feature – Robert Boyle’s famous text The Sceptical Chymist, published around the same time as the Royal Society chose its motto, was Boyle’s attempt to distance and exalt himself as an alchemist-philosopher by disparaging the ‘vulgar chymists’ who formulated early medical treatments in smelly labs.

Scepticism remains a cornerstone of science, threatened as much by scientists’ own biases as by those of outsiders. But for all its issues, perhaps Harry Kroto was right – only transcendence over all unfounded biases can provide scientific enlightenment.