When formal investigations of research misconduct are opaque and sluggish, it is inevitable that chemists will take to the blogs to debate suspicious papers, says Mark Peplow

In the American west of the 19th century, when men were tough and justice was rough, citizens often took the law into their own hands. Vigilance committees were a common feature of frontier towns where there was no official authority to track down cattle rustlers or punish horse thieves. The results were mixed: some committees were remarkably successful in rooting out crime, while others were all too eager to have themselves a hanging.

But most of these groups shared a common stimulus. From the wild west to Gotham City, vigilantes emerge when a community decides that conventional law enforcement is ineffective.

Today, that logic is being played out on a rather different frontier: chemistry. In August, the popular blogs ChemBark and Chemistry Blog published a series of posts alleging data manipulation in three papers. But are blogs the right forum to expose these cases?

Quality blogging

Over the past decade, blogs have provided a vital space for chemical debate. At their best, they are a continuation of the conversations that happen in tea rooms, conference halls and lab meetings around the world. At ChemBark, for example, the comments that follow blog posts often contain thoughtful analysis and measured discussion. It is to his credit that John Gladysz, editor-in-chief of Organometallics, which published one of the papers under scrutiny, joined in to offer his own insights. ‘Although I write this sentence with a wink to all my friends on my masthead page,’ he wrote in one comment, ‘this has made me muse whether an editor-in-chief could dispense with a high-maintenance editorial advisory board and simply throw the various thorny issues that arise out for adjudication on a quality blog like Chembark.’

Yet several commenters on ChemBark argued that its author, Paul Bracher, an assistant professor of chemistry at Saint Louis University in Missouri, had overstepped the mark, and that these discussions should only take place once a paper is retracted, or after an official investigation.

Those processes can take years, though, and many more cases are never even investigated. A 2008 survey of more than 2,200 US scientists found that they had seen about 200 examples of likely research misconduct over a three-year period.1 Very few of those would ever come before the US Office of Research Integrity (ORI), which reviews academic misconduct and has the punitive power to block federal funding. This may stem from an understandable reluctance to accuse a fellow scientist of unethical behavior. But it also reflects a suspicion that reporting concerns about research data to journals or universities will lead nowhere.

The ORI expects institutions or funding agencies to take the lead on investigating misconduct, but at least it can impose penalties – most countries, including the UK, do not even have an equivalent oversight body. Journal editors can merely ask questions of the authors, and then alert the host institution if necessary. Those institutions have little incentive to tar their reputations by exposing misconduct. All too often they obfuscate; and when they do dig deeper, reports frequently remain confidential. If a paper is eventually retracted, the accompanying notice can be so opaque that the original transgression is completely obscured.

The upshot is that while chemists are largely expected to police their own activities, they are incredibly poorly informed about the overall scale of misconduct in their field, and blinded to specific incidents.

Blogging about research misconduct is a natural response to that parlous state of affairs. It provides a safe haven for whistleblowers; it enables the community to police itself; and it also puts pressure on journals and universities to investigate.

But there is a fine line between a caped crusader and a lynch mob. Bloggers must give researchers involved in the case enough time to explain their side of the story, if they wish to, and readers must not be left with an impression of wrongdoing unless the blogger can provide strong grounds for suspicion. Just like staff journalists, bloggers must take pains to avoid damaging the reputations of innocent parties – and, for their own sakes, to avoid the unwelcome surprise of a libel action.

Checking the numbers

Of course, it would be preferable to prevent misconduct from occurring in the first place. Much of the responsibility for this must rest with lab chiefs, whose reputations are burnished by high-profile publications. In return, they have a duty of oversight over the papers that bear their name.

Principal investigators should routinely spot-check the raw data that underlies experimental results before papers are submitted, or at least delegate the task to a senior postdoc. Their graduate students should receive training in scientific ethics, not least to understand why data fabrication undermines science, and how to take action if they see it happening. And funders should be more proactive in investigating allegations of misconduct, and more transparent about the results. If the possibility of being caught is low, and the punishment benign, cheats have little to deter them.

Blogs aren’t likely to change how we deal with suspicious data overnight, but they are helping to press for greater scrutiny of published results. That will ultimately benefit us all.