Inaccuracies in scientific papers can result from honest mistakes – a wrong calculation or a bad choice of reagent – but others are intentional, where researchers have manipulated, omitted or fabricated data. Now a new investigation has found that such misconduct is ‘disturbingly common’ in the biomedical literature.

Figure: Proportion of papers with image duplications by country (© bioRxiv/Cold Spring Harbor Laboratory)

The study examined published papers for one type of inaccurate data – inappropriate image duplication. The authors looked at images from 20,621 biomedical papers in 40 journals published between 1995 and 2014. They found that almost 4% contained ‘problematic’ figures, with numbers rising markedly after 2002. ‘Within those 4%, we found that about 30% contained duplicated images (which could be an honest mistake), 45% had repositioning (such as flipping, rotating or stretching images; less likely to be the result of an honest mistake), and 25% showed alterations (eg duplications of lanes or cells within the same image; very likely to be done deliberately),’ reports Elisabeth Bik of the School of Medicine at Stanford University in the US. ‘It’s impossible to know exactly … but we think that at least half of them were not honest mistakes.’

Bik suspects that the overall prevalence of inaccuracies in the published literature is much higher than 4% because most inaccuracies are not as easy to spot as the subset they looked at. ‘We focused on duplications in photographic images, because our eyes can easily compare and distinguish small irregularities in gel bands and backgrounds. For line images such as microarray heatmaps, bar graphs, line graphs, as well as for tables, it is much harder to see if something is duplicated or altered.’

The team found a marked variation in the frequency of problematic images among journals. ‘The fact that we found such large differences – a factor of 40 between the best and the worst journal – suggests that certain journals put much more effort in publishing valuable science than others,’ she says. ‘With our study, we hope to make editors and peer reviewers more aware of these types of inaccuracies, so that they can be easier to spot during the peer review process, and be corrected before publishing the papers.’

But, she adds, ‘most importantly, we should keep in mind that many of these irregularities might be honest mistakes or might have been made by a researcher in a desperate situation. We should never lose sight of the sadness behind some of these cases.’

Paul Bracher, a chemist at St Louis University in the US who has helped uncover research misconduct in the past, is ‘both surprised and dismayed’ at these results. ‘At best, these data point to a serious problem regarding the education of scientists about the ethics and best practices of constructing images. At worst, these data suggest that a number of scientists are deliberately gaming the system in a manner that is completely antithetical to scientific research.’