After high-profile hoaxes, the scientific community is looking to ensure that researchers maintain high standards of research integrity. Bea Perks reports
Research integrity is at the core of science. To quote Research Councils UK (RCUK): ‘It is a basis for scientists’ trust in each other and in the scientific record, and, equally importantly, society’s trust in science.’
The trust society places in science has looked a little shaky at times in recent years. Sometimes this was due to inaccurate reporting, but sometimes the blame lay with the scientists themselves. Media coverage of such events, the exponential growth in the number of papers published, the pressure that researchers are coming under to publish their findings, and the arrival of the internet have all led to a growth in concerns around the need to demonstrate research integrity.
The mighty fall
We all love to read about a scientific breakthrough and many of us might also quite like to be at the centre of an exciting new discovery ourselves. To get your name into the headlines for an unprecedented success – finding a cure for something, finding the cause of something, or solving a problem that’s been bugging scientists for decades – might be nice. But there aren’t many projects that lead to a paradigm shift in our scientific understanding, and occasionally the temptation to be at the centre of a breakthrough, even if there wasn’t one, can be too great.
German physicist Jan Hendrik Schön, working for Lucent Technologies’ Bell Labs in New Jersey, US, hit the headlines twice: once for transforming microelectronics, and once for not having transformed microelectronics – he was exposed as a hoaxer. In 2001, Schön was publishing more than a paper per week. He was a nanotechnology superstar who was going to replace inorganic semiconductors with organic molecules. Unfortunately, his peers were having problems repeating his astonishing findings with single-molecule semiconductors. Then they started spotting identical graphs cropping up in different papers published in Nature, Science and other high impact journals. Then everything began to fall apart.
As the anomalies came to light, Lucent set up a committee to investigate. They concluded that Schön’s misconduct included: substitution of data; unrealistic precision of data; and results that contradict known physics. In the committee’s words, Schön had shown ‘reckless disregard for the sanctity of data in science’.
Schön lost not only his job, but also his PhD (a decision confirmed by Germany’s Federal Constitutional Court in October 2014). There remains, curiously, an archived web page from MIT Technology Review, where the mighty, but apparently softly spoken, Schön recalls being ‘very surprised’ by how well his molecular transistors worked.
The Schön scandal rocked science. It shook established institutions and wasted a great deal of money and time – reportedly hundreds of person-years of research. Importantly, it wasn’t an isolated incident. Schön had been a promising physicist, but a similar trail of deception was left by an equally promising chemist just a few years later.
Letting science down
By the summer of 2005, Bengü Sezen was a young organic chemist tipped for stardom with a PhD and six first-author papers to her name. However, as with Schön before her, Sezen’s peers were having difficulty replicating her results.
While she was explaining to everyone how her lab work was fiddly and therefore difficult to reproduce, a colleague of hers – suspecting otherwise – set a trap. Sezen, working in Dalibor Sames’ lab at Columbia University, US, was told that duplicate copies of a standard example of her reactions – the conversion of imidazole to phenylimidazole – had been set up in the lab. In reality, one reaction had been set up with imidazole as the starting material and the other with N-methylimidazole. The next day, the product Sezen claimed would be there for (plain) imidazole was found in both flasks. The methyl group had vanished from N-methylimidazole, a result that could only be explained by sabotage, explained Paul Bracher on his chemistry blog ChemBark.
The results were presented to Sames, who set up an in-house investigation. After discovering that Sezen’s notebooks contained very little useful information, and that some of her spectra printouts had been modified with Tipp-Ex (before publication), a formal complaint was launched. Publications based on her findings were retracted and a subsequent investigation by the US Office of Research Integrity (ORI) cited 21 instances of misconduct, leading to Sezen being barred from receiving federal funds for five years.
Keeping a close eye on research conduct is more important than ever in an age where a slip up – purposeful or inadvertent – can become worldwide headlines that afternoon. Climate researchers at the University of East Anglia, UK, found this out the hard way in 2009 when a server at the university’s Climatic Research Unit (CRU) was hacked, and the content of informal email discussions was quoted out of context across the internet.
Phil Jones, research director of the CRU, has published over 400 research papers over the past 35 years. But at the end of 2009 he was – briefly – most famous for ‘hiding’ the truth behind climate change. Jones had been involved in an email discussion about the so-called tree ring ‘divergence problem’. Tree rings reveal how a tree has grown, and are used as a proxy for past temperature. Until 1960 the rings tracked measured temperatures well, but since then they have suggested that temperatures have fallen, while thermometers tell researchers that temperatures have risen: the divergence problem.
In an email, Jones said he had used a ‘trick’ that had previously been published by a colleague in the journal Nature ‘to hide the decline’ in proxy temperatures derived from tree ring analyses. The word ‘trick’ didn’t help, and the word ‘decline’ was widely misquoted by climate change sceptics, as though Jones had admitted that temperatures (rather than proxy temperatures, ie tree rings) were declining. This and numerous other emails were misquoted in support of climate scepticism.
Ultimately, eight committees investigated the allegations and found no evidence of fraud or scientific misconduct. But in their reports, the committees called on scientists to avoid future Climategates by opening access to the data they generate and to the methods they use.
A growing awareness about the possibility of scientific misconduct led lobby group Universities UK – with the help of RCUK and other related stakeholders – to draw up a national concordat to support research integrity. The concordat, released in 2012, outlines important commitments that researchers can make to help ensure that the highest standards of integrity are maintained. It also makes a clear statement about the responsibilities of researchers, employers and funders of research in maintaining high standards.
RCUK was already running an internal assurance programme to reassure research councils that funding was being used appropriately by research organisations. Approximately 35 to 40 organisations are assessed each year, answering a set of questions in writing before being visited by RCUK’s funding assurance team. From November 2012, the programme was extended to include questions on how universities are ensuring high research integrity amongst their researchers.
The pilot scheme (which covered April 2012 to March 2013) involved seven research organisations. They were all found to have complied with RCUK guidelines. The following year, the questions were modified slightly, and the latest statement, covering the responses of 15 organisations over the period 2013–14, was released in January 2015. Ten organisations were given a ‘satisfactory’ assurance rating for research integrity and five were given the highest ‘substantial’ rating. In 2014–15, 30 research organisations will be asked questions relating to research integrity.
‘The aim is to use the programme to raise awareness of research integrity in research organisations, and to provide a formal mechanism for [them] to report to RCUK on their policies, the implementation of those policies, and instances of cases [of misconduct] in the past three years’, a senior research integrity expert at RCUK tells Chemistry World. ‘It is also intended to encourage greater openness and help change culture towards it being in the research organisation’s reputational interest to publicise cases of proven misconduct. Overall this should improve research integrity in the UK and increase public trust in publicly-funded research.’
Calls for a statutory body
The concordat to support research integrity may provide assurances to government, the wider public and the international community that research in the UK is underpinned by the highest standards of rigour and integrity, but it has no regulatory power.
Richard Smith, who edited the British Medical Journal from 1991 to 2004, argues that serious scientific misconduct should be made a crime. In many cases, he says, researchers have been awarded substantial grants to carry out research, so misconduct is no different to financial fraud or theft. Researchers have often failed to deal with misconduct, he says, and our criminal justice system would do the job far more effectively. But this would require passing a law, and although research misconduct may be fairly high on the scientific agenda, says Smith, ‘I don’t think it’s high on the parliamentary agenda, and probably won’t be unless we have some startling case.’
‘Institutions aren’t inclined to take this as seriously as they should,’ he says. ‘If you’re a university and a prominent researcher turns out to be engaged in research misconduct, you can understand why you don’t want to make a big thing about that. You’d rather shove the person out the door as quietly and quickly as you can, or just do nothing at all.’ Also, he says, gathering evidence is not easy. ‘The police know how to do that – they seize computers etc – whereas the average university hasn’t much idea at all about how to investigate research misconduct.’ To complicate matters, research projects are often shared between institutions in different countries, making a thorough investigation extremely difficult.
Smith argues that science is particularly vulnerable. In the gambling trade nobody is trusted and everybody is watched closely, but in science almost the opposite is true, he explains. ‘When people ask the question “why does science misconduct happen?”, my answer is always “why wouldn’t it happen?” Whenever there are human beings there’s misbehaviour. There are substantial rewards if you can get away with it, and it can be rather easy to get away with it.’
Smith was, until a year ago, on the board of the UK Research Integrity Office (UK Rio), a charitable group that assisted in developing the concordat, and supports its implementation. ‘The concordat is a step in the right direction,’ says Smith, ‘but it doesn’t feel to me like a fully adequate response’.
He calls for a statutory body that oversees scientific research. While universities should investigate cases initially, a central body that offers support and ensures that investigations proceed effectively is needed. ‘A little bit like what UK Rio does,’ says Smith, ‘only the UK Rio doesn’t have any teeth.’
Aside from intentional misconduct, there are growing concerns about the reproducibility of published results. A recent review article in the journal Circulation Research by John Ioannidis from the Stanford School of Medicine, US, looked at recent evidence showing that most biomedical research findings published in high-profile journals cannot be replicated, not even by the researchers who first published them.
One of the papers reviewed for this article was published in 2011 by a German team at Bayer Healthcare, in the journal Nature Reviews Drug Discovery. The Bayer team, noting that their scientists often struggle to reproduce published results, analysed reproducibility in their early stage in-house projects in disease areas including oncology, women’s health and cardiovascular disease. Their scientists were asked to compare in-house data with published data for 67 projects. The published data was found to match in-house findings in only a quarter of these projects. In almost two-thirds, inconsistencies between these data sets caused problems in the target validation process. Some projects were delayed, while many were terminated because the scientists failed to find sufficient evidence for the therapeutic hypothesis to merit further financial investments.
The Bayer team suggested many reasons for these inconsistencies. Maybe the published work didn’t use the right statistical methods or the sample sizes were too small. Maybe competition between labs and the pressure to publish had led researchers to leave important information out of their materials and methods. Or maybe the bias towards publishing positive results had pushed researchers to overlook negative findings.
It is not clear if the situation is improving, says Khusru Asadullah, former head of target discovery at Bayer HealthCare, who led the study. ‘Recognising that we have a serious issue here is the first step to changing the situation.’ Researchers and publishers need to reduce bias, he says. ‘We need to be more critical of our results when they seem to confirm our hypothesis and we need to publish negative results’. Asadullah dismisses the idea that clinical research might be unusually prone to these issues, compared to other scientific subjects. ‘Why should the situation be fundamentally different in other areas?’ he asks.
Bea Perks is a science writer based in Cambridge, UK