Campaigns to enforce complete and correct reporting of trial data are beginning to take hold

Publication bias is a real problem in clinical trials. The temptation to publish only favourable results is understandable. Academic researchers may worry that advertising trial failures could impact their future ability to get grants; industrial companies may be concerned about where the payback for their investment is going to come from. And scientists can get attached to pet projects and want to continue, despite trials showing that the drug is unlikely to work.

Clinical trials that give positive results are twice as likely to be published as those whose results are negative. This absence of reporting leaves a massive hole in the evidence, and introduces bias. Although the problem had been recognised for decades, it is only in the past five years that it has started to be addressed.

Large amounts of historical trial data are languishing unpublished – predominantly when the results are negative or not ‘exciting’ enough to make it into the top journals.

The AllTrials campaign aims to ensure that the results from all trials are properly reported, regardless of whether those results are positive or negative, and it has succeeded in getting the issue onto the global agenda. ‘It’s making groups like policymakers in the UK and the US, and especially the EU, realise that something must be done,’ says Síle Lane, head of international campaigns and policy at Sense about Science, and one of AllTrials’ founders. ‘It got a strong, sharp statement from the World Health Organization (WHO) saying that they expect researchers to publish results. And not doing it is misconduct.’

The campaign has also been backed by patient support groups, who encourage their members to take part in clinical trials. ‘[Patients] trust that they are doing something that would help, if not them, then patients like them in the future,’ she says. ‘When you start telling them it’s 50–50 whether the results will ever be shared and whether any doctor can ever benefit from what was found out in that trial, they get furious. It’s an enormous betrayal of their trust.’

Lane is optimistic that from now on most trials will be reported, because of new regulations and the amount of scrutiny that is now on researchers. But the past is where the problem lies. ‘You can’t write a law today that applies retrospectively. It’s in the hands of the people who have access to the data to get it out there,’ she explains. AllTrials is, therefore, trying to get groups like research funders and universities to put pressure on researchers to publish these old data.

Trials tracker tools, developed by Ben Goldacre’s team from the Evidence-Based Medicine Data Labs (EBMDL) at the University of Oxford, UK, pull in information about clinical trials from sources such as registers and journals, and use algorithms to assess how many have been reported.

The US tracker, launched in February, looks at trials on the US register clinicaltrials.gov. Under the Food and Drug Administration (FDA) Amendments Act, trials run in the US must report results within 12 months of the trial’s end. The tracker updates every day and shows how many trials are overdue; currently, about 40% of those that should have reported have not. It also shows the fines the FDA could have imposed – the agency can fine up to $11,569 (£8900) per day for each overdue trial after a 30 day notice period, but has never levied any. The total was nearing $700 million at the end of September.
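The arithmetic behind those figures is simple to reproduce. Below is a minimal sketch, assuming a handful of invented trial records, of how a tracker of this kind might flag overdue trials and tally the fines the FDA could in principle impose; the field names and example trials are hypothetical, and this is not the actual tracker code or data model.

```python
from datetime import date, timedelta

# Illustrative figures taken from the article: results are due 12 months
# after a trial ends, and the FDA may fine up to $11,569 per day once a
# 30-day notice period has elapsed.
DUE_AFTER = timedelta(days=365)
NOTICE_PERIOD = timedelta(days=30)
FINE_PER_DAY = 11_569

# Hypothetical trial records; a real tracker would pull these from
# clinicaltrials.gov and journal databases.
trials = [
    {"id": "EXAMPLE-1", "completed": date(2017, 3, 1), "reported": True},
    {"id": "EXAMPLE-2", "completed": date(2017, 6, 15), "reported": False},
    {"id": "EXAMPLE-3", "completed": date(2018, 1, 10), "reported": False},
]

def potential_fine(trial, today):
    """Days overdue beyond the notice period, times the daily fine."""
    if trial["reported"]:
        return 0
    deadline = trial["completed"] + DUE_AFTER + NOTICE_PERIOD
    days_liable = (today - deadline).days
    return max(days_liable, 0) * FINE_PER_DAY

today = date(2018, 9, 30)
overdue = [t for t in trials if potential_fine(t, today) > 0]
total = sum(potential_fine(t, today) for t in trials)
print(f"{len(overdue)} of {len(trials)} trials overdue")
print(f"Fines the FDA could have imposed: ${total:,}")
```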

A European tracker was launched in September. It highlights that almost half – 49% – of all clinical trials on the EU register have not reported results, despite rules implemented in December 2016 saying that this must be done within 12 months of the trial end. Yet the trials register does not flag up overdue trials, and there are no sanctions for non-compliance.

Both trackers highlight that companies are far better at publishing results than academics. The EU tracker currently shows that 68% of company-sponsored trials due to report have done so, compared to just 11% of those sponsored by European universities, governments, charities and research centres.

At launch, the EU tracker showed that 11 major sponsors – all companies – have reported results for all their due trials, but 32 sponsors – all European hospitals, universities and research institutes – have not reported any.

Lane says there is little surprise that the corporate world is ahead of academia. ‘Companies are thinking about it a lot more, they have more resources, and have teams dedicated to sharing information and publishing results,’ she says. ‘They work in a highly regulated environment, have to have detailed operating procedures in place, and take changes in the law very seriously. Academics haven’t felt the same public pressure.’

Research funders are beginning to recognise the issues and exert pressure by making new grant funding dependent on researchers having reported their previous trial results.

It also raises the question of whether those not complying should be allowed to start new trials. ‘Why are ethics committees giving them permission to run more trials?’ Lane says. ‘Why are funders paying for them? We are doing our best to make sure every academic working in the clinical trials world knows they have to publish results, and that people like us, and their funders and the universities they work in, are going to be asking questions of them about why they haven’t.’

One of the big incentives within academia is getting the next research grant, and Lane says it is pleasing that several big research funders, like the Wellcome Trust, the UK Medical Research Council (MRC), Médecins Sans Frontières, and the Bill and Melinda Gates Foundation, are saying they want results published and will make it a condition of future funding. The US National Institutes of Health said earlier this year that one of the criteria it will use when reviewing grant applications is whether the applicants have published the results of their previous trials. ‘That is the kind of lever that will make a difference,’ she says.

In the light of the WHO’s May 2017 statement on disclosure of results, which 11 funders ratified, AllTrials is now auditing whether the funders are living up to what they said they would do a year earlier. ‘From now on for us, it’s all about publicly showing the good and the bad behaviour and policies,’ Lane says. ‘We have done enough raising awareness.’

Data sharing

Sharing patient-level data with other researchers is another important aspect of publication, not least because it can reveal a wider picture, and avoid wasted effort. One system, clinicalstudydatarequest.com (CSDR), is now used by 15 pharma companies, plus Cancer Research UK, the MRC, the Wellcome Trust and the Bill and Melinda Gates Foundation.

In July, another data sharing platform, Vivli, was launched in the US. This evolved from a project at the multi-regional clinical trials centre of Brigham and Women’s Hospital and Harvard University in Massachusetts, US, and focuses on sharing of, and access to, individual patient data from clinical trials. At launch, it included data from 2500 trials, from members including both pharma–biotech companies and academic centres.

GlaxoSmithKline was involved in CSDR at the outset, and is also a member of Vivli. According to head of medical policy Andrew Freeman, the company believes the results of clinical trials on its medicines should be made available. ‘We thought it was important to take a leadership role to show how this could be done, and to encourage others to do the same,’ he says. ‘The ultimate aim is a model where data from all research sponsors, including industry and academia, is made available for research.’

He adds that it is important that trial sponsors share their data with others so the findings can be reproduced, and further research conducted using data from different clinical trials. ‘This needs to be done in ways that make sure the data is readily accessible and useable,’ he says.

Outcome switching

Not reporting the originally designed outcomes of a trial, or adding new ones after the trial is running – without appropriate transparency – is at best misdirection, and at worst misconduct.

Failure to report all trials is not the only problem. At the outset, the protocol for a trial has to state the outcomes that will determine its success. Outcome switching is when what is reported is not what the trial was designed to study, and the change is not highlighted.

‘This should be considered a form of trial misreporting,’ says Henry Drysdale, a junior doctor in the NHS and a co-founder of EBMDL’s Compare project. The project looked at every trial published in the five biggest medical journals between October 2015 and January 2016, and tracked how many pre-specified outcomes went unreported and how many new ones were silently added.

‘Outcome switching is when a trial is reported as if those were the outcomes all along, but different outcomes had been pre-specified,’ Drysdale says. ‘Unexpected results often arise, and can be more interesting than the pre-specified outcomes.’ Without strict measures in place to ensure the integrity of reporting against the protocol, he says, the original outcomes are more likely to be brushed under the carpet and the changed ones presented as the indicators of success all along.

Switching can sit anywhere on the scale from deliberately misleading to a genuine belief that the new outcome is a true result. ‘People can have one spurious result, and get the idea it is some kind of breakthrough,’ Drysdale says. But despite the guidelines, legislation and widespread cultural agreement, switching still persists.

There are good reasons why outcome switching matters: it produces false positives, and it allows negative or dangerous results to be hidden. About half of all clinical data points are never reported, Drysdale says, and positive outcomes are twice as likely to see the light of day as negative ones. The result is an intrinsic bias in the literature, which is carried through into meta-analyses and systematic reviews.
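To illustrate how that bias arises, here is a small, self-contained simulation; the effect sizes and publication rates are invented, loosely echoing the two-to-one ratio quoted above. If every trial of a useless treatment that happens to look positive is published, but negative ones only half the time, the published record suggests a benefit that does not exist.

```python
import random

random.seed(1)

def run_trial(n_patients=50):
    """Observed mean benefit in one trial of a treatment with no true effect."""
    return sum(random.gauss(0, 1) for _ in range(n_patients)) / n_patients

# Run many trials; the true average effect is zero.
all_results = [run_trial() for _ in range(10_000)]

# Publish every positive-looking result, but negatives only half the time.
published = [r for r in all_results
             if r > 0 or random.random() < 0.5]

print(f"True average effect:      {sum(all_results) / len(all_results):+.3f}")
print(f"Published average effect: {sum(published) / len(published):+.3f}")
```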

The Compare study – the formal results of which are currently in press – found that 58 of the 67 trials examined had discrepancies between the pre-specified and reported outcomes. ‘That could be anything from outcomes going missing, new outcomes being added in, or just a plain switch,’ Drysdale says. ‘It could also be a change in time point or change in method.’

He stresses that it is OK to change or add an outcome, if it is done properly and transparently. One might stop measuring something because it is dangerous, for example, or end part of the trial early. But it must be reported – one exciting result and 19 unreported ones is very different from 20 pre-specified outcomes, of which 19 were negative and one positive. Interesting observations that were not pre-specified should be reported as exploratory outcomes that require further investigation in a trial appropriately designed and powered to test them.
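The arithmetic behind that contrast is worth spelling out. Assuming, purely for illustration, 20 independent outcomes each tested at the conventional 5% significance threshold on a treatment with no real effect, the chance of at least one ‘positive’ result arising by luck alone is roughly 64%:

```python
# Probability of at least one false positive among 20 independent
# outcomes, each tested at a 5% significance level (illustrative only).
alpha, n_outcomes = 0.05, 20
p_at_least_one = 1 - (1 - alpha) ** n_outcomes
print(f"{p_at_least_one:.0%}")  # ~64%
```

Reported alone, that single result looks like a pre-specified success; reported alongside the other 19, it looks like what it probably is – noise.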

An audit six months after the initial study, Drysdale says, showed there was no improvement in the number of discrepancies in a very similar cohort of trials. ‘Anecdotally, from speaking to trial authors, we have been assured that changes have been made,’ he says. ‘I’m confident that if there were a continued effort of individual accountability, there would be a cultural change. Journal editors should be screaming for this at the submission stage. I guarantee that in one week they would get a revised submission with zero discrepancies. That is the bottleneck point at which it will be most effectively addressed.’