New incentive systems take into account more than just a researcher’s publication history


Throw away your reliance on publications – it’s time to reward more of what a researcher does at work

The current preoccupation with journal metrics has skewed the system by which researchers are rewarded. It has led to a one-size-fits-all definition of excellent research that is unable to incentivise teamwork, scientific integrity and high-risk blue-skies research. ‘The good news is that everywhere in the world, but especially in Europe, people are thinking about incentives and rewards and making changes,’ says Frank Miedema, dean of the University Medical Center Utrecht in the Netherlands and a key advocate for improving academic incentives.

Rumblings against the established reward system began in the early 2000s when it became apparent that universities were not fairly rewarding all their researchers. At the University of York in the UK, ‘women were more likely than men to leave it an unnecessarily long time before applying [for promotion],’ explains Robin Perutz, who was head of the chemistry department at the time. Part of the problem came from a lack of transparency in promotion criteria, so a new system was set up that included workshops to explain to staff the range of achievement that would constitute a good promotion case at the university. ‘Essentially the same system has continued ever since,’ says Perutz. ‘I think it has had a major impact.’ Indeed, York’s chemistry department was the first in the UK to obtain a Gold Athena Swan award in 2007, recognising a commitment to advancing gender equality, and is the only department in the country to have retained the award for this long.

Other universities have taken up the good practice developed by York, including University College London (UCL) in the UK. In 2018, UCL also adopted a new promotion framework that encourages staff to put forward a more balanced portfolio of achievements than the previous research-focused assessment. Katherine Holt was promoted to professor in the first round of the new framework. ‘I had to meet professional criteria for research … but also hit some educational criteria, which were about demonstrating leadership and innovation in teaching and education, as well as in institutional citizenship and enterprise,’ she says.

One of the drivers for the framework’s introduction was improving teaching standards after poor student satisfaction ratings. ‘How do you get your academic staff to take teaching seriously if there’s actually no reward for them to do so?’ asks Holt, who was chair of the teaching committee for UCL’s chemistry department between 2014 and 2018. The move has also encouraged institutional citizenship – activities such as sitting on cross-departmental committees that support the running of the department. ‘I think it’s done a good job of encouraging less selfish behaviour,’ says Holt. ‘It encourages people to see [departmental or administrative tasks] as something that they can stamp their own ideas on.’

The credit cycle

One widely used model for understanding how science rewards researchers comes from an ethnographic lab study published in 1979, which proposed a ‘credibility cycle’. As philosopher of science Remco Heesen from the University of Western Australia explains, ‘we think of scientific work as being rewarded with credit, where credit is somewhat loosely defined as the recognition that you receive from other scientists.’ The credit is given in the form of publications, citations, awards and promotion. These in turn lead to funding, which is then invested in more knowledge creation.

The problem in recent years is that credit has widely become synonymous with metrics – specifically, the results of statistical analyses of publication and journal data. Publishing in high impact factor journals has grown increasingly important. In China, many universities even pay cash bonuses for papers in such journals – a practice launched by the physics department at Nanjing University around 1990. A study of the reward policies in 100 Chinese universities revealed that the average going rate for Science and Nature papers was $44,000 (£33,500) in 2016, and the highest payment was $165,000. The average salary of a university professor at the time was just $8600.

The problem with the metrics-heavy approach is that it has changed the type of science that scientists do and funders want to fund. ‘This is simply not good for science and others in society,’ says Miedema. For example, research into strokes has been dominated by genetic studies, rather than areas that might be of more immediate help to patients, such as improving rehabilitation; and areas of cancer research that struggle to make it into high impact factor journals have also struggled to attract funding.

In chemistry, the focus on publication metrics has led to the practice of ‘honorary authorship’, where those who have made no significant contribution to a paper are named as authors. This is a serious ethical problem, according to Perutz. The emphasis on being a first author, in order to gain the most credit, can also be destructive to interdisciplinary work. ‘To solve difficult problems, you need a team of people,’ says Perutz. ‘You should be recognising the team that’s involved and not saying [one author] is more important.’ He adds that the emphasis on producing large numbers of publications, and having to produce a new paper regularly, is stifling the types of ground-breaking research that take many years to complete.

Many departments are now trying to counterbalance the pressure of metrics. On the UCL promotion application, Holt says, ‘there is lots of space to write contextual information about [why a] paper was important’. And in 2014, Miedema moved University Medical Center Utrecht away from the numbers game and towards an appraisal system that allows for excellence in group leadership, academic culture and different types of science. The school also assesses researchers on their outward engagement, asking them if they consulted with patients and other external stakeholders when developing their research questions. The transition wasn’t and still isn’t easy, and some colleagues find the new approach difficult, but as Miedema says, issues surrounding promotion criteria are ‘discussed in the open and we are now seeing different people being promoted.’

Incentivising open science

Many disciplines are currently experiencing a ‘reproducibility crisis’ in which research has proved impossible to replicate, calling into question some fundamental results. However, the current reward system hampers efforts to increase data sharing and openness, according to epidemiologist David Moher, director of the Centre for Journalology at the University of Ottawa, Canada.

In 2017, Moher and others devised six principles for assessing scientists. As well as considering data sharing and transparent reporting of research, the principles also reflect all aspects of a researcher’s job, consider how researchers address societal needs and reward blue-skies thinking. This has been followed by the Hong Kong Principles (HKP) developed at the 2019 World Conference on Research Integrity, which focus on ensuring that researchers are explicitly recognised and rewarded for behaviour that leads to trustworthy research.

Currently few institutions reward such measures, although Moher points to Delft University of Technology in the Netherlands, which uses ‘data sharing champions’ to assist others. Importantly, Delft’s promotion and tenure criteria account for – and reward – this activity.

A lot of academics understand that changes in how we assess and reward science are needed in principle, says Moher, but now ‘the key is implementation … that’s where there’s not been as much success as there needs to be’. He is planning to publish a tool kit to help departments develop new reward systems.

High risk, high awards

But changing the reward balance might not always produce the desired outcomes. Different and sometimes opposing incentives might be needed to encourage all the behaviours considered beneficial. Heesen has looked for the sort of incentives that might favour what he calls ‘maverick’ scientists – those who undertake high risk, high reward, paradigm-changing research – over ‘followers’ who make incremental progress in well-established areas. His analysis examined the theoretical trade-off between research impact, risk of failure and speed of getting results, and concludes that current credit incentives are unlikely to help mould mavericks.

This can be seen in current funding schemes designed to reward high risk transformative research – the sorts of research a maverick might carry out. Heesen remembers funders evaluating such a scheme as a great success due to the large number of publications it produced, ‘but actually … if they had been successful at funding high risk, high reward research, there should have been quite a lot of [projects] that failed,’ he says.

Peer recognition is another feature of the credit cycle and this includes the Royal Society of Chemistry’s portfolio of over 60 awards and prizes. Alexandra Macaskill, RSC programme manager, says feedback from past winners shows awards have career benefits. ‘We send people on lecture tours and especially if they are at an early stage, they may have developed collaborations on the back of the opportunities they get,’ she says.

The RSC recently published the findings of an independent review of its recognition portfolio that investigated whether it is rewarding all aspects of excellence in chemistry. ‘There was an awareness within the organisation that the scientific environment had changed,’ says Macaskill. For example, science today depends more than ever on teamwork and collaboration.


How we evaluate researchers clearly indicates what we value most and will have a powerful influence on behaviour. ‘At the moment, people have a very narrow concept of what constitutes a good scientist and I think that is part of the problem,’ says Perutz. But Moher is optimistic that a new attitude to reward and incentive is starting to appear. ‘We do have progressive leaders in many universities who recognise the problems in the current system,’ he says. ‘We’re embarking upon slow, steady change.’