In 2013, the Research Excellence Framework replaces the Research Assessment Exercise in rating every chemistry department in the UK. Leila Sattary weighs up the evidence


Preparing for the Research Excellence Framework (REF), like its predecessor the Research Assessment Exercise (RAE), is a huge task of immense importance to universities in the UK. Not only does the exercise determine how a £2 billion annual budget is allocated between institutions, it also establishes ‘star-ratings’ for research performed at academic departments and affects their position in league tables. Funding and reputations are on the line.

The REF, led by the Higher Education Funding Council for England (Hefce), will assess the quality of research across all subject areas in all UK universities in the period 2008–2013. For universities in England and Wales, the outcome will determine their funding from 2015 onwards. This funding is crucially important to universities as, unlike project-centred funding from Research Councils, industry or charities, it is money with very few strings attached, allowing institutions to use it flexibly to support their research effort. Along with funding for specific projects, this Hefce block grant is the main source of income for research in UK chemistry departments.

RAE versus REF

Assessing the quality of academic research is not a straightforward task. Peer review of academic papers has been the favoured approach, but it requires a huge effort from the universities in preparing their submissions of some 200,000 papers, and from Hefce and the assessment panels in reading and rating them. After the previous assessment in 2008, it was proposed that the next exercise should instead be metric-based, cutting out time-intensive peer review in favour of scores calculated from journal impact factors and citation counts. However, while this approach may have been acceptable to the medical sciences, it did not go down well with the rest of the academic community.


With a metric-driven approach rejected, the REF 2014 has reverted to principles similar to those of its predecessor, the RAE. Each member of staff included in the submission must nominate their four best papers (or other research outputs, such as patents) to be rated on a scale of 0 to 4 for scientific excellence. These outputs will comprise the largest component of the assessment, contributing 65% to the final score. Unlike in the RAE 2008, citation data will be available, but it will not act as a substitute for panel members’ expert judgement. No other metrics, such as journal impact factor, will be used, nor will the panel attempt to assess the contribution of individual authors to multi-author papers.

The number of ‘units of assessment’ subject panels has been reduced from 67 to 36 and many disciplines have been brought together into single panels. For example, geography and archaeology are now united and many of the humanities panels have been combined, causing much controversy. Chemistry, however, remains as a single expert review panel. 

The submissions to the REF will be rated as 4* (world-leading), 3* (internationally excellent), 2* (recognised internationally), 1* (recognised nationally) and U (unclassified). An overall quality profile will be produced from the ratings and, combined with a full time equivalent (FTE) multiplier, will ultimately determine the size of the slice of the funding cake received by each unit of assessment (a single or set of departments) at each university.  

In common with the RAE, an essay is required describing how the unit provides an environment conducive to excellent research, along with indicators of academic esteem. The extent to which this essay portrays a research environment full of ‘vitality’ and ‘sustainability’ will be worth 15% of the quality profile.
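
As a rough sketch of how these weightings combine into an overall quality profile (the sub-profile percentages below are invented for illustration, the 20% impact weighting is the remainder implied by the 65% and 15% figures above, and the grade point average shown is a common shorthand rather than Hefce’s published funding formula):

```python
# Illustrative combination of REF sub-profiles into an overall quality profile.
# Weights: outputs 65%, impact 20%, environment 15% (impact is the remainder
# implied by the figures in the article). All sub-profile numbers are invented.

WEIGHTS = {"outputs": 0.65, "impact": 0.20, "environment": 0.15}

# Each sub-profile: percentage of activity judged at each star level.
sub_profiles = {
    "outputs":     {4: 20, 3: 45, 2: 30, 1: 5, 0: 0},
    "impact":      {4: 30, 3: 40, 2: 25, 1: 5, 0: 0},
    "environment": {4: 25, 3: 50, 2: 20, 1: 5, 0: 0},
}

def overall_profile(subs, weights):
    """Weight the three sub-profiles into a single overall quality profile."""
    return {star: sum(weights[c] * subs[c][star] for c in weights)
            for star in (4, 3, 2, 1, 0)}

profile = overall_profile(sub_profiles, WEIGHTS)
gpa = sum(star * share for star, share in profile.items()) / 100

print(profile)            # roughly {4: 22.75, 3: 44.75, 2: 27.5, 1: 5.0, 0: 0.0}
print(f"GPA: {gpa:.2f}")  # roughly 2.85
```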

Both the outputs and environment components of the assessment are familiar and relatively well understood. The academic community would have felt very comfortable with the new REF were it not for ‘impact’.

Impact


‘Impact’ has been a word that has haunted academics since 2009, when the Research Councils introduced a new assessment criterion for research grants requiring applicants to describe their plans for ensuring their research has an impact outside of academia. Pressure from the government to demonstrate the economic and social value of research has also manifested itself in impact forming a new assessment criterion in the REF. Despite protest marches and letters of objection from many established researchers, the REF will continue as planned, attempting to assess the extent to which departments have been successful in impacting on the world around them on the basis of their research excellence.

The impact component of the submission has two sections, describing the unit’s strategy for developing impact and a series of impact case studies. These case studies must cover specific examples of where their research, undertaken between 1993 and 2013, has had a demonstrable impact on the economy or society. This must be backed up by evidence to reinforce the ‘reach’ and ‘significance’ of that impact. 

Despite some academics’ obvious contempt for the ‘impact agenda’ over recent years, there is a feeling of acceptance of impact as another hoop to jump through. The discontent in the community has been somewhat quietened by the loosening of the definition to include social as well as economic impacts, and by funders’ increasing understanding that impacts from fundamental research can take decades to materialise. The best outcome of the impact agenda would be if the hundreds of REF impact case studies currently being written by universities provided overwhelming enough evidence to persuade government to direct more funding towards research.

Chemistry panel

Being on the panel that will make the judgements is clearly a big commitment. It places a lot of pressure on the time of the individuals involved, and on ensuring the exercise fairly reflects the strengths of UK chemistry as a whole. The chemistry panel has 17 members, 14 of them drawn from a cross-section of UK universities and two from industry. Chaired by Richard Catlow from University College London, the panel has a broad base of expertise covering the discipline. It will also have the option of bringing in additional experts to help with outputs, or additional user assessors for parts of the impact assessment.

We are in a good position to assess impact, but it is the new component

The panel guidance was published in January 2012 and the panel has not met since. In 2013, ahead of the submission deadline, Catlow will lead the panel in evaluating how its collective expertise maps onto the expected shape of the chemistry submission, and in recruiting additional assessors to fill any gaps. ‘We have already identified that we need additional expertise in synthetic organic chemistry to assist with the outputs and impact,’ says Catlow. He is also clear that while the sheer volume of outputs will be time consuming, the biggest challenge for the panel will be assessing impact. ‘We are in a good position to assess impact, but it is the new component,’ he acknowledges.

Hefce has put concessions in place to encourage universities to submit the work of all of their excellent researchers, whether or not they have a long publication record. Early career researchers can be returned with fewer outputs without penalty. Hefce has also put more obligations on universities to comply with the Equality Act 2010, to avoid the low submission rates among eligible female, black and disabled researchers seen in the RAE 2008. Hefce also hopes that allowing individuals whose circumstances have significantly constrained their ability to work productively to be returned with fewer outputs will persuade universities to include researchers whose career paths have deviated from the norm.

Playing the game

Although the REF is designed to be a fair assessment of university research excellence, there are still many games that can be played to improve the outcome. When a small improvement in the final quality profile might make a difference of potentially millions of pounds, it is easy to understand why universities are putting lots of effort into the REF preparations. 


Universities are not required to submit all of their academic staff to the REF. To optimise the overall rating, staff members who have not produced high-quality papers will probably be excluded from the exercise entirely by their host institutions. Whether someone is ‘REFable’ is a key consideration in job applications in the run-up to the census date in October 2013. This is also leading to a high degree of volatility in the system: hiring, firing and poaching research stars are already widespread behaviours, and are likely to peak at the start of the next academic year.

Deciding who to submit and who to exclude from the REF is the key decision for universities. Although Hefce has the power to change the funding formula in the future, the expectation is that it will only fund 3* and 4* research, with 4* attracting three times as much funding as 3*. Such a heavy bias towards 4* research has the potential to create odd behaviours in the system.

Submitting more people gives a higher FTE multiplier and is more likely to lead to more money. However, this strategy could lead to a lower average rating, which would affect league table position and reputation. Some units of assessment in some universities will have very little research that would be rated as above 2*. They might choose to forgo the money and instead only submit their very best people with the intention of gaining a high place in the rankings. These kinds of behaviours have the potential to create misleading pictures. 
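
A small numerical sketch of that tradeoff may help (the head counts, quality shares and simple funding model below are invented for illustration; Hefce will not publish its actual funding algorithm until after the assessment):

```python
# Hypothetical illustration of the funding-versus-ranking tradeoff in deciding
# how many staff to submit. Assumes funding is proportional to FTE multiplied
# by a weighted quality score in which 4* counts three times 3* and lower
# grades earn nothing -- consistent with the expectations described in the
# article, but not Hefce's published algorithm. All figures are invented.

def funding_score(fte, profile):
    """Relative funding: FTE x (3 x 4* share + 1 x 3* share)."""
    return fte * (3 * profile[4] + 1 * profile[3])

def gpa(profile):
    """Grade point average -- the figure league tables tend to highlight."""
    return sum(star * share for star, share in profile.items())

# Strategy 1: submit only the 20 strongest researchers (higher average quality).
selective = {4: 0.35, 3: 0.50, 2: 0.15, 1: 0.0, 0: 0.0}
# Strategy 2: submit all 40 researchers (more volume, lower average quality).
inclusive = {4: 0.20, 3: 0.40, 2: 0.30, 1: 0.10, 0: 0.0}

print("selective:", funding_score(20, selective), "GPA", gpa(selective))  # ~31.0, ~3.2
print("inclusive:", funding_score(40, inclusive), "GPA", gpa(inclusive))  # ~40.0, ~2.7
# The inclusive submission attracts more money despite the lower GPA and
# league-table position -- the tradeoff described in the surrounding text.
```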

Nick Norman, head of the school of chemistry at the University of Bristol, UK, is responsible for the REF submission for his department. ‘There is a tradeoff between money and star-rating and it is up to the institutions to determine their own policy,’ he says. Graeme Rosenberg, REF manager at Hefce, is well aware of the politics. ‘We will also compare the volume and profiles from the RAE and look out for odd behaviours,’ he confirmed. 

With reputations and so much money at stake, there are risks for individual institutions in choosing the right strategy for their submission. However, there are also risks for the sector as a whole. With a clear focus on 3* and 4* research, institutions are likely to submit only researchers who fall squarely into those categories. With fewer people being submitted to the REF, this could give those outside academia the false impression that the volume of research is in decline. Rosenberg is not convinced universities will take this strategy. ‘When people talk about only submitting 3* and 4* people, it is always at someone else’s institution,’ he says.

Predictions and preparations

Predicting the outcome of the REF is difficult. The full extent to which universities have successfully played the game and how panels have interpreted the rules will only be known when the assessment is over. Catlow feels that the REF is similar to the RAE in many ways and we should expect a broadly similar outcome. ‘It is based on peer review and the bulk of the mark is for outputs. However, impact could alter things and it is going to be very interesting to see how it works out,’ he says. Norman feels that there is still a lot of uncertainty in how impact will be assessed. ‘We have no experience on how it will be judged and this does cause some nervousness,’ he admitted. 

Hefce will not publish the funding algorithm until after the assessment has taken place. ‘There is always going to be some uncertainty in terms of the outcome of the REF process, although different eventualities can be modelled,’ says Norman. The University of Bristol, like most institutions, is already deep into its REF preparations, and is running two ‘mock REFs’ to ensure it is fully prepared. ‘Starting in plenty of time is essential and it can’t be rushed,’ says Norman. ‘The information required can take a long time to collate.’

The long process does not end with the submission deadline in late November 2013. Throughout 2014 the panels will assess the submissions, with the final verdict published that December. And with the result informing funding decisions from 2015, the next year is crucial for the future of UK chemistry departments.

Leila Sattary is a science writer based in Oxford and is currently involved in preparing the REF submission for the University of Oxford’s physics department