Report claims wider impact of academic work was fairly assessed in UK’s Research Excellence Framework
Last year, research carried out at UK universities was graded as part of the Research Excellence Framework (REF), which will help to determine how much money each university receives. Controversially, research impact was included as a new criterion, worth 20% of an institution’s score. Now a study has concluded that the new component, despite attracting some complaints about the extra effort it required, was worthwhile and assessed fairly.
At the request of the UK’s higher education funding bodies, RAND Europe, an independent not-for-profit research institute, conducted two evaluations of the impact component. The first asked 21 higher education institutions about their experience of compiling case studies and impact strategies. The second study examined the assessing panels’ performance.
We may have to be willing every five to six years to do this and see it as an opportunity
RAND found that academics and institutions reported some benefits, such as developing the ability to identify and understand impact, and the ‘stimulation of broader strategic thinking’. While the exercise placed demands on resources and time, research users asked to provide evidence did not find the process ‘overly burdensome’. The report concludes that REF 2014 contributed to a cultural change within institutions, with academics thinking more deeply about the effects of their work.
For the second study, RAND surveyed academics and research users on the judging panels. By a large majority, they felt they had been able to assess impact in a ‘fair, reliable and robust way’. The report identified some areas for improvement and discussion to help subsequent REF exercises and other countries planning to run similar systems. These include how to manage variations in the way the process was conducted; how to avoid the risk of unsubstantiated and false claims being made; and how to clarify the processes for assessing different kinds of impact.
‘Given that the process for assessing impact was new, our evaluation shows that it worked, and it worked well,’ comments Catriona Manville, senior analyst at RAND. ‘The academics and research users who carried out the assessments felt that the process was fair.’
Richard Catlow, professor of materials and inorganic chemistry at University College London, UK, and chair of the chemistry judging panel, agrees the exercise worked well. ‘The community was a little apprehensive initially, but the outcome is a very good advert for UK science in general and chemistry in particular.’ Although the exercise does take time and resources, he argues that it is time well spent. ‘We may have to be willing every five to six years to do this and see it as an opportunity. There is a cultural change happening, and it is a good thing to get people to think about research in this way. It is not dictating the research agenda.’
However, Ralph Kenna of Coventry University, UK, who was part of a team that predicted REF 2014 results using metrics, is concerned about how impact submissions are calibrated. Although the process may attempt to adjust for bias, he does not see how it can account for the margins of error associated with different assessors. His second criticism concerns transparency. ‘We still don’t know details of the criteria for different levels of impact and how these were implemented across disciplines. It is a great pity that we do not receive our submissions returned and graded so that we know precisely what REF are actually looking for in terms of impact (and everything else).’