‘Prediction market’ trial in chemistry departments suggests less arduous way to prepare for research assessment
Getting chemists to bet on the outcome of the UK’s Research Excellence Framework (REF) system for assessing university research quality could trim its costs. Psychologist Marcus Munafò from the University of Bristol and colleagues trialled a ‘prediction market’ for chemistry department performance just before the REF2014 results were released. The predictions correlated strongly with actual REF outcomes.
‘We’re not claiming to replace the REF,’ Munafò stresses. ‘But as universities prepare for the next cycle they can use prediction markets to gauge how they’re doing relative to their competitors in a potentially much quicker and more cost-effective way.’
An independent report commissioned by the Higher Education Funding Council for England (Hefce) estimates the total cost of REF2014 to English higher education institutions was £212 million. For the previous Research Assessment Exercise in 2008 that cost was £47 million. Suggested improvements include using metrics that incorporate journal citations; however, these won’t entirely replace REF’s detailed – and therefore expensive – peer review.
The prediction market is like a stock market, trading via a website or smartphone app. Sixteen participants from a number of chemistry departments were given 10,000 points, worth £30 in total. Over two weeks, they spent these points trading contracts specifying one of 33 departments and predicting one of eight ranges that its REF score would fall in. Each contract would pay 100 points if the real REF outcome fell in that range, and contract prices varied with trading. Munafò’s team built a final predicted REF score from the number of contracts for each department in each range.
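The aggregation step described above can be sketched in code. This is an illustrative reconstruction, not the authors’ actual method: the eight score ranges and their midpoints below are hypothetical, and the sketch simply weights each range’s midpoint by the number of contracts held for it.

```python
# Hypothetical midpoints for the eight REF score ranges (illustrative values,
# not taken from the study).
RANGE_MIDPOINTS = [1.125, 1.375, 1.625, 1.875, 2.125, 2.375, 2.625, 2.875]

def predicted_score(contracts_per_range):
    """Return a department's predicted REF score as a contract-weighted
    average of range midpoints.

    contracts_per_range: list of eight contract counts, one per score range.
    """
    total = sum(contracts_per_range)
    if total == 0:
        return None  # no contracts traded for this department
    return sum(n * m for n, m in zip(contracts_per_range, RANGE_MIDPOINTS)) / total

# Example: a department whose contracts cluster in the upper ranges.
holdings = [0, 0, 0, 5, 10, 40, 30, 15]
score = predicted_score(holdings)  # 2.475
```

A weighted average is only one plausible way to collapse contract holdings into a single predicted score; the study could equally report the modal range or a full distribution per department.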
Overall predictions correlated strongly with actual REF scores – but not perfectly, Munafò warns. He also admits that the small number of participants was ‘an important limitation’, and concedes that recruiting chemists proved difficult. However, the psychologist stresses that predictions from a larger pool of participants should be even more accurate, and that departments can mandate staff participation. ‘If instead of time-consuming mock peer review exercises you engage with this prediction market for a week, maybe five minutes a day, I think that’s a good trade-off,’ Munafò says.
Imperial College London’s Tom Welton finds the concept ‘interesting and fun’, and may explore its use in his role as dean of the faculty of natural sciences. However, he is concerned that running it in a single university would bias that university’s predicted position. Munafò accepts this possibility, but stresses that the effects of strategic departmental decisions could still be monitored by regularly repeating the exercise.
M R Munafò et al, R. Soc. Open Sci., 2015, DOI: 10.1098/rsos.150287