UK higher education body launches investigation into the pros and cons of metrics in managing research
A committee set up by Hefce (Higher Education Funding Council for England) aims to grapple with the thorny issue of using metrics to assess and manage research. Traditional metrics include analyses of journal articles and citations, but the field has expanded to include altmetrics, which track what people say about a paper online.
Hefce’s review will consider the role that such metrics might play in determining quality, impact and other key characteristics of research in the higher education sector. But the review will also consider the negative effect of metrics on research culture.
‘Individual researchers are experimenting with different tools, but to what extent do we want metrics to carry real weight in overall evaluations of the research system? That is a serious question to think through,’ says James Wilsdon, professor of science and democracy at the University of Sussex, UK, and chair of the independent review. The committee hopes to complete its work by spring 2015, in time to influence the next assessment of UK universities, the Research Excellence Framework.
The review will consider how a metric approach fits with the missions of universities and research institutes and look at what can, and cannot, be measured quantitatively. The inappropriate use of metrics will also be considered. ‘We want to avoid any rush to adopt particular metrics or indicators that may exacerbate negative features of the research system,’ says Wilsdon, citing the inappropriate use of journal impact factors in promotion decisions.
A central question is whether measuring an activity influences the way that activity is carried out, says Henry Rzepa, a computational chemist at Imperial College London, UK. The h-index, which counts how many of a scientist’s papers have received at least that many citations, can be misused by ‘consortiums or cartels’ whose members agree to cite each other’s papers. ‘This activity has no scientific purpose, but it does mean the members of the cartel benefit by having their h-count increase quite dramatically,’ says Rzepa. ‘I know this happens in chemistry.’
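To make the h-index concrete: it is the largest number h such that an author has h papers each cited at least h times. A minimal sketch of that calculation (the function name and sample citation counts are illustrative, not from any real author’s record):

```python
def h_index(citations):
    """Return the largest h such that the author has h papers
    with at least h citations each."""
    counts = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:   # the rank-th paper still has >= rank citations
            h = rank
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4 and 3 times: four papers have >= 4 citations
print(h_index([10, 8, 5, 4, 3]))  # 4
```

This also illustrates why the measure disadvantages early-career researchers, as Colquhoun notes below: however heavily cited a single paper is, the h-index can never exceed the total number of papers published.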
Pharmacologist David Colquhoun at University College London, UK, is even more sceptical: ‘The h-index is terrible. It is biased against young people.’ He argues that you assess quality by reading scientific papers, not by looking at phony metric numbers. He predicts that if the steering group recommends more use of metrics it will result in ‘more dishonesty, more corruption, more gaming’. He has argued on his blog that many papers topping the altmetrics charts are of little consequence, and says this measure may even be inversely related to quality.
Colquhoun adds that the biggest corrupting influence is probably impact factors and the importance given to ‘glamour journals’, which are well regarded because of their high impact factors and were recently criticised by Nobel prize-winning cell biologist Randy Schekman. ‘I think almost all practising scientists regard metrics as corrupting honest science. It is the hangers-on, like Hefce, the government bodies and to some extent research councils [that are taken with them].’