Statistical analysis finds high peer review scores are linked to more ‘impactful’ research

An analysis of US grant applications found peer review panels were good at predicting the best research

The role of peer review in assessing grant applications has often been the subject of debate among scientists. Now, a recent analysis by US researchers suggests the system is working well. It found that review panels were good at predicting higher-quality research and did not, on the whole, favour researchers with past accomplishments or elite employers.

Danielle Li of Harvard University and Leila Agha of Boston University analysed over 130,000 research grants funded by the US National Institutes of Health between 1980 and 2008. Each grant was awarded a score during peer review. The researchers then looked at these grants’ scientific outcomes several years later, assessing them in terms of publications, citations and patents.

The researchers found that higher peer review scores were consistently associated with better research outcomes; these projects had more publications and citations, and generated more patents than those with lower scores. This relationship persisted even when they included detailed controls for an investigator’s publication and grant history, institutional affiliations, career stage and degree types. They conclude that peer reviewers can look beyond an investigator’s reputation or record and ‘contribute valuable new insights’ about the scientific quality of grant applications. However, they point out that the system is not perfect, as mistakes and biases still occur occasionally.
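The check the researchers describe is, in essence, a regression of research outcomes on review scores with controls for an applicant’s record. The Python sketch below is a rough illustration only, not the authors’ code or data: it builds a synthetic dataset (all variable names here are hypothetical) and tests whether the review score still predicts citations once controls are included.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the NIH grant records analysed by Li and Agha.
# All variables and magnitudes are invented for illustration.
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "score": rng.uniform(0, 100, n),        # peer review score for the grant
    "pub_history": rng.poisson(10, n),      # applicant's past publications
    "career_stage": rng.integers(1, 4, n),  # 1 = early career, 3 = senior
})
# Simulate citation counts that rise with both review score and track record
df["citations"] = rng.poisson(
    np.exp(1.0 + 0.01 * df["score"] + 0.02 * df["pub_history"])
)

# Regress (log) citations on the review score, controlling for the
# applicant's record and career stage
model = smf.ols(
    "np.log1p(citations) ~ score + pub_history + C(career_stage)",
    data=df,
).fit()

# A positive coefficient on 'score' after controls is what "the
# relationship persisted" means in the study described above
print(model.params["score"])
```

The real analysis uses far richer outcome measures and controls; the sketch only shows the shape of the argument, namely that the score coefficient remains positive once an investigator’s history is held fixed.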

Kieron Flanagan of the University of Manchester, UK, says the work is an interesting and welcome contribution to a very small but growing literature. However, he points out that the analysis considers only funded projects, not rejected proposals, and that rejected projects may later have been funded by other organisations and gone on to have an even greater scientific impact.

Flanagan also questions the assumption that scientific ‘impact’ can be measured by big-hitting papers and patents. ‘But it’s not clear to me that this is necessarily always the case,’ he says. ‘For instance, really transformative research might take much longer to have measurable impacts than ‘standard’ high quality work. Or it might even not be picked up at all by these indicators. And funders who want to prioritise high risk research might have to be willing to tolerate more failure, with implications for peer review.’

He also notes that these results are for one funder in one country and one set of disciplines. The situation may be different in other disciplines or for other funders. It may be that the NIH just have really good selection processes for reviewers and grants, he adds.

‘The researchers have found convincing evidence that peer review, on the whole, continues to do a reasonably effective job,’ comments science policy researcher Ben Martin of the University of Sussex, UK. ‘The work is rigorous and thorough, and statistically they deal with a large number of grants.’ A novel feature, he adds, is that they look at indirect patents (where patents cite a research grant), which widens the assessment of research impact.