Bibliometric studies of research groups are useful but should be interpreted with great care, suggest Christoph Neuhaus and Hans-Dieter Daniel

As universities face increasing competition for the best minds of tomorrow and financial resources grow scarcer, reliable indicators of research performance have become indispensable for decision-makers in science and technology. The Higher Education Funding Council for England (Hefce) is developing a new framework for assessing and funding research. Due for full implementation in 2013, the Research Excellence Framework (REF) will replace the Research Assessment Exercise (RAE). The new framework will make greater use of quantitative indicators than the RAE and will place a special emphasis on bibliometric analyses.

Compare and contrast 

Comparative analyses of research groups are particularly revealing because performance differences within universities are frequently greater than those between universities. Of course, comparative analyses are only meaningful if we compare like with like. For this reason, the average citation rate of a research group is usually normalised against the average citation rate of the fields in which it is active. Normalisation reveals whether the impact of a research group is above or below the international average in its field.
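In its simplest form (a minimal sketch; the indicators used in practice weight fields, document types and publication years more carefully), the field-normalised impact is the ratio of the group's mean citations per paper to the mean citation rate of the reference field:

\[
\text{field-normalised impact} \;=\; \frac{\bar{c}_{\text{group}}}{\bar{c}_{\text{field}}}
\]

A value above 1 indicates impact above the international average of the field; a value below 1 indicates impact below it.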

Whole journals are generally assigned to one or more subject categories. Such journal classification schemes are simple and have proved useful. However, they have their limitations. Papers published in multidisciplinary or more general journals are not assigned to a specific subject category. In the Science Citation Index (SCI), for example, general chemistry journals are assigned to the category 'multidisciplinary chemistry'. When assessing research groups working in highly specialised fields, the specificity of journal classification schemes is often insufficient. For comparative analyses of research groups, subject classification on a paper-by-paper basis is therefore unavoidable.

In the abstract  

Sections of Chemical Abstracts (CA) can serve as a basis for determining reference standards. CA assigns each paper to one of 80 sections on the basis of its main subject thrust. Unlike journal classification schemes, CA assigns papers published in multidisciplinary and general journals to a specific subject category. The specificity of CA subject classification (eg 'mammalian biochemistry') is far greater than that of the SCI scheme.

The average citation rates of research articles show that citation habits differ considerably not only between fields but also within them. These findings suggest that the SCI scheme may not be specific enough to assess the impact of research groups working in highly specialised fields.

Taking research groups at ETH Zürich, Switzerland, as an example, we can show that the assessment of research performance depends on the frame of reference. With CA-based reference standards, the field-normalised citation counts for all of the research groups are higher than when using SCI-based reference standards.

In over 20 per cent of cases studied, normalisation with SCI reference standards results in a poorer rating than with CA reference standards.  
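To illustrate how the frame of reference can shift a rating, consider a purely hypothetical group averaging 10 citations per paper, with an assumed average of 12 citations per paper in the broad SCI category and 8 in the narrower CA section (the numbers are invented for illustration only):

\[
\frac{10}{12} \approx 0.83 \quad \text{(SCI-based standard)}, \qquad \frac{10}{8} = 1.25 \quad \text{(CA-based standard)}
\]

Against the broad category the group appears below the international average; against the more specific section it appears above it, although its papers and citations are unchanged.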

Frame it 

So the two reference standards lead to different assessments of research performance. This does not call into question the validity of bibliometric analyses. Instead, the results illustrate the importance of the frame of reference.  

The impact of a group’s research is always assessed within a particular frame of reference and this varies with the specificity of the classification scheme.  

Thus, the activity of a research group working on the structure of antigens and antibodies, for example, can be assessed in the context of chemistry, biochemistry, or immunochemistry. None of these comparisons can be called fundamentally wrong; it is just that the research performance is viewed in different frames of reference. That is why bibliometric indicators require in-depth interpretation by peers who are familiar with the publication and citation habits in the specialist field. In this way, bibliometric analyses do not replace but rather complement the peer-review process.  

Christoph Neuhaus works on research evaluation at ETH Zürich, Switzerland, and Hans-Dieter Daniel holds a dual professorship at ETH Zürich and the University of Zürich 

Further Reading

J Adams et al, Scientometrics, 2008, 75, 81 
W Glänzel et al, Scientometrics, 2009, 78, 165 
K Gurney and J Adams, Chem. World, May 2008, p42 
C Neuhaus and H-D Daniel, Scientometrics, 2009, 78, 219