Philip Ball rakes through the findings of new research into the h-index and unearths some top tips for citation-hungry researchers

Now own up - did the contents panel of this issue leave you slavering over the prospect of adding a few notches to your h-index? The description there - my suggestion - was a bit of shameless promotion, the scientific equivalent of promising to reveal how to improve your sex life. For the fact is that h-indices have become objects of obsession, making researchers fret over whether theirs is big enough and how to make it bigger.

I’m not mocking. There is every reason to keep an eye on your h-index, because it is widely used, semi-formally, to assess job and tenure applications, and bibliometric quantification is becoming a determinant of status and funding. But just in case this has all passed you by, let me explain that the h-index is a bibliometric measure of the impact of an individual’s research, proposed in 2005 by US physicist Jorge Hirsch of the University of California at San Diego. It is equal to the highest number h of your papers that have each received at least h citations. At the last count, the chemists with the highest h-indices include George Whitesides, E J Corey and Martin Karplus. Other related indices have since been proposed, but none has quite the same simplicity or transparency, and, most importantly, none has caught on to the same degree.
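
Stated as a procedure, the definition is simple: rank your papers by citation count and find the last rank at which the count still matches or exceeds the rank. The Python sketch below is my own minimal illustration of that calculation (the function name and the example numbers are invented, not taken from Hirsch’s paper):

```python
# Minimal sketch: compute an h-index from a list of per-paper
# citation counts. Rank papers by citations, descending, and find
# the largest rank h whose paper still has at least h citations.
def h_index(citations):
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4 and 3 times give h = 4: four papers
# have at least 4 citations each, but there is no fifth with 5.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```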

Now, I haven’t entirely led you here on false pretences. Of course, the best way to improve your h-index is to write better papers, but that is probably not what you want to hear. So here’s another suggestion: publish in Science or Nature. That sounds a little obvious too, but the suspicion that the journals with the highest impact factors accrue a disproportionate number of citations has now been quantitatively verified by Vincent Larivière and Yves Gingras at the University of Québec in Montreal, Canada.2 They find that such journals enjoy a ‘rich-get-richer’ advantage, known as the Matthew effect.3 To measure the strength of this effect, Larivière and Gingras surveyed a ‘natural experiment’, identifying over 4000 papers that had been published in essentially identical form in more than one journal. This allowed them to assess the relation between outlet and citation count for papers of identical value. (It also reveals the extent of the highly questionable practice of duplicate publication, and the inefficiency with which it is policed.) They find that papers published in a high-impact-factor journal receive on average about twice as many citations as the ‘same’ paper in a lower-impact-factor journal.

Interdisciplinary insight 

Larivière and Gingras also enable me to offer a second piece of advice: make your papers moderately interdisciplinary. In a separate study,4 they looked at how citation statistics vary with the degree of interdisciplinarity, both in the sciences and the humanities.

Although the trends are complex (and not easy to tease out in the first place), one common element was that papers deemed to have either very low or very high interdisciplinarity were cited less often than ones somewhere in between.  

Again, this makes intuitive sense: narrowly focused papers have a small target audience, while very general papers risk being too vague. But given how chemistry can sometimes seem prone to specificity (it’s not alone in that regard, mind), it seems worth knowing that a slightly broader field of view can enhance your visibility.

Everyone knows that the h-index is imperfect in other ways too. Any bibliometric index is a rather narrow indicator of the value of a researcher’s oeuvre. And although the h-index is harder to distort - by self-citation, say - than a raw citation count, it is not immune to such manipulation.5

There is also the problem that it assigns a single value to very different publication profiles. For example, imagine two people with h-indices of 30: one a young researcher who has written 30 papers, each of which received 100 citations; the other an old hand who has written 500 papers, most of which garnered scant attention but 30 of which clawed their way to 30 citations. Who would you rather employ?
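
The two records are easy to compare with the h_index sketch given earlier; the citation counts below are the invented ones from my example above, not real publication data:

```python
# Hypothetical records from the example above, checked with the
# h_index function defined in the earlier sketch.
young = [100] * 30                # 30 papers, each cited 100 times
old_hand = [30] * 30 + [1] * 470  # 30 well-cited papers among 500

print(h_index(young))     # -> 30
print(h_index(old_hand))  # -> 30: the same h for very different careers
```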

To address this sort of issue, Lutz Bornmann and his colleagues at ETH Zurich have proposed an addendum to the h-index that quantifies the shape of the profile: how many papers it contains that have fewer, and how many more, than h citations.6 They suggest two further indices that measure these portions of the distribution, which enable distinctions between ‘perfectionists’ who publish relatively few but consistently significant papers, ‘prolific scientists’ who publish regular, good-quality papers, and ‘mass producers’ who generate a prodigious output of mostly little consequence. Now you can discover which of these you are. And so can everyone else.
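
Bornmann and colleagues define their indices precisely in their paper; the sketch below is only my rough illustration of the underlying idea - splitting a record around its h value - and reuses the h_index function and the two hypothetical records from the earlier sketches:

```python
# A rough illustration only: count the papers cited more often than h
# (the strong core) and less often than h (the weak tail). Bornmann's
# actual indices are defined in ref 6; this simple split is my own.
def profile_shape(citations):
    """Return (h, papers cited more than h times, papers cited fewer)."""
    h = h_index(citations)
    above = sum(1 for c in citations if c > h)
    below = sum(1 for c in citations if c < h)
    return h, above, below

print(profile_shape(young))     # -> (30, 30, 0): consistently strong record
print(profile_shape(old_hand))  # -> (30, 0, 470): long tail of little-cited work
```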

Philip Ball is a science writer based in London, UK