An evidence-based approach could help chemists forge better investigative partnerships, says Mark Peplow

The lone scientist, toiling in splendid isolation, is a dying breed. Today, collaboration is key, and research networks increasingly span nations and disciplines. But what are the essential ingredients of a fruitful research collaboration?

Two recent studies have now provided some answers, by looking at how the size and shape of collaborations affect the impact of their work, and by surveying researchers themselves on the factors for success.

Ideal partners

In the first study, a team of sociologists put nine interdisciplinary collaborations under the microscope.1 These networks – all judged successful by their funders – each included up to 15 researchers working in at least three different fields. Using a mixture of questionnaires, face-to-face interviews and ‘fly on the wall’ observations, the sociologists teased out what the researchers in each collaboration believed to be the key factors responsible for their success. These were then categorised into three strands: cognitive, emotional and interactional.

Perhaps unsurprisingly, 82% of researchers said that cognitive factors – such as the expertise within the group – were crucial to success. Even more flagged interactional factors, such as leadership or a convivial social relationship between co-workers. But 58% also singled out emotional factors such as joy, passion and excitement, not least because these helped to motivate the work.

‘Joy and passion’ may not cut much ice with a funder that wants to know whether a project is delivering. But they are important, not least because excited scientists are invariably productive scientists. And the strong social bonds forged between researchers can persist throughout their careers, generating long-term benefits for their fields.

Michèle Lamont of Harvard University, US, part of the team behind this study, acknowledges that researchers may already sense that these factors are influential. But she says that the institutions that participated are finding the results extremely useful because they offer empirical evidence on what gives their research networks an advantage.

Strength in numbers

With the elements for a happy and productive bunch of scientists in place, we also need to know how big the group should be. To answer that question, a team led by David Hsiehchen, a clinical medical fellow at Harvard University, gathered data on around 24 million research papers published between 1973 and 2009. They looked for trends in the number of authors and countries involved in each paper, and calculated how many citations per author each paper had gathered. This gave a rough indication of the impact generated by the human resources invested in the project.2

The team found that single-author papers made up about 35% of publications in the early 1970s; by 2009, that had declined to 10%. In contrast, medium-sized teams of five to eight authors went from less than 10% to 30%, and papers by larger groups rose from almost nothing to 10%. Meanwhile, the proportion of papers by authors of a single nationality fell from 95% in 1973 to less than 80% by 2009.

It’s no shock that larger and more international groups are becoming more common. But do they actually deliver better science?

Up to a point. As groups got larger, their papers did indeed gather more citations per author, suggesting that the whole was greater than the sum of its parts. But once the groups swelled beyond 18 researchers, the citation rate per author dropped precipitously. Hsiehchen attributes this to ‘diseconomies of scale’, indicating that a collaboration has become so big that it is no longer efficient – researchers may start to duplicate their colleagues’ work, for example, or get bogged down by bureaucracy and communication difficulties. Adding more nationalities to a collaboration offered more modest gains in citation rate, and the benefits tailed off when the team comprised researchers from more than five countries.

Clearly, there are exceptions – more than 5000 people were involved in pinning down the mass of the Higgs boson – but the cautionary conclusion of Hsiehchen’s study is that very large collaborations do not necessarily produce the greatest return on investment.

Measure for measure

Researchers may well feel that their activities are already being subjected to more than enough measurement. In November, the UK’s Department for Business, Innovation and Skills launched a consultation on the future of higher education, confirming that the Research Excellence Framework – the UK’s vast assessment of researchers’ output – will be retained, and suggesting that a Teaching Excellence Framework could be added alongside it.

Since metrics are here to stay, it is vital to use a broad palette of methods to assess research quality. As well as peer review and smarter publication metrics, sociological assessments could also play a useful role. Lamont notes that scientists can sometimes be rather sceptical about this approach to their work: ‘They’re rigorous about their science, but when it comes to the social world then intuition suffices.’

But if the science of science can help us to develop better research collaborations, it should be embraced.

Mark Peplow is a science journalist based in Cambridge, UK

Not only research collaborations could use an overhaul – scientists are also playing it too safe when it comes to choosing topics for investigation, says Philip Ball