More holistic methods are needed to accurately assess the quality of a researcher’s work

[Illustration: a pie chart superimposed on a pregnant woman. Taking time out to have a child has a long-term impact on publication metrics. Source: © Mitch Blunt/Ikon Images]

March marked the anniversary of the start of my maternity leave. The research contract I was on expired in the meantime, but I have since secured a research position at a new institution. During the year I was away from science and academia to focus on my baby, I fought the slow-burning background anxiety of knowing I would not publish. Before I gave birth, I thought perhaps I could write a paper or two in between naps. When I tried, I could never focus enough to do so. Eventually I decided I was just going to enjoy motherhood and see what happened. I can't say I regret it, but I can't say I am at peace with that decision either.

Parenthood strongly affects women's careers, with trends in mothers' publication records closely following the level of attention a child requires at different stages of development.1 To a certain extent this could be expected, since women physically go through childbirth and post-partum recovery. But because childcare responsibilities also fall mostly on women, the impact on their careers is not only deeper but also longer-lasting than on men's. Even in countries where parental leave is offered, paternity leave is commonly many months shorter than maternity leave, reinforcing the implicit expectation that mothers will be the default caregivers.

Parents must choose between caring for their children and working towards their careers

There is a lot institutions and established academics can do to ease the effects of maternity leave on someone's career, such as being flexible about part-time and remote working, or supporting mums who bring their children to work or conferences.2 However, no amount of support and good intentions can change the reality of how academic careers are evaluated: if you don't publish, you don't get cited, which means your h-index (which supposedly captures productivity and impact in a single number derived from publication and citation counts) stagnates and you're automatically less competitive.
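For readers unfamiliar with the metric: a researcher's h-index is the largest number h such that they have h papers with at least h citations each. The short Python sketch below is purely illustrative, not any official implementation, but it shows how the number is typically computed and why it stagnates during a career break: with no new papers entering the list, the value can at best creep up through citations to existing work.

# Purely illustrative sketch: compute an h-index from per-paper citation counts.
def h_index(citations):
    # Rank papers from most to least cited.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        # The h-index is the largest rank at which a paper still has
        # at least that many citations.
        if count >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers cited 10, 8, 5, 4 and 3 times give an h-index of 4.
print(h_index([10, 8, 5, 4, 3]))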

Academia thus replicates a scenario pervasive in our society, in which parents, especially mothers, must choose between caring for their children and working towards their careers. The idea that there is a balance to be struck here is somewhat naïve, since mothers are automatically perceived to be less committed to the job, less competent and hence less competitive, regardless of how short their maternity leave might be. What is usually achieved is not so much a balance as a double burden on mothers who feel forced to maintain their academic output. And yet research still shows that no career makes it through parenthood unscathed. The unrealistic (and brutal) pressure that research metrics put on researchers to publish continuously necessarily excludes people who take career breaks to have children or otherwise care for others; in fact, it excludes people who take career breaks for any reason.

To cite the Leiden manifesto, which proposes a set of principles to guide research evaluation, research metrics are 'usually well intentioned, not always well informed, often ill applied'.3 Quantitative metrics such as publication and citation counts, impact factors and the h-index cannot possibly portray the quality of a researcher's work. A well-rounded paper based on careful, diligently replicated measurements takes longer to produce than a rushed job, yet quantitative metrics favour the rushing scientist over the diligent one. Quantitative metrics are also a poor measure of productivity: a part-time researcher who publishes as much as a full-time researcher is more productive, but that difference does not show up in these numbers. Furthermore, numbers of publications and citations, or even the level of attention (likes, shares, views) a paper gets, do not reflect any of the institutional responsibilities a researcher may have within their department or university. Quantitative research metrics therefore utterly fail at portraying the impact a researcher may have on their students, their institution and their community.

Since they fail to provide a holistic picture of a researcher's work, competence or impact, research metrics should not be used as the sole, or even the primary, tool for measuring research quality. Instead, to judge the quality of a researcher's work, one should take the time to read a selection of their publications and actually get to know their research. An initial round of short, online interviews can also help give a better feel for a candidate's experience. These approaches are undoubtedly more time consuming, but they constitute a fairer way to assess research proficiency.

Ultimately, it is unacceptable for research metrics to be used to filter out applicants or to choose between two otherwise equally competent candidates. The uninformed, irresponsible use of quantitative research metrics not only puts parents – mothers especially – at a grave disadvantage on the academic career path, but also does science in general a great disservice. Abusing research metrics limits access to scientific careers and thwarts diversity in scientific spaces.