Nandita Quaderi

Source: Courtesy of Clarivate

Nandita Quaderi says the incentives to publish in academia need a complete overhaul. Currently, the system encourages bad behaviour and there’s little in the way of incentives to do things properly

In the past three years, well over a hundred journals have been delisted by Clarivate, the owner of Web of Science. This is the result of the analytics firm getting tougher on journals failing to meet its quality selection criteria. The Taylor & Francis journal Bioengineered is one of the latest to fall foul of the crackdown after it was delisted in April. Staff at the journal discovered that the publication had a paper mill problem that has already led to over 80 papers being retracted, with hundreds more under suspicion, according to one analysis. Delisting is a serious penalty for a journal, as it means that Clarivate will no longer index its papers, count its citations or give the title an impact factor – something important for many academics who need to provide evidence of the impact and importance of their work to funders.

Nandita Quaderi, senior vice president at Clarivate and editor-in-chief of Web of Science, spoke to Chemistry World about what prompted the organisation to step up its screening processes, how the new system works and what she thinks needs to change in academic publishing to disincentivise bad practices such as paper milling.

Clarivate is getting tougher on journals not meeting quality standards – why?

It’s always been the case that when a journal comes into the Web of Science, we can’t guarantee that it’s going to be in there forever; we use the same criteria to bring a journal in as we apply to decide whether a journal needs to go out. [But] we’ve got 22,000 journals, so we don’t re-evaluate every journal every year.

We’ve seen, as has everyone else, that there has been an increasing level of sophistication in the activity that is going on to gain publication and citation numbers [and] we realised that we needed better ways to find needles in haystacks.

So that’s what we have been developing over the last few years: AI tools that help direct our editors’ attention to journals that should be prioritised for re-evaluation. These tools don’t make the decisions for the editors, but they do help the editors know where to look.

How does the new system work?

We have a first-level screening of every journal in the Web of Science that looks at top-level journal characteristics – for example, a sudden increase in size or a change in author demographics. We compare these against the journal’s previous activity and against all the other journals in a given category.

If we see something that looks off, we will then do a deep dive using two other tools [to] look at whether the content in the journal is appropriate to its stated scope and whether the references are appropriate – those are the two problems we tend to see, especially with paper mill content.
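To make the idea of that first-level screen concrete, here is a purely illustrative sketch in Python: a toy outlier check that flags a journal whose latest annual output jumps far outside both its own history and its category peers. The function name, inputs and thresholds are assumptions invented for the example; Clarivate’s actual tooling is not public and will be considerably more sophisticated.

```python
from statistics import mean, stdev

def flag_suspicious_growth(history, peer_counts, z_threshold=3.0):
    """Flag a journal whose latest annual article count is an outlier.

    history      -- yearly article counts for one journal, oldest first
    peer_counts  -- latest-year counts for the other journals in the
                    same subject category
    """
    *past, latest = history
    if len(past) < 3 or len(peer_counts) < 2:
        return False  # not enough data to judge

    # Outlier versus the journal's own publishing history
    mu, sigma = mean(past), stdev(past)
    own_z = (latest - mu) / sigma if sigma else (float("inf") if latest > mu else 0.0)

    # Outlier versus peers in the same subject category
    peer_mu, peer_sigma = mean(peer_counts), stdev(peer_counts)
    peer_z = (latest - peer_mu) / peer_sigma if peer_sigma else 0.0

    return own_z > z_threshold and peer_z > z_threshold

# Example: output more than triples in the latest year and dwarfs peers
print(flag_suspicious_growth(
    history=[120, 130, 125, 140, 450],
    peer_counts=[90, 150, 130, 110, 100],
))  # True -> prioritise this journal for human re-evaluation
```

In this toy version a flag only prioritises a journal for human review, mirroring the point above that the tools direct editors’ attention rather than make decisions.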

Are these issues becoming more common or just more difficult to spot?

Both. Our tools are better, but there’s also more stuff to find. If we just look at the period since we introduced the tools in 2023, the first wave of journals that we delisted contained really obviously inappropriate content – articles about yoga in a biochemistry journal – there was no attempt to hide it. Now, the most common reason we are delisting journals is inappropriate citations, which takes a much deeper dive to find; people are becoming sneakier.

In terms of subject area, it was mostly in the clinical area, [but now] we’re seeing a shift to the applied physical sciences – computer science and engineering. And it’s not just journals; we’re also seeing a lot of fake conference proceedings. This dates from the pandemic, when conferences started to become hybrid or totally virtual; it has created another opportunity for bad actors.

Are special issues more vulnerable to being exploited by paper mills?

Yes. Special issues, per se, aren’t a problem. It’s when a special issue is administered outside the normal editorial workflow of a journal. Special issues have been seen by some journals as a way to increase content without increasing the editorial overhead. That’s when we have problems: when guest editors come in who aren’t part of the standard editorial team and the content is not even being seen by the editor-in-chief of the journal or any of the in-house staff.

The methods that are being used are evolving all the time – how do you keep track?

What we judge a journal on is what we can see; we have no idea what’s going on behind the scenes, and we don’t see the peer reviews unless it’s an open peer review journal.

In terms of what to keep our eyes open for, it’s a mixture of people coming to us – we’ve got lots of trusted relationships within the community, be it librarians or researchers or the sleuths we work with quite a lot. But the things we see are getting more sophisticated. It’s a tech arms race at the moment. Tech is helping the bad actors create fraudulent content, but it’s also helping people like us catch that content.

What is the process for delisting a journal from the Web of Science?

Before a journal gets delisted, we put the journal ‘on hold’, which means that we don’t process any further content while we do our investigation. Typically, a journal is on hold for six weeks.

There’s a real spectrum of responses from publishers when we put a journal on hold. Some are really helpful, some deny there’s anything wrong. We’re trying to encourage the publishers that are acting responsibly to come to us.

Our long-standing policy has been that if we delist a journal, there’s an embargo period of two years before that journal can submit to be re-evaluated and relisted. Now we have another policy alongside that, which says that if a publisher is transparent and proactive in its investigations we will allow it up to 12 months to get its act together before we do our investigation.

[But] we want them to do a proper investigation on their own, not just piggyback on the representative articles we send them.

What are the main challenges in clamping down on journals that fail to meet these criteria?

Keeping up to date. For decades, there’s been this sense that a high-impact journal is the same as a high-quality journal [but] that link has broken because of this pressure to increase citations.

One of the things we’ve done to put our money where our mouth is concerns which journals are eligible to get a journal impact factor. Prior to 2023, only journals in [Science Citation Index Expanded] and [Social Sciences Citation Index] – the journals with the greatest scholarly impact in the sciences and social sciences – were eligible to have an impact factor.

In 2023, we extended the journal impact factor to all journals within the Web of Science. So, regardless of how impactful the journals were, we wanted to send a clear signal that what’s important is trustworthiness.

Scholarly impact without trustworthiness means nothing. If these journals are just trying to increase citations without paying proper heed to making sure what they publish is trustworthy, that’s not the kind of content we want in the Web of Science.

Is this behaviour driven solely by the pressure to increase citations?

Publish or perish as a scholarly incentive framework is a problem. But it’s not just that. It’s the permissive environment we’ve created that’s allowing paper mills and other fraudulent entities to thrive.

There is pressure on researchers to have as many citations as possible, because that’s how they are rewarded. But universities also benefit from more citations, because they go up the rankings. Publishers that use a business model where there is a direct correlation between volume and revenue also profit from more volume.

It’s not open access (OA) that’s a problem, but the APC [article processing charge] model does create an incentive for publishers to publish more. If we look at the proportion of journals that we have delisted, there is an overrepresentation of OA APC journals.

What needs to change?

The entire scholarly incentive system needs to change. We need to look at more responsible ways of doing research assessment. The incentive to cheat is clear, and until there’s an equal and opposite disincentive, it’s going to be hard to get rid of.

[But] not all untrustworthy content is malicious … there is a role for education, particularly in developing research economies, to teach what proper research integrity means. Research integrity can’t just be at the publication level; it has to be there from the very beginning, when you’re planning your experiments, when you’re looking at the literature.

This interview was edited for clarity and brevity.