The academic publisher Springer Nature is donating its artificial intelligence (AI) tool for detecting AI-generated text in research manuscripts to the International Association of Scientific, Technical & Medical Publishers (STM).

The tool, which Springer Nature launched for its own use a year ago, will be integrated into the STM Integrity Hub, a major initiative that gives publishers a cloud-based environment for checking submitted articles for research integrity issues, along with access to third-party tools.

The tool divides each paper into sections and uses its own algorithms to check the consistency of the text in each one. Every section is then assigned a score based on the probability that its text has been AI generated: the higher the score, the greater the likelihood of integrity issues. This enables publishers to prioritise papers for assessment. The tool has already been used to detect hundreds of fake papers soon after submission, ensuring that they do not go on to be published.
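The workflow described above can be pictured as a simple triage loop: split a manuscript into sections, score each section, and surface the riskiest submissions first. The Python sketch below is purely illustrative and assumes nothing about Springer Nature's actual system; the split_into_sections and score_section helpers are hypothetical placeholders, with a dummy heuristic standing in for the real detection model.

```python
# Illustrative triage sketch only. The section splitter and the scoring
# heuristic below are hypothetical stand-ins, not Springer Nature's tool.

from dataclasses import dataclass


@dataclass
class SectionScore:
    heading: str
    probability_ai: float  # 0.0 (likely human) to 1.0 (likely AI-generated)


def split_into_sections(manuscript: str) -> dict[str, str]:
    """Naively split a manuscript on blank lines; a real pipeline would
    parse the document structure (abstract, methods, results, ...)."""
    blocks = [b.strip() for b in manuscript.split("\n\n") if b.strip()]
    return {f"section_{i}": block for i, block in enumerate(blocks, start=1)}


def score_section(text: str) -> float:
    """Placeholder consistency check: a dummy probability based on how
    uniform the sentence lengths are. A production detector would apply
    trained models instead."""
    sentences = [s for s in text.split(".") if s.strip()]
    if len(sentences) < 2:
        return 0.0
    lengths = [len(s.split()) for s in sentences]
    spread = max(lengths) - min(lengths)
    return max(0.0, 1.0 - spread / 20.0)  # low spread -> higher suspicion


def triage(manuscript: str) -> list[SectionScore]:
    """Score every section and sort so the highest-risk parts come first."""
    scores = [
        SectionScore(heading, score_section(text))
        for heading, text in split_into_sections(manuscript).items()
    ]
    return sorted(scores, key=lambda s: s.probability_ai, reverse=True)


if __name__ == "__main__":
    paper = "An intro paragraph here.\n\nA methods paragraph. Another sentence here."
    for result in triage(paper):
        print(f"{result.heading}: {result.probability_ai:.2f}")
```

In this sketch the paper-level priority would simply be the maximum section score, which mirrors the article's point that high-scoring sections are what flag a submission for closer assessment.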

Chris Graf, director of research integrity at Springer Nature and chair of the STM Integrity Hub Governance Committee, said developing the AI tool had been a ‘major investment’ and ‘long-running project’ involving close collaboration between leading research integrity and AI teams. ‘We are delighted we will now be able to share this technology with the wider publishing community so it can have an even bigger impact,’ he added. ‘The rise of AI has made it easier for unethical individuals to generate fake content, and tools like this one, which harness the power of AI and pattern recognition, will be vital.’