One year on from ChatGPT’s launch, does it offer hope or hype for science?

Chatbots could help chemists, but their limitations need to be understood

It’s been a year since ChatGPT burst onto the world stage. The AI-powered text generator was notable for its convincing, natural-sounding responses, and it quickly proved popular – it’s now approaching 200 million users. But the question remains whether the large language models (LLMs) that power tools such as ChatGPT will prove useful to science or simply a distraction.

‘Chatbots’ aren’t a new invention; they can be traced back as far as the 1960s. What is new is the ability to train them on huge amounts of data, and that has supercharged the field in recent years. Some of the bots trained on the roiling, messy mass of data that is the internet have, perhaps unsurprisingly, embarrassed their ‘parents’ by going on foul-mouthed tirades. And researchers quickly discovered that while sophisticated chatbots such as ChatGPT could return sensible answers to basic scientific questions, they soon stumbled when challenged with something more technical. Instances such as these have led critics to dismiss LLMs as mere ‘plausible sentence generators’. Clearly, training LLMs on the internet without filtering it for content and reliability isn’t going to produce a useful AI lab assistant. So what’s the alternative?