Beyond the ChatGPT hype

Large language models can be powerful tools for chemistry if we acknowledge their limits

Working in a technology role within pharma chemistry, I find it currently impossible to avoid conversations about the large language model (LLM) ChatGPT. The chatbot quickly became the fastest-growing consumer application ever, notable for making conversation with (loosely defined) artificial intelligence accessible to anybody. I signed up with my existing Google account and can chat with the bot simply by typing plain English into a website. ChatGPT is also very powerful. It can summarise long texts or dig out specific information, give advice on the tone of emails, edit or write code, make suggestions for projects, or do pretty much anything you ask it to – as long as you accept that the answer might be wrong. That combination of power and accessibility has made the bot the latest technology hype.

Tech buffs will be familiar with the Gartner hype cycle, a model of how expectations for any new development rise and fall over time. Typically, once a breakthrough becomes popular, there soon follows a peak of hype and inflated expectations, often bolstered by marketers trying to make money. There’s a gold rush feeling, where everyone tries to get on board because they don’t want to be left behind. But when people take on new tools for the wrong reasons, or a use case simply doesn’t work out, the peak gives way to a trough of disillusionment. Users are disappointed it hasn’t changed the world and made them a cup of tea at the same time, and companies relying on the tech may fail. Lastly, if the technology is genuinely worthwhile, it eventually reaches a business-as-usual plateau of productivity, with neither unwarranted disillusionment nor overexcitement.