The emergence of self-driving labs and automated experimentation has brought with it the promise of increased rates of productivity and discovery in chemistry beyond what humans can achieve alone. But the black-box nature of AI means we cannot see how or why deep learning systems make their decisions, making it difficult to know how they can best be used to optimise scientific research, or whether their outcomes can ever be trusted.

In November 2023, a paper was published in Nature reporting the discovery of more than 40 novel materials using an autonomous laboratory guided by AI. However, researchers were quick to question the autonomous lab’s results, and a preprint followed in January 2024 reporting ‘systematic errors all the way through’ owing to issues with both the computational and experimental work.

One of the authors of the critique, Robert Palgrave, a materials chemist at University College London, UK, said that although AI had made ‘big advances’, there was ‘a bit of a tendency’ to feel that it had to change everything ‘right now’, when in reality we should not expect things to change overnight.

Source: © Marilyn Sargent/Berkeley Lab

The autonomous A-Lab robotic system, guided by artificial intelligence, was claimed to have created more than 40 new materials. However, other researchers questioned this and concluded that the compounds were already known

Milad Abolhasani, who leads a research group that uses autonomous robotic experimentation to study flow chemistry strategies at North Carolina State University in the US, says the ‘hype’ has taken over somewhat when it comes to AI and it is time to pause. ‘As humans we are great at envisioning what the future is going to look like and what are the possibilities but … you have to move step by step and make sure things are done correctly.’

The risks of relying on AI

For many, the draw of AI comes from a need to enhance productivity. ‘Whether that’s reviewing the literature faster, running experiments faster, generating data faster, the productivity outcomes of AI are very appealing,’ explains Lisa Messeri, an anthropologist at Yale University in the US. ‘And that has to do with institutional pressures to publish, to get your research done so you can do all the other things that you have to do.’

Messeri says AI also holds the tantalising ‘promise of objectivity’ – the idea that scientists are always in pursuit of tools that they feel are robust and that limit human biases and interventions. While AI could indeed provide these benefits for some research, there are risks in relying too heavily on such tools, and we need to remember the importance of including a diverse set of thinkers in the production of scientific knowledge. And, of course, AI models are only as good as the data that trains them.

There’s a rush for everyone to start doing the kind of science that’s well suited for AI tools

Molly Crockett, Princeton University

For Messeri and her colleague Molly Crockett, a neuroscientist at Princeton University in the US, who co-wrote a perspective on the topic in Nature, the risks fall into three categories, all of which arise from ‘the illusion of understanding’ – a phenomenon, well documented in the cognitive sciences, whereby we tend to overestimate how well we understand something.

‘The first risk arises when an individual scientist is trying to solve a problem using an AI tool and because the AI tool performs well, the scientist mistakenly believes that they understand the world better than they actually do,’ explains Crockett.

The other two risks concern scientists as a collective and the inadvertent creation of a scientific ‘monoculture’. ‘If you plant only one type of crop in a monoculture, this is very efficient and productive, but it also makes the crop much more vulnerable to disease, to pests,’ explains Crockett.

‘We’re worried about two kinds of monocultures,’ she continues. ‘The first is a monoculture of “knowing” – we can use lots of different approaches to solve problems in science and AI is one approach … but because of the productivity gains promised by AI tools, there’s a rush for everyone to start doing the kind of science that’s well suited for AI tools … [and the] questions that are less well suited for AI tools get neglected.’

They are also concerned about the development of a monoculture of ‘knowers’ where, instead of drawing on the knowledge of an entire team with disciplinary and cognitive diversity, only AI tools are used. ‘We know that it’s really beneficial to have interdisciplinary teams if you’re solving a complicated problem,’ says Crockett.

‘It’s great if you have people on your team who come from a lot of different backgrounds or have different skill sets … in an era where we are increasingly avoiding human interactions in favour of digital interactions … there may be a temptation to replace collaborators with AI tools … [but] that is a really dangerous practice because it’s precisely in those cases where you lack expertise that you will be less able to determine whether the outputs returned by an AI are actually valid.’

What are the solutions?

The question is, how can we tailor AI-driven tools such as self-driving labs to address specific research questions? Abolhasani and his colleague at NC State University, Amanda Volk, recently defined seven performance metrics to help ‘unleash’ the power of self-driving labs – something he was shocked to find did not already exist in the published literature.

‘The metrics are designed based on the notion that we want the machine-learning agent of self-driving labs to be as powerful as possible to help us make more informed decisions,’ he says. However, if the data the lab is trained on is not of a high enough quality, the decisions made by the lab are not going to be helpful, he adds.
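In outline, the machine-learning agent of a self-driving lab sits in a closed loop: it proposes conditions, the robotic platform runs the experiment, and the measurement feeds back to shape the next proposal. A minimal toy sketch of that loop (the simulated ‘experiment’ and the greedy agent here are illustrative assumptions, not any published system’s algorithm) might look like this:

```python
import random

# Toy stand-in for the robotic platform: a simulated 'experiment' whose
# yield peaks at 80 °C. A real self-driving lab would drive hardware here.
def run_experiment(temperature: float) -> float:
    noise = random.gauss(0, 0.02)  # imperfect experimental precision
    return max(0.0, 1.0 - ((temperature - 80.0) / 50.0) ** 2 + noise)

# Toy agent: explore at random first, then sample near the best result so far.
def propose_next(history: list) -> float:
    if not history:
        return random.uniform(20.0, 150.0)
    best_t, _ = max(history, key=lambda h: h[1])
    return min(150.0, max(20.0, best_t + random.gauss(0, 10.0)))

def autonomous_campaign(budget: int) -> list:
    history = []
    for _ in range(budget):
        t = propose_next(history)   # the agent decides the next experiment
        y = run_experiment(t)       # the platform executes and measures it
        history.append((t, y))      # every later decision rests on this data
    return history

best = max(autonomous_campaign(budget=20), key=lambda h: h[1])
print(f'best conditions found: {best[0]:.1f} °C, yield {best[1]:.2f}')
```

The append step is where Abolhasani’s caveat bites: if the experiment returns noisy or biased values, every subsequent proposal inherits that error.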

A lot of self-driving labs do not even mention what the total chemical consumption was per experiment

Milad Abolhasani, North Carolina State University

The performance metrics they describe include degree of autonomy, which covers the level of influence a human has over the system; operational lifetime; throughput; experimental precision; material usage; accessible parameter space, which represents the range of experimental parameters that can be accessed; and optimisation efficiency, a measure of overall system performance.
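As a sketch only – the field names and units below are invented for illustration, not a published reporting standard – the seven metrics could travel with each study as a simple structured record:

```python
from dataclasses import dataclass

# Hypothetical record of the seven metrics described by Abolhasani and Volk.
# Field names and units are illustrative assumptions, not an agreed schema.
@dataclass
class SdlPerformanceReport:
    degree_of_autonomy: str            # how much human influence the system needs
    operational_lifetime_h: float      # hours before breakdown or manual refill
    throughput_per_hour: float         # experiments completed per hour
    experimental_precision_rsd: float  # e.g. relative standard deviation of replicates
    material_usage_ml: float           # chemical consumption per experiment
    accessible_parameter_space: str    # range of conditions the platform can reach
    optimisation_efficiency: float     # experiments needed to reach an optimum

report = SdlPerformanceReport(
    degree_of_autonomy='human approves each batch',
    operational_lifetime_h=72.0,
    throughput_per_hour=6.0,
    experimental_precision_rsd=0.03,
    material_usage_ml=1.5,
    accessible_parameter_space='20–150 °C, three reagent ratios',
    optimisation_efficiency=40.0,
)
print(report)
```

A record like this would answer the questions Abolhasani raises below: how long the platform ran, how many experiments it completed per hour, and how much material each one consumed.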

‘We were surprised when we did the literature search that 95% of papers on self-driving labs did not report how long they could run the platform before it broke down [or] before they had to refill something,’ he explains. ‘I would like to know how many experiments can that self-driving lab do per hour per day … what is the precision of running the experiments … how much can I trust the data you’re producing?’

‘A lot of self-driving labs do not even mention what the total chemical consumption [was] per experiment and per optimisation that they did,’ he adds.

Abolhasani and Volk say that by clearly reporting these metrics, research can be steered towards more ‘productive and promising’ technological areas, and that without a thorough evaluation of self-driving labs the field will lack the information needed to guide future work.

However, optimising the role AI can play within intricate fields such as synthetic chemistry will require more than improved categorisation and larger quantities of data. In a recent article in the Journal of the American Chemical Society, digital chemist Felix Strieth-Kalthoff, alongside such AI chemistry pioneers as Alán Aspuru-Guzik, Frank Glorius and Bartosz Grzybowski, argues that algorithm designers need to form closer ties with synthetic chemists to draw on their specialist knowledge.

They argue that such a collaboration would be mutually beneficial by enabling synthetic chemists to develop AI models for synthetic problems of particular interest, ‘transplanting the AI know-how into the synthetic community’.

Looking to the future

For Abolhasani, the success of autonomous experimentation in chemistry will ultimately come down to trust. ‘Autonomous experimentation is a tool that can help scientists … [but] in order to do that the hardware needs to be reproducible and trustworthy,’ he explains.

It’s a must for the community in order to expand the user base

Milad Abolhasani, North Carolina State University

And to build this trust, entry barriers need to be lowered to give more chemists the opportunity to use self-driving labs in their work. ‘It has to be as intuitive as possible so that chemists with no expertise in autonomous experimentation can interact with self-driving labs,’ he explains.

In addition, he says, the best self-driving labs are currently very expensive, so lower-cost options need to be developed while still maintaining their reliability and reproducibility. ‘It’s a must for the community in order to expand the user base,’ he says.

‘Once [self-driving labs] become a mainstream tool in chemistry [they] can help us digitise chemistry and material science and provide access to high-quality experimental data … but the power of that expert data is when the data is reproducible, reliable and is standardised for everybody to use.’

Messeri believes AI will be most useful when it is seen as an augmentation of human researchers, rather than a replacement. To achieve this, she says, the community will need to be much more particular about when and where AI is used. ‘I am very confident that creative scientists are going to be able to come up with cases in which this can be responsibly and productively implemented,’ she adds.

Crockett suggests scientists consider AI tools as another approach to analysing data – one that is different to a human mind. ‘As long as we respect that … then we can strengthen our approach by including these tools as another diverse node in the network,’ she says.

Importantly, Crockett says this moment could also serve as a ‘wake-up call’ about the institutional pressures that may be driving scientists towards AI in order to improve ‘productivity without necessarily more understanding’. But this problem is much bigger than any individual and requires widespread institutional acceptance before any solution can be found.