[Image: Twitter illustration. Source: © Gary Waters/Ikon Images/Getty Images]

The first-ever analysis of all the tweets discussing Crispr gene-editing has revealed how the public feels about the technology. The findings suggest that over six years public approval of gene-editing has cooled, and that most people chatting about the technology are worried.

Twitter offers researchers a vast amount of semi-structured data and provides direct access to the content of conversations. This goldmine of data on the public’s thoughts and feelings gave the team at the Swiss Federal Institutes of Technology in Lausanne and Zurich the idea of assessing the sentiment of each tweet as neutral, positive or negative. Mean sentiment toward gene-editing was initially judged to be very positive, but then began to decline, driven by rare peaks of strong negative sentiment, the researchers reported in a pre-print that is awaiting peer review.

The controversy surrounding the November 2018 announcement by Jiankui He that he had edited two viable human embryos that were then implanted and gave rise to twin girls led to the highest average daily Twitter count on this topic and elicited a strong negative response. The very first negative dip with high media attention, however, occurred in July 2018, when a Wellcome Sanger Institute study warned about serious side-effects, such as cancer, from the therapeutic use of Crispr.

The strongest negative sentiment occurred in February 2019, when biohackers encoded malware into a strand of DNA. Only a small number of hashtags were associated with negative feeling about the technology, the most prominent being #crisprbabies. ‘The frequency and magnitude of these dips has increased since 2017, which is underlined by the overall declining sentiment,’ the paper notes. ‘The dips usually coincide with high activity, meaning that most people are only exposed to the topic of Crispr when it is discussed in an unfavourable way.’

The researchers used recent advances in text classification models, with supervised machine learning. To deduce sentiment, a predictive model was first trained on a manually annotated subset of the data before being applied to more than one-and-a-half million tweets. Sentiment toward Crispr was found to be most negative when the technology was used in humans, but mostly positive for other organisms. Some hashtags, such as #genetherapy, were linked to very positive sentiments, which the authors interpret as enthusiasm for the technology’s medical potential.
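The general approach — train a classifier on a small hand-labelled subset, then apply it to the full corpus — can be illustrated with a toy sketch. This is not the study’s actual model (the pre-print describes more sophisticated text classifiers); it is a minimal Naive Bayes example on invented tweets, using the same positive/neutral/negative labels:

```python
# Toy sketch of supervised sentiment classification: train on a small
# annotated subset, then predict labels for unseen tweets. The training
# examples below are hypothetical, not from the study's dataset.
from collections import Counter, defaultdict
import math

def tokenize(text):
    """Lowercase whitespace tokenisation -- a deliberate simplification."""
    return text.lower().split()

def train(labeled_tweets):
    """Count words per label for a Naive Bayes model."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    vocab = set()
    for text, label in labeled_tweets:
        label_counts[label] += 1
        for w in tokenize(text):
            word_counts[label][w] += 1
            vocab.add(w)
    return word_counts, label_counts, vocab

def classify(model, text):
    """Pick the label with the highest log prior + smoothed log likelihood."""
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best, best_score = None, float('-inf')
    for label in label_counts:
        score = math.log(label_counts[label] / total)          # log prior
        denom = sum(word_counts[label].values()) + len(vocab)  # Laplace smoothing
        for w in tokenize(text):
            score += math.log((word_counts[label][w] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

# Manually annotated training subset (all examples invented)
annotated = [
    ("crispr gene therapy could cure disease", "positive"),
    ("amazing progress in gene editing trials", "positive"),
    ("crispr babies scandal is deeply unethical", "negative"),
    ("serious cancer risk from gene editing", "negative"),
    ("new crispr paper published today", "neutral"),
    ("conference session on crispr methods", "neutral"),
]
model = train(annotated)
print(classify(model, "unethical crispr babies experiment"))  # → negative
```

In the study itself the trained model was then run over the full corpus of more than 1.5 million tweets, a step that is trivial to scale since classification requires no further labelling.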

‘It is an impressive technical achievement,’ says Michael Morrison, a social scientist at the Centre for Health, Law and Emerging Technologies, University of Oxford, ‘but I would have liked to see a little bit more attention to who tweets about Crispr.’

Academic researchers, social scientists and bioethicists, lawyers, biotech analysts and bioartists were all found to have tweeted about Crispr. ‘These are really all different conversations and different communities talking to each other,’ says Morrison. ‘You might have someone cross about a US court’s ruling on the Crispr patent, but then to code that as negative, it seems to me that you will lose some of the nuances.’

Dietram Scheufele, a social scientist at the University of Wisconsin-Madison, US, says that this work can help researchers get a handle on what worries people about new technologies, but shouldn’t replace other ways of assessing public sentiment. ‘With the arrival of big data, there has been a temptation for researchers to focus on easily available data because real public opinion is much more complicated to measure,’ he says. ‘Just because there seems to be a pattern in large datasets like Twitter conversations doesn’t mean that there is a valid finding about the population at large.’

Social scientists compare this approach to a drunk person seeking lost car keys under a bright streetlight where things are easy to see rather than in the dark alley where they lie. ‘Twitter is a little bit like this,’ Scheufele says. ‘It’s easy to scrape and analyse Twitter data. But that doesn’t mean it’s where we should be searching if we want to figure out what public opinion really looks like.’