What should we expect from our students?
Maybe two decades ago, my friend Thomas was working on his PhD thesis in quantum gravity. He had some awfully complicated equations to solve and started using a software application that allowed high-level algebraic manipulations. Thomas could solve in seconds equations that would have taken him days with pen and paper, and with less chance of making a mistake!
But Thomas’ supervisor was not happy with this automated solution. He claimed he could not rely on the computer’s answers. However, I suspect he had an unexpressed moral objection, a belief that using a computer to solve the problem was cheating. In the end, the supervisor compromised: Thomas could use the application but had to verify each answer in the traditional way. After a few months of outstanding progress, the supervisor dropped his demand, and Thomas was free to rely entirely on the computer.
This small drama exemplifies generational anxiety – the feeling that each generation has that the next one is taking a wrong turn.
We are at the edge of a tremendous source of generational anxiety: AI-generated texts. We have been outsourcing our work to machines (like Thomas with his equations) for decades. But for the first time, we are letting those machines take over the writing process itself. We are about to experience the biggest revolution in intellectual production since the invention of writing. We would be fools not to be anxious.
ChatGPT is a sophisticated language model that can generate natural language responses to user-generated prompts. It has been trained on vast quantities of text data using a form of artificial intelligence called deep learning. As a result, ChatGPT can offer information on a wide range of topics and engage in a friendly dialogue with users. More importantly, the text it produces is similar to what a human might write.
Students will use ChatGPT and other AI tools for writing their essays, research papers and personal statements. A small industry of detection software is already emerging. However, AI-text detectors are prone to false negatives and, worse, false positives.
Even if AI-text detection worked, how long would it take until the next AI language model beat it? Do we really want to be dragged into such an arms race?
Like many, I am also concerned about the unforeseeable consequences of AI texts. Something whispers to me that it is wrong, that it is cheating. Maybe what scares me the most is that AI texts may render writing skills as outdated as knowing how to use log tables for multiplication.
But I am trying to be open-minded and not succumb to generational anxiety. No one can force the genie back into the bottle: AI texts are here to stay. We are fast-tracking toward a future of personal computer assistants that quickly answer any of our requests. And this is wonderful!
I should not worry that my students will abuse these tools. I should be happy about the new avenues they will open for their lives and careers. And I should accept that my ways of gauging right from wrong in intellectual production will soon be outdated.
The crucial question that bothers many is whether it is right to claim the authorship of an AI text. From any reasonable authorship definition, the answer is clearly no.
However, a more appropriate question might be whether claiming authorship of intellectual work reported through an AI text is right.
Then, I would be more willing to say yes.
Contemporary intellectual work is usually much more than writing. It requires conceptualisation, methodology design and investigation. We must handle data curation, validation and visualisation. We should ensure resources and software are available. Any project sinks if it cannot count on funding, administration and supervision. If everything goes well, we will finally write.
With so many activities involved, it is not surprising that many intellectual works have become a collective enterprise. You can legitimately be a paper author without contributing to its writing.
Indeed, many academics and academic publishers use the Contributor Roles Taxonomy system (or CRediT), which defines 14 roles typically played by researchers. A person is considered an author if assigned to any of these roles. If we accept that someone can be an author of a paper another person wrote, why can’t they be the author of a paper written by an AI?
Instead of fighting AI technology, we should embrace it and help the next generation use it ethically and optimally. Indeed, the most important lesson we can teach our students is accountability. If they sign an essay – whether written by them, a co-author or an AI – they are automatically responsible for its content. If it is wrong or causes physical damage, financial losses or moral injuries, they cannot claim ignorance.
Being the human in the room is a role they cannot abdicate.