Could AI free academics up or increase the pressure to publish?

It has been less than four months since OpenAI released ChatGPT, its chatbot built upon the GPT-3.5 large language model, but it already feels like we have been talking about generative artificial intelligence (AI) forever.

The release was perfectly calibrated to generate a viral hit as users throughout the world shared screenshots of their often eerie, occasionally erroneous, conversations through social media. Its evident capacity to produce plausible answers to descriptive questions provoked anxiety throughout higher education, raising the spectre of familiar instruments of assessment being rendered redundant overnight.

To the extent that scholarship has figured in these discussions, it has been restricted to the question of ChatGPT being cited as a co-author on papers. After at least four instances in which the chatbot was credited in this way, Science and Springer Nature have prohibited the practice.

This illustrates the speed with which norms surrounding the use of generative AI are being formed within the sector, as well as the discussions that will govern their development over time.

For example, Science’s editor-in-chief argued that text produced by ChatGPT would be seen as plagiarism whereas Nature has allowed its use under specific documented circumstances.

These early reactions are likely to establish the parameters in which future controversies will play out, with the release of GPT-4 and its integration into Microsoft Office likely to turbocharge these discussions.

Questioning our research productivity

While the formal attribution of authorship is clearly an important question, it raises a deeper issue about why authors would seek help from automated systems and what this suggests about the current state of academic publishing.

When multiple generations of academics have internalised the imperative to ‘publish or perish’, how will they respond to a technology that promises to automate significant elements of this process? Is there a risk that the capacity to automate aspects of the writing process will simply lead to more writing?

Important issues remain to be clarified about what constitutes acceptable use of GPT within different domains of academic practice, but there is a broader challenge here in terms of how our conceptions of scholarly productivity have escalated in an accelerating academy.

One estimate from 2015 suggested that around 34,550 journals published around 2.5 million articles per year. A later study from 2018 found more than 2.5 million outputs in science and engineering alone, highlighting how annual growth rates over a 10-year period ranged from 0.71% in the United States and 0.67% in the United Kingdom to 7.81% in China and 10.73% in India.

Obviously, there are factors at work here other than escalating expectations of scholarly output, such as the international growth of scientific fields and the intellectual interconnections generated by the digitalisation of academic publishing.

But if we accept the premise that generative AI has the potential to automate parts of the writing process, then it increases how many outputs we can produce in the same amount of time. Imagine what annual outputs might look like globally if generative AI becomes a routine feature of scholarly publishing.

At a crossroads

Why do we publish? In my experience academics can be weirdly inarticulate about this question. It is what we are expected to do and it is therefore what we do, often with little overarching sense of the specific goals being served by these outputs other than meeting the (diffuse or explicit) expectation of our employers.

In a quantified academic world, it is far too easy to slip into imagining countable publications as an end in themselves. These are conditions in which technologies that change the ratio of time invested to outputs produced could prove extremely seductive.

If this technology is taken up in an individualised way, reflecting the immediate pressures that staff are subject to in increasingly anxiety-ridden institutions, the consequences could be extremely negative.

In contrast, if we take this opportunity to reflect on what we might use this technology for as scholars and why, this could herald an exciting shift in how we work that reduces the time spent on routine tasks and contributes to a more creatively fulfilling life of the mind.

Dr Mark Carrigan is a lecturer in education at the University of Manchester in the United Kingdom, where he is programme director for the masters in digital technologies, communication and education. He directs the Post-Pandemic University project, which is an international network comprising an online magazine, podcast hub and conference series. He is the author of Social Media for Academics, published by Sage and now in its second edition. He tweets at @DrMarkCarrigan. This blog was first published on the London School of Economics and Political Science (LSE) Impact of Social Sciences blog. It gives the views and opinions of the author and does not reflect the views and opinions of the Impact of Social Sciences blog, nor of the London School of Economics and Political Science.