
Learning to live with ChatGPT

If you’ve spent the southern summer taking a break from news and social media, you may have missed it – but for the rest of us trying to dodge the drip-fed details from Prince Harry’s tell-all memoir, news channels have been buzzing with the launch of ChatGPT, a new online application released in November by OpenAI.

ChatGPT generates text that is almost indistinguishable from text written by humans, and it is already clear that this technology will disrupt education, as well as having implications for other aspects of our lives.

Already, there is evidence that industry is looking to ChatGPT for expert advice, perhaps alongside or as an adjunct to seeking out research experts, and the technology is being actively deployed in creative endeavours.

All this interest raises the questions: is ChatGPT really a tool that might revolutionise the way we synthesise and use knowledge, or is it, as Australian singer-songwriter Nick Cave has stated, “a grotesque mockery of what it is to be human”? And should schools and universities be worried?

Those of us in the education sector ought to be concerned if we simply see ChatGPT as something to fear and suppress; but if we think creatively, we might just be able to turn this to our advantage. Unmanaged, ChatGPT presents real risks to academic integrity, student well-being, staff workloads and a school or university’s reputation.

What is ChatGPT?

ChatGPT is one of several writing applications and bots that claim to be able to create text, such as QuillBot, Wordtune, Outwrite and Essayailab, but it appears to be more sophisticated than most of the others.

Its capabilities range from writing drafts based on existing text, generating summaries of articles and YouTube videos, identifying coding errors and solving maths problems, to generating course syllabi, assessment topics and rubrics, creating fake references and potentially even grading assessments.

The ChatGPT innovation roadmap also includes the development of a new application that will allow it to create audio recordings that mimic the user’s own voice.

It does have some limitations though. As a text-based artificial intelligence (AI) model, by its own admission ChatGPT doesn’t have the capability to provide real-time data or statistics, and its knowledge base currently only goes up to 2021.

What could the benefits be?

Education discussion forums and mainstream media alike are awash with posts expressing concerns about the potentially negative impact of ChatGPT on how we currently teach and assess students and the implications for academic integrity.

While these are real concerns, we also need to consider its potential as a ‘transformer technology’ with the capability to multiply our current abilities: improving writing and communication, perhaps, in the same way the calculator facilitated more accurate calculations, with real possibilities for supporting second-language students and those with communication and learning difficulties.

In the current education environment, where students are often more focused on the collection of marks than on the process of learning itself, it is inevitable that those returning to class in 2023 will be turning to ChatGPT.

Already, at universities we are seeing requests to return to pre-digital pen-and-paper assessments and some calls for in-person, on-campus supervised examinations. There is a real risk here that well-intentioned knee-jerk reactions to ChatGPT will result in a move away from authentic assessments to more conservative and potentially old-fashioned teaching approaches.

Simple prevention and mitigation measures will further increase teaching staff workloads. This is particularly pertinent for those education providers who see digital and blended delivery modes driving greater access to higher education. Put simply, we don’t want to take a step backwards.

AI literacies

Unfortunately, there are no quick and easy solutions.

It is essential that leaders in the education sector ensure that our courses are designed and delivered in ways that prioritise engaging and meaningful teacher-student and peer-to-peer interactions, and that we think about how to support the development of ‘AI literacies’ within our curriculum in the same way as we support the development of other academic and digital literacies.

What is clear though is that we ignore this at our peril.

The proactive management of academic integrity in the context of increasing infringements, together with a firm commitment to educative rather than punitive approaches, has been a topic of discussion for several years, but we must now tackle this head-on.

Reviewing how we conduct assessment, developing educative resources for staff and students, and prioritising student and staff education rather than imposing ‘blanket bans’ or punitive approaches focused on ‘catching’ users of ChatGPT will be key here too.

While there are apps already on the market that claim to be able to detect the use of AI tools, and Turnitin is working on a similar service, this approach is likely to become something akin to an arms race, one in which we will always be a step behind.

Above all, we need to ensure that our curricular focus remains on good assessment design and on developing the critical thinking and academic skills that will mean students are less likely to resort to AI.

Professor Giselle Byrnes is provost at Te Kunenga ki Purehuroa Massey University in New Zealand.