ChatGPT – A new relationship between humans and machines
The appearance of a computer programme that can produce texts that could seemingly have been written by a human has caused quite a stir, particularly in the education community. Early on, widely shared examples presented automatically produced essays that were, if not brilliant, good enough to pass some exams. Likewise, automated texts were shown to be useful for designing tests and even whole courses.
Some higher education institutions, such as Sciences Po in France, have placed strict limits on the use of this tool.
Sciences Po refers to its anti-plagiarism charter and to its academic regulations, under which a case where “a piece of work does not allow for distinguishing between the student’s own thought and that of other authors” is considered plagiarism.
However, as more students, teachers, researchers and indeed many others have begun to work with ChatGPT on academic texts, its limits have become increasingly clear. Instead of safeguards and bans, arguments are increasingly being made that machine-generated texts should be embraced as part of education.
Understanding how ChatGPT works
Understanding how this type of artificial intelligence works is key to assessing its benefits and risks. ChatGPT has not been designed to provide correct information. Rather, it has been designed to mimic human-made texts by predicting, word by word, which words are likely to follow one another in the kinds of texts it has been trained on.
If ChatGPT is asked to produce a list of academic works on a specific topic, it will provide a list of books or articles that sound plausible, but most of which will not actually exist. ChatGPT has no critical filter to evaluate the content of what it produces, nothing that in any way automates critical thinking in the academic sense. This is because it deals with strings of words, not with what lies behind them.
However, when the words come out in a pattern that the tool has seen many times before, the content will in many cases be correct.
For instance, ChatGPT will correctly state who is considered to have discovered America in 1492 or who was King of Denmark in the year 1500, but it cannot answer questions about the here and now, as it was trained on material from before 2021.
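To make the mechanism concrete, the following toy sketch in Python shows the underlying idea of next-word prediction. It is emphatically not ChatGPT’s actual architecture (a large transformer network trained on vast amounts of text); the tiny training text and the continue_text helper are invented purely for illustration.

```python
# A toy next-word predictor: it only learns which word most often
# follows which in its training text. It has no notion of truth,
# which is why frequent phrasings come out "correct".
# (Illustrative sketch only; ChatGPT is a vastly larger neural model.)
from collections import Counter, defaultdict

training_text = (
    "columbus discovered america in 1492 . "
    "columbus sailed west in 1492 . "
    "columbus discovered america in 1492 ."
)

# Count, for each word, which words follow it in the training data.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def continue_text(prompt: str, steps: int = 5) -> str:
    """Greedily extend a prompt with the most frequent next word."""
    out = prompt.split()
    for _ in range(steps):
        candidates = follows.get(out[-1])
        if not candidates:
            break  # word never seen in training: nothing to predict
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(continue_text("columbus"))
# -> "columbus discovered america in 1492 ."
# Correct only because this phrasing dominates the training data,
# not because the model "knows" any history.
```

Swap in a training text full of plausible-sounding but invented book titles, and the same mechanism will happily reproduce those instead, which is essentially how the fabricated reference lists described above arise.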
Indeed, ChatGPT seems to be less useful in precisely those academic contexts where new knowledge, dealing with ambiguity and questioning existing beliefs are at stake. It is more likely to be helpful in cases such as exams that require clear answers drawn from a well-established body of work, with little need for references.
For example, ChatGPT would probably be able to give good answers for a driving theory test. It is also a helpful tool for texts that need a particular structure; ChatGPT is eerily good at creating poems, for example.
ChatGPT has only limited use for academic texts, but it can still be problematic. The ban from Sciences Po cites concerns about academic integrity. ChatGPT has also been used to write academic articles, and in several cases the tool has been credited as an author.
However, according to an article from Nature, many publishers do not accept that a digital tool can be a co-author, as it has not made a scientific contribution and cannot be held accountable for the content it has created.
ChatGPT as an opportunity
Others see ChatGPT as an opportunity. At the University of Namur in Belgium, professors explicitly encourage students to create texts with the tool in order to see where machine-generated text differs from what they have learned or from their own writing. In part, this gives learners the chance to see the limits of these kinds of tools.
One could argue that texts produced by artificial intelligence sum up already established knowledge and that they are well suited as points of departure for going beyond it with a critical approach (I have tried to imagine this use for improving policy-making).
One of the members of the team behind ChatGPT has also pointed to the possibility of using machine-generated texts to embrace diversity, as the technology can be used as a chatbot for personalised learning. Presumably, such tools would first have to start giving correct answers, but this might not be far off, as large search engines are launching their own chatbots.
Humans and machines
Questions about using artificial intelligence go much further than cheating on exams or generating texts for scientific articles. They are about university values, in particular the integrity of academic work at all levels, but they are also about exploring the new relationship between humans and machines.
Some standardised tasks can be automated using tools like ChatGPT. It writes good emails and small, simple texts. However, its output often needs critical review and editing. It could well be that, through this process of polishing the raw, machine-made text, we become more aware of the differences between humans and machines and learn to value our creativity and playfulness.
As underlined in a recent statement from the European University Association, ChatGPT raises questions for universities in terms of updating policies to take these kinds of tools into account while safeguarding academic integrity.
This should be seen as part of the continuous development of learning and teaching and the discussions regarding recognition of course work and authentic assessment.
In a broader perspective, an academic culture based on critical thinking and awareness of the workings, opportunities and risks of artificial intelligence should be well equipped for a future with ChatGPT.
Thomas E Jørgensen is director of policy coordination and foresight at the European University Association (EUA), where he also coordinates the work of EUA’s Digital Transformation Steering Committee.