
Intelligent responses needed for artificial intelligence
The advent of new open artificial intelligence (AI) models and the implications for teaching, learning and assessment practices have in recent months been the topic of many debates in institutions of higher learning across the world. While some argue that these models can open up a world of possibilities, others are ringing alarm bells about the limitations and risks associated with these platforms, including ethical repercussions, cybersecurity threats, the dissemination of disinformation and privacy risks. This kind of mixed messaging does not bode well for a clear view of what is possible and desirable.
I advocate that these innovative technologies be seen for what they are – tools that can be harnessed for the benefit of the teaching and learning project and, ultimately, for the advancement of society – provided we always know when their use is essential and when it is not.
In the same way that some once balked at the idea of allowing students to use a calculator, a mobile device or the internet to enhance their academic work, so, too, we see some who caution against the use of new AI models and platforms like ChatGPT, Bard and DALL-E 2 because of their potential to exacerbate plagiarism, to disregard copyright restrictions, to create deepfakes, to aggravate cyber fraud, to share biased views, to cherry-pick sources, and to serve up inaccurate and sometimes even fictional information and references.
Some of these limitations – technical and intangible – are known, and many mitigation strategies are already in place, while many more remain undefined and still to be developed. For example, some of the technical risks related to plagiarism have been reduced through advanced software that can detect AI-generated text, through watermarking, and through digital footprint tracing.
Data can be encrypted and stored safely, policies to protect privacy can be applied immediately, cybersecurity plans are a given, and watertight governance processes can be implemented.
At the same time, the harms of surveillance and plagiarism technologies must be considered.
Developing a culture of academic integrity in which the use of AI is transparent, attributed and disclosed, therefore, remains vital for promoting authentic learning.
AI and the purpose of the university
There are well-established methods to address some of the intangible constraints when we revisit what it means to be a university, and when we reflect on the purpose of universities and higher education in society.
It is at universities that students are encouraged to explore beyond boundaries, to think logically, to evaluate new concepts, to analyse information, to debate, and to exchange innovative ideas.
Universities are meant to provide safe spaces in which the next generation of thinkers can be nurtured, and where they learn the subtle skills required to think carefully about the world.
They function as a nexus for collaboration across sectors, where the best knowledge, resources and talent can be brought to bear to overcome the challenges of the 21st century.
Universities can also be aspirational sites that interact with society and have the potential to hold those in power to account. They are the places and spaces for questioning – for asking who controls the technology or what business models power it – or, better still, for holding big technology companies accountable and for exposing those misusing technology for personal gain or profit.
Similarly, in the evolution of the news media industry, the advent of the internet and social media was once seen as a threat to newspapers, broadcasters and academics, who now use these tools as extensions of their respective crafts. Not everything on the web or on social media is true, nor is all information accurate or unbiased.
The onus falls on us, as teachers, to support our students in developing the critical thinking capabilities that allow them to sift the wheat from the chaff, just as they would when using an AI model like ChatGPT. We need to know, too, when the use of such tools is necessary and when it is not.
Human-like qualities
Perhaps what makes newer AI models like ChatGPT seem more threatening to some is their ‘human-like’ qualities. While these qualities should not cause panic (the younger generations with whom we work are used to dialoguing with Siri, Google and Alexa), the ability of ChatGPT to learn and spread misinformation or disinformation, especially in a polarised world, is cause for concern.
Martin Bekker, in a recent webinar, noted that humans sometimes have a tendency to treat technology as superior to ourselves and others, and that this can lead to misplaced trust in anything that tools like ChatGPT tell us.
Students will use open AI models like ChatGPT for educational purposes, whether teachers are ready to embrace this new tool or not. We, therefore, need to be imaginative and innovative in how we capitalise on such platforms in teaching, learning and assessment processes – for example, by reducing or removing some of the less cognitively demanding tasks (for staff and students) while focusing on testing higher order thinking skills.
Consider that ChatGPT struggles with causal and logical reasoning – and it is precisely these skills that we should be assessing, rather than simple or generic input-output reasoning.
In a university setting, ChatGPT can, for example, assist with basic writing and research (it can process up to 2,000 words at a time). Yet, as a statistical model that generates its responses by predicting the next word, it has no genuine understanding.
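To make that point concrete, here is a minimal sketch in Python of the statistical principle at work. The toy corpus and the predict_next helper are invented purely for illustration; ChatGPT’s neural network is vastly more sophisticated, but the underlying task – predicting the next word from what came before – is the same:

    from collections import Counter, defaultdict

    # Toy corpus, invented purely for illustration.
    corpus = (
        "universities encourage students to think critically . "
        "universities encourage students to debate ideas . "
        "students learn to think carefully about the world ."
    ).split()

    # Tally how often each word follows each preceding word (a bigram model).
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word):
        """Return the statistically most frequent next word; no understanding involved."""
        counts = following.get(word)
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("universities"))  # -> 'encourage'
    print(predict_next("to"))            # -> 'think' (twice in the corpus, vs 'debate' once)

The sketch produces plausible continuations simply by tallying word frequencies, without any grasp of meaning – which is exactly why a model’s fluent, confident answers still need fact-checking.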
Room for innovation
It is up to teachers to encourage students to adopt and use the technology, to assess how they have used it, to compel them to fact-check the responses against multiple academic sources, to have them critique the model’s responses, to evaluate whether those responses reflect bias, and to subject the model to critical and analytical thinking prompts.
As innovative technologies develop, higher education institutions will have to reimagine how students interact with and use text, images, video and audio which are auto-generated by AI-driven platforms.
As we make meaning of, appropriate and critique the ideas associated with AI-driven tools and their implications for learning and teaching in higher education, let’s ensure that these conversations hold a deep commitment to the creative and innovative spirit of what it means to be a university in the Global South.
We need to keep pushing the boundaries and foster new organisational forms and knowledge architectures that work comfortably and critically with AI tools and platforms, while always understanding the risks of increased use of technology in pursuit of equity in and through education.
The challenge facing us now is how we create an enabling environment in which staff and students can experiment with the potential and limits of such tools, how we use the opportunity to develop and nurture higher order critical thinking capabilities, and how we instil a values-driven approach that encourages and rewards us all to act with integrity.
Equipped with an integrity framework, diverse forms of assessing student learning, and strong sanctions for transgressions, I am confident that we can learn and teach thoughtfully in the age of intelligence, even if it is artificial.
Professor Ruksana Osman is the senior deputy vice-chancellor: academic, at the University of the Witwatersrand, Johannesburg, South Africa, and a professor of education. She holds the UNESCO Chair in Teacher Education for Diversity and Development.
Bibliography
Bekker, M, 2023. ‘Heroes and Horror Stories: Thinking about ChatGPT.’ Webinar presented at Wits University. 9 March.
Bogost, I, 2022. ‘ChatGPT is dumber than you think.’ The Atlantic. 7 December.
Mollick, E, 2022. ‘The Mechanical Professor.’ Blog post published on One Useful Thing (oneusefulthing.org). 7 December.
Mutambara, AG, 2023. ‘Artificial intelligence is exciting and risky and it cannot be undone.’ TimesLive. 7 May.
Roose, K, 2023. ‘Don’t ban ChatGPT in schools. Teach with it.’ New York Times. 12 January.
Rosman, B, 2023. ‘Generating the Future: The Power of AI Language Models.’ Webinar presented at Wits University. 17 February.
Watkins, M, 2022. ‘AI Will Augment, Not Replace.’ Inside Higher Ed. 14 December.