AFRICA-GLOBAL

Academics: Little more than intellectual orphans in AI era?

The automation of the university has been a phenomenal strategy for improving its efficiency in service delivery, policy implementation and operational streamlining. The objective is better, more precise and less labour-intensive execution of routine tasks. However, questions about academic authenticity arise as artificial intelligence moves into writing, peer review and publishing.

An allied computational function is to produce the performance metrics and data analysis necessary for a university’s operation. What was perhaps unforeseen was that automating routine tasks was only an initial step – a precursor to the intrusion of artificial intelligence (AI) into the scene.

The pre-generative AI days were not entirely without AI, but AI then served as an assistive tool in human decision-making and service delivery. Unanticipated was AI’s propensity for takeover. The mechanisation of knowledge production is now an operational reality. The changes under way are not only latent – they are also decisive, and they are creating a new kind of disconnection between scholars and their work.

We draw on the analogy of an orphanage to describe three phases of disconnect that constitute academics as intellectual orphans during different periods, emphasising, especially, the later generative AI (GenAI) that decentres intelligence from the highly trained academic to a language-trained machine. Academics might find themselves distanced twice in their orphanage – first from their authored knowledge product, and second from the university, which is supposed to take care of the academic enterprise.

The first orphanage (technological)

The first orphanage was purely about skills in using assistive technologies in knowledge production. For the academics of yore, who spent their careers in workspaces adorned by the combination of a typewriter on the desk, a wooden chair and a cup of coffee, the changes under way are cataclysmically concerning. In these spaces, now rendered ‘traditional’ by the advent of newer technologies, the relish of physical interaction with the tools of knowledge production – punching thoughts into the typewriter, feeding the paper into the roller and listening to the thuds of the keys – was first silenced by the electronic typewriter, soon replaced by computers.

All these – including the thinking academic – are now replaced with a single query on the deceptively minimalist interface of most AI tools.

Although it feels as if we arrived at this point suddenly, nothing could be further from the truth. The mechanical typewriter, whose invention has been claimed by various minds, was quietly replaced by the marvel of the electric, and then the electronic, typewriter. The outright benefit of these new marvels was a quieter workspace: the thuds were gone, and the customary secretary with nimble fingers was less audible next door. Manuscripts would now go in quietly and come out to a hushed motorised buzz.

The proliferation of personal computers in the early 1980s also disrupted how knowledge was produced and consumed. Using computer programs to process typed work meant more functionality to achieve the desired formatting and structure, and a less laborious, faster transfer of knowledge from the mind to its digital and print forms. It also offered flexibility in signposting the flow of thoughts using symbols (think of the asterisks and other section breaks of the Reader’s Digest era), but it did not change much in the relationship between scholars and their work. The professor would still develop the idea, write the paper, and have it typed into the machine.

Into this fray came the miniaturisation of the once-mammoth printing press, now a beneficiary of digital technology. From the mechanical operations of the press invented by Johannes Gutenberg in 1440, we arrived in 1993 at the digital printer, which uses ink to replicate what we see on the computer screen. Such an interface meant more control over how we formatted our work. It also meant we could share our knowledge faster and more efficiently (the photocopier entered the picture specifically for this role). However, the adaptive scholars of yesteryear still retained control of the transcription of their thinking onto paper.

Knowledge brokers

Then came the internet, which created globally connected data infrastructures from January 1983. In this digital era, scholars could not only disseminate their works through hard- or soft-cover books (hard copies, as it were), they could also share digital copies (soft copies). The internet marvel spawned further technologies in the form of academic databases – some surviving, others folding, but always with new replacements competing to host knowledge and make it available globally. These ‘knowledge brokers’ played a key role by offering middlemen services to academia.

They provided, and still do provide, extensive databases where scholars can access knowledge in organised forms, more efficiently, and of course, at a marginal price.

These technologies plateaued in the 2000s, by which time professors had been relieved of much of the earlier stress of producing and disseminating knowledge – stress which, while not comparable to the quill and cloth of the early manuscript days, still demanded much mental and physical labour. Remarkably, even then, professors retained much control over how knowledge was produced. But all of this is now changing. Not that we did not know it was under way, but we did not imagine ourselves outside a process which has been, all along, ours to appropriate.

The question, ‘Since when did AI become a scholar?’ is now due and must be asked with urgency.

The second orphanage (intellectual)

AI has proliferated in different aspects of the knowledge production process, attracting much support and disputation. For pro-AI academics, it is simply a platform, like the dictionary (now AI-powered language checkers), or the typewriter (initially replaced by computers, later adding the speech-to-text engines, and now GenAI), or other tools such as calculators and rulers (now replaced by AI algorithms). But, for anti-AI academics, the obvious replacement of humans by machine intelligence in knowledge production is now of concern.

For one, AI is not just a tool but a platform that hosts an ecosystem of many other AI algorithms (read ‘apps’). It is also a mammoth database of things which, while anchored on the logic of internet infrastructure, is highly invasive. The so-called large language models are a politically correct designation for the exhaustive scraping of data from different sources and disciplines, its aggregation into relational and then interrelational databases, and near-instantaneous access to the result. Thus, the AI logic is one of interoperability, at least within disciplines such as the humanities and social sciences.

The intellectual labour and effort of scholars of the old days who used a quill, wrote on paper, typed on a typewriter, or used computers and floppy disks are now reduced to a very brief query. An entire academic article can now be ‘authored’ – read ‘generated’ – in seconds, from a query of under 30 words. But does AI really produce knowledge or just regurgitate it in different interrelations?

Mechanised paper mill

We acknowledge this new era with concern that AI is not only generating knowledge and sorting article submissions but is also being used as a peer reviewer. We mention this because we now face a new situation: the mechanisation of knowledge production. This process not only undermines the rationale for scholarship (why study if you can produce a paper effortlessly with AI?), but also endangers the university enterprise by turning the knowledge agenda into a mechanised paper mill.

The publishers – who also operate extensive databases of published materials – are now tasked with becoming a different kind of knowledge broker, one that will revise its human clientele relations to accommodate machines-as-scholars. The only role AI has not yet assumed is that of a fee-paying student in the classroom. Instead, it has learned through language training, with some reported cases of outright plagiarism or illegal content downloads – Facebook’s AI training being a case in point.

The worst-case scenario, as we see it, is that the normalisation of AI as the go-to replacement for tenacious academic processes may compromise the university’s knowledge obligations in the not-too-distant future. We are not anti-AI, and we acknowledge that it will offer many benefits, even within the humanities and social sciences. But, as its theorists, we are concerned that AI is now writing papers, recommending the acceptance or rejection of submissions, and slowly moving into peer review.

It is managing post-publishing activities and is also influencing how even students produce knowledge. We use the term ‘produce’ with much reservation for AI knowledge (again, our base is in the humanities and social sciences), which is not new knowledge; it is interpretational and interrelational with existing knowledge. Our questions are: Where does such an invasion of AI into traditional academic processes of knowledge production leave the contemporary university? How will the university accredit AI-generated, AI-reviewed papers?

What happens to the succession of the best knowledge production traditions when our students are AI query experts with no tenacity for the requisite mental labours of knowledge production? What will happen when they, having progressed through their careers with AI-based productivity, inculcate this same attitude in their students? What will the university become when almost all its faculty have studied through AI query-based generative publications?

Even more, what happens to the future of knowledge when all we have is past knowledge, which AI will cyclically regurgitate in different loops for future scholars – and all new effort towards authentic knowledge production ends?

The third orphanage (career)

AI cannot be intuitive, inductive, self-reflexive, emotional or spiritual. It cannot decide on the basis of, respond to, or be guided by feelings, spontaneity, fear, hope or passion. That is why it cannot be more than a partial solution. As Arthur C Clarke predicted in 1992, AI will differ profoundly from human intelligence – so why would we bother to create it in our image?

That is the lesson lost. Many people use AI terminology without differentiating its mechanical automation trajectory from intuitive thinking. It is the latter aspiration (irrespective of whether AI achieves it) that should caution against the taken-for-granted use of AI in academic knowledge production. It would be even more unwelcome if AI achieved that capability, because it would nullify the university and the entire education system, which is based on nurturing individuals to become critical thinkers capable of solving evolving problems.

The bulwark of authentic scholarship, which seeks to offset wholesale AI adoption in academia, may, however, gain more supporters in the face of emerging predictions of job losses. The realisation that AI automation may replace experts in various fields, including teaching, means that many university staff may be replaced by AI-powered tutors. The remaining minimal human capital will require less administrative labour – and more jobs in this sector will also disappear.

The resulting lean, AI-powered university may enter a loop of automation that pushes human experts out of the system, with massive career loss. Then the question of why study when there is no career at the end may become pressing.

AI-generated knowledge is useful, and may be true and accurate, but it must not replace scientific methods of new knowledge production. If unchecked, AI will produce its next generation of orphans: robotic universities without students, teachers or administrators. Unlike the first cohort of scholars, who were fazed by new technologies to which they were not accustomed, or the second cohort, who are AI-savvy but detached from the scientific rigour of knowledge production, AI may produce a third cohort of career redundancy – and, with this, a self-actuating loop towards a dominantly techno-academia.

Dr Addamms Mututa is a senior lecturer in the department of communication and media, and Keyan G Tomaselli a distinguished professor in the faculty of humanities at the University of Johannesburg in South Africa.

This article is a commentary. Commentary articles are the opinion of the authors and do not necessarily reflect the views of University World News.