EUROPE

AI in HE has huge potential but experts call for caution
Artificial intelligence has huge potential, and universities are helping to further integrate this technology into wider society, said experts at the 2025 European University Association AI Conference. But they also expressed caution, warning that generative AI is not a panacea for all of higher education and society’s challenges. The conference, titled “How universities are shaping the era of artificial intelligence”, was held online from Brussels on 22 and 23 May.
For Anna Borgström, writing instructor at the Karolinska Institute in Sweden, who spoke in a “Beyond the classroom – AI in management and services” session, “open and non-judging dialogue on the benefits of AI is key” for universities to help people demystify and make the best use of AI technology.
Partnerships and collaborations within institutions are essential to facilitate learning among all stakeholders – teachers, researchers and students – Borgström added. She highlighted Bites of Learning, an open pedagogy webinar series that she ran at the Karolinska Institute alongside librarian Lovisa Liljegren, which was aimed at teachers.
And while AI training and research programmes in higher education institutions are increasingly seen as a successful tool to shape how those technologies are understood, governed and adopted, “there is still a huge gap” between what different universities are doing with AI, Borgström told University World News after the conference.
“Some programmes at our [top-ranked] universities have workshops, open dialogues and guidelines – and others have absolutely nothing,” she said. She warned that many students now take AI for granted and think they know everything about it, which does not help its efficient application outside the university setting.
An AI ‘sceptical optimist’
Jeroen Fransen – head of product at online assessment platform Cirrus Assessment, based in Utrecht in the Netherlands, and creator of TheyCorrect, based in Utrecht and Spain, which uses AI to support human professional graders – told University World News that he was “a sceptical optimist as far as AI is concerned”.
He said that AI has the potential for both good and bad, “and right now the balance is negative, looking at the impact on the environment, the works of writers and artists being appropriated and the resulting information slop.
“In the long run, I am convinced that we will benefit from different types of AI, including generative AI for some uses. However, in the short term, I don’t see us getting the high-flying benefits we are being told about,” he said.
Even in cases where AI can provide value – for example, where it solves an actual problem better than human approaches do – “we should still evaluate its impact in all aspects”.
Fransen, who also founded the Breda, Netherlands-based grading tool Revisely and is Taskforce Leader at the European Edtech Alliance (EEA), added: “In education, in particular, I would advocate for evolution and not revolution.” Decisions should be based on what an AI solution “is demonstrably able to deliver and not on promises about future capabilities.”
AI in the university library
In the same session, the limitations of AI use in a real-world university library setting were highlighted by Catherine Eagleton, librarian and director of collections and museums and director for the MLitt in Museum and Heritage at Scotland’s University of St Andrews.
She said AI could save hours in cataloguing St Andrews’ 1.5 million print volumes, one million photographs, 115,000 museum objects and 210,000 rare books – but it could also make mistakes.
“We piloted and tested an approach that used AI agents to recommend which books we should accept (or not) as donations to the library collection, but we have not yet deployed the approach in our workflows,” she explained to University World News after the conference.
This is because the AI agents chose to accept the donation of a potato cookery book while turning away one of the surviving copies of Shakespeare’s First Folio “that would have cost millions of pounds”, she said.
“We are still testing how and where it works well enough, and the potato book and a Shakespeare first edition are examples we found where human judgement is better than automated tools,” she added.
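For readers curious what such a human-in-the-loop check can look like in practice, here is a minimal sketch; the heuristic, names and rules are illustrative assumptions, not the St Andrews pilot’s actual implementation:

```python
# A minimal, hypothetical sketch of AI-assisted donation triage with a
# human in the loop. The heuristic stub, names and rules are illustrative
# assumptions, not the St Andrews pilot's actual system.
from dataclasses import dataclass


@dataclass
class Donation:
    title: str
    year: int


def ai_recommendation(book: Donation) -> str:
    # Stub standing in for an AI agent's accept/reject call. A naive
    # rule like this one mistakes rarity for irrelevance: it would
    # reject a 1623 First Folio and accept a recent cookery book.
    return "accept" if book.year > 1900 else "reject"


def triage(book: Donation) -> str:
    suggestion = ai_recommendation(book)
    # The pilot's lesson: treat the AI output as advisory only, and
    # keep a librarian's judgement as the final decision step.
    print(f"AI suggests '{suggestion}' for {book.title!r} ({book.year})")
    return suggestion  # in practice, a librarian confirms or overrides


if __name__ == "__main__":
    triage(Donation("The Potato Cookery Book", 1905))
    triage(Donation("Shakespeare's First Folio", 1623))
```

The deliberately naive stub shows the failure mode Eagleton describes: an automated rule can happily accept a potato cookery book and turn away a First Folio, which is why the librarian remains the final decision step.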
Eagleton also noted the limitations of an AI-powered video streaming and capture platform called Panopto, which allows instructors to record lectures. Panopto is “okay for personal names but does not handle Latin well or Scottish accents”, she said, and so it “could be an assistant to but not a replacement for a person”.
In short, the quality of the results AI produces matters more than the hours of work it might save: “We can speed up processes a lot,” Eagleton said, “but our next step is to work with staff who run the processes.
“We want the team to become superhuman but not robots.”
Fransen also emphasised to University World News the need for AI and humans to co-exist. With Cirrus Assessment, the generative AI component highlights issues in students’ exams and papers, while professional human graders review and correct the AI’s output, add feedback and return the output to the teacher – a hybrid approach “allowing teachers to completely trust the end result”.
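As a rough illustration of that division of labour – AI drafts, a human signs off – here is a minimal sketch; the function names and data shapes are assumptions, not Cirrus Assessment’s actual API:

```python
# Illustrative sketch of the hybrid grading flow described above:
# AI drafts, a human grader reviews and corrects, and the teacher
# receives only human-approved output. Names and data shapes are
# assumptions, not Cirrus Assessment's actual API.
from dataclasses import dataclass, field


@dataclass
class Issue:
    location: str       # e.g. "paragraph 2"
    comment: str
    confirmed: bool = False


@dataclass
class GradedPaper:
    student: str
    issues: list = field(default_factory=list)


def ai_highlight(text: str) -> list:
    # Stand-in for the generative AI step that flags possible problems
    # in a submission; a real system would call a model here.
    return [Issue("paragraph 1", "possible missing citation")]


def human_review(issues: list) -> list:
    # The professional grader checks every AI flag, discards false
    # positives and adds feedback before anything reaches the teacher.
    for issue in issues:
        issue.confirmed = True
        issue.comment += " (grader: please cite your source)"
    return issues


def grade(student: str, text: str) -> GradedPaper:
    draft = ai_highlight(text)        # AI drafts, never decides
    approved = human_review(draft)    # human corrects and signs off
    return GradedPaper(student, approved)


if __name__ == "__main__":
    paper = grade("student_001", "Essay text goes here...")
    for issue in paper.issues:
        print(issue.location, "-", issue.comment)
```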
Chatbots and market impacts
Robert Clarisó, a lecturer in IT, multimedia and telecommunications at the private Open University of Catalonia (Universitat Oberta de Catalunya) in Barcelona, Spain, called for caution when using chatbots – apps or interfaces that can carry on human-like conversations.
Chatbots are omnipresent on commercial websites, from the American online shopping giant Amazon to the Hungarian low-cost airline WizzAir.
Chatbots are also widely used by online universities, where they are considered efficient because they can answer simple questions about deadline dates or word lengths, Clarisó said. Video answers can be pre-prepared, and documents can be selected to enable the chatbot to ‘tell’ students how to prepare an assignment.
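A toy sketch of this kind of narrowly scoped FAQ chatbot might look as follows; the questions, answers and escalation rule are illustrative assumptions, not any university’s real deployment:

```python
# Toy sketch of a narrowly scoped FAQ chatbot of the kind described
# above: it answers only simple, pre-approved administrative questions
# and otherwise defers to a human. All content here is an illustrative
# assumption, not a real university deployment.
FAQ = {
    "deadline": "The assignment is due on the date listed in the syllabus.",
    "word length": "Essays should be 1,500 to 2,000 words.",
}


def answer(question: str) -> str:
    q = question.lower()
    for keyword, reply in FAQ.items():
        if keyword in q:
            return reply
    # Anything beyond simple administrative questions is escalated
    # rather than letting the bot improvise (or do the student's work).
    return "Please ask your tutor - I can only answer simple admin questions."


if __name__ == "__main__":
    print(answer("What is the word length for the essay?"))
    print(answer("Can you write my assignment for me?"))
```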
But the aim of using chatbots is not for students to ask the chatbot to do the work for them (with the generative AI chatbot ChatGPT often the culprit here), nor, Clarisó said, to dispute a poor grade with the teacher using the excuse “but the chatbot agrees with me”.
Clarisó also discussed an ethical question regarding the sharing of chatbot responses. The fact that chats are recorded so that instructors can access the interactions “may please some students who like the fact that a human reviews the AI responses”, but “some students feel uncomfortable about having their conversations shared”.
Ethics aside, recent research suggests that the impact of generative AI on the labour market may be less significant than it appears.
Indeed, Fransen noted a United States-based National Bureau of Economic Research working paper, released in May 2025 and assessing the effectiveness of AI chatbots in 7,000 workplaces in Denmark, which found “no significant impact on earnings or recorded hours in any occupation.”
Study authors Anders Humlum, an assistant professor of economics at the University of Chicago, and Emilie Vestergaard, an economics PhD student at the University of Copenhagen, surveyed 25,000 employees, who reported saving just 3% of their time by using chatbots – and only 3% to 7% of those productivity gains came back to workers as higher pay.
Some cautionary notes
Cost is an obstacle to wider adoption of generative AI tools, said Pedro Ruiz, vice-rector for strategy and digital university at the public research University of Murcia in Spain.
He told delegates attending an “Are we entering an AI winter?” session that implementing AI in all processes and areas was unaffordable and that only big companies are investing heavily.
Also chair of the European University Association’s Task and Finish Group on AI, Ruiz said that universities and companies should prioritise where investments go, “considering the return on investment and how important AI is for your institution.”
“We should treat AI like pharma products,” Jeroen Fransen added, “researching and testing it properly before putting it on the market to see if it is safe.”
Fear of missing out (FOMO) was also skewing investments, he said, with an indirect disadvantage: because investors only pay for AI solutions, problems that cannot be solved by AI are not an investment priority at the moment.
Fransen concluded that the key for higher education institutions is to focus on products “that solve an actual problem, partnering with companies, such as ours, to co-create solutions that align with the real needs of students and teachers.”
While universities test-drive AI on campus, their researchers are deploying AI in efforts to provide solutions for all of society.
There are many examples of universities using AI tools to enhance people’s lives, such as robots developed by ETH Zurich in Switzerland that assist in hospitals and homes, and the generative AI solutions for hospitals developed at the University of Amsterdam’s data science centre.
Meanwhile, the University of Cambridge in the United Kingdom reports that “researchers are looking at ways that AI can transform everything from drug discovery to Alzheimer’s diagnoses to GP consultations.”
Generative AI is racing ahead, but it seems that universities are cautiously pressing the brakes – not to stop the journey with AI but to steer it responsibly.
From students dubiously using AI to do their work, to an AI that would turn away a precious Shakespeare first edition, it seems that while the technology’s potential is extraordinary, so are its pitfalls.