
AI disruption demands a rethink of the university model

Current university models urgently need rethinking to prepare for a future of omnipresent AI, says Andrea Renda, director of research at the Centre for European Policy Studies (CEPS), a leading Brussels policy think tank, and adjunct professor of digital policy at the European University Institute.

Universities need to educate not just students but augmented humans who thrive in the age of AI.

There has been much debate about how generative AI is disrupting universities – in assessment, for example – and about how to ensure responsible use of AI. An emerging disruption, according to Renda, is ‘digital twins’ of academics that can deliver lectures (in multiple languages) and interact with students.

Renda has been heavily involved in shaping rules on AI at the European Union level as a member of the EU High-Level Expert Group on Artificial Intelligence. He also led the impact assessment of what became the 2024 EU AI Act.

He was speaking at the 2025 European University Association (EUA) AI Conference, “How universities are shaping the era of artificial intelligence”, held online from Brussels on 22 and 23 May.

It seems that universities are being shaped by, as much as shaping, the era of AI.

Renda believes universities need to think not only about how they bring disciplines together so as to study AI from a variety of perspectives but also “really focus on what makes us human in the sense of what agency and control will enable us humans in the future to make the most of our cooperation with machines.

“This is essential. We have to find a way to accommodate AI in a human-centric way and find a new frontier for humans, for augmented humans”, he told a plenary session of the conference on principles and policies in AI and their implications for research and education.

Renda has studied what happens to artists when their craft is challenged by new technology. For example, when photography challenged their painting skills, artists became more subjective and interpretive – think of the Impressionists, the Cubists, contemporary art in general.

“This is something we now have to replicate at scale for many other fields of study where the augmented human will have to focus on having agency and control of what AI can do and then specialise in what is eminently human,” he said.

The human component and critical analytical skills are what matter in becoming augmented humans.

“I don’t think we are advanced enough in the study of this complementarity of skills going forward,” Renda said.

The 2025 EUA AI Conference

The EUA – an association of more than 800 universities and rectors’ conferences across Europe – has become acutely aware of the challenges universities face and of the diverging approaches to AI within the university community.

Some individuals and institutions are approaching AI cautiously, others with optimism, and some occupy the middle ground of cautious optimism.

In response, the EUA established a group of experts to guide its work programme on AI.

The conference highlighted best practices, showing how some universities are deploying and supporting responsible use of generative AI; the importance of policies and guidelines; and the crucial need for training in AI competence among students, lecturers and staff.

EUA membership and project coordinator Clare Phelan said universities in Europe had an “enormous appetite for guidelines that can support the creative and safe use of AI.

“What we see is universities in this very interesting phase, a juncture perhaps, where they’re simultaneously developing strategies and still exploring the potential of the technologies.”

This burst of experimentation with generative AI will inform decisions and policy-making around implementing AI in higher education, she said.

While he has been involved with AI at the EU policy level, Renda has also been at the coalface of AI in education as a university adjunct professor who – after November 2022 and the arrival of ChatGPT – faced the dilemma of whether to accommodate AI, embrace it and demand more from students, ban it, or find something in between.

In Europe, Renda said, while levels of AI investment and progress have not been as high as in some other parts of the world, there are nevertheless hubs of excellence in areas where universities play an important role – Paris, Eindhoven and Munich, and even post-Brexit London, which remains connected to Europe.

“I’m currently in a research project that leads me to tour these ecosystems and hubs of excellence in AI.

“You see how important the presence of an established tradition in engineering and social sciences is, universities that really shape the environment and have helped the local ecosystem to develop excellence in AI and related technologies. That is very important, but extremely localised,” he stated.

The EU tries to build solutions for all of Europe, Renda continued, but AI requires computing infrastructure, skills, relatively deep financial markets, and universities that are powerful and multidisciplinary. Europe will need to grasp the challenge of both deepening and spreading AI’s benefits.

Regarding higher education, Europe trains more leading AI scientists than the United States (though not China), but many end up in other parts of the world, especially America: “Let’s see what happens now,” he noted.

A changing world is also changing higher education’s skills imperatives. While there has been a lot of emphasis on the STEM fields and coding, “today there is growing emphasis on other types of skills for the future, multidisciplinary skills, empathy and social skills, the ability to connect the dots, and the ability to bring disciplines together”, Renda said.

“The future of humans is a future of generalists with deep knowledge of many fields. This is where we’re headed as mankind in the age of AI. We need to catch up with developments fast, and we have to think not only about the jobs of the future but also about the university of the future in a different way,” he noted.

Renda screen-shared a video clip of an avatar of himself, “a friend of mine”, which looks and talks like him, though it is still in draft form.

The digital twin – developed after Renda spent less than 10 minutes reading a text aloud and making facial expressions – now says words he has never spoken; they were written for him.

It is already possible to use avatars for ‘master classes’ taught in multiple languages. The digital twin can be interactive, in the way ChatGPT is, “and could become my own teaching assistant, trained on things I’ve written and said in the past.

“It will be able to represent me while I’m hopefully somewhere on the beach. It can reach out everywhere in many languages, in an asynchronous way and in an interactive way.

“As you can see, this has enormous opportunities”, said Renda. “But you can also see the enormous risks: impersonation, confabulations, hallucinations, deviations from reality, the same things that we see in a GPT today.

“How do we master a world in which a few minds can spread their thoughts in an interactive way? We probably don’t need many mid-level scholars, but we need the top scholars because they can reach out with their avatars to many more people around the world. Are we going to need fewer universities?” he asked.

Hence the need for a university rethink.

Building an AI ecosystem in Europe

Session moderator Professor Wim van de Donk – rector of Tilburg University in the Netherlands and a board member of the EUA – said universities have, over the past 1,000 years, been confronted by new technologies, from the printed book to the internet.

“The general notion about technologies in universities is that we tend to overestimate the consequences in the short term and to underestimate the consequences over the long term,” he said.

This may prove to be true for AI, but in the meantime, universities are responding, and there has been a lot going on at the policy level and in the private sector across Europe.

The 2024 EU AI Act – the world’s first comprehensive AI regulation by a major regulator – and the EU’s Digital Decade strategy place strong emphasis on trust, transparency, safety and human-centred design.

The need to embed these principles into practice across all curricula and research at local, regional, national and European levels was emphasised by Dr Jose Martínez-Usero, director for projects at DIGITALEUROPE, which represents digitally transforming industries and works closely with the EU to accelerate Europe’s digital transformation.

The question is how to translate these values into actual job roles and real skills. To this end, DIGITALEUROPE has been collaborating with Europe’s main standardisation bodies to develop standards for critical areas including AI, cybersecurity, data and smart technologies.

At the end of last month, DIGITALEUROPE launched a free pre-standardisation activity and framework.

“The objective is to define exactly the competencies and knowledge needed for professionals working in AI across different sectors in all of our countries,” said Martínez-Usero, who also teaches at the International University of La Rioja in Spain.

Universities across Europe are “helping to ensure our educational offer meets the evolving needs of AI labour markets, while at the same time being ethical and with scientific rigour”.

But there is a growing skills gap. Many thousands of professionals need to be skilled, reskilled and upskilled to work in the AI market. Martínez-Usero said that over 70% of DIGITALEUROPE’s member companies and national trade associations report problems recruiting AI professionals.

For the moment, only some higher education institutions in Europe offer dedicated AI programmes that are fully aligned with the market – though it is very difficult to know what ‘fully aligned with the market’ means when the market is evolving so fast.

“We at DIGITALEUROPE, representing the digital transforming industries, are trying to tackle this scale-up together with many, many partners across academia and research. We are promoting scalable, certified and inclusive learning paths to meet real industry needs.

“We need you to really innovate in the curriculum, to work across disciplines, and to embed both ethics and excellence in AI learning programmes.”

Europe will depend not only on AI technology but also on an educational and job-market ecosystem, said Martínez-Usero. “We have to do it together: academia, industry and policymakers,” he said.

A tangle of AI and data laws

The final speaker was Dr Heidi Beate Bentzen, a lawyer and researcher in the Institute of Health and Society at the University of Oslo in Norway, who said the 2024 EU AI Act is being phased in over two years and will harmonise AI rules across Europe.

It aims to promote the uptake of human-centric and trustworthy AI and uses a risk-based approach.

The AI Act does not replace any existing laws but is in addition to them. “The legal landscape for AI development is becoming very complex and fragmented, with rules spread across many legal instruments, each of which is quite comprehensive,” she said.

Just one of these is the General Data Protection Regulation (GDPR), which covers processing of personal data.

While the AI Act does not apply to AI systems developed for the sole purpose of scientific research and development, it does apply if an AI system is placed on the market as a result of research and development, Bentzen explained. Violations of the Act are serious: they can carry fines of up to €35 million (US$39.88 million) or 7% of global annual turnover, whichever is higher.

“It’s worth noting that many large projects will have an AI component, and in many cases, it is completely unproblematic – but far from always. It’s therefore important also to keep in mind how the big funders deal with this issue,” Bentzen told the conference.

The EU requires ethics approval for a lot of the AI research it funds. “If your country or university does not have a system in place for ethics review of AI research, you may get a negative remark about this at the point where the project is shortlisted for funding but where the final decision is not yet made,” she noted. Having an ethics committee is a good idea, she added.

So, what is so special about AI research? “It’s mostly that existing ethics issues are exacerbated. For example, if you process personal data in research projects, you’re familiar with the ethical challenges that entails. But with AI on board, it often comes on a larger scale and with more ethically challenging processing methods,” she stated.

Bentzen outlined three less talked-about challenges for universities.

European universities face hidden legal challenges in AI collaboration with United States federal institutions. Because of US sovereign immunity, rights for EU research participants cannot be guaranteed. This breaches EU data protection law and blocks lawful data transfers, even for joint AI projects with US national laboratories.

“This means that AI collaboration with some of the best research institutions in the field in the US is therefore highly legally complex and can quite easily lead to a breach of the GDPR,” Bentzen noted.

Further, Bentzen said, EU rules treat pseudonymised data as personal. Re-identification risks are growing and are now accelerated by AI.

Studies show that machine learning can re-identify individuals from anonymised data, which raises concerns for open science. To protect research participants, “data must be as open as possible, but as closed as necessary”.

Finally, said Bentzen, “gone are the days where environmental impact largely focused on reducing the number of flights taken. AI is incredibly resource-demanding and has a huge environmental impact.”

AI should therefore be used by universities with appropriate care, and environmental aspects of the work should be considered.