
Skills gap in AI knowledge and use growing among students

Universities are seeing a gap among students in their knowledge and use of generative artificial intelligence, which they say risks widening and eventually becoming apparent in the labour market, the 2025 European University Association AI Conference heard.

Some of this lack of engagement with the technology stemmed from fear among students that they might be penalised, because they did not know what was permissible or how AI might be helpful in their studies and research.

There is also a gap evident among teaching staff, some of whom prefer to ignore generative AI (GenAI) rather than engage with it. Attitudes towards GenAI also vary across disciplines and faculties, depending on perceptions of its use and its likely impact.

The challenges of adopting AI use in higher education and levels of AI competency among users were highlighted during an ethics and frameworks session at the virtual European University Association conference, titled “How universities are shaping the era of artificial intelligence” and held on 22 and 23 May.

Participants heard the experiences of three higher education institutions and how they were managing both the advances in the technology and its use among students and lecturers.

A structural approach

Roel De Meu, policy adviser at KU Leuven in Belgium, said the university was trying to approach the issue structurally, and that it was important for practice to be underpinned by guiding principles such as transparency and verification, respect and responsibility, and sustainability.

“Importantly, a key pillar of our approach is the skills gap. We are all faced with that, especially at this stage of AI, and at several levels: between and within classes, between different staff, and between support staff, who might already be more skilled in the use of GenAI than some of the students.

“So, it’s important to invest in AI literacy at all levels of the university, including support staff, professors and also the students,” he stated.

De Meu said the university had assembled groups and committees to oversee AI use, user protections and guidelines, in a process that includes all stakeholders in the institution.

This includes a table of information that students can consult to check how to use GenAI safely, honestly and with academic integrity, and which helps them be transparent about how they are using it.

However, Rune Vercauteren, a student at KU Leuven and a member of one of the university’s GenAI committees, stressed that the skills gap among students in their knowledge and use of AI was widening.

“Some don’t want to use it due to climate impact, or fear of AI detection and penalties when it is not clear what the regulations are and what they’re allowed to do. Some students are more interested in AI and more technically skilled in using it.”

The AI usage gap was clearly widening among students, he said, “especially as the AI itself keeps getting better. We can see that in student assignments. There is a lack of clarity for students, and it is necessary for the university to set clear guidelines for its use.”

Mixed responses

Other speakers also discussed the speed at which GenAI was advancing and, in turn, the rate at which universities had to move to implement guidelines for its use, both practically and ethically.

James Mackay, associate professor of literature and digital culture at the European University Cyprus, said most universities had only relatively recently drafted and introduced guidelines around AI use to try to combat varying perceptions and misconceptions about its use across different departments.

He said: “Our emphasis from the beginning was on guidelines. This arose from our first campus-wide consultation, which took place in February 2023 and which revealed an extremely wide range of responses across disciplines.

“Within the department there were some academics, like me, who were enthusiastic experimenters, while others paid no attention whatsoever to generative AI. Some lecturers were punishing student use as simple plagiarism. Others were mandating GenAI in their classes.

“Some were putting all their trust in AI detection tools that had evident flaws, particularly for our students, while others were playing around with AI-assisted grading of student assignments,” Mackay continued.

“Without a framework, there were evident reputational and even legal risks. At the same time, we had to recognise that uses varied by discipline, so medical sciences were very enthusiastic about AI-driven diagnostics and research, but in my own area of English literature you're immediately aware of the risk to original process-driven critical thinking,” he noted.

A university-wide consultation on AI use, led by a cross-disciplinary task group, resulted in a framework at the university that is principle-based rather than prescriptive, with four core principles – human-centred inclusion, data privacy, safety and security, and communication.

It also explains the opportunities offered by the technology to automate routine tasks and to aid personalised learning experiences, Mackay added.

AI in the institutional ethos

Meanwhile, Dr Susanne Schumacher, co-chair of the digital council at Zurich University of the Arts (ZHdK) – which has an academic focus on film, music, drama and dance – said the diverse needs of its student and academic communities meant that competency development was a major challenge, alongside the rapid rate of AI development and the decentralised nature of the institution.

However, unlike other conference participants whose universities have created distinct AI-focused frameworks and strategies, ZHdK has embedded AI competency in its overall institutional culture and in existing programmes to build digital confidence among students.

“We chose not to rely on a separate AI strategy but on the overall university strategy, which has a motto of artistic intelligence,” she said.

Instead of an explicit strategy, “living documents” were created to establish the institution’s position on AI, including an information sheet from the university’s legal team detailing rules around the use of AI.

The ZHdK also collected data through a university-wide survey that captured the extent of AI use and competency of AI users.

“Students show a high level of critical awareness regarding the meaning and impact of AI. Text-based AI applications are widely used in different forms within the creative process,” Schumacher said, “and there is a growing demand and a strong willingness for critical and deeper reflection on AI.”

The university has an internal network that oversees aspects of digital technologies, including networking and exchanges of information and knowledge. There is also an AI forum and a curated online hub where tools are shared, along with guidelines on AI use and signposts to useful resources.

Schumacher added: “Reaching the next level of systematic integration of AI and an explicit AI strategy will be our next milestone. But we need a call to action. We need to keep asking, why are we using AI? How does it improve our work, and how can AI help us think and act better as humans in higher education?”