EUROPE

An inclusive, university-wide approach to AI guidelines

Italy’s University of Florence took a dynamic and participatory approach to developing generative AI guidelines for teachers and students, Professor Maria Ranieri told a best-practice session at the 2025 European University Association AI Conference. “We engaged diverse stakeholders to ensure the guidelines address real-world challenges,” she stated.

A dynamic approach was also adopted “because we consider this tool as dynamic in nature. We are registering new needs all the time. Not only because our AI tools rapidly evolve, but also because practices rapidly evolve”, she noted. One of those new – and urgent – needs, the session agreed, was to develop improved AI competencies among both students and staff.

Ranieri was one of three experts from universities in Italy, Lithuania and North Macedonia who presented best practice examples of how AI is developing in European higher education. The 2025 EUA AI Conference, themed “How universities are shaping the era of artificial intelligence”, was held online from 22 to 23 May. The Lithuania case is reported separately.

Thomas Jorgensen, director of policy coordination and foresight at the EUA, said that over the past two years generative AI has increasingly been used in teaching and learning in higher education, and there is an urgent need to learn from best practices.

Amid all the talk about disruption, “what are the cases that actually work, and what can we see for the future?” he asked.

Generative AI in universities is especially challenging, said session moderator Hallvard Fossheim, professor of philosophy at the University of Bergen and chair of the Norwegian National Committee for Research Ethics in Science and Technology.

“Many of us have experienced a lot of insecurity about how to deal with AI. Many of us have been waiting for guidelines from our institutions. At the same time, many of the questions are very abstract. It is difficult to get from, say, certain ethical principles and down to the ground level. How do we tackle this?” he asked.

The University of Florence case

The first case was presented by Ranieri, professor of education and instructional technology at the University of Florence, which she represents in the Digital Education Hub ALMA. She is also co-editor of the journal Computers & Education.

Ranieri described how the university moved from AI experiments to an AI ecosystem that operates responsibly and with impact, underpinned by a holistic, institution-wide approach. “In parallel, we developed institutional guidelines, including for teaching and learning.

“At the same time, we provided technologies and technical support for using them and also training on how to use AI for teaching and learning, moving from pedagogical perspectives with reflection in terms of what we are missing and what we are gaining through the use of AI in education on one side and, on the other side, providing training on practices and tools,” she stated.

The core principles of the university’s institutional approach to AI in teaching and learning were summarised in four key words – agency, integrity, privacy, and well-being.

“In terms of human agency and critical thinking, we underline in our guidelines that AI must augment rather than replace academics. Students must show their intellectual growth in the sense that we need to preserve their intellectual growth,” she noted.

Regarding academic integrity and transparency, there must be honest attribution of AI assistance. “Invisible use is considered misconduct,” she said.

For data protection and privacy, no personal or sensitive data may be fed to external models unless there are contractual safeguards. “There’s very little data literacy among our students, and not only them,” she noted. So students must be taught how to protect key information.

Finally, regarding well-being, she stated: “You must ask every time whether it is worth using AI, because it has a cost.”

From principle to practice in pedagogy

Moving from principle to practice, Ranieri said that there are recommendations for faculty, staff and students, identifying key areas and providing concrete examples.

“Our guidelines are not abstract at all. For each area, we provide at least a couple of examples of uses, either from the perspective of teachers or from the perspective of the students.

“In terms of course design, every syllabus must identify why a tool is used, framing AI as a supplement and not as a substitute,” she stated. Each syllabus must clearly articulate policies regarding AI uses in order to guide students on that topic.

Regarding assessment adaptation, she said: “We privilege in-class discussions and oral defences and project work so that students can use generative AI tools to elaborate on the tasks, the projects, and the essay. But then they have to justify [and] describe the process that they have undertaken to produce their essay or project, etcetera, and why that process was adopted.

“Another component of our guidelines for teachers and faculty staff is dedicating time to develop their own AI literacy.” Teachers and staff are supported with webinars at a university level and a mini-MOOC providing guidelines and examples.

An AI literacy framework was developed with four dimensions: knowledge, operational abilities, a critical approach and an ethical dimension. Researchers then set about designing an AI literacy curriculum.

Some skills, such as data literacy and the critical use of data, are already well understood by researchers. But when students use ChatGPT, for instance, the need for awareness of information accuracy quickly becomes apparent. “ChatGPT may invent because it’s a tool to generate content, not to search for content,” she noted.

The University of Florence, said Ranieri, is also developing more cognitive guidelines to support the human-machine interaction “to protect the quality of the cognitive process”.

The university believes that providing guidance is a way to make students and teachers more comfortable with using AI. Student guidelines provide clear boundaries while encouraging responsible experimentation with AI by specifying permitted and prohibited uses.

Permitted uses include, for example, brainstorming and idea generation, language learning assistance, editing, and explaining complex concepts.

The institution-wide process and key lessons

The process began at the University of Florence with a working group at university level that included one representative for each disciplinary area – the social sciences, humanities and education, as well as the range of sciences. As the work unfolded, diverse stakeholders were engaged, and their different perspectives were crucial to the process.

In particular, Ranieri said, a six-stage participatory cycle was designed involving a committee representing different voices from the field. There was research, analysis and comparison of existing guidelines from international bodies and other universities.

“We developed draft guidelines. Then we had discussion and revision. Now we are in the phase of piloting.

“During this phase, students and teachers are testing the guidelines. In the next six months, we will collect data about this testing. Of course, we will revise the guidelines according to the feedback that we are going to collect from students, teachers and administrative staff,” she stated.

Ranieri outlined key lessons from the University of Florence that are of possible interest to other institutions.

The first lesson is about the inclusive development of guidelines. This considers different needs and perspectives and may also reduce scepticism or resistance. Second, it is important to find a balance between innovation and oversight.

Further, she said: “You need to consider the benefits and challenges that the innovation process entails. We also strongly believe that we need to invest in AI literacy programmes and support for teachers and students.”

It is important to be aware of inappropriate uses of AI. Additionally, she noted: “You don’t just need to provide rules; you need to endorse the adoption of rules through training and explaining why certain rules have been put in place. Then, be open to revising and changing and improving what you have learned.”

Ranieri concluded by stressing that the university sees generative AI “not as a threat or a shiny gadget but as a mirror of our education values. When we safeguard agency, integrity, privacy, well-being, we believe that AI becomes a catalyst for a richer learning experience”.

Using AI to boost student engagement

Agon Memeti, a professor of computer science at the University of Tetovo in North Macedonia, shared a case study on leveraging AI to personalise course selection and boost student engagement, undertaken with his colleague Ibrahim Neziri, associate professor in organisational psychology, psychometrics and methodology at the university.

“We are leveraging AI to personalise the learning experience, not only to benefit students but also to support broader institutional planning and engagement goals,” he told the conference.

In many universities, students are overwhelmed by elective choices, and traditional learning management systems do little to help students make informed decisions. “While choice is empowering, too many options can lead to decision fatigue,” he said.

Memeti and his team developed an AI-driven system that personalises elective course recommendations for students, helping them with course selection, minimising overload and improving academic engagement.

The AI support is embedded directly inside the learning management system, as a seamless part of the university’s digital ecosystem, rather than functioning as an external tool.

The system was developed using the Microsoft Blazor framework and integrates ChatGPT to analyse a student’s academic programme and prior coursework. When students log into their dashboards, the AI suggests the most suitable among available electives.

The recommended course is displayed clearly on the student’s profile, with a visual highlight.

“Our aim was to move beyond static advising tools and create a dynamic, data-driven interface that supports students in real time. The system doesn’t just suggest a course – it explains why that course may be a good fit, based on the student’s academic background,” he noted.
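The presentation did not go into implementation detail beyond the Blazor and ChatGPT components, but the workflow Memeti described can be illustrated with a minimal sketch. The Tetovo system is built in Microsoft Blazor (C#/.NET); the Python snippet below is purely illustrative, and its function name, example course data and model choice are assumptions rather than details of the actual implementation.

```python
# Illustrative sketch only: the University of Tetovo system is built in
# Microsoft Blazor (C#/.NET). This Python version simply shows the idea of
# sending a student's programme and prior coursework to a chat model and
# asking for one recommended elective plus a short rationale.
from openai import OpenAI  # assumes the official openai Python package

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def recommend_elective(programme: str, completed: list[str], electives: list[str]) -> str:
    """Return a suggested elective with reasoning (hypothetical helper, not the Tetovo API)."""
    prompt = (
        f"Study programme: {programme}\n"
        f"Completed courses: {', '.join(completed)}\n"
        f"Available electives: {', '.join(electives)}\n"
        "Recommend the single most suitable elective and explain briefly why "
        "it fits this student's academic background."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; the deployment used by the team is not specified
        messages=[
            {"role": "system", "content": "You are an academic advising assistant."},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

# Example call with made-up data
print(recommend_elective(
    "BSc Computer Science",
    ["Data Structures", "Databases"],
    ["Machine Learning", "Computer Graphics", "Web Security"],
))
```

In the system Memeti described, the equivalent call runs inside the learning management system when the student opens their dashboard, and the returned recommendation and rationale are displayed on the student’s profile.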

A brief quiz was introduced, tailored to the recommended elective, using past exam performance data to further personalise the process. According to pilot results from spring 2024, there was a 38% increase in enrolments for AI-recommended courses. A follow-up student survey showed that 51% of students felt more confident in their course choices.

Academics also saw benefits, with improved foresight into enrolment patterns helping them better manage course loads and resources. The team hopes to develop and scale the system. “We are excited to collaborate with peers across Europe on this work,” Memeti said.

Summing up

Fossheim said the presentations made clear the importance of thinking both in terms of continuity and of challenging new times and possibilities, so as not to lose contact with ethical principles and academic ideals. “But we need to think properly about what these mean now in this context and how to implement them now,” he noted.

Another observation was that using generative AI technologies requires some competences that many academics do not have.

That suggests that in terms of AI, universities are at what is sometimes called the ‘heroic’ phase of development – “when it’s often down to individuals who get a good idea and spend a lot of extra time trying to pull something together.

“The other possibility is that the institution actually takes responsibility to start a process that involves many and various competences and parts of the institution.”

Fossheim returned to the question of competences. “There is a competence economy that we need to be very aware of,” he said in conclusion. It is partly controlled by teachers, “but a lot of it is also what our students already have learned and the habits they’ve got into when they enter the higher education institution.

“Making sure that this technology is used to support some of the classic competencies and is not just seen as a threat to them is very important”.