LITHUANIA

Leading the world in creating academic avatars for students
Two academics at Vilnius University in Lithuania were among the first in the world to develop “personal AI knowledge twins” – avatars, trained on the law lecturers’ knowledge, teaching and publications – and integrate them into a live course to provide support for students around the clock.

“The twins became part of the teaching and learning experience as well because they answered questions, they clarified complex topics, and they helped students stay engaged throughout lectures and during the exam preparations as well,” said Goda Strikaite-Latušinskaja, one of the avatar academics and a PhD candidate and law lecturer at Vilnius.
She was speaking at the 2025 European University Association AI Conference, titled “How universities are shaping the era of artificial intelligence”, held online from Brussels on 22 and 23 May. The session highlighted practices of AI-driven learning and teaching.
In April 2024 Strikaite-Latušinskaja and Dr Paulius Jurcys, also a law lecturer, taught an intensive course on data protection and privacy law at Vilnius University; they ran the course again in early 2025. “We knew right away back in 2024 that it couldn’t be a traditional course, not when students are already using various AI tools every day.
“Instead of ignoring or discouraging them from the use of generative AI, we made a different choice. We decided to adapt the course to the reality of today’s students and teach them not only legal content but also how to work with AI tools properly, which is vitally important,” said Jurcys.
This was brought home to the law lecturers in 2023 when a New York lawyer used ChatGPT to find legal precedents – but several of the cited cases turned out to be fabricated. Judge P Kevin Castel acknowledged AI’s potential but criticised the lawyer’s failure to verify the citations and warned of the vital need for responsible AI use with human oversight to ensure accuracy.
The avatar innovation
“So from day one we told our students, use any AI tools you may find helpful,” he noted. This is stated clearly in the course materials. “This open approach led to meaningful conversations about source reliability, about hallucinations, about bias and what it means to trust or to question AI-generated answers. But actually, we didn’t stop there; we wanted to go one step further,” he explained.
The innovation of Strikaite-Latušinskaja and Jurcys began with this question: “What if each and every student could have a 24/7 version of their professor, ready to explain, guide or support them throughout the course? Imagine professors being available at 2am, without coffee, without the morning grumpiness, no office hours needed, just good WiFi.”
The young academics developed chatbot versions of themselves – Goda AI and Paul AI – which were trained on their academic work, lecture slides, research papers, course materials and so on. “They weren’t built on generic internet data.
“They were designed to reflect how we teach, what we value, and how we explain things. They became a way for students to interact with our knowledge even when we weren’t available, outside office hours,” Strikaite-Latušinskaja noted.
“As far as we know, this was the first time in the world where personal AI knowledge twins were integrated into a live university course.” She reads the AI-generated reports of student interactions every day, in part because it is informative to see what students ask and how the twins answer.
Strikaite-Latušinskaja continued: “The avatars weren’t meant to replace us. They were designed to extend what we could offer. The idea was to make our knowledge accessible in a student’s own rhythm. Of course, once we introduced the knowledge twins, we couldn’t just keep teaching the same way we did before.”
Pushing boundaries is a challenge
Pushing boundaries and changing can be challenging, she acknowledged: “To be honest, it would have been much easier for us to reuse last year’s PowerPoint slides, ask a few multiple-choice questions and call it a day. But our students were evolving, so we had to.
“First of all, we rethought all the lectures. If students could pose a question and instantly get definitions, summaries and other information, there was no point in us repeating textbook content in class.
“So we shifted toward more interactive sessions like debating, problem solving, analysing real-world examples, and so on. Since students could get answers quickly, from a teacher’s perspective this led to more sophisticated discussions. We’re not only talking about what something is; we’re focusing on why it is like that.”
Then the big challenge came – the exam. Students were allowed to use AI tools during the exam, which forced a complete rethink of how to assess learning in general.
“To prepare for the exam, we consulted with educators and edtech experts from Japan, the United States, Denmark and the Netherlands. It took us a while to come up with the exam task. In the end, we focused on open-ended questions requiring legal reasoning, practical application of knowledge and critical thinking.
“We didn’t test what they knew, but how they thought,” she stated.
Strikaite-Latušinskaja said students were positive about the experience of working with AI for learning. “One student told us, ‘I’ve never seen AI and professors working hand in hand before, and it gave me confidence and curiosity throughout the course.’ Another said, ‘I often worry that I’ll forget something or I’ll misunderstand. AI gave me more confidence.’”
Progress but with downsides
Strikaite-Latušinskaja and Jurcys are currently developing two sets of guidelines – one an AI knowledge plan and one teaching students how to prompt effectively. “What we also discovered, and it is an important issue, is that digital literacy among students is uneven.”
They are also working on a set of guidelines for other professors – how to use the AI twins, for example – because the plan is to expand this work to other faculties.
“While this initiative has been positively received by many, students included, we are also encountering some resistance from colleagues who are, to be honest, hesitant to adopt AI twins in academic settings,” Strikaite-Latušinskaja noted. One reason is a lack of familiarity with the tools; another is that using AI tools effectively requires additional time and resources.
Of course, said Strikaite-Latušinskaja, there are other challenges, such as academic integrity, data protection and transparency. “But ignoring AI doesn’t protect students. We believe it just leaves them unprepared. That’s why we believe that the future of education isn’t AI or human, but AI and human, working together, learning together, teaching together,” she emphasised.