New UK university principles promote AI literacy and integrity
The principles are aimed at helping universities “capitalise on the opportunities technological breakthroughs provide for teaching and learning”. They are supported by the 24 members of the Russell Group – top universities such as Cambridge, Oxford, Edinburgh, Imperial College London, King’s College London, the London School of Economics and Political Science, and University College London.
Gavin McLachlan, vice-principal of the University of Edinburgh, said: “Universities will now have a responsibility to ensure their students are AI literate, both to support the use of these tools in their learning, but also more widely to equip them with the skills they need to use these tools appropriately through their careers. Because it seems very likely every job and sector will be transformed by AI to some extent.”
The Russell Group principles – developed by AI and educational experts to guide the use of generative AI, new technology and software like ChatGPT – follow the government’s launch of a consultation on the use of generative AI in education in England.
In March, the Department for Education published a statement on generative AI in education. Its key messages are that while AI has the potential to reduce workloads and allow teachers to focus on excellence, institutions need to take steps to prevent malpractice involving new technologies to protect data, resources, staff and students.
The Russell Group AI principles
“This is a rapidly developing field, and the risks and opportunities of these technologies are changing constantly,” said Dr Tim Bradshaw, chief executive of the Russell Group.
“It’s in everyone’s interests that AI choices in education are taken on the basis of clearly understood values. The transformative opportunity provided by AI is huge and our universities are determined to grasp it.”
The main thrust of the principles is to ensure that generative AI tools are used to benefit students and staff – “enhancing teaching practices and student learning experiences, ensuring students develop skills for the future, and enabling educators to benefit from efficiencies to develop innovative methods of teaching”.
Aside from the Department for Education statement, the Russell Group referred to work of the Quality Assurance Agency for Higher Education and the Jisc National Centre for AI in Tertiary Education in helping to develop higher education’s understanding of generative AI.
For their part, universities have contributed sector-wide insights, said the statement, “and have been proactively working with experts to revise and develop policies that provide guidance to students and staff”. Going forward, it stressed, collaboration, coordination and consistency on generative AI will be crucial.
The Russell Group AI principles are:
• Increasing AI literacy: The first principle is that universities will support students and staff to become AI literate.
Students will be taught the skills needed to use AI tools appropriately in study and future careers, and staff will acquire the skills to deploy AI to support learning.
It is crucial, according to the Russell Group, that all students and staff understand the opportunities, limitations and ethical issues associated with using AI tools “and can apply what they have learned as the capabilities of generative AI develop”.
These issues include: privacy, data and intellectual property considerations; potential bias as generative AI replicates human biases and stereotypes; inaccuracy and misinterpretation, as AI may draw on incorrect, irrelevant and obsolete information; insufficient ethics codes embedded within AI tools; plagiarism as AI reproduces information developed by others; and exploitation in the processes by which AI tools are built.
• Equipping staff with AI skills: The second principle is that staff should be equipped to support students to use generative AI tools effectively and appropriately in their learning experience.
Russell Group universities will develop resources and training that enable staff to provide students with clear guidance on how to use AI in learning, assignments and research. Given the pace at which the technology is evolving, regular engagement between academics and students will be crucial to establish a shared understanding of appropriate AI use.
“The appropriate uses of AI tools are likely to differ between academic disciplines and will be informed by policies and guidance from subject associations, therefore universities will encourage academic departments to apply institution-wide policies within their own context.”
• Ethical and equitable AI use: Third, universities will adapt teaching and assessment to incorporate the ethical use of generative AI and support equal access.
Integrating generative AI tools into teaching methods and assessments can enhance the student learning experience, improve critical reasoning skills and prepare students for the real-world applications of AI technologies beyond university.
“Appropriate adaptations to teaching and assessment methods will vary by university and discipline, and protecting this autonomy is vital,” said the statement. All teachers should be empowered to design sessions, materials and assessments using AI tools. Professional bodies will be key in supporting universities to adapt practices, particularly regarding accreditation.
In future, there may be new technologies and AI tools that reside behind paywalls and restrictions. Universities will need to respond to ensure that students and staff have fair access to the AI tools and resources they need for teaching and learning.
• Upholding academic rigour and integrity: The fourth principle is that universities will ensure that academic rigour and integrity are upheld.
All 24 universities have reviewed academic conduct policies and guidance to reflect the emergence of generative AI, making it clear to students and staff where use is inappropriate and supporting them to make informed decisions and use the tools correctly.
Clear and transparent policies are critical to quality, the statement says. Academic integrity and the ethical use of AI can also be furthered by creating an environment where students can ask questions about AI uses and challenges openly and without fear of being penalised.
• Working collaboratively: The fifth principle commits universities to “work collaboratively to share best practice as the technology and its application in education evolves”.
Navigating an ever-changing technological landscape will require collaboration not only between universities and students, AI experts and leading academics and researchers, but also with schools, further education colleges, employers, and sector and professional bodies.
Policies and principles, and their implementation, will need ongoing evaluation, and an interdisciplinary approach must be deployed to address emerging challenges and promote the ethical use of AI.
Some concluding comments
“The rapid rise of generative AI will mean we need to continually review and re-evaluate our assessment practices,” agreed Professor Michael Grove, deputy pro-vice-chancellor (education policy and standards) at the University of Birmingham. “But we should view this as an opportunity rather than a threat.
“We have an opportunity to rethink the role of assessment and how it can be used to enhance student learning and in helping students appraise their own educational gain.
“By focusing our assessments upon higher order cognitive skills such as the application, analysis, synthesis and evaluation of knowledge, we can ensure we continue to produce graduates with the skills that are needed in a knowledge-based economy by business and industry, and who are equipped to be the future leaders in research and innovation,” he said.