EUROPE

AI in qualification recognition: Risks vs opportunities

The debate about the risks and opportunities of artificial intelligence involves many sectors of society. Higher education is one of them, and the recognition of qualifications is one part of it.

In the recent Tirana Communiqué, signed in the Albanian capital at the end of May 2024 by the ministers responsible for higher education in the countries that make up the European Higher Education Area (EHEA), there is a commitment to support “the ethical, trustworthy, responsible, and rights-based use of AI”.

Ministers have asked that consideration be given to the wider and longer-term impact of the digital transition, including AI, in particular with regard to the three EHEA key commitments (qualifications frameworks and the European Credit Transfer and Accumulation System, recognition of qualifications, and quality assurance) and to the use of Bologna Process tools.

The communiqué builds on the previous 2020 Rome Communiqué, in which the term artificial intelligence appeared for the first time, already with a focus on ethical standards and human rights.

The human rights approach underpins the Council of Europe (CoE) Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, also adopted in May 2024, and education is the focus of the CoE publication, Artificial Intelligence: A critical view through the lens of human rights, democracy and the rule of law.

According to this report, some scientific publications are exploring AI support for administrative and institutional services, and some higher education institutions (mainly in the United States) already use AI-supported software for enhancing their admissions processes.

The AI Act

May 2024 also saw the Council of the European Union approve the law aiming to harmonise rules on artificial intelligence in the European Union, the so-called AI Act.

Aiming to promote a European, human-centric approach to AI, the regulatory framework follows a ‘risk-based’ approach (the higher the risk, the stricter the rules), classifying the risk associated with the use of AI into four categories: minimal or no risk, limited risk, high risk, and unacceptable risk.

AI systems classified as posing an unacceptable risk will be banned in the EU, while high-risk AI systems will be subject to a set of requirements and obligations before they can gain access to the EU market.

Annex III of the AI Act defines high-risk AI systems in a section on “education and vocational training”. Among others, it lists AI systems intended to be used “to determine access or admission or to assign natural persons to educational and vocational training institutions at all levels”; “to evaluate learning outcomes, including when those outcomes are used to steer the learning process of natural persons in educational and vocational training institutions at all levels”; and “for the purpose of assessing the appropriate level of education that an individual will receive or will be able to access, in the context of or within educational and vocational training institutions at all levels”.

On a global scale, UNESCO has a specific focus on the use of artificial intelligence in education. The 2019 Beijing Consensus on Artificial Intelligence and Education refers to the use of data and AI in transforming evidence-based policy planning processes, and the role of AI in enabling flexible learning pathways and the accumulation, recognition, certification and transfer of individual learning outcomes.

The implementation of innovative technological solutions that can improve processes is significant for its potential contribution to the fulfilment of the Sustainable Development Goals (SDGs), in particular SDG 4, which aims to guarantee inclusive education and equitable access and to promote lifelong learning opportunities for all.

A cluster of questions

The development and potential use of AI in the recognition of qualifications in higher education pose a number of questions. Some of them are explored in the report, Artificial intelligence and recognition of qualifications: opportunities and risks from an ENIC-NARIC perspective, published by CIMEA, the National Information Centre on recognition of qualifications in Italy and part of the ENIC-NARIC networks, which comprise 56 centres.

One cluster of questions relates to equity: given the massive amount of data that many AI systems rely upon, the risk of unequal access to such data is a significant challenge.

What happens if a recognition authority or a higher education institution does not have an equally massive archive of data including qualifications and the results of assessments already carried out?

Will it be possible, for instance, for higher education institutions and ENIC-NARIC centres to share access to this data in a cooperative manner, in full respect of national and international regulations?

Can AI support the right to fair recognition of qualifications, in line with the principles of the Lisbon Recognition Convention, the international convention regulating recognition of qualifications in the European region?

Another set of questions concerns the broader topic of the learning outcomes that qualifications should certify, in relation both to teaching and learning and to academic integrity.

Is AI a tool that supports the quality of learning in a context of transparency and integrity? Or to what extent could it be used to cheat and obtain a qualification that has no authentic knowledge behind it? Can we trust learning outcomes in the AI era?

Another dimension is whether, and to what extent, AI systems can support international academic mobility, for instance, with tools that support teaching or help mitigate some of the barriers to mobility, such as language issues.

One key question concerns the potential impact of AI on the recognition process: whether, and to what extent, it could support faster and fairer recognition by assisting with and automating the most routine work, or whether its use presents more risks than opportunities (and more costs than benefits).

The CIMEA report addresses the main questions regarding the assessment of a qualification’s comparability with the corresponding qualification in the receiving education system, “deconstructing” the different phases of the process and analysing the potential use of AI in each phase (and the related risks).

It also discusses the possible use of AI to support human decision-making in the verification of the authenticity of documents and the detection of fraud: for instance, natural language processing to analyse the correctness of qualifications, machine learning to identify fraud, and computer vision to make anomalies easier to spot.

Three key considerations

The potential use of AI in the sector has been explored already in a number of Erasmus+ funded projects, such as FraudSCAN – False Records, Altered University Diploma Samples Collection and Alert for NARICs, and the more recent project MARTe – A technological approach to micro-credentials which applies text-mining technology to the analysis and identification of common patterns in learning outcomes.

The reports mentioned above suggest three main considerations when it comes to AI and qualifications recognition.

Firstly, the importance of data and of recognition workflow management.

Several factors could hinder exploration of the potential use of AI in this area: the lack of a fully digitalised workflow in the recognition process within institutions and organisations; the fragmented collection of data caused by the use of different software and applications across the lifecycle of students and qualifications (for instance, one software system for admissions, one for managing students’ academic careers, one for awarding qualifications, etcetera); and a lack of awareness of the importance of data-driven decisions.

Secondly, the need for AI to feature in the professional development of credential evaluators, admissions officers and staff performing recognition processes.

AI literacy; knowledge of key regulatory frameworks at national and international level and of the ethical implications of the use of AI in recognition, access and admissions; and, at the very least, basic data analysis and data interpretation capabilities seem to be an increasingly relevant part of the knowledge, skills and competences required of credential evaluators.

Thirdly, capacity building, training, the exchange of practices and peer support can play a role in fostering applications of AI that are ethically sound, human-centred and able to underpin a quality recognition process.

Chiara Finocchietti is director of CIMEA-NARIC Italia, the Italian ENIC-NARIC centre, and president of the ENIC network. Serena Spitalieri is president of APICE, the Italian Professional Association of Credential Evaluators.

This article is a commentary. Commentary articles are the opinion of the author and do not necessarily reflect the views of University World News.