
How will AI impact the recognition of qualifications?

Artificial intelligence builds on knowledge, understanding and the ability to act, which are in turn developed through academic research, teaching and learning. This is the traditional definition of learning outcomes, but AI raises issues that concern the important fourth dimension of learning outcomes – the willingness to act.

As the Council of Europe’s Reference Framework of Competences for Democratic Culture emphasises, we may be able to do things that, for ethical and other reasons, we should abstain from doing.

The European Commission has published ethical guidelines on the use of AI and data in teaching and learning, while the Council of Europe has published a book on AI and education seen through the lens of democracy, human rights and the rule of law.

What may be the gut reaction of many people, at least those of my generation – to say that we do not want to engage with AI or that we must somehow try to block its use – is not an option. The question is not whether higher education will need to deal with AI, but how we should do so.

That also holds true for the recognition of qualifications.

The classic case is one where a credentials evaluator examines a well-documented application by an individual who wishes to use his or her qualifications in a country other than the one in which the qualifications were earned. Thus, the qualifications must be assessed and given value in another education system and qualifications framework.

The methodology builds on at least two key assumptions: the documents are authentic and they certify the successful completion of academic work undertaken by the holder of the qualification. In other words, the documents have been issued by the institution whose name appears on the diploma, and the person named is the one who has earned the qualification.

Things are, of course, not always quite as straightforward as this.

Documents may be falsified or they may be issued by providers that are not recognised as being part of an education system.

Such providers are unlikely to have undergone proper quality assurance, which in Europe means that the institution follows the European Standards and Guidelines. They may even be degree mills, which issue diplomas to people who pay a fee, with little or no work required.

In some cases, applicants cannot, for good reason, fully document their qualifications. A typical case is that of many refugees, and the European Qualifications Passport for Refugees has developed an interview-based methodology for assessing whether refugees are likely to have earned the qualifications they claim, as well as for describing the assessment process.

An ally as well as a foe?

A key question is therefore whether AI changes the basic assumptions on which recognition is based.

It is possible that AI may make it easier to produce false diplomas that look authentic, but this has already been an issue for quite some time, and recognition specialists are quite advanced in identifying fraudulent documents, as shown through the FRAUDOC project led by CIMEA – the Information Centre on Academic Mobility and Equivalence – the Italian ENIC-NARIC. (ENIC is the European Network of Information Centres and NARIC stands for National Academic Recognition Information Centres.)

AI may also make it easier to identify false diplomas. The ENICs-NARICs must continue to invest in the competence and technology that enable them to identify even advanced attempts at fraud. In this, AI may be an ally as well as a foe, and those seeking to uphold rules and regulations are often at least one step behind those seeking to break them.

The broader question, however, is to what extent AI can do the academic work of humans. In recognition terms, this means we need to be sure that the holder of a qualification has undertaken the work certified.

Even if, as Thomas Jørgensen underlines in another article in University World News, tools like ChatGPT are better suited to producing shorter, administrative-type text based on fairly standardised information than for analytical and creative writing based on a variety of sources and references, we must assume that the technology will evolve rapidly.

Very recently, the head of the Institute of Education at Østfold University College in Norway conducted an experiment in which he made illicit use of ChatGPT in a home exam at master’s level and got a top grade. Even if AI will hopefully not fully replace human brains and certainly not human souls, identifying AI-based fraud may become more difficult than it is today.

Ultimately, it is higher education institutions that will need to ensure that their diplomas certify work undertaken by their graduates and not by robots – a term introduced by the Czech writer Karel Čapek and based on the Czech word robota, which is often translated as ‘work’ but more precisely seems to mean ‘serf labour’ or ‘drudgery’.

In the first instance, recognition specialists assess qualifications on the basis of the information they receive from awarding institutions. As the European University Association’s position statement shows, AI is a challenge universities take very seriously. My alma mater, the University of Oslo, is one of many institutions currently assessing the impact of AI on the way exams are organised.

The consequences of AI for recognition need to be part of this broader exploration, and it should be undertaken in close cooperation between universities, public authorities and recognition specialists in the ENICs-NARICs.

Changing laws and regulations

One part of the exploration should be to assess whether laws and other legal regulations are adapted to deal with the challenges of AI. It will not be possible to regulate all aspects of AI relevant to recognition, but national laws should be reviewed to make sure they include general provisions that are flexible enough to meet foreseeable needs.

A national legal framework that does not provide a basis for addressing fraud and other negative aspects of AI would be inadequate, but so too would be a framework that tried to overregulate. In this, legislation is no different with regard to AI than to other aspects of developing technologies with a transnational dimension. A law that tries to be too specific will most likely not have a long shelf life.

In the European context, one useful measure would be to develop a subsidiary text to the Lisbon Recognition Convention addressing the use and challenges of AI for recognition.

The latest report on the state of implementation of the Council of Europe-UNESCO Lisbon Recognition Convention includes a section on ‘digital solutions’, in which one of the recommendations is that “a new subsidiary text of the Lisbon Recognition Convention on digital solutions should be drafted”. Any such text will need to go well beyond ‘digital solutions’ and look at the much broader challenges and opportunities of AI in regard to recognition.

That leads us to considering how laws should be put into practice. The key principle of the Lisbon Recognition Convention is that one should recognise foreign qualifications unless one can demonstrate that there is a substantial difference between the qualification for which recognition is sought and similar qualifications in one’s own system – today we would use the term qualifications framework.

‘Substantial difference’ is a key concept of the Lisbon Recognition Convention, but it is exceedingly difficult to give a precise definition of what the term means in practice. Put simply, substantial differences are those that are important to the possible uses of a qualification, so they may not be the same, for example, for undertaking further studies as for entering a specific part of the labour market.

An adequate understanding of ‘substantial differences’ can only be developed through discussion and shared practice. This is what the ENIC and NARIC networks did almost 15 years ago, and the discussions were reflected in a book edited by E Stephen Hunt and myself.

Understanding the uses and abuses of AI

We need a similar debate on how the world of recognition should deal with AI.

There is also a likely parallel to substantial differences in that a good number of credentials evaluators and some national systems took an overly restrictive approach to substantial differences – and some still do. Not every difference is substantial, and not every use of AI can be a reason for non-recognition.

It is essential that higher education staff and students generally, as well as those working with the recognition of qualifications, share an understanding of what is proper and improper use of AI. It is only the latter that should lead to a qualification not being recognised.

For that to happen, however, institutions and their staff must make sure that improper use of AI does not lead to a qualification. A degree is a degree regardless of whether the holder has used AI as part of the work, as long as that use is proper.

A worst case scenario would be if institutions did not address issues of AI adequately in designing their study programmes, organising exams and assessing their students’ learning outcomes, or if credentials evaluators did not understand the uses and abuses of AI.

In both cases, we could face a situation where qualifications would not be assessed mainly on the basis of documentation, but where credentials evaluators would interview candidates for recognition or undertake additional testing.

What is necessary in cases where qualifications cannot be documented would be unnecessary and even harmful where they can. Such a practice would also go against the current work by ENICs and NARICs as well as the European Commission to promote automatic recognition.

Even if I believe the term automatic recognition is infelicitous because it can promise too much to those who do not know the recognition field, the reality behind it is important.

Thanks to the policy reforms of the European Higher Education Area, we can get easy answers to three of the five questions credentials evaluators are likely to ask about a qualification.

One, is it of adequate quality (answered if the institution issuing the qualification has undergone quality assurance in accordance with the European Standards and Guidelines)? Two, is the workload sufficient (answered through the use of European Credit Transfer System credits)? Three, is the level right (answered with reference to national qualifications frameworks and their relationship to the overarching framework of the European Higher Education Area and the European Qualifications Framework for lifelong learning)?

The profile and learning outcomes of the qualification will still have to be assessed in relation to the purpose for which recognition is sought.

The need for clear information

A third part of the recognition response to AI must be to provide easily understandable information on the potential but also the pitfalls of AI in higher education to prospective users of qualifications: students and their parents, but also employers and civil society.

My impression is that in choosing a study programme, students are insufficiently aware of the need to check whether the institution is recognised as part of a national education system and – if they wish to study abroad – whether the qualification they plan to earn is likely to be recognised when they return home.

At least in the immediate to medium term, AI is likely to make the situation more complex.

ENICs and NARICs would provide prospective users of qualifications with a valuable service if they published easily understandable information on the challenges AI poses to recognition. Since many users of qualifications do not ask enough questions, a list of ‘frequently asked questions’ may be less helpful than an indication of the questions that should be asked.

AI is likely to have an impact on the recognition of qualifications as much as on other areas of education. It will be important to review laws and regulations to make them fit for the challenges of AI. But developing understanding and practice in the higher education community and among recognition specialists is probably even more important.

On the basis of this shared understanding, ENICs and NARICs should provide easily understandable information to those who wish to obtain or use qualifications.

The recognition community can do that only in cooperation with the academic community of institutions, staff and students. The practice that will evolve should be balanced and recognise the use of AI as well as seek to counter its abuse.

Sjur Bergan was head of the Council of Europe’s Education Department until the end of January 2022 and was a long-time member of the Bologna Follow-Up Group. He remains a member of the European Higher Education Area’s Working Group on Fundamental Values and has written extensively on higher education, including as series editor of the Council of Europe Higher Education Series. He is one of the authors of the Lisbon Recognition Convention. In June 2022, Dublin City University awarded Bergan an honorary doctorate.