AHELO: the myth of measurement and comparability

The idea of AHELO, the Assessment of Higher Education Learning Outcomes, has been around for a decade. The basic concept is to test students in several academic fields in a variety of countries in order to compare learning outcomes across them.

AHELO is the brainchild of the Organisation for Economic Co-operation and Development, or OECD, which conducted and evaluated a feasibility study in 2012. Now, in 2015, the OECD is proposing a full-scale implementation of the project.

The pilot was deemed by most to be a failure, and it is very difficult to see how a resurrection of the project would yield any better results. Among the problems cited were the soundness of the instrument used – based on the US Collegiate Learning Assessment – and other methodological issues inherent to cross-national research.

AHELO advocates point out that the only way academic institutions and systems are compared today is through flawed rankings that use questionable methods and have little validity – and that these rankings do not include learning outcomes.

While these advocates claim that AHELO will not be a ranking, they propose to compare the achievements of institutions and countries – leading inevitably to a hierarchy. Indeed, the OECD’s Andreas Schleicher, in the Times Higher Education issue of May 7, noted that AHELO would likely emerge as another – and, in his view, more meaningful – ranking.

A bit of history

In January of 2010, the OECD’s Institutional Management in Higher Education, or IMHE, programme proposed the development of a learning outcomes test for global use.

A feasibility study was carried out, involving 17 countries and three American states, costing perhaps US$10 million. It included two disciplines – economics and civil engineering – plus a somewhat ill-defined category of ‘generic skills’.

The IMHE board recommended in 2012 that the project be discontinued. Thus, it is a surprise to many that the OECD administration is seeking to proceed with the full-scale AHELO effort.

This comes at a time when the OECD has systematically cut its programming in higher education by eliminating Higher Education Management and Policy, an excellent journal, and other initiatives. IMHE itself may be on the chopping block. Thus, it is questionable if the OECD has the internal capacity to thoughtfully administer a highly complex initiative like AHELO.

Who pays the bills?

The AHELO revised scoping paper, issued in April 2015, is somewhat unclear about who will be paying for what as the study proceeds. The costs will run into the millions of dollars during the several years of the initial study.

The individual countries joining AHELO will probably be expected to pay the costs both of their own participation and perhaps of the OECD bureaucracy responsible for central planning and coordination.

Some basic problems

From the beginning, a variety of questions were raised about the basic concepts and practicality of AHELO. Many of these questions proved to be sufficiently compelling that those responsible for evaluating the feasibility study recommended the end of the project.

The basic concepts seem to be largely unchanged in the April 2015 scoping paper, which is apparently the main roadmap for the new project.

It seems highly unlikely that a common benchmark can be obtained for comparing achievements in a range of quite different countries. Indeed, postsecondary studies start at different ages globally.

Some smaller and highly homogeneous places are likely to score better. Perhaps this contributes to the high scores of entities such as Finland and Shanghai in the secondary school PISA – Programme for International Student Assessment – test. At the school level, at least, some commonality of curriculum across countries is more likely. At the tertiary level, courses and curricula vary significantly and it is hard to imagine much commonality.

Further, who is to determine what the ‘gold standard’ is in different disciplines across institutions and countries? Thus, AHELO would be testing apples and oranges, not to mention kumquats and broccoli.

Universities that are highly selective in admissions would presumably do better than mass access institutions. AHELO, after all, would not be testing for ‘value-added’ knowledge, but accomplishment at a particular time. Large and highly diverse countries such as India, the Russian Federation and perhaps the United States can be expected to have a wider range of achievement and knowledge among students.

In differentiated systems, an additional question should be asked: will AHELO look at all of postsecondary education or only at the university sector?

The current project seems to emphasise generic skills even more than the feasibility study. These skills are mainly critical thinking and communication. Defining these elusive characteristics may be difficult – and interpreting them in different national contexts will be even more challenging. Critical thinking may be one thing in China and quite another in Norway.

Those few countries that have a strong liberal arts tradition where broad thinking and communication are embedded into the curriculum, such as many colleges and universities in the United States, may have an advantage. But even in the US, definitions of the liberal arts vary considerably among institutions.

Further, in most countries undergraduate education is highly specialised, with students often admitted to specific discipline-based faculties and having no opportunity to develop generic skills. Such skills may have been imparted during secondary studies, which last for varying periods of time in different countries, creating further challenges for measuring post-secondary achievement.

The two specific disciplines chosen for examination, civil engineering and economics, also present problems. While there have been some efforts to build a consensus in some fields concerning what is appropriate content for postsecondary study, this process is far from complete.

Even for civil engineering, there are no doubt variations among universities and countries with regard to an appropriate knowledge base and the depth of study. Economics is even more problematic since approaches to the field vary according to different academic traditions, political realities in various countries, and the like.

Further, a student enrolled in an undergraduate business curriculum may receive a quite different economics curriculum than someone in an economics department. And those who are studying in narrow faculty-based programmes may have deeper knowledge than students studying a broader curriculum.

If there are problems in these two reasonably well-defined fields, comparing student achievement in the humanities or most social sciences will prove much more challenging.

While AHELO intends to test students at the end of the first year of study, degree programmes lasting three years, as is now the norm in much of Europe, may well differ from programmes lasting four years, as is common in North America and much of Asia. More content may be required in a single year of a three-year programme.

These problems, and many others, have no doubt been experienced in the AHELO feasibility study – and might well have contributed to the recommendation not to proceed.

Let’s drop a bad and expensive project

Proceeding to a full-scale AHELO project seems like an extraordinarily bad idea. There is nothing close to a consensus, nor even a significant number of interested countries – the scoping paper seems to anticipate only eight. The costs are quite high – in the millions of dollars.

The OECD seems to want to keep close control over the study, although it will be funded almost exclusively by the participating countries. It is unclear how individual academic institutions or even governments will have a significant say in the management or conceptualisation of the study.

It is also unclear what will be learned from the results of AHELO – and major questions remain about the basic methodology, assessment instruments to be used and orientation of the effort.

Much money has already been spent, some would say wasted, on the feasibility study. Now there is the opportunity to save considerable time, effort and money. Those genuinely concerned about the quality of student learning and learning outcomes might better focus on developing authentic assessment tools that universities and colleges can use in self-evaluation and for self-improvement.

Philip G Altbach is research professor and founding director of the Center for International Higher Education at Boston College, USA.