
What makes a real international university ranking?
Around the world, university rectors, presidents and managers are bracing themselves for the next wave of classifications called, rightly or wrongly, ‘rankings’. But before the new ranking wave rolls in, we should ask: what makes a real international university ranking?

The first international university ranking was produced by Asiaweek in 1999 and 2000, followed in 2003 by Shanghai Jiao Tong University’s Academic Ranking of World Universities (ARWU). The Webometrics Ranking and the Times Higher Education – Quacquarelli Symonds World University Rankings both started in 2004. Since then, international university rankings have proliferated.
The IREG Observatory on Academic Ranking and Excellence, whose mission is ‘to act as a repository of information about rankings and to keep track of the constantly evolving and diverse world of rankings’, commissioned the Perspektywy Education Foundation (Poland) to prepare a comprehensive IREG Inventory on International Rankings that would serve stakeholders such as students, faculty, administrators and policy-makers.
Perspektywy collected the addresses of all potential international (global or regional) rankings and then sent questionnaires asking the authors of each ranking to provide detailed information on its methodology, scope, mode of publication and other features. If there was no response, publicly available information was used. A problem then arose: defining which rankings should be included in the inventory.
The term ‘ranking’ is often overused. A list of universities or colleges based on a single indicator should not be considered a ranking. A genuine ranking requires at least two indicators with assigned weights, unless users are allowed to set the weights themselves. The choice of indicators and their weighting reflects the concept of institutional quality held by the compilers of the ranking.
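To make the weighting idea concrete, here is a minimal sketch of how a composite ranking score can be computed as a weighted sum of indicators. The indicator names, weights and scores are purely illustrative assumptions, not those of any actual ranking.

```python
# Illustrative composite ranking score: a weighted sum of indicators.
# All indicator names, weights and scores below are hypothetical.

INDICATOR_WEIGHTS = {
    "research_output": 0.40,
    "citations_per_paper": 0.30,
    "student_staff_ratio": 0.20,
    "international_outlook": 0.10,
}

def composite_score(scores: dict[str, float],
                    weights: dict[str, float] = INDICATOR_WEIGHTS) -> float:
    """Return the weighted sum of indicator scores (each on a 0-100 scale)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[name] * scores[name] for name in weights)

# Two hypothetical universities, ranked by composite score.
universities = {
    "University A": {"research_output": 82.0, "citations_per_paper": 75.0,
                     "student_staff_ratio": 60.0, "international_outlook": 90.0},
    "University B": {"research_output": 78.0, "citations_per_paper": 88.0,
                     "student_staff_ratio": 70.0, "international_outlook": 65.0},
}
ranked = sorted(universities, key=lambda u: composite_score(universities[u]),
                reverse=True)
print(ranked)  # ['University B', 'University A'] under these weights
```

Note that the order depends entirely on the chosen weights: where users are allowed to set the weights themselves, the same indicator scores can produce a different ranking.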
What rankings are included in the IREG Inventory?
In defining what rankings should be in the IREG Inventory, the authors of the project decided that the ranking should:
• Include two or more indicators or metrics;
• Include at least one indicator or metric that measures or is related to academic quality;
• Have been published at least twice;
• Have been published since 2014;
• Have a transparent methodology published in English;
• Present results which are accessible on the internet.
The inventory does not count as independent those regional rankings generated simply by applying a regional filter to a global ranking. The regional rankings that are included either use recalibrated indicators from the global rankings or combine those indicators with new ones.
The IREG Inventory on International Rankings contains a total of 45 rankings: 21 global rankings (including four ‘sub-rankings’), nine regional rankings (three Asian, two Latin American, two Arab region rankings and two BRICS rankings), five rankings by subject, eight rankings of business schools and two rankings of national higher education systems.
It is worth noting that, of the 45 international rankings that meet the strict criteria set by the inventory, more than half (24) have been prepared by just four ranking organisations: QS rankings (11), Times Higher Education (THE) rankings (seven), ShanghaiRanking Consultancy rankings (three) and US News rankings (three). Altogether, the inventory covers 22 ranking organisations. We can safely say that a year from now there will be more rankings and more ranking organisations.
Keeping up with the rankings
The world of international rankings is constantly changing. More and more rankings are appearing, they cover an ever-larger number of higher education institutions and they are becoming more regional. The ‘Top 1,000’ has become a standard.
But with the increasing number of institutions, a problem arises. Are the percentage differences between institutions in the lower half of the rankings big enough to outweigh the effect of measurement error? We may soon see publications analysing the effects that the growing number of ranked institutions has on the quality of rankings.
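As a rough illustration of this concern, with entirely hypothetical scores and a crude error model, the sketch below treats a rank difference as meaningful only when the score gap exceeds the combined measurement error of the two scores.

```python
# Hypothetical illustration: a rank difference is only meaningful if the
# score gap exceeds the combined measurement error of the two scores.

def gap_is_meaningful(score_a: float, score_b: float, error: float) -> bool:
    """Crude test: is the gap larger than both scores' error bands combined?"""
    return abs(score_a - score_b) > 2 * error

# Near the top of a table, score gaps tend to be wide...
print(gap_is_meaningful(95.0, 90.0, error=1.0))  # True

# ...but around position 900, scores are tightly bunched.
print(gap_is_meaningful(23.4, 23.1, error=1.0))  # False
```

Under these assumed numbers, the relative order of the two lower-ranked institutions conveys no real information.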
Analysis of the indicators used in international rankings also highlights the controversial and outdated nature of reputation-based indicators and their excessive weight in, for example, the QS and THE world rankings. The authors of an interesting analysis published in PLoS ONE propose that “the ideal ranking system limits the significance of peer reputation to no more than 10%”.
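In that spirit, here is a hedged sketch of what capping reputation might look like arithmetically: the reputation weight is limited to 10% and the excess is redistributed proportionally across the remaining indicators. The weights are hypothetical, not those of QS, THE or the PLoS ONE authors.

```python
# Hypothetical sketch: cap the reputation indicator at 10% of total weight
# and redistribute the excess proportionally across the other indicators.

def cap_reputation(weights: dict[str, float], cap: float = 0.10,
                   key: str = "reputation") -> dict[str, float]:
    if weights.get(key, 0.0) <= cap:
        return dict(weights)  # already within the cap
    excess = weights[key] - cap
    others_total = sum(w for k, w in weights.items() if k != key)
    return {k: (cap if k == key else w + excess * w / others_total)
            for k, w in weights.items()}

# E.g. an illustrative scheme that gives reputation 40% of the weight:
print(cap_reputation({"reputation": 0.40, "citations": 0.35, "staff": 0.25}))
# -> roughly {'reputation': 0.10, 'citations': 0.525, 'staff': 0.375}
```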
Another issue is the rapid spread of subject rankings: they cover ever more disciplines and are inspiring new methodologies. Positive examples here are ShanghaiRanking’s Global Ranking of Sport Science Schools and Departments and the Global Fashion School Rankings. It is also clear that new indicators are needed for disciplines such as medicine and engineering.
Comparable information
An important observation arising from the work on the IREG Inventory concerns the databases used for ranking. When it comes to research, the Elsevier and Clarivate Analytics databases are the obvious sources. However, if rankings are to rest on a valid and transparent methodology, they must also capture the teaching and learning processes.
We urgently need solid databases that cover the teaching mission of universities. Otherwise, we will find ourselves in a ranking trap, since such seemingly obvious terms as student, academic teacher and researcher are defined differently in different countries.
The call for such data should be addressed first and foremost to the international organisations devoted to education, such as UNESCO and the OECD. These organisations have, however, abdicated their responsibilities, failing to produce a comparable international database on education. A few years back, Professor Philip Altbach wrote about this in University World News. Unfortunately, his call remains unanswered to this day.
Waldemar Siwinski is vice-president of the IREG Observatory on Academic Ranking and Excellence. Richard Holmes is editor of University Ranking Watch.