Rankings - Higher education systems vs universities

International university rankings have become a familiar feature on the higher education scene. As their impact has grown, reactions have followed, running from enthusiastic adherence to passive resistance to outright criticism.

Thanks to this criticism, methodologies are improving: guidelines and safeguards have been developed (for example, the Berlin Principles) and are being followed up (for instance, by the International Ranking Expert Group).

Yet serious criticisms relate to the fact that, by definition, these rankings focus exclusively on individual institutions - the world-class universities - which are found only in a small cluster of countries.

Thus, university rankings ignore the vast majority of institutions worldwide that cannot compete on the same playing field as world-class universities. In turn, policy-makers tend to prioritise a small number of institutions in order to improve their country's position in the rankings, often at the expense of the rest of the higher education system.

To counter these unexpected and perverse effects, attempts are being made to measure, rank and compare national higher education systems rather than individual institutions. To figure out whether these attempts are successful, this article compares their results with those obtained by university rankings.

Selecting and comparing rankings

As a first step in the comparison, university rankings and system rankings need to be selected. The Academic Ranking of World Universities - usually referred to as the Shanghai ranking - together with the Times Higher Education and QS rankings are selected as the most popular and well-established league tables. Because of its innovative approach, the Webometrics ranking is added to this 'big three'.

As far as system rankings are concerned, the choice is limited, and Universitas 21 - or U21, led by the University of Melbourne in Australia - stands out as an obvious pick, with currently no real competitor, even though earlier works have explored ways to assess entire systems.

U21 uses 22 measures - 'desirable attributes' - grouped into four categories or modules: resources, environment, connectivity and output, weighted at 25%, 20%, 15% and 40% respectively.

Most measures draw on conventional and verifiable sources - OECD, University Information Systems and SCImago data, among others - and they provide a comprehensive view of the most important facets of higher education systems.

Particularly interesting is the inclusion of the unemployment rates of university graduates to reflect external efficiency, even if the measure needs some fine-tuning.

Another welcome feature is the effort to reflect the regulatory environment of higher education systems. However, the modalities needed to come up with an indicator for this dimension are elusive and rely on a combination of sources - a survey of U21 institutions, and data from renowned institutions and from websites.

Finally, the use of an 'overall' indicator built on the four modules' indicators is highly dependent on the weights of its components and, therefore, remains controversial because of the arbitrariness of such weights - a pitfall shared by university rankings.
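
To make the aggregation concrete, the overall score can be written as a weighted sum of the four module scores, using the weights quoted above (an illustrative formulation; U21's exact aggregation may differ in detail):

    \text{Overall} = 0.25\,R + 0.20\,E + 0.15\,C + 0.40\,O

where R, E, C and O denote a country's normalised scores on resources, environment, connectivity and output. Shifting any of these weights can reorder countries, which is precisely the arbitrariness noted above.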

Then, the results of the four selected university rankings need to be normalised at the country level so that the size effect is neutralised. More specifically, the number of top universities in each country is weighted by the higher education-aged population of the country. This indicator can be seen as reflecting the 'density' of world-class universities in each nation.
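
As an illustrative sketch of this normalisation, the density calculation could look like the following, where the country names and figures are hypothetical placeholders rather than actual ranking or population data:

    # Minimal sketch of the 'density' normalisation described above.
    # Counts and populations are hypothetical placeholders, not real data.
    top400_count = {"CountryA": 10, "CountryB": 3, "CountryC": 1}
    he_aged_population_millions = {"CountryA": 40.0, "CountryB": 2.5, "CountryC": 0.9}

    # Density: top-400 universities per million higher education-aged people.
    density = {
        country: top400_count[country] / he_aged_population_millions[country]
        for country in top400_count
    }

    # Rank countries by density, highest first.
    for country, d in sorted(density.items(), key=lambda item: item[1], reverse=True):
        print(f"{country}: {d:.2f} top-400 universities per million HE-aged people")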

This normalisation leads to several observations. First, there is no significant correlation between the number of top universities in a country and their density. Second, the normalised results of the four selected university rankings are very similar: their methodologies differ substantially on some points, but they also share common features.

Third, countries that can boast at least one of the top 400 universities in each of the four rankings constitute a rather homogeneous club of fewer than 40 members, mostly high-income economies.

Across the four rankings, the density of top universities is highest in small, rich countries - Denmark, Switzerland, Sweden and Finland, followed by Ireland, the Netherlands and Hong Kong.

Similar outcomes

Comparing the four normalised university rankings with the results produced by U21 (2012 edition) leads to a clear conclusion: there is a strong and positive correlation between the two sets of results.

To double-check this finding, correlations are also examined for the 2013 editions of both the Shanghai and U21 rankings, and the results show an even stronger association.

A further test correlates the results of each of the four U21 categories with those of the major university league tables. The correlations are significant and largely positive, regardless of the league table considered (with Shanghai showing the strongest association) and the U21 category selected (with resources and output showing the strongest links).
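
A minimal sketch of such a correlation test, using scipy's Spearman rank correlation on hypothetical country scores (the numbers below are placeholders, not the actual U21 or league table results):

    # Sketch: correlating a normalised university-ranking score with a U21
    # module score across countries. All values are hypothetical placeholders.
    from scipy.stats import spearmanr

    # Density of top universities (per million HE-aged people), by country.
    shanghai_density = [1.8, 1.5, 1.2, 0.9, 0.7, 0.4, 0.2]
    # U21 'resources' module scores for the same countries, in the same order.
    u21_resources = [92, 88, 85, 80, 74, 65, 60]

    rho, p_value = spearmanr(shanghai_density, u21_resources)
    print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")

A strong positive rho between two such series would correspond to the kind of association reported here.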

The only noticeable exception to the convergence of the two types of rankings is the United States, which comes first under U21 but does not appear among the winners of the university league tables when analysed in terms of density.

Results

These comparisons may lead to the idea that a high density of world-class universities guarantees that a country has a world-class higher education system. They may also give the impression that the similarity of results between U21 and the university rankings means that the former is no more informative than the latter.

Three types of observations suggest that such conclusions are not warranted.

The first is that U21 selects its 50 countries from among the G20 members and the countries that perform best in the National Science Foundation's international ranking of research institutions. Thus, although the pool of U21 countries is slightly larger than that of the 'big three' university rankings, this mode of selection constitutes a twofold bias toward wealthy countries and those investing heavily in research.

Second, U21 incorporates some of the indicators of the university rankings (Shanghai and Webometrics) in its own measures and even counts the number of world-class universities among its measures of output, which certainly explains the US exception.

Third, a reclassification of all 22 measures confirms the heavy bias toward research. Therefore, the convergence of the two types of rankings is almost inevitable and is a logical consequence of the methodology used by U21.

Finally, a critical element to keep in mind is that a world-class higher education system is an elusive concept encompassing many dimensions, running from equity in access and internal efficiency to teaching and learning, relevance to the socioeconomic fabric of the country, and external efficiency.

Indeed, these dimensions are difficult to capture and, despite U21's laudable attempts to reflect several of them, its measures fall short of fully accounting for the complexity and diversity of national higher education systems.

Room for improvement

Comparing national higher education systems across countries remains a priority. U21 has taken bold steps in that direction but needs to go further to demonstrate its usefulness.

Two routes are critical: first, digging further into the structure of the systems so that the rankings are better contextualised; and second, expanding the number and diversity of the countries ranked, data permitting, so that the exercise is more inclusive.

Taking these routes would certainly lead to results that are more clearly differentiated from those yielded by university rankings and would contribute to meeting the high expectations created by the U21 initiative.

The U21 rankings illustrate the vast potential of system rankings as important complements to university rankings and as contributors to better-informed decisions by higher education policy-makers.

* Benoit Millot is an independent consultant. He is a former education economist at the World Bank. E-mail: benoitmillot2013@gmail.com. This article, entitled "Top universities or top higher education systems?", was first published in the current edition of International Higher Education, number 75, Spring 2014, of the Boston College Center for International Higher Education.