The international university ranking system has entered a state of rapid transformation. The time when the ‘big three’ were the main players on the international university ranking scene is gone.
The Shanghai, Times Higher Education and QS rankings may be older, but otherwise they are no better than the University Ranking by Academic Performance, or URAP, from Turkey; the CWTS Leiden Ranking from the Netherlands (based on bibliometric methodology); the Jeddah ranking from Saudi Arabia (which uses a broad spectrum of international academic awards); or the global US News ranking introduced last year by Bob Morse of US News & World Report, the most experienced provider of educational rankings.
Everybody follows the latest announcements from U-Multirank with interest. This European project, however, cannot liberate itself from political correctness (the European Commission foots the bill) and remains a giant database, albeit one with information gaps here and there. It still has the potential, though, to transform into a bona fide ranking.
Many countries look favourably at Webometrics (after all, they can find their universities there), but there is no evidence that a strong presence on the internet proves the quality of a higher education institution.
Regional rankings are much talked about these days. In the race are Times Higher Education, or THE (Asia and the BRICS & Emerging Economies countries, with an Africa ranking under preparation), QS (Asia, Latin America, the Arab Region and the BRICS countries) and US News (the Arab Region).
It is worth mentioning that national rankings seem to be getting a makeover as well. At the NAFSA: Association of International Educators 2015 conference there was a special session on national rankings, and on the website www.ireg-observatory.org one can find the IREG Inventory of National Rankings.
But the real change in the ranking field is coming from a new direction defined as rankings by subject. It is more than a change – it is a revolution in understanding what academic rankings represent today, what they can be and what purpose they can serve.
In June an international conference, the IREG Forum on Rankings by Subject, took place in a quiet, picturesque town in the north of Denmark. The conference was organised by the IREG Observatory on Academic Ranking and Excellence. The introduction to the conference states: "Rankings by subject illustrate that many universities not visible in major academic rankings perform remarkably well in limited academic fields and study areas."
The debate that ensued at the conference suggests that hopes placed in rankings by subject are counterbalanced by serious obstacles that have still to be overcome.
Growing interest in rankings by subject is inevitable. International rankings provide only a very narrow, limited picture of global higher education. In fact, what we see in these rankings is no more than the tip of the iceberg. There are possibly as many as 19,000 higher education institutions worldwide, yet global rankings typically include only some 500 of them.
Every university has its stronger and weaker departments. Understandably, an approach focused on specific fields provides a larger group of universities with the chance to be visible, to stand out in the rankings, at least in some fields. Consequently, there is an expectation in academic circles that ranking organisations will address the issue of ranking by subject.
Speaking at the IREG Forum, Professor Victor Koksharov, rector of the Ural Federal University, or UrFU, in Yekaterinburg, presented a telling example. His university participates in the 5-100 Russian Academic Excellence Project – one of the most ambitious projects in the world aimed at pushing a group of Russian universities up the global rankings. Since advancing up the ranking for an institution as a whole is an extremely difficult task, the university leadership started to identify its strengths and the fields where the university could advance faster.
With the help of Thomson Reuters, UrFU identified 36 research areas in which the university could claim to be in the world's top 10% in terms of publications, and another 36 fields in which it could get there fairly soon. These 72 narrowly defined fields were considered to be the areas the university should develop.
Unfortunately, the present generation of rankings by subject does not provide much help in monitoring progress in these areas. As one of the IREG Forum participants put it: “Rankings by subject are young and weak!” It is indeed true that their methodology is still in the early stages of development.
There are three main ways the problem of rankings by subject has been approached so far.
One way is characterised by methodology designed and prepared specifically to suit a given field. Let’s take a look at the MBA ranking by the Financial Times. It is an example of a ranking where the indicators are tailored to the needs of the discipline.
We don't see indicators based on publications, citations or the H-index. There is no mention of Nobel Prizes or Fields Medals. Instead, we find such words as salary, money, career, success and employment. In the case of business education these words represent the key elements.
The other way is to extract relevant data from the data collected for the overall main ranking. In this case rankings by subject are byproducts of the main rankings.
Every year Times Higher Education publishes six rankings by subject. These are classic extracts from the main rankings; they use the same criteria as the main ranking with somewhat different weights assigned to them. Similarly, the QS Rankings by Subject are an extract from the QS overall ranking. Four criteria from the main ranking are used, with their weights somewhat differentiated: academic reputation, employer reputation, publications and H-index.
To this group we can also add the University Ranking by Academic Performance, or URAP, prepared by Professor Ural Akbulut from the Middle East Technical University in Turkey.
This approach distorts the higher education picture in some countries, since attention is focused only on universities that make it into the international rankings. This method leaves no chance for universities that excel in certain fields but are not included in the international rankings as institutions.
There can also be a mixed approach to ranking by subject, which uses data from the overall ranking but with some modifications. Such rankings – for example, the Shanghai ranking and the US News global ranking – while closely linked to the main global rankings, introduce modifications into the methodology that help capture the specific characteristics of particular fields. Notwithstanding this, the reliance on the limited set of indicators characteristic of the overall international rankings produces rankings by subject of inadequate quality.
In early spring, Quacquarelli Symonds, or QS, sent out the preliminary results of the QS World University Rankings by Subject 2015 to universities and announced the date of the launch ceremony.
Some universities were very surprised to learn that their faculties did not achieve a high position in this ranking. QS postponed the launch and the 'proper' ranking was launched with a two-month delay. This shows that it is no longer possible to publish a 'ranking by subject' as a low-cost extract from the main league table. There is a need for a new, fresh approach to this very important group of rankings.
Subject-specific characteristics
The main challenge facing the authors of rankings by subject is how to define the critical characteristics of a given discipline or field and find indicators that will best reflect these characteristics. The rich professional literature on quality in higher education suggests that international rankings are doing well only in the area of ‘science’.
This is quite natural and intuitive given that results in this area take the form of publications. By counting publications or calculating the Hirsch index, it is possible to compare institutions or faculties fairly accurately in such fields as mathematics, physics, chemistry and others falling into the 'science' group. The use of indicators based on publications in other fields of research and teaching is less obviously appropriate.
When we want to build our dream house and are looking for a good architect, do we ask about the number of citations or the Hirsch index? We would rather ask to see the buildings he has designed, and ask people whether they are comfortable living in his houses. Why should we do otherwise when assessing and comparing faculties of architecture?
The same is true when it comes to medicine. When we are looking for a good hospital, we are not interested in the publications and Hirsch index of the doctors working there. Instead, we want to know the patients' opinions and read an assessment by a professional medical association.
Such examples can easily be multiplied, but the conclusion is that each discipline has its own hierarchy of values. In building a ranking 'by subject' we need to identify these specific characteristics and use them in the ranking. It will not be easy, but if we want rankings 'by subject' to meet our expectations, we absolutely have to do so.
The next methodological challenge will be to increase the number of institutions classified in rankings by subject. Aware of the shortcomings of their methodology, the authors of such rankings consciously limit the number of institutions in the rankings by subject.
Looking forward, we can expect more fields to be analysed. The question is: how many? For their analysis the Centre for Science and Technology Studies at Leiden University identified 769 fields of science. Currently, within the annual Leiden Ranking seven field rankings are published.
Universities are interested in rankings by subject, but it is students who could use these rankings the most. In a survey, "How Do Students Use Rankings?", conducted by the QS Intelligence Unit in London, Paris, Milan, Rome and Moscow, students were asked: do you find subject-specific or overall rankings more useful?
The overall response was overwhelmingly in favour of more specialised tools, with 78% of students saying subject-specific rankings were more useful than overall tables. Also, university managers and prospective students increasingly expect an in-depth analysis of institutions at a field or faculty level.
No doubt, the future of academic rankings lies with rankings by subject, but how we get there and when remains an open question.
Waldemar Siwinski is vice-president of the IREG Observatory on Academic Ranking and Excellence.