GLOBAL

What direction next for university rankings?
There are clear signs that a new order is emerging in the sphere of academic ranking. Over the past years rankings have stormed the higher education world and changed it or, at least, changed its image. But at the same time, rankings have also exposed their shortcomings and weak points. The authors of rankings seem to be aware of these shortcomings. In the 2016 ranking season that is coming to an end, we have witnessed a number of improvements in their scope and methodology.
The improvement of ranking methodology has been, to a considerable measure, linked to the needs of the so-called 'Excellence Initiatives' that governments in a number of countries established with the purpose of accelerating the development of a selected group of universities.
Jamil Salmi, former head of the World Bank programme on higher education, and Professor Isak Froumin, academic advisor of the Institute of Education at the National Research University Higher School of Economics in Russia, calculated that since 2000 over 30 such excellence programmes have been launched in 20 countries. Their total funding exceeds US$40 billion.
As a consequence of these initiatives, a group of so-called 'Accelerated' World-Class Universities has emerged. These institutions receive additional funding to speed up their transformation to world-class status. These additional funds are supposed to act in a similar way to how booster rockets help military jets take off.
Many excellence initiatives, including Russia’s 5-100 Project, use rankings as a handy tool to monitor the implementation of reforms. Excellence initiatives have already forced rankings to introduce changes into their methodologies.
At the International Conference on Excellence Initiatives organised in St Petersburg in June, I presented a summary paper on the main trends emerging in rankings. Let me expand here a bit more.
Trend #one: Including a large number of institutions
For some time the academic community has suggested that rankings should include a larger number of institutions. For the first decade of their existence, international rankings operated within the magic circle of the 'Top 100', 'Top 200' or, at best, the 'Top 500'. Yet, worldwide, there are close to 20,000 higher education institutions.
The analysis of a group of the leading 100 institutions (0.5% of the total number) may well be of great interest to experts in higher education and the press, but it is grossly unfair to the vast number of other universities and to the countries in which they operate.
The limit on the number of institutions that are ranked results from the methodology the rankings are based on. This is particularly true of the Shanghai Ranking, or Academic Ranking of World Universities. However, some newer rankings, such as the University Ranking by Academic Performance, or URAP, of the Middle East Technical University in Ankara, have overcome this limitation. The URAP ranking covers 2,000 institutions.
Thanks in large part to pressure from Russian universities in the 5-100 Project, some of the main ranking organisations, such as Times Higher Education and QS, have significantly increased the number of institutions in their rankings.
Earlier this year Times Higher Education published a list of 978 universities (it started with 200) and QS published a list covering 916 universities, hence doubling its original number. The US News Best Global Universities Ranking this year lists 1,000 institutions (it started with a list of 500 two years ago).
This trend will only become stronger. Within a year, I believe, ranking 1,000 universities will be the standard, and within three years international rankings will cover up to 2,000 institutions (10% of the total number). This should satisfy the ranking ambitions of many countries and their universities that know they don’t belong to the top 100 or the top 200 but believe they are good enough to deserve being ranked.
The new '1,000 standard' in global rankings will help dispel Professor Ellen Hazelkorn’s reservations, expressed at the end of the book The Global Academic Rankings Game, where she wrote: "While rankings have filled the comparability and accountability gap, their narrow focus on elite universities and research may be their Achilles' heel.
"The question is not whether the objective of building up globally competitive universities should be eclipsed but rather the extent to which pursuit of narrow ranking-led strategies is, wittingly or unwittingly, undermining broader social goals."
Trend #two: Development of rankings ‘by subject’
Another trend is the emergence and development of rankings ‘by subject’. The benefits of rankings ‘by subject’ seem so obvious that it is hard to understand why ranking organisations ignored this group of rankings for so long. It is quite natural that in every university there are some stronger and some weaker departments. In overall rankings these differences get lost.
Several months ago, in an article for University World News, I wrote: “The era of rankings by subject is coming”. I am glad my prediction has proven to be right.
When it comes to ranking by subject, two questions emerge: how many disciplines and how many universities should be analysed?
We note, with satisfaction, that the number of ranked subjects has been growing fast. This year alone, QS published a ranking of 43 disciplines, URAP a ranking of 41 and the US News Global Ranking a ranking of 27. Even the Shanghai Ranking for the first time published a new ranking in seven engineering disciplines (in addition to its earlier rankings in five broad fields and five subjects).
Times Higher Education published a ranking in eight broad fields, but it has announced that in the future its ranking will include several dozen fields of study.
There is a growing recognition that national rankings need to include rankings ‘by subject’. In its current edition, Poland’s Perspektywy University Ranking includes rankings of 47 subjects; the German CHE University Ranking ranks 40 subjects. Traditionally, the largest number of disciplines, 70 in all, is covered in The Complete University Guide in the UK.
I think it is realistic to assume that in the next few years rankings will include a minimum of 50 subjects and will cover no fewer than 500 institutions or faculties. This is both feasible and desirable.
In spite of progress in the sphere of rankings, there is still much to be done, especially regarding rankings ‘by subject’. The main challenge facing the authors of rankings is how to define the critical characteristics of a given discipline and then find indicators that best reflect those characteristics.
The professional literature on quality in higher education shows that international rankings are doing well only in the area of science. This is quite natural and intuitive as results in this area are in the form of publications. By comparing the number of publications and calculating the Hirsch Index, it is possible to compare institutions or faculties in such fields as mathematics, physics, chemistry or others falling into the ‘science’ group.
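As an aside, the mechanics behind this kind of comparison are straightforward. The following is a minimal illustrative sketch, not taken from the article and using invented citation counts, of how a Hirsch Index could be computed in Python from a list of per-paper citation figures:

def h_index(citations):
    # Largest h such that h papers have at least h citations each.
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Hypothetical example: five papers cited 10, 8, 5, 4 and 3 times
# give an h-index of 4 (four papers with at least four citations each).
print(h_index([10, 8, 5, 4, 3]))  # prints 4

The same calculation works whether the citation list belongs to an individual researcher, a department or a whole institution, which is part of why publication-based indicators are so convenient in the ‘science’ group of disciplines.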
The use of indicators based on publications as the main criterion to assess quality in other fields of research appears to be less obvious, particularly when it comes to rankings aimed at prospective students.
If we want to build a dream house and are looking for a good architect, we do not ask him for the number of citations or his Hirsch Index. We are more likely to ask him to show what he has already built and to ask people if they are comfortable living in those houses.
The same is true in medicine. If we are looking for a good hospital, we are not interested in the publications and Hirsch Index of its doctors. Instead, we want to know patients' opinions and the assessment of a professional medical association.
What matters most is that each discipline has its own hierarchy of values. Building a new ranking ‘by subject’ is not an easy task, but if we want rankings ‘by subject’ to meet expectations, it has to be done.
Incidentally, the IREG Observatory on Academic Ranking and Excellence has recently established two working groups to analyse and prepare a set of new indicators suitable for rankings in the fields of engineering and medicine.
Trend #three: More regional rankings
More and more regional rankings are appearing. This is quite understandable, as both student and staff mobility and academic cooperation tend to take place first within a particular region.
Most attractive, from a marketing point of view, are regional rankings of Asian and Arab institutions. Also interesting is the Latin American region.
The main problem with such rankings currently has to do with their methodology. Regional rankings look like twin brothers of the global rankings since, in practical terms, they are extracts from the global rankings. It is difficult to consider them autonomous, stand-alone rankings.
Trend #four: Renaissance of national rankings
Worth noting is the renaissance of national rankings. Every year a few new national rankings appear.
One such ranking has recently been published in India. The strength of such rankings comes from the fact that they can cover all institutions in the country. In addition, institutions can be evaluated through criteria and indicators that can be more accurately selected since all the institutions function in the same cultural and legal environment.
In 2015 and 2016 new rankings appeared in Colombia, in China (a ranking of 1,000 institutions), in India – prepared under the auspices of the Ministry of Human Resource Development – and in the United States, where a new ranking is published jointly by the Wall Street Journal and Times Higher Education.
These bring to 60 the number of rankings listed in the IREG Inventory of National Rankings.
Trend #five: New dimensions
Another trend is the search for a way to include in international rankings missions other than research. Especially important here are aspects such as excellence in teaching and the so-called third mission, or the university’s social mission.
This, perhaps, constitutes the biggest challenge facing rankings. So far, there are no easy answers. No internationally agreed standards exist. For example, ranking organisations addressing the excellence in teaching dimension mostly dance around the numbers related to teaching staff. However, some attempts to find possible solutions are being made.
Speaking of the search for ways to properly reflect this third mission in the rankings, it is worth mentioning the European Commission project called E3M, or European Indicators and Ranking Methodology for University Third Mission. The project did not lead to a new ranking, but a number of findings and conclusions published in its Green Paper are worth studying.
Recently, the Russian Union of Rectors has come up with a proposal for a new international university ranking. The new ranking, to be produced by Expert RA, is expected to be ready in autumn 2017. As its title, Three Missions of Universities, suggests, it will evaluate universities beyond traditional ranking criteria, adding new, more socially oriented elements.
Similar initiatives will most likely be taken up by other academic centres in an attempt to overcome the Achilles' heel of the current rankings by broadening the range of criteria and increasing their relevance.
Where do we go from here?
Contemporary national rankings first appeared when the internet emerged. International rankings are contemporaries of Facebook. But would anybody dare to declare Facebook the last word in the development of our ‘networked’ civilisation? Or would they say that the information and communication revolution has finished?
For the same reason it is not possible to say how rankings will look years from now and how big an influence they will have on the academic world and beyond. Let’s hope rankings will, on balance, continue to play a positive and creative role.
Waldemar Siwinski is vice-president of the IREG Observatory on Academic Ranking and Excellence.