
How useful are rankings? Four questions to help you decide
Ranking season is upon us again, or maybe it is just the buzz surrounding the publication of the Shanghai ranking that creates this impression. Creating impressions and buzz does, indeed, seem to be one of the main functions of rankings. Even university leaders who are sceptical of rankings tend to comment positively when their own institutions do well. Leaders of institutions that do less well may not feel well placed to voice criticism.
There is, however, good reason to raise questions of principle about rankings, unrelated to a specific ranking. Andrejs Rauhvargers’ reports for the European University Association from 2011 and 2013 still deserve to be read, even a decade later.
To help our reflections on rankings, we should ask four questions.
Are the results reliable?
The first question is whether we can trust the results of rankings. There is little reason to doubt the technical competence of most ‘rankers’. There is, however, greater reason to question some of the choices they make, and these questions reflect the fact that reducing the highly complex reality of our varied higher education landscape to a simple ranking is not a straightforward proposition.
Research performance in disciplines that depend little on specific national contexts is less difficult to measure than most other elements of higher education performance.
The number of Nobel Prizes or Fields Medals that researchers at, or graduates from, a given university have won over the years is an interesting piece of information, but what do prestigious prizes for research in medicine, chemistry, physics, mathematics and economics (strictly speaking not a Nobel Prize but an award “in memory of Alfred Nobel”) say about the broader quality of the institution?
We can assume that working with a Nobel Prize winner is highly stimulating for other researchers, including advanced students. But what do these prestigious awards, or other elements that measure research performance in ‘hard sciences’ and economics, say about the institutional learning environment for undergraduates, or for that matter about the institution’s performance in humanities, law or social sciences?
What do rankings say about the overall quality of a university if they focus on a limited range of disciplines and rely on data for publications published in English in international journals?
In disciplines such as those mentioned, this may be a relevant measure of research performance, even if limiting the scope to publications in English is not a trivial choice. In many other disciplines, this is a much less relevant measure.
Where are the comparable fora for publications in Armenian or Finnish in humanities that may be of high quality and raise issues of broad relevance? What is the relative value of fairly short journal articles with many co-authors, common in natural sciences and medicine, compared to book-length publications by single authors, more typical of humanities and at least parts of social sciences?
The point is, of course, not to insinuate that longer publications mean higher quality but to illustrate that assessing research, learning and teaching, or the societal impact of higher education, may well be too complex a task to lend itself to the simplified world of rankings.
Are rankings relevant?
Rankings aim not only to say something broadly valid about the quality of institutions, but also to provide reliable information on their relative quality. It is better to be ranked as number 50 than as number 65 – or is it?
If the methodology of the rankings is reliable – and it is not obvious that it is – would the relative ranking be relevant? Provided the methodology is sound, it would make a difference whether an institution is among the top 100 or the top 1,000. But does it matter whether it is ranked as number 25 or 30?
My alma mater, the University of Oslo, provides good quality research and teaching in a broad range of disciplines, is certainly better in some academic areas than in others and plays an important role both nationally and internationally.
The fact that the University of Oslo was ranked as number 61 in the Shanghai ranking in 2021 and as number 67 this year does not say anything meaningful about the quality of the institution or about the student experience. It certainly does not, in any meaningful way, indicate that its quality has fallen, and if it were to be ranked as number 63 next year, it would also not indicate that its quality had risen.
Should rankings be used in decision-making?
The questions raised about the reliability and relevance of rankings should already encourage policy-makers and funders, whether public or private, to exercise caution in using rankings as a basis for their decisions. Beyond these questions, however, policies for the development of higher education and the funding that should in principle accompany these policies must be based on far broader criteria.
We need higher education institutions that aim for, and achieve, excellence. That excellence, however, cannot necessarily be expressed through rankings.
We need top-rated research in all disciplines, top-rated teaching and learning, institutions that are audible voices in the national and international debate about important societal issues such as democracy and sustainable development, institutions that work with their local communities and institutions that provide good learning environments for students whose intellectual potential exceeds the formal schooling they have been offered.
Educational quality is a complex phenomenon, and no education system can be good if it leaves students by the wayside.
Some institutions excel in several of these areas, but it is legitimate for institutions not to aim to do natural science research at top international level. Higher education institutions should be assessed according to the degree to which they fulfil their stated mission as long as that mission is legitimate and well developed. Public funding must take due account of the diversity of universities’ missions.
Is everything the fault of rankings?
It is perhaps understandable that both policy-makers and the public at large wish to have a simple and seemingly ‘objective’ measure to determine which is the ‘best’ university in their own country and how it compares with ‘the best in the world’.
Rankers may well claim that it is the press, political decision-makers and the public more broadly who use the results of their rankings for far more than they are worth and for purposes the rankers did not intend them to fulfil.
Ranking organisations have, however, not been very vocal in pointing out the limitations of their tools, and the Shanghai ranking, for all its limitations, does actually claim to be an “Academic Ranking of World Universities”, whatever that may mean.
One could have hoped that at least the more serious parts of the press would ask difficult questions rather than writing uncritical headlines on the basis of unreliable results. Ultimately, however, the higher education community itself carries an important part of the responsibility for developing a soundly critical approach to rankings.
Institutional leaders need to emphasise that rankings give only a very partial impression of the quality of higher education – even when their institutions do well. Higher education researchers could do more to identify the weaknesses of rankings and to assess whether meaningful rankings are a realistic prospect.
Not least, public authorities need to give more importance to developing and maintaining diversified higher education systems that fulfil all the major purposes of higher education, which the Council of Europe has defined as preparing for employment, preparing for life as active citizens in democratic societies, personal development and the development and maintenance of a broad and advanced knowledge base.
Giving importance to what can be measured is an understandable temptation, but measuring what is important and recognising that some things cannot be measured reliably is a responsibility that public authorities and the higher education community share. It is not one rankings are likely to help fulfil.
Sjur Bergan was head of the Council of Europe’s Education Department until the end of January 2022 and a long-time member of the Bologna Follow-Up Group. He remains a member of the EHEA (European Higher Education Area) Working Group on Fundamental Values and has written extensively on higher education, including as series editor of the Council of Europe Higher Education Series. In June 2022, Dublin City University awarded Sjur an honorary doctorate.