
Academic rankings: The tide begins to turn

In my 2018 book, The Soul of a University, a whole chapter is devoted to saying (in effect) that the phenomenon of ‘university world rankings’ is really just a global confidence trick. At the time, this was a minority opinion. Five years later, there is evidence that the tide is beginning to turn. This change should give pause for thought to all those university leaders who still fawn on the commercial rankers.

The methodological argument against ‘university world rankings’ is well known and has been made many times. Essentially, it boils down to this: in order to compile a ranking, you need to make so many arbitrary choices between equally plausible alternatives that the result becomes meaningless.

It is not difficult to construct a university ranking. What is needed is not so much any technical skill as enough blind self-confidence to tell the world that the arbitrary choices you have made in constructing your ranking actually represent reality.

First, there is the choice of which categories of activities to evaluate. This choice is often driven by expediency because some activities (like research outputs) are easier to measure than others (like societal engagement). Naturally, the choice you make of what to evaluate will advantage some universities and disadvantage others.

Second, you have to choose performance indicators in your chosen categories and how to measure them. Research performance, for example, has many plausible indicators and whatever selection you make could easily have been different, with different outcomes. Also, when choosing performance indicators, you have to choose the manner and extent to which you use indicators of opinion vis-à-vis indicators of fact. ‘Reputation’, for example, is a matter of opinion, as is ‘student satisfaction’.

Third, for each performance indicator you have to come up with a number that represents your measurement of that indicator. Actually, the term ‘measurement’ is a dubious suggestion of objectivity. In practice, the so-called ‘measurement’ again requires a number of choices. You need to choose, for example, which data set to use and what level of reliability of those data sets you will be content with.

You also need to choose whether you will deal with gross numbers (which will favour larger institutions) or normalise the numbers according to the size of the institution (which tends to favour smaller institutions). Even normalising your numbers ‘relative to size’ involves a level of choice because there is no generally agreed definition of what the size of a university is.
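To make the point concrete, here is a minimal sketch, using invented figures for two hypothetical universities (the names and numbers are assumptions for illustration only), of how the choice between gross and size-normalised numbers can reverse an ordering:

```python
# Illustrative only: invented figures for one large and one small hypothetical university.
universities = {
    "Univ X": {"publications": 12000, "academic_staff": 6000},  # large institution
    "Univ Y": {"publications": 4000, "academic_staff": 1200},   # small institution
}

# Choice 1: rank by gross output (favours the larger institution).
by_gross = sorted(universities, key=lambda u: universities[u]["publications"], reverse=True)

# Choice 2: rank by output per member of academic staff (favours the smaller institution).
by_normalised = sorted(
    universities,
    key=lambda u: universities[u]["publications"] / universities[u]["academic_staff"],
    reverse=True,
)

print(by_gross)       # ['Univ X', 'Univ Y']
print(by_normalised)  # ['Univ Y', 'Univ X']
```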

Fourth, having already made many choices to arrive at a number for each performance indicator, you still need to decide on a formula for combining those numbers into one number (which would then deliver your ranking).

You could, for example, take the average – either mean or median. Or you could assign weights to each performance indicator, which can, of course, be done in infinitely many ways. There are many different ways of combining a set of numbers to yield one number, but there is no strong reason, either mathematical or empirical, for choosing one such method above any other.
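Again as a minimal sketch, with invented indicator scores for two hypothetical universities, the following shows how two equally plausible (and equally arbitrary) weighting schemes, applied to exactly the same scores, produce opposite orderings:

```python
# Illustrative only: invented indicator scores (0-100) for two hypothetical universities.
scores = {
    "Univ A": {"research": 90, "teaching": 60, "engagement": 50},
    "Univ B": {"research": 65, "teaching": 85, "engagement": 80},
}

def combined(university, weights):
    """Weighted sum of the indicator scores for one university."""
    return sum(weights[k] * scores[university][k] for k in weights)

# Two equally plausible (and equally arbitrary) weighting schemes.
research_heavy = {"research": 0.6, "teaching": 0.3, "engagement": 0.1}
balanced = {"research": 0.34, "teaching": 0.33, "engagement": 0.33}

for weights in (research_heavy, balanced):
    ranking = sorted(scores, key=lambda u: combined(u, weights), reverse=True)
    print(ranking)

# Output: ['Univ A', 'Univ B'] under the research-heavy weights,
#         ['Univ B', 'Univ A'] under the balanced weights.
```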

Any ranking of universities therefore reflects the choices made by the ranker at least as much as it might reflect any reality about those universities.

It is hard to escape the suspicion that rankers make their choices according to their own preconceived notions of which universities are ‘the best’. If a ranking did not fit their preconceptions, they would change their parameters rather than adjust their preconceptions – as has, in fact, happened.

What this means is that rankings are normative, not descriptive. They create a reality at least as much as they reflect a reality.

A false narrative

The conceptual argument against rankings is even simpler than the methodological argument: all ‘university world rankings’ are conceived in sin. Any such ranking suffers from the original sin of purporting to capture something which there is no reason to believe exists: a one-dimensional ordering in terms of quality of all universities in the world.

What any so-called ‘university world ranking’ wants you to believe is that given any two universities – any two universities at all, anywhere in the world – one of them is in some objective sense better than the other. This is assumed to be the case no matter how much these two universities might differ from each other.

University A, located (say) in Asia, might have an engineering school and a business school, but neither a school of medicine nor a school of agriculture, whereas University B, located in (say) South America, might have both medicine and agriculture, but neither engineering nor a business school.

Or one university might be located in a big city and go about its business with no particular regard to its immediate surroundings, while the other might be a rural university doing its utmost to work with local disadvantaged communities. Or one university might be focused on entrepreneurship and spin-offs, while the other is strategically committed to responding to the United Nations Sustainable Development Goals.

No matter. The whole point of a ranking is that one of universities A or B must be pronounced to be better than the other.

It is difficult to see why any experienced academic would believe this kind of fantasy. You might as well justify ranking an apple against an orange on the grounds that both are fruit.

Which raises a rather disturbing possibility: that many university leaders do not actually believe that rankings capture reality, but they do believe that the public believes it, and therefore, on supposedly pragmatic grounds, they deliberately play along with what they know to be a false narrative. Doing so is, of course, dishonest and hypocritical, but pragmatism is not necessarily congruent with ethics.

The pragmatic argument for playing along goes like this: rankings are a reality that cannot be wished away; they powerfully influence public perception and student recruitment, and therefore, whatever their conceptual shortcomings, it is better to join them than to try and beat them. Conveniently, this line of argument also fits quite neatly with academic vanity.

Often, those universities that do well on the rankings – even just momentarily – simply cannot resist the temptation to boast about it in public, even when simultaneously expressing private misgivings. It is a cheap shot, but it gains a quick win, so it is hard to resist.

Those who have done less well, on the other hand, feel that they cannot speak out against rankings lest they be accused of sour grapes. In this manner, compliance follows in the wake of vanity, and the entire rankings-chasing exercise becomes self-perpetuating.

One sideline of the pragmatic line of reasoning sometimes heard is that it does not really matter if the rankings are normative rather than descriptive, because it is useful to have an independent arbiter of quality.

In response one might well ask: when and how did academics outsource the arbitration of academic quality to some commercial arithmeticians who endlessly recycle university data for profit?

The power of ranking

Consider the point we have reached. Despite fundamental flaws, the phenomenon of university rankings has grown within two decades to become the strongest single force in global higher education. Rankings have become big business.

What started in the early 2000s as a curiosity in a small London magazine then called The Times Higher Education Supplement, for example, has become an international commercial enterprise, endlessly but profitably recycling data, much of which comes from the universities themselves.

Somehow the rankers have manoeuvred themselves into the advantageous position of being both auditor and consultant to universities worldwide. We now have commercial rankers offering, for a fee, ‘masterclasses’ to universities on how to conduct their academic affairs in order to improve in the rankings exercise that they themselves conduct.

Rankings have grown in influence to the point where they have global geopolitical consequences.

This assessment has been convincingly demonstrated by one of the foremost experts in the field, Professor Ellen Hazelkorn. Tellingly, her groundbreaking work is titled Rankings and the Reshaping of Higher Education: The battle for world-class excellence. The final chapter summarises how the reshaping of higher education has happened at three levels.

First, rankings have changed higher education institutions. Many universities have turned themselves into ranking-chasing machines, narrowly defining their institutional mission in terms of the ambition to rise in one or more of the university rankings.

Second, in many countries rankings have been instrumental in the reshaping of national higher education systems. Politicians have come to regard university rankings as a measure of international competitiveness, and have therefore restructured their national higher education systems, in various versions of an Exzellenzinitiative – the German Universities Excellence Initiative – with the declared intention of enabling a few ‘elite’ universities to rise to the top of the rankings.

Third, rankings have reshaped our understanding of knowledge itself. Hazelkorn speaks of rankings “reasserting the hierarchy of traditional knowledge production”, with a focus on a narrow definition of knowledge, traditional outputs and ‘impact’ defined as something which occurs primarily between academic peers.

There may well be people who honestly, though naively, believe that academic excellence is objectively represented by university rankings. The fact is, however, that the opposite is the case: the subjective and haphazard choices of the rankers have come to define what academic excellence is considered to be.

So, the situation is this. There is a force, external to academia, run as a global money-making business, based on a false premise and implemented by ad hoc choices, which is influencing the career choices of countless young people, affecting the modus operandi of many academics, demonstrably shaping the way universities operate, influencing national higher education policies, cementing in the public mind a simplistic narrative about academic quality and fundamentally affecting our understanding of the nature and purpose of knowledge production.

Any external force constraining higher education in such a manner must be regarded as a threat to institutional autonomy and academic freedom. That, ultimately, is why the so-called pragmatic argument in support of rankings fails. When compliance comes at the expense of autonomy the price is too high.

Positive signs

Fortunately, there are encouraging signs that the tide is beginning to turn.

One sign of change is the growing realisation that there are viable multi-dimensional alternatives to the simplistic one-dimensionality of a ranking. They typically arise by distinguishing a set of ratings from a ranking.

Rating qualitative concepts is very common. We often do it ourselves. It consists of breaking down the concept into a number of categories, and then assigning a rating – which could be a word or a number – to each of these categories.

Suppose, for example, a food critic decides to rate the quality of restaurants in a city. The critic might then break down ‘quality’ into (say) five dimensions: the quality of the ingredients, the quality of the preparation, the quality of the presentation, the quality of the service and the taste of the food.

On each of these five dimensions the critic might further assign an evaluation, say ‘awful’ or ‘mediocre’ or ‘fair’ or ‘good’ or ‘wonderful’. It makes no difference if the critic decides to use numbers as shorthand, say zero for ‘awful’ and up to four for ‘wonderful’. The point is that each restaurant gets an evaluation which consists of five ratings.

So, following the order in which the five dimensions are listed, Restaurant A might get an evaluation that says: ‘ingredients fair, preparation good, presentation good, service awful, taste good’, or ‘2-3-3-0-3’ for short. Restaurant B, on the other hand, might by the same method get an evaluation that says ‘1-4-0-2-4’, which indicates a different kind of dining experience.

It would be perfectly possible (indeed, easy) for the food critic to turn each of these two sets of ratings into a single number, and thus get a ranking. For this purpose she could employ any one of a number of methods, all equally plausible but yielding different results. (Take the mean, then A = B; take the median, then A is better than B; take the mode, then A is worse than B.)
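The arithmetic is easy to check. Here is a minimal sketch using the two sets of ratings above and Python’s standard statistics module:

```python
from statistics import mean, median, mode

# The two sets of ratings from the example above (0 = 'awful', ..., 4 = 'wonderful'),
# in the order: ingredients, preparation, presentation, service, taste.
restaurant_a = [2, 3, 3, 0, 3]
restaurant_b = [1, 4, 0, 2, 4]

print(mean(restaurant_a), mean(restaurant_b))      # 2.2 2.2 -> A and B tie
print(median(restaurant_a), median(restaurant_b))  # 3 2     -> A comes out ahead of B
print(mode(restaurant_a), mode(restaurant_b))      # 3 4     -> A comes out behind B
```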

However, no matter how the critic does it, the ranking process would involve a loss of information. Moreover, whatever ranking method the critic uses, the customer could use as well. In fact, the customer is perfectly capable of deciding for themselves where to go and have dinner on the basis of the given ratings combined with their own individual preferences. The ratings would suffice perfectly well – indeed, better than the ranking – for individual decision-making.

The ranking produced at the end suffers from a grievous loss of information compared to the initial set of ratings – so what is the point of doing it at all? Why not simply retain the multidimensionality, and present the rating results as they are, rather than arbitrarily compressing them into a single number?*

Such restraint is not impossible. The Research Excellence Framework in the United Kingdom, for example, is a major national exercise that evaluates research at each university and presents the results in terms of ‘quality profiles’. Essentially, a quality profile is a picture which shows ratings under various headings. What it is not, is a single number.
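By way of illustration only (the figures below are invented, not actual REF results), a quality profile has roughly the shape of a distribution across quality levels rather than a single score:

```python
# Illustrative only: invented percentages in the general shape of a quality profile.
# A distribution of judged quality across star levels, not a single number.
quality_profile = {
    "4*": 30,            # world-leading
    "3*": 45,            # internationally excellent
    "2*": 20,            # recognised internationally
    "1*": 5,             # recognised nationally
    "unclassified": 0,
}
assert sum(quality_profile.values()) == 100
```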

As ever, these quality profiles can be turned into rankings (and again in various ways), and indeed the rankers lose no time in doing so. But the primary results – available in full on the internet – are quite deliberately given as sets of ratings, not as a ranking.

A second sign of change is that the accumulated weight of expert opinion against rankings has become significant enough to be noticed and consistent enough to defy refutation.

Already in 2013 Simon Marginson called the ranking of universities ‘bad science’. “These rankings get a lot of airplay. In social science terms they are rubbish,” The Australian reported him as saying.

Ellen Hazelkorn’s Rankings and the Reshaping of Higher Education of 2015 has already been mentioned; this was followed in 2016 by the edited volume Global Rankings and the Geopolitics of Higher Education, and in 2018 by the Research Handbook on Quality, Performance and Accountability in Higher Education.

In 2017 Hazelkorn joined Philip Altbach – founding director of the Boston College Center for International Higher Education in the United States – in dispensing some advice: “We have one simple argument: universities around the world, many more than will ever publicly admit it, are currently obsessed with gaining status in one or more national or global rankings of universities. They should quit now.”

Altbach himself is co-editor of The Global Academic Rankings Game: Changing institutional policy, practice, and academic life, providing “an in-depth examination of the impact that rankings have played on policy, practice and academic life in Australia, Chile, China, Germany, Malaysia, the Netherlands, Poland, Russia, Turkey, the United Kingdom and the United States”.

Michael Thaddeus, the mathematics professor (and former chair of the maths department) at Columbia University who blew the whistle on false data submitted by Columbia to the US News & World Report’s Best Colleges Ranking in 2022, commented after the event: “I’ve long believed that all university rankings are essentially worthless. They’re based on data that have very little to do with the academic merit of an institution and that the data might not be accurate in the first place.

“It was never my objective to knock Columbia down the rankings. A better outcome would be if the rankings themselves are knocked down and people just stop reading them, stop taking them as seriously as they have.”

As a final example of expert opinion, in 2020 Australian National University Vice-Chancellor Brian Schmidt (a Nobel laureate in physics) publicly questioned the validity of global rankings systems, saying they mislead students and distort universities’ research priorities. With nice understatement he added: “It’s a shame they really aren’t very good.”

A revolt from the top

A third indicator of change lies at the institutional level. Increasingly, there are reports of influential universities refusing to play along any more with the rankings game.

Many academics will have pricked up their ears, for example, at the news that the Harvard and Yale law schools – soon followed by the University of California, Berkeley, and then others – pulled out of the US News & World Report rankings in 2022. “We have reached a point,” said the dean of law at Yale at the time, “where the rankings process is undermining the core commitments of the law profession”.

Not long afterwards the same thing happened with medical schools: Harvard, Stanford, Columbia and the University of Pennsylvania all pulled out of the US News rankings for reasons similar to those of the law schools. Something similar had also happened in China in 2022, when three highly regarded universities – Renmin, Nanjing and Lanzhou – pulled out of all overseas rankings, citing considerations of autonomy.

These were not the first universities to take a principled stand against participating in rankings. What is different now is that, for the first time, a revolt against a major ranking has come from top-ranked institutions. The effect has been commensurate with the prestige of the institutions pulling out – which is also why earlier withdrawals by less prominent institutions made so little impression.

For example, in 1995 and 1996 Reed College, a small private liberal arts college in Portland, Oregon, became the first educational institution in the United States to refuse to participate in higher education rankings, and it has stuck to that refusal ever since.

Commenting on the recent withdrawal of top-tier institutions, Colin Diver, former president of Reed College, said: “The point is that you can dismiss Reed College dropping out, but you can’t dismiss Yale Law School dropping out. You can’t dismiss Harvard Medical School dropping out.”

On a more fundamental issue, Diver in effect gives a summary of what I called above the conceptual argument against rankings: “My objection is focused primarily overwhelmingly on what I call ‘Best College’ rankings, which take multiple criteria of educational performance and excellence and smush them together, formulaically into a single number, and purport to claim that number and the ranking that goes with that number is the key to determining relative quality.”

“I don’t care what formula you use, what data you use, what criteria you use; that approach seems to me to be just so fundamentally flawed. And the reason is because there are so many different kinds of institutions,” said Diver.

“The genius of American higher education is that it’s a bottom-up system that is grown up to meet multiple demands. It features institutions with all kinds of different missions, goals, and characters, and it serves a constituency that has an enormous variety of needs and wants and preferences in terms of what they’re looking for in college. So a single template, a single measure, is just impossible. And that’s my objection,” explained Diver.

One might add that the point about “so many different kinds of institutions” applies even more at a global level.

A culture change

A fourth indication of change lies at the systemic level, with a recent example coming from the Netherlands.

Earlier this year, the national representative body Universities of The Netherlands – Universiteiten van Nederland – received a report from an expert group on rankings, commissioned a year earlier because of concerns about the effect of rankings on a nationally agreed strategic initiative called Recognition and Rewards.

In its analysis, this expert group came to the same kind of conclusions as outlined above.

The conceptual and methodological arguments against ranking are (once again) briefly summarised: “Our opinion shows that league tables are unjustified in claiming to be able to sum up a university’s performance in the broadest sense in a single score. There is no universally accepted criterion for quantifying a university’s overall performance, and a generic weighing tool cannot do justice to a university’s strategic choice to excel in specific areas.

“Research, education and impact achievements cannot be meaningfully combined to produce a one-dimensional overall score. Any attempt to do so will run into arbitrary and debatable decisions about how performance in these three core tasks should be weighted.”

This report, however, goes further than earlier reports elsewhere that have carried out similar analyses (and have come to similar conclusions). It also delves, with honesty, into the Janus-faced nature of the pragmatic argument for playing along with the rankings game.

“League tables present universities with a dilemma. On the one hand, university administrators experience pressure for their institution to perform well in league tables. In addition, many universities regard league tables as an important means of recruiting international students,” explains the report.

“On the other hand, league tables use performance indicators that are often at odds with universities’ strategic priorities … Moreover, the questionable methodology of league tables is difficult to reconcile with the scientific values advocated by universities.

“Universities often struggle with this dilemma. On the one hand, for example, administrators are expressing criticism of league tables, while at the same time universities are embracing league tables in their marketing activities. This pragmatic approach feels uncomfortable to many, including the members of the expert group. At the same time, this approach is understandable given the complex national and international playing field in which universities operate,” reads the report.

However, the report continues, “we as an expert group believe that this pragmatic approach is increasingly difficult to defend”.

The remedy, the expert group proposes, is nothing less than a complete culture change. It then proceeds to outline an action plan for effecting such a culture change at three levels.

“We propose a strategy in which universities develop initiatives at three levels to bring about a change in culture with regard to league tables: initiatives at the level of individual universities; coordinated initiatives at the national level [and] coordinated initiatives at the international level, particularly the European level,” it states.

In its response to the recommendations of the expert group, the board of Universities of The Netherlands (UNL) says: “The expert group has made proposals to bring about a culture change surrounding the use of league tables. This is indeed the direction we, the Dutch universities, wish to move in.

“The UNL board endorses the analysis that the use of league tables is problematic and largely embraces the recommendations put forth in the expert group’s paper. Dutch universities will therefore begin taking steps to achieve a culture change in the use of league table rankings.”

This is a significant indicator of change. To my knowledge, Universities of The Netherlands is the first national association of universities that has moved beyond rhetoric towards action to counteract the well-known concerns about rankings.

Individual as well as collective responsibility

The three levels of action proposed by the Dutch expert group make sense, as far as they go. It is noticeable, however, that all three levels are of a collective nature. The difficulty with leaving things only at the collective level is that what is considered to be everybody’s problem usually ends up being nobody’s problem. It is worth thinking, in addition, about action at the level of the individual academic.

Here, therefore, are a few pertinent questions for the individual professor. Would you remain silent if your university hosted a conference on health sponsored by a tobacco company? Would you be content if your university paid for ‘masterclasses’ on global warming offered by an oil cartel? Would you ignore it if your president took part in a seminar on world peace chaired by the CEO of an arms manufacturer?

If your answer to any of these questions is no, then you might wish to reflect on the further question as to whether you should just let it go by if your university hosts or pays for activities regarding academic matters offered by a commercial rankings company.

Furthermore, if you are a university leader, and you do not yet feel able to discard the pragmatic argument for playing along with the rankings, here’s a thought: perhaps now is a good time for you to start fading into the background.

As a leader, you will be conscious of your legacy. So consider the odds. If there really is a growing revolt against rankings, and if there is a chance that peddling rankings may come to be viewed somewhat like smoking or digging coal or selling arms, do you really still wish to be seen in the company of the rankers?

Next time you get an invitation to speak at a rankings conference, or for your university to host a rankings conference, or to pay for ‘masterclasses’ from a rankings organisation, perhaps you should think twice. Even if you have no ambition to become a hero of the resistance, consider the possibility that 10 years from now you may be pleased that you were prudent enough to avoid the tag of collaborator.

To pre-empt misunderstanding I end with two disclaimers. First, I am not advocating a boycott. That is because I am generally lukewarm about the principle of academic boycotts, and also because I think they usually do not work. I do, however, advocate individual and collective responsibility.

Second, I do not think that we have reached, quote, ‘the beginning of the end’ of rankings. Human nature being what it is, I believe there will always be an appetite for rankings, just like there will always be a market for cheap jewellery.

What I do think is that the development of rankings has reached an inflection point. An inflection point is reached when a curve is still rising, but its rate of increase begins to decrease: the bend turns from upward to downward. When that happens, the curve will either peak or plateau. And that is where I think the phenomenon of rankings is heading.
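For readers who want the calculus behind the metaphor, a brief aside: if the growth of the rankings phenomenon is modelled by a smooth function f(t), the inflection point is where the curve is still rising but its growth has begun to slow.

```latex
% Inflection point of a growth curve f(t):
% the function is still increasing, but its rate of increase has started to fall.
\[
  f'(t) > 0
  \quad\text{while}\quad
  f''(t) \text{ changes sign from } {+} \text{ to } {-}
  \quad\text{at the inflection point.}
\]
```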

Chris Brink is emeritus vice-chancellor (president) of Newcastle University in England, former rector of Stellenbosch University in South Africa, former pro vice-chancellor (research) at the University of Wollongong in Australia, former head of mathematics and applied mathematics at the University of Cape Town in South Africa, and former senior research fellow at the Australian National University. The opinions expressed in this article are his own, and are not intended to represent any views of any of his former employers.

* This example (and other content used in this article) comes from: Chris Brink, “Academic freedom and university rankings”, in Frédéric Mégret and Nandini Ramanujam (Eds), Academic Freedom in a Plural World: Global critical perspectives (Central European University Press, forthcoming in 2024).