Is the switch in rankings’ focus masking the West’s decline?

A recent commentary in The Lancet by Richard Horton criticised international university rankings, drawing on a briefing paper from the United Nations University (UNU) in Kuala Lumpur, Malaysia.
Horton makes some relevant points about the rankings, although his survey is limited and incomplete. He argues that they need to be reformed to hold universities accountable for their social responsibilities, and he notes that the UNU report suggests doing away with rankings altogether.
The status of The Lancet is such that this article provides insight into the collective thinking of the Western academic and scientific establishment, and it therefore deserves some attention.
To start with, getting rid of rankings, as posited in the article, sounds like a good idea, but it is not really feasible. Bureaucrats and faculty in the big brand universities are not suggesting that every university is as good as any other, that their salaries or tuition fees be reduced to the industry average, that research grants be allocated randomly or that their students are no more employable or intelligent than those at other places.
Until they do so, calls for the abolition or radical restructuring of rankings should be regarded with suspicion. The end of formal ranking or any kind of external comparative assessment would just be a return to the age of deference when the anointed elite could impose its self-perceptions with little regard for any real merit or relevance.
We would be back in a world where everybody knows that Oxbridge, the Ivy League and the Sorbonne are superior, perhaps in ways that only superior people can see, to everywhere else.
I recall somebody recently saying that Yale was doing just fine before the US News rankings came along and that it doesn’t really need them. That is probably true and explains the apparent paradox of Yale Law School being so critical of a ranking in which it has always held first place.
Focus on the ‘big three’
The commentary in The Lancet focuses on the so-called ‘big three’ global rankings: QS, Shanghai and Times Higher Education (or THE). This is the viewpoint of the mainstream media for whom these are usually the only rankings that matter, although sometimes they add on the US News Best Global Universities.
But there are in fact others, some produced by universities or research centres, that are usually overlooked by the media, governments, companies, journals and institutions. Some of these are as rigorous and technically competent as the big three, often more so, and some are, in various ways, much more inclusive. The Lancet editorial ironically reinforces the media hegemony of the big three by ignoring the others.
For example, CWUR (Center for World University Rankings), based in the United Arab Emirates, publishes world rankings that include a measure of graduate employability. Webometrics (Ranking Web of Universities), published out of Spain, measures research and web activity for over 30,000 institutions that have the slightest pretension to university status.
The Leiden Ranking now includes metrics of gender distribution in research and open access publishing. The Scimago Institutions Rankings have indicators for societal impact and innovation.
There is also a Russian ranking, MosIUR, that looks at a variety of social issues and another, Round University Ranking, now relocated to Georgia, that includes a broad range of financial and internationalisation data. U-Multirank, produced by a consortium of European universities, covers a great many academic, environmental and social indicators while avoiding the publication of a single composite score or rank.
Flaws, improbabilities and misleading observations
Oddly enough, Horton is rather lenient on the THE World University Rankings. He does not mention how the citations indicator can produce bizarre results, with Arak University of Medical Sciences, Cankaya University and Duy Tan University supposedly leading the world for research impact. And no, this is not THE doing its humble bit for global diversity: it is the result of what I consider to be a hopelessly flawed methodology that is now, so we are told, about to be corrected.
Then there are the unusual and implausible results produced by the industry income indicator that purports to measure contributions to innovation. THE is revamping its world rankings this year, although its continuing complexity and opacity are such that it is hard to figure out exactly what will happen.
To be fair to THE and the Shanghai rankings, we should note that Horton makes a couple of misleading observations in his discussion. He says Elsevier provides reputation data to THE: actually, THE brought its reputation survey in-house in 2021.
Then, he asserts that data quality is not independently validated for either ranking. The THE rankings are audited by PricewaterhouseCoopers, although I would agree that the audit is limited, while the Shanghai data is derived from public sources and checking them is very easy.
A new global balance of power
The assertion that the big three rankings privilege the Global North is true up to a point. The current rankings, even those outside the magic triangle, do include a disproportionate number of institutions from Western Europe and North America.
THE does rely too much on income data, Shanghai on long-gone or ageing Nobel and Fields award winners, and QS on surveys. THE and QS also use a lot of self-reported institutional data on student-teacher ratios and international faculty and students, whose validity is not always robust.
If, however, we look at the less well-known rankings, especially those with stable methodologies, and peel away the various metrics, we can see some significant changes over the last few years. A new global balance of scientific and academic power is emerging, one that is moving steadily eastwards and away from the old strongholds of perceived excellence.
It is widely known that the research capabilities of Chinese universities have grown rapidly over the last two decades. Digging a bit deeper we find that engineering, physics and computer science are turning into Chinese fiefdoms.
Look at the CWTS Leiden Ranking default indicator, number of publications, for physical sciences and engineering. Of the top 30 universities, 27 are Chinese, two Japanese and one Singaporean. Even if we move up the quality ladder to publications in the top 10% of journals, China still dominates, although somewhat less so.
For mathematics and computer science the situation is much the same: in the top 30, there are 28 Chinese universities, one South Korean and one Singaporean. It is not quite as stark when we move to the top 10% indicator where MIT, Stanford and Imperial College London are still hanging on in the top 30.
You might say that this is just a reflection of quantity and that if you look at the very top of the scientific world, Nobel prizes, papers in Science, Nature, Cell and perhaps The Lancet, or at the arts and humanities, then the United States and the larger Anglosphere still reign supreme. Perhaps, but the general trend seems to be that quantity will eventually become quality.
Private and government sector research
There is something else that the rankings reveal. The Scimago Institutions Rankings, which distinctively include hospitals, government agencies, non-profit organisations and companies, show that research and innovation, especially in engineering and related fields, are shifting away from universities to the private and government sectors.
So, the top ranks for engineering now include the French National Centre for Scientific Research, Google, the State Grid Corporation of China, Facebook, Microsoft and the Howard Hughes Medical Institute in addition to a few Chinese universities. Western universities are losing ground not only to the universities of East Asia but also to multinational and nominally American and European corporations.
It is not disputed that there is bias in global rankings. But if you probe specific indicators, look sceptically into the methodological depths and venture away from the QS and THE rankings, the rankings show that some university systems outside the Global North are performing as well as, or better than, their Western counterparts.
In addition to China, universities and other institutions from South Korea, Brazil, Singapore, Iran, Taiwan and Saudi Arabia are beginning to make their presence felt. No doubt there is also a fair amount of plagiarism, manipulation of data and gaming of rankings going on, but the elite schools of the West have never been sea green incorruptibles in that regard either.
The Lancet and the UNU are suggesting that universities should be assessed according to their contributions to diversity, equity, inclusion and sustainability.
That sounds good, but we need to be cautious here. Universities are having problems with their usual tasks of instruction, research and innovation. It is asking a lot for them to venture into areas where they have little experience or knowledge and which are deeply influenced by political, cultural and class biases.
It is also questionable whether commercial rankers are the ones who can provide objective, accurate and critical assessment of universities and their role in promoting desired social goals.
The causes of sustainability and social accountability have in fact been taken up enthusiastically by the commercial rankers, THE and QS. THE became an advocate of promoting the Sustainable Development Goals (SDGs) of the United Nations back in 2019 when it started its Impact Rankings, to assess compliance with and commitment to the goals.
Phil Baty, THE’s chief global affairs officer, declared that its new rankings were a “celebration of all the wonderful, complex, inter-related, boundary-defying, diverse, multi-faceted ways that universities of all colours and stripes, of all missions and types, from all corners of our planet, make the world a better place [his emphasis]”.
That will no doubt be welcomed by many universities, at least by the more naïve of them, but it is unlikely to lead to meaningful insights for the public or stakeholders.
Giving up on critical assessment
THE has been assiduously promoting these rankings and it looks as though, for some universities, their scores in these tables are more significant than those in the conventional global rankings. However, we all need to be careful when a ranking agency gives up on critical assessment and starts proclaiming, without any embarrassment, that its mission is celebrating the wonder, complexity and so on of universities wherever they can be found.
THE has built a thriving business on exploiting the status anxieties of academics and administrations and turning them into monetised data. The Impact Rankings are a case in point.
Technically they are deficient, combining very different and sometimes contradictory metrics, and they produce volatile and implausible results. They rely too much on questionable institutional data and impose additional burdens on university staff who could be better employed teaching or doing research.
They are biased against energy-producing countries and those who are sceptical of the latest Western progressive trends. It is noteworthy that the THE Impact Rankings have largely been ignored by Chinese universities.
The worst thing is that these rankings allow universities to choose the goals on which they are ranked: at least three of their own choosing plus one mandatory goal. That provides a massive incentive for universities to focus on just a few goals and to abandon all the others. The overall effect is that higher education could become a net negative contributor to sustainability.
To digress a little, let us note here that Universitas Indonesia has been publishing its GreenMetric rankings of environmental sustainability since 2009, although that has scarcely been noticed by the mainstream media.
QS now has its own sustainability rankings. These at least avoid the problem of selective submission of data, since universities are not allowed to pick the indicators on which they wish to be assessed. They also correlate quite well with the results of QS’s standard world rankings and are likely to be less volatile.
It is, however, noticeable that Chinese universities do very poorly here compared with their performance in the QS world and Asian rankings or other research-based rankings. The top Chinese performer in the sustainability rankings is Tsinghua University in 118th place. In the QS World University Rankings, it is 14th. In University Ranking by Academic Performance (URAP), published by Middle East Technical University and another ranking neglected by the mainstream media, it is ninth.
This year’s QS world rankings will see a new methodology that will probably lead to a downgrading of Chinese and other Asian universities. It is almost as though THE and QS were covering up the problems of Western universities by finding new fields where they can excel and bypass the problems of their declining research and innovation capabilities.
Science in danger
Basic research, innovation and advanced instruction in academic and professional fields are in serious danger across the West. Avoiding the noise of the commercial rankers, we can see a steady and general decline in scientific and academic performance. It is beginning to look as though current concerns for sustainability and equity are, in part at least, designed to cover up this decline and the rise of universities throughout Asia.
The issues raised in The Lancet and by the UNU are important. But the remedies offered – creating safe spaces, assessing social responsibility or outright abolition of rankings – are unlikely to provide much benefit for the institutions themselves or to the public.
Richard Holmes is an independent writer and consultant and the producer of the blog University Ranking Watch.