Are global university rankings a badge of shame?

The problems with international university rankings essentially concern data – in particular the lack of it (national rankings don’t raise the same issues). There are no data available from which to reach conclusions about comparative quality other than research quality: pretty well all the data used in the QS, Times Higher Education or THE, and Academic Ranking of World Universities or ARWU world rankings are research-related.

Universities have many functions, of which education is arguably the most important – and is certainly the one that concerns most potential students. But internationally comparable data about education simply don’t exist.

So in casting about for indicators of education, these rankings use measures like the proportion of PhD students, the proportion of international faculty and faculty-to-student ratios – in the case of ARWU, even the number of Nobel Prize winners! – all of which are essentially measures of research, not education.

Add these to the direct measures of research activity and more than 85% of the indicators used in these rankings (100% in the case of ARWU) are research-related.

So, given that these rankings essentially measure only research, the only way a university can improve its position is by focusing on research. That’s a conclusion that many universities preoccupied with improving their position in the rankings – as well as governments concerned about the prestige of their university systems – have acted upon.

Given limited resources, such decisions must come at the expense of teaching and of universities’ other activities. These rankings positively harm higher education around the world.

Certainly, research is an important function of many universities (though not the only one, or even the most important), and it’s true that rankings provide benchmarks of research quality against which universities can measure themselves. That’s pretty well their only benefit. They certainly don’t provide useful information to potential undergraduate students – a benefit claimed by rankers.

Poor data

Even if rankings are taken at their face value, the quality of the data on which they are based is lamentable. Both QS and THE continue to rely on self-supplied data from universities and there’s no audit of the data that are supplied.

This has led to such fiascos as Trinity College Dublin discovering this year that it had, for the past two years, misplaced a decimal point in a data return. This error – a factor of 10 – was not picked up by THE, which, RTE reported, had already sent out press releases detailing the Irish results, saying that they would "send shockwaves". The results had clearly been signed off on and communicated before the university discovered the error.

QS’s practice of ‘data scraping’ – where it lacks data from a university, it takes them from other sources such as the university’s website, without proper regard to their quality – also leads to problems. A recent example involved Sultan Qaboos University, from whose website QS had taken the number of all staff and mistakenly used that figure as the number of faculty.

Both QS and THE boast that their rankings have been independently audited. But these audits are of their internal processes – not of the data used in the calculations. Without proper audit of the data themselves, such fiascos will continue.

Inadequate opinion surveys

It’s this absence of data that drives QS to base 50% (yes, 50%) and THE 33% of their rankings on surveys of opinion, as if such surveys provided any objective basis for judgments about whether one university is better than another. For example, the Sorbonne is in the top 100 in THE’s opinion survey, but is ranked 350-400 overall. So is the Sorbonne a ‘best’ university? Or just old and famous?

Opinion surveys are a wholly inadequate measure of 'the best' universities. Worse, in its anxiety to boost the number of responses, QS (though not THE) counts responses received as long as five years ago, regardless of whether the respondent is still alive! This undermines the other main claim made for rankings – that they contribute to transparency.

Other problems concern the choice of indicators and the weights attached to them, both of which are subjective and lead to quite different outcomes depending on the indicators and weights chosen.

The result is that a university that is among the ‘best’ in one ranking doesn’t even appear in another – the rankings aren’t just misleading but are the subjective constructions of the ranking bodies. And the form of presentation – ordinal lists from best to worst – misleadingly exaggerates differences between institutions whose performance may be only a small number of points apart. These are important weaknesses, but there’s no space to discuss them here.

Lack of audit

THE has pointed out that it is making great efforts to improve its data handling. True, but these efforts don’t address the fundamental flaws – the rankings are uni-dimensional and the data manifestly unreliable and not audited.

QS and THE also complain that my report, International University Rankings: For good or ill?, contains nothing new. That’s THE’s excuse for making no mention of it in its magazine – uniquely among the 100 or so reports that the Higher Education Policy Institute has produced (The Irish Times thought it sufficiently important to devote a leading article to it!). That reflects no credit on THE and undermines its claim to keep the paper editorially independent of the rankings.

But even if the report does contain nothing new, that’s hardly a criticism of its content. If the ranking bodies already knew about these shortcomings, then they knew they were publishing misinformation, produced on the basis of unreliable data and misleading students into believing that their rankings identify the world's ‘best’ universities.

It may be over-optimistic to hope that the rankings will improve to the point of acceptability, or even that they will go away: there’s too much commercial interest in maintaining these money-spinners and doing rankings properly might be too big an undertaking for a single commercial enterprise.

But what is fervently to be hoped is that governments, universities and the public – and especially potential students who are badly served by these rankings – understand them for what they are: essentially measures of research activity; and recognise also that the data on which they are based are unaudited and of doubtful quality.

For the vast majority of universities, improving their position in the rankings would amount to a badge of shame – it would mean prioritising research at the expense of their students and their other activities.

Bahram Bekhradnia is president of the Higher Education Policy Institute or HEPI, United Kingdom. He is author of International University Rankings: For good or ill? published by HEPI in December.