The 2012 rankings season – What have we learned?
The publication of Times Higher Education’s World University Rankings on 3 October marks the close of what has now become an annual rankings season consisting of three major releases.

Shanghai Jiaotong University’s Academic Ranking of World Universities (ARWU) and the QS World University Rankings are in their tenth and ninth editions respectively, while the THE exercise is now in its third.

On the face of it, these rankings – with their widely differing methodologies, aims and results – provide the consumer with conflicting information. But there are a few issues on which we can largely agree.

We all agree that the best university in the world is American – we just can’t agree which one it is.

According to ARWU it is, and always has been, Harvard. QS prefers its neighbour in Cambridge, the research powerhouse Massachusetts Institute of Technology (MIT). It tops the QS ranking for the first time this year, with Harvard slipping to third. And for the second year, THE has made the California Institute of Technology (Caltech) number one, ahead of Oxford and Stanford.

We can all agree that Cambridge and Oxford are in the top 10, though two of us think Imperial should join them, and one of us (QS) thinks University College London should be in there too.

Both QS and ARWU place Cambridge above its historic rival Oxford, while THE ranks Oxford joint second in the world.

Seven universities make the top 10 in all three exercises: Harvard, MIT, Cambridge, Oxford, Caltech, Princeton and Chicago. The other universities that either should or shouldn’t make the top 10 – depending whom you ask – are Stanford, Yale, Columbia, Berkeley, Imperial and UCL.
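This overlap is easy to verify mechanically. The short Python sketch below intersects the three top-10 lists; the lists are reconstructed from the 2012 tables for illustration, so treat their exact membership as an assumption rather than a quotation.

    # Top-10 lists reconstructed from the 2012 editions, for illustration only.
    arwu = {"Harvard", "Stanford", "MIT", "Berkeley", "Cambridge",
            "Caltech", "Princeton", "Columbia", "Chicago", "Oxford"}
    qs = {"MIT", "Cambridge", "Harvard", "UCL", "Oxford",
          "Imperial", "Yale", "Chicago", "Princeton", "Caltech"}
    the = {"Caltech", "Oxford", "Stanford", "Harvard", "MIT",
           "Princeton", "Cambridge", "Imperial", "Berkeley", "Chicago"}

    consensus = arwu & qs & the                 # in all three top 10s
    contested = (arwu | qs | the) - consensus   # in some, but not all

    print(sorted(consensus))  # the seven universities named above
    print(sorted(contested))  # the six 'depending whom you ask' cases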

This high level of agreement suggests that there is a definite elite of top institutions that any reasonable methodology will find.

Rankings volatility

Another thing the rankings have in common is that they have all been the subject of criticism and controversy.

One much-debated issue is annual volatility. Critics say, sometimes correctly, that volatility reflects methodological tinkering or unreliable measures rather than genuine change.

One way of avoiding this is ARWU’s approach of relying on measures that barely change from year to year, such as the number of Nobel prizewinners among a university’s graduates. As a result, its yearly volatility is negligible.

But using measures such as this produces a ranking that doesn’t really tell us anything about how the international landscape is changing.

And it is changing.

In both the QS and THE rankings, for instance, there has been a trend for Asian universities – and especially younger, technology-focused institutions – to improve their position.

There are 33 Asian universities in the QS top 200 this year, compared with 17 in the THE top 200. And in terms of technology-focused universities, QS places MIT in top position, with Imperial, Caltech and ETH Zurich not far behind.

Nine of the top 10 tech-focused institutions maintain or improve their position, with relative newcomers Nanyang Technological University (NTU) in Singapore and Korea Advanced Institute of Science and Technology (KAIST) among the biggest climbers in the top 100.

Too much of a good thing?

So yearly variation in rankings can be a good thing if it shows genuine change. But you can also have too much of a good thing.

THE has been keen to point out that its rankings this year use the same methodology as in 2011, allowing for valid year-on-year comparison – unlike the previous year, when methodological changes led to higher volatility.

Yet the average movement in the THE top 200 this year is some 18.5 places, more than double the QS figure of 8.1 and way in excess of most serious national rankings. Indeed, a remarkable nine institutions have moved more than 50 places in THE’s top 200.

There are just three movements of comparable magnitude in the QS top 200 and none in the ARWU. Of the nine universities to which these seismic changes are attributed, only two moved more than five places in this year’s QS rankings: NTU (+11) and Birkbeck (+10). By comparison, in this year’s THE ranking NTU rose 83 places and Birkbeck fell 51.
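For readers who want to reproduce volatility figures of this kind, a minimal Python sketch follows. The two snapshots are invented placeholders rather than real ranking data, and how to treat institutions that enter or leave the top 200 is itself a methodological choice, which the sketch sidesteps by simply ignoring them.

    def rank_changes(previous: dict[str, int], current: dict[str, int]) -> list[int]:
        """Absolute rank movements for institutions present in both years."""
        return [abs(previous[name] - current[name])
                for name in current if name in previous]

    # Hypothetical two-year snapshots; a real analysis would load the full tables.
    prev = {"University A": 40, "University B": 120, "University C": 95}
    curr = {"University A": 123, "University B": 110, "University C": 94}

    moves = rank_changes(prev, curr)
    print(f"average movement: {sum(moves) / len(moves):.1f} places")
    print(f"moves of more than 50 places: {sum(m > 50 for m in moves)}")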

Teaching quality a blind spot

Another criticism of rankings in some quarters is their over-reliance on measures reflecting research. Teaching quality is a notorious blind spot, and THE is the only ranking that claims to measure it.

Of the three measures that THE groups under the category ‘the learning environment’, the only one that could be said to relate directly to teaching quality is the teaching reputation survey.

QS’s use of surveys is well known, yet we have always opted against measuring teaching in this way. This is largely because, in our view – one shared by just about everyone we have ever asked – academics have no basis for making meaningful judgments about teaching standards at universities where they have never studied or taught.

This is backed up by the THE results. The teaching reputation and academic reputation surveys correlate at around 0.89, so the difference between their results is statistically negligible. It may be that institutional performance in research and teaching are closely matched. But it seems more likely that the two surveys are essentially measuring the same thing.
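As a sketch of the check being described here: given each institution’s scores on the two surveys, the Pearson correlation is a one-line computation in Python. The score vectors below are invented placeholders, not real THE data; on the real data, a coefficient near 0.89 would indicate, as argued above, that the two surveys track each other very closely.

    from statistics import correlation  # Pearson r; requires Python 3.10+

    # Placeholder survey scores, one pair per institution (not real THE data).
    teaching_reputation = [92.1, 85.4, 78.9, 60.2, 55.7]
    research_reputation = [94.0, 83.8, 80.1, 58.9, 57.3]

    print(f"Pearson r = {correlation(teaching_reputation, research_reputation):.2f}")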

Rankings have become indispensable

All ranking systems have their weaknesses. ARWU tells us virtually nothing about the arts and humanities, while some people have never approved of QS’s emphasis on reputation surveys of academics and employers.

Any intelligent observer will recognise that rankings can never hope to capture the full spectrum of university activity. But rankings nonetheless give millions of prospective students an indispensable source of information that would otherwise be unavailable.

As the statistician George EP Box famously said: “All models are wrong, but some are useful.”

Some 4.1 million students every year now travel abroad for their education, and many are charged eye-watering tuition fees for the privilege. Giving students a basis for comparing universities in broad areas of interest is clearly of vital importance.

The way that we as compilers of rankings can be of genuine use is to educate students on how to apply rankings intelligently to their own situation. No ranking can ever be definitive, but all three of the major rankings can help students make smarter decisions if they are used correctly.

When it comes to comparative information, more is more.

* Danny Byrne is editor of http://TopUniversities.com, which publishes the QS World University Rankings.