The ranking of universities in order of 'excellence' has become a popular and well-established feature of higher education and is clearly set to continue. But the League of European Research Universities, whose members comprise 22 leading universities in Europe, questions whether rankings have any real value.
Launching a new advice paper, University Rankings: Diversity, excellence and the European initiative, in Brussels last Wednesday, the league said one fundamental defect was that most rankings sought to capture characteristics that could not be measured directly, and so relied on indirect proxies, while different universities fulfilled different roles that a single monotonic scale could not capture.
In an interview with University World News, the main author of the paper, Professor Geoffrey Boulton of the University of Edinburgh, said the league was sceptical of the ranking enterprise: "We think that both the provision and the benefits are over-rated and the proxies that are used are highly questionable," Boulton said.
He said everyone could think of things that could be measured such as student-staff ratios, annual expenditure on books, laboratory facilities and so on, "but in the experience of many of us that doesn't get close to the underlying reality of a really good educational environment, the thing you can't measure of course is the ethos, the effort".
Boulton said he was a scientist and was used to creating hypotheses but there were two basic uncertainties: were the measurements correct and how could the hypothesis be tested?
The only way of testing the hypothesis of a ranking system was if you knew how good the universities were to begin with: "So at a very fundamental level ranking fails a series of tests."
There was also a question of whether people in universities, who above all should be concerned with veracity, verification and the like, should be giving credence to things which in a basic scientific field simply could not be accepted.
The league also had a gripe with the monotonic approach where there was simply a continuous ranking: "Of course you don't say that Cambridge is a hundred times better than Anglia Ruskin University because they do very different things," he said.
Boulton said universities were extraordinarily good at playing games, perverting funding mechanisms to their own purposes and so on, because they had their own view of where they wanted to go.
This was happening with rankings. He drew attention to what he said was the "very well-known phenomenon in education world-wide where testing has been introduced". In the first three or four years after introduction of tests, the scores improved but after that they flattened out.
"What's happening is that schools are playing the tests, concentrating on the things the tests are attempting to establish, and that's what universities are doing in relation to ranking and will continue to do. It perverts our sense of ourselves and undermines what we're trying to do."
* See this week's Features section for a report on rankings and research
An important component in the debate on rankings is the value placed on them by members of the public and, even more, by politicians. Many governments come under pressure to react to public outcry when ranking results are published. The reality is that universities are no longer the ivory towers they once were but rather glass houses open to public scrutiny. Rankings are here to stay, but the methods behind them should be improved.