GLOBAL

Overall outcomes of university rankings are ‘junk’
The overall outcome of some of the best known international university rankings is “junk”, according to a leading higher education expert, especially if they are multi-indicator rankings using weighting. In addition, most international rankings are influenced by an agenda rather than being geared to the user, he said.
Simon Marginson, director of the Centre for Global Higher Education at the UCL Institute of Education, London, was speaking at the tenth anniversary of the Complete University Guide becoming an internet-based guide, on 27 September.
“I believe the way to go is disaggregation of multi-indicator league tables. To break down the single composite index into its components will establish genuine validity and provide much more information for users,” Marginson said.
“We should respect the fact that higher education has several purposes, there is no ‘one best’ model, different users have different purposes, and they need different league tables. Comparison should reflect this plurality.
“Give users a research league table, a resources league table, a selectivity league table, an employability league table, a widening participation league table, and so on.”
He said at the global level, there is a need to think more about ‘world-class systems’, not just world-class or leading universities. In some national systems, all choices are good choices. The floor is sufficiently high. But that is not true of all countries, he said.
In the case of first degree students and their families, national rankings are more useful than global rankings, while the reverse is true for some researchers and doctoral students.
Once established, comparisons and rankings created a new form of accountability.
“People have a right to comparative data – though they also have a right to know the limits, for it isn’t always done well. Institutions often wish they were not accountable in this way, especially when they score badly, or the ranking is inaccurate, or it is not appropriate to their mission. But they are locked in,” he said.
“In many universities, meeting one or more comparative performance indicators – the [United Kingdom’s] Research Excellence Framework, National Student Survey, Teaching Excellence Framework, or national or global rankings, or some combination – is the driving strategic objective.”
Making comparisons better
But if rankings are to give users the power to map choices and make life decisions, it is all the more important to make the comparisons better, he said.
He said most rankings had limited purposes.
“The ARWU [Academic Ranking of World Universities] in Shanghai in 2003 was a benchmarking exercise to demonstrate the gap in science between China and America that China had to make up. So it focused just on research and its indicators represented key features of US research universities,” he said.
“The Times Higher Education in 2004 wanted a ranking that would differ from ARWU, serve the global student market in education and position British universities well. The survey and the internationalisation indicators advanced those goals. The culmination came this year when Oxford and Cambridge were placed at the top, though outside the UK almost everyone thinks Harvard is number one.
“The purpose of QS is more directly commercial – on the back of its loss-leader ranking the company runs a global business in consulting, conferences, marketing and ranking-related services in higher education. It’s a clever business model. Ranking puts the QS brand on university websites without cost to the company and opens university doorways. Universities need QS as much as it needs their business,” Marginson said.
Firing a shot at multi-indicator rankings, he warned that rankings must be credible and cited an example from some years ago in which a research team working on an unnamed new ranking decided its numbers would not survive the “laugh test”, because on their figures the Chinese were second-last even though they knew research in China was climbing fast.
“What to do? ‘I know,’ said the statistician. ‘I’ll increase one of the indicators, total number of publications, from 5% weighting to 20%, and adjust the other weights. That’ll bump the Chinese up to halfway.’ And that’s what happened,” Marginson said. “No one laughed at the results. This is a true story.”
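The anecdote turns on simple arithmetic: in a weighted composite, raising one indicator’s weight can reorder the table even though the underlying data have not changed. The following sketch is purely illustrative: the countries, indicator scores and weights are hypothetical and are not the unnamed ranking’s actual data or method.

```python
# Purely illustrative: hypothetical countries and indicator scores (0-100),
# not the data or method of any actual ranking.
scores = {
    "Country A": {"publications": 40, "citations": 85, "reputation": 90},
    "Country B": {"publications": 100, "citations": 50, "reputation": 50},
    "Country C": {"publications": 30, "citations": 65, "reputation": 65},
}

def league_table(weights):
    """Rank the countries by their weighted composite score, highest first."""
    composite = {
        country: sum(weight * values[indicator] for indicator, weight in weights.items())
        for country, values in scores.items()
    }
    return sorted(composite.items(), key=lambda item: item[1], reverse=True)

# Original scheme: publications carry only a 5% weight.
original = {"publications": 0.05, "citations": 0.475, "reputation": 0.475}
# Adjusted scheme: publications raised to 20%, the other weights rescaled.
adjusted = {"publications": 0.20, "citations": 0.40, "reputation": 0.40}

print(league_table(original))   # Country B comes last on these made-up figures
print(league_table(adjusted))   # same data, new weights: Country B moves up to second
```

On these hypothetical figures the bottom-placed country moves up a rank purely because the weights changed, which is the kind of adjustment Marginson describes.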
He complained that a “fascination with competition and hierarchy” has led to the dominance of the league table format, despite the problem that universities with quite different missions end up being compared against one another.
Yet more complex comparisons, such as U-Multirank, tend to struggle, he said.
Particularly problematic are rankings that draw on reputational surveys, since survey respondents “do not know which university is the ‘best’, especially in teaching, and go with the established big names”, he said.
Marginson said many rankings are oversold as measures of the holistic ‘best university’, when each league table is fashioned on a different ideal model. For instance, ARWU is 100% grounded in science publishing and prizes; Webometrics largely in web presence and traffic, and so on.
Marginson said as a social scientist he feels uncomfortable when tables combine objective qualities such as money spent or published papers with subjective qualities such as survey returns.
“Student satisfaction, for example, varies on the basis of certain factors that are not necessarily connected to the real, material quality of provision. For instance, third-generation students tend to be more critical than first-generation students.”
A bigger problem is the use of proxy indicators because there aren't credible comparative measures of student learning achievement or teaching quality, he argued.
“Everybody knows that student-staff ratios, student satisfaction scores and other proxies have no necessary relationship to student learning, but they are widely used. This has slowed the development of valid comparative measures.”
Multi-indicators and weightings
He is most critical of current league tables, with the exceptions of Leiden, Scimago and U-Multirank, in relation to their use of multiple indicators and weightings.
“Most national and global league tables are multi-indicator rankings that collect data in several areas of institutional performance, and then combine those data into a single indicator and rank. The problem is not the rich, varied data sets – a great source of comparative information – it lies in the way they are combined using weightings.”
He said this creates two problems. First, the link between performance in each area, and the ranked outcome, is no longer transparent – the specific data are buried in the single overall score. The drivers of improvement are blunted.
Second, the weightings are arbitrary. There is no basis for choosing one set of weights over others, and changing the weightings changes both absolute and comparative university ‘performance’, often dramatically.
“This problem cannot be overcome. No set of weights is unquestionably correct in an objective sense.”
But both of these problems can be avoided if the multi-indicator ranking is disaggregated into its separate components, creating several league tables rather than one.
“The ranking then becomes a valid source of comparative data. But that means giving up the claim to a holistic ‘best university ranking’,” Marginson said.
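To make the contrast concrete, here is a minimal sketch of what disaggregation could look like in practice, using hypothetical institutions and indicator scores: each indicator is published as its own league table, so no weighting scheme is needed at all.

```python
# Purely illustrative: hypothetical institutions and indicator scores, not real ranking data.
indicators = {
    "research":      {"Univ X": 92, "Univ Y": 68, "Univ Z": 80},
    "resources":     {"Univ X": 70, "Univ Y": 88, "Univ Z": 75},
    "employability": {"Univ X": 78, "Univ Y": 82, "Univ Z": 90},
}

# Disaggregated approach: one league table per indicator, no composite and no weights.
for indicator, scores in indicators.items():
    table = sorted(scores.items(), key=lambda item: item[1], reverse=True)
    print(f"{indicator} league table:")
    for rank, (university, score) in enumerate(table, start=1):
        print(f"  {rank}. {university} ({score})")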
He said he believes the overall outcome of both the Times Higher Education (THE) and QS rankings is “junk” because both are “multi-indicator rankings that use arbitrary weights and freely mix objective and subjective data in an incoherent fashion”.
But their individual indicators can be “very valuable”.
“If THE and QS released several league tables with each based on one single indicator, in the manner of, say, Leiden, my verdict would be different.”
He said ARWU’s ranking was less open to influence from negotiations between rankers and universities, but its downside is its 30% reliance on Nobel Prize data, which is not an accurate measure of merit since it is affected by lobbying.
Marginson suggested that national rankings based on local knowledge were a better source of data “than any global league table can be”. But the prestige of an institution is important, and at the top end of higher education the most important factor determining prestige is research.
“This is conducted and measured on a world basis. This guarantees that global rankings focused on research performance will continue to play a key role.”