GLOBAL

Rankings – A useful barometer of universities’ standing
University rankings are not perfect. Indeed, for years I have found myself drawing on a quote from the statistician George Box: “Essentially, all models are wrong, but some are useful.” We are open and honest at every opportunity about the shortcomings of rankings in general and the specific limitations of ours [QS World University Rankings]. However, we believe that they continue both to provide a useful early-stage reference for prospective students and to serve as a catalyst for discussion and development around what quality means in higher education and how it can be achieved and managed.
The recently published Higher Education Policy Institute (HEPI) report reiterates some frequently recycled criticisms of rankings, but does so without nuance.
The author devotes 132 words on page nine to exploring the positive impact of rankings, yet over 2,850 words on subsequent pages to establishing and expanding upon his arguments against them. The report is heralded as “new research”, but it fails to put forward any new findings or provide evidence that much research has been undertaken.
We work tirelessly to improve the quality of our data and processes every year, and the notion that we don’t audit the data returns is absurd. We certainly do have several automated checks in place as a first line of control, but their existence doesn’t remove the requirement for the human audit of every data point submitted, a responsibility that we take seriously.
This is among the costliest and most time-consuming components of the process and is in a state of continuous improvement.
Of course, in an initiative of this scale, errors can and do occur, but these remain very isolated issues and we encourage institutions to engage with us to resolve them.
Reputation survey
Our reputation surveys yield increasingly stable results that correlate well with other measures. The full details of how we screen the quality of respondents and adjust for discipline and country could not be derived without speaking to us, yet the report makes suppositions to that effect.
The effectiveness of reputation surveys at distinguishing between institutions does diminish as the list progresses, which is one of the reasons we refer to ranges at this stage, but given that we are only assessing the world’s top 5% of institutions, we find they work quite effectively.
In a diminishing minority of cases, after every alternative avenue has been exhausted, we do rely on data presented on institutions’ websites, for one simple reason: it is more accurate than assuming zero, and if leading universities in Malaysia or Argentina, for example, are absent, then the rankings of weaker universities are inaccurately inflated.
Our primary mechanism for data acquisition is direct submission from institutions, validated against our historical records and against central statistics where available.
Common-sense intuition
It is true that international rankings place more emphasis on research than they might if more diverse international datasets were available. QS asks academics to comment on other institutions based on their awareness of those institutions’ research capabilities, so there is no attempt to hide this reality.
It is also clearly true that rankings are fundamentally simplistic and reductionist and do not reflect everything important that universities do. That said, we believe they serve as a useful barometer and one which we now allow users to calibrate, personalise and combine through our apps.
In the HEPI report, Bahram Bekhradnia argues that rankings fail to identify the best universities in the world because they do not measure every single function a university serves. This approach towards university rankings should be rejected for reasons both pragmatic and theoretical. I explain the pragmatic reasons – the dearth of diverse datasets that would allow for valid comparisons – above.
Furthermore, our research supports the common-sense intuition that certain aspects matter more than others to students (our primary stakeholder). Rankings are not released into a vacuum; they reach an audience, and it is our obligation to provide information that is of use to that audience.
For students, measures of subject-specific and overall reputation, an institution’s ability to move them towards the career of their choice, and its teaching capabilities and research strength are of primary importance. We would be keen to hear which indicators – ones apparently central to a university’s mission statement, yet overlooked by those who make it their job to measure university performance – have been omitted.
Taking rankings for what they are
There are other important considerations that Bekhradnia ignores. He writes that “the extent to which they influence the decisions of governments and universities themselves” can only be construed as a negative consequence of rankings. However, he provides little-to-no evidence that this is the case, writing only that rankings have encouraged governments to devote money to research “explicitly in response to the performance of their universities in international rankings”.
He argues that this is undesirable because “money that could have been used elsewhere in the system to improve other aspects of university performance [is being devoted to improving research performance]. In other words, there is an opportunity cost as well as a cash cost.”
This rings true as a hypothetical objection, but relies on a series of claims that are, in essence, empirical. Bekhradnia does not:
• Identify aspects of institutional performance that should be preferred over research investment, with a justification for preferring them;
• Prove that the opportunity cost of funding research activity outweighs the benefits derived from that activity;
• Prove that the unequal distribution of government research funding – a limited resource – leads to sub-optimal outcomes.
In my experience, institutions’ reactions to rankings fall into three clear groups:
• Total ambivalence: This group tend to ignore rankings altogether, particularly at a leadership level, even when they do well.
• Epiphany: This group seem to forget themselves and decide that pursuing a higher ranking is their new-found mission.
• Informed integration: This group tend to take a holistic approach to performance, using rankings only for the aspects that they have the power to inform but designing their own metrics otherwise.
QS is more than happy to support the general message – that rankings in their current form ought to be understood with nuance – but I believe the author of this report has underestimated his audience in compiling an evidence-light opinion piece masquerading as an independent research report.
Many of his objections indicate a failure to understand the context in which rankings are compiled and the context into which they are released.
We will continue to do our utmost to ensure that the millions of students who rely on our rankings each year to differentiate between institutions receive valuable, reliable information – and we remain fully prepared to enter into productive discourse with anyone who raises evidence-based objections.
Ben Sowter is head of research at QS Quacquarelli Symonds.