
Good rankings are not easy but they can be produced
From its very beginning, U-Multirank has tried to design and implement an approach to global university ranking that is radically different from existing international rankings. We have formulated a set of epistemological and methodological principles – user-driven, no league tables, no composite indicators, multi-dimensionality, as described on our website – and we have introduced new performance dimensions.
We have deliberately sought to analyse what users judge to be relevant rather than only what is easily available. We knew this would not be an easy task.
We appreciate public discussion of our approach and welcome the Coimbra Group’s view of U-Multirank as “conceptually superior”, with “basic concepts that are considered valuable and the best feasible”, and as an effective warning “against the shortcomings and dangers of simplistic league tables”.
U-Multirank strives to offer what the Coimbra Group of European universities, or CG, wants: we are developing a high-quality global database that can be used by different stakeholders to compare and benchmark higher education institutions and their performances.
With a multi-dimensional approach and by introducing new and innovative indicators, we show the diversity of higher education institutions and their many different profiles.
Certainly this is a challenge, and yes, it is not simple to find valid and reliable data for outcomes that have not been analysed before. After two successful U-Multirank publications and the near completion of the third, we are confident that “better rankings” can be produced.
To avoid misunderstandings of our results, we would like to address today a number of issues raised in the CG position paper on U-Multirank, which was reported in University World News last week.
Issues raised
CLAIM: “Relatively unknown institutions emerge ahead of internationally reputable counterparts.”
U-Multirank does not produce league tables, so institutions do not “emerge ahead” of others. Our rankings do produce new insights that may challenge current beliefs about institutional reputation, which are often based on hearsay and halo effects.
If we look, for instance, at “traditional” research indicators such as citation impact, U-Multirank shows the “usual suspects” as top performers. Other indicators produce different, sometimes unexpected results; for example, a university of applied sciences that publishes almost all of its work in collaboration with industry will perform well on co-publications with industry even if its total output is not extensive.
U-Multirank aims to present fair pictures of institutional performances showing specific strengths which may be surprising to those who do not think beyond traditional research reputations.
CLAIM: U-Multirank should not work “on its own” in improving its database.
We do not work alone: our consortium brings together a wide range of expertise, including university groupings. We are continuously discussing opportunities to improve data and indicators with stakeholders across the world. This year a 'network of institutional coordinators' will further enhance the exchange of views on our database with participating institutions.
On some issues the CG is not fully informed. Regarding the European Tertiary Education Register, or ETER, for instance, U-Multirank is already doing what the CG suggests: in close cooperation with the ETER consortium, U-Multirank is currently exploring options to use ETER data. ETER itself is still an emerging database; it includes a range of basic data on higher education institutions but cannot yet provide the full set of indicators.
CLAIM: “There is a lack of transparency in some indicators.”
This is simply not true. U-Multirank is completely transparent about its indicators: our website includes 85 pages of precise indicator definitions.
CLAIM: CG is concerned about a “lack of comparable definitions for several indicators in the framework of national systems”.
This is a major challenge for all international comparisons. It is precisely why U-Multirank puts so much effort into discussions and feedback loops with participating universities: to understand the problems of data definition and to develop joint solutions.
CLAIM: The CG paper mentions “weak proxies for quality”, apparently referring to our teaching and learning indicators.
As we all know, there is still no systematic international measurement of learning outcomes – hence all indicators have to be proxies. Our view is that a valid and comprehensive picture is provided by the broad scope of our 21 teaching and learning indicators, which are applied mainly at the subject level rather than the university level.
CLAIM: “Requested data are not available or difficult to come by, especially concerning graduate employment.”
For some items, including “graduate employment”, data are used only descriptively, precisely because they do not allow valid comparisons. Stakeholders asked us to retain this indicator to signal that it is highly relevant and that efforts to collect this information should continue.
This example also illustrates our general data-policy: if there is any doubt about reliability or comparability of an indicator we will not use it in the rankings. The upside of this is that indicators that proved difficult are now beginning to be integrated into university data collection – and that is excellent for transparency.
CLAIM: CG finds our new indicator “regional publications” to be a “vague item”.
The definition of the indicator, which uses Thomson Reuters bibliometric data, is clear and exact: “regional” is defined in terms of the distance – 50km – between co-authors’ institutions. We could have used a different distance, but users felt this was the most meaningful definition of the region for this purpose.
CLAIM: U-Multirank data is “unverifiable”.
This is not correct. Our data collection process involves several feedback loops with the universities for verification. If doubts about particular data items remain, we exclude them from our final results.
U-Multirank is a public, non-commercial tool developed inside, and with the advice and support of, the academic community. Reflections from academic institutions and groupings help U-Multirank, as a learning system, to keep working to improve the global data situation in higher education. U-Multirank is the world’s largest database of its kind.
Frans van Vught and Frank Ziegele are joint project leaders of U-Multirank.