EUROPE

Coimbra Group tells U-Multirank to revamp its data
U-Multirank results are based on “unverifiable data” and “imprecise definitions” and its indicators remain “weak proxies of quality for valid international comparison of institutions”, according to the Coimbra Group of European universities, or CG, in a stinging public criticism of the way the U-Multirank ranking system is being implemented.

U-Multirank should “take a step back” and “instead of focusing on ranking tools, all efforts should be placed on improving the database”, the group said.
This is despite U-Multirank being “conceptually superior” among the ever-growing supply of commercial benchmarking tools and ranking systems.
In a position paper adopted by its executive board, the group called on the European Commission, which sponsors the benchmarking system, to invest European taxpayers’ money in the years to come in the development of a “high-quality and publicly accessible database of relevant basic data that can be used for meaningful benchmarking of higher education institutions”.
However, Gero Federkeil, U-Multirank project coordinator, denied there was any lack of transparency. “Our methodology and indicators are completely transparent; they are available for everyone to see on the U-Multirank website.”
He told University World News that U-Multirank has developed the “largest international database on higher education in the world” by combining international bibliometric and patent data-sets and allowing individual higher education institutions to provide their own data, which are then carefully checked.
“Unlike other rankings we do not use reputation judgments, which we know create major biases.”
Simplistic league tables
In its paper, CG said its rectors do not hesitate to warn against the shortcomings and dangers of simplistic league tables, especially when these are based mainly on simplistic reputation measures or citation databases, “as the more popular ranking methodologies do”.
Yet rectors also look anxiously at the rankings in which their own university scores best, and are happy to advertise any step forward in such rankings and to report it prominently on the front pages of their websites.
“This somewhat schizophrenic attitude is mainly explained by the outside pressure of press and politicians, who like simple value assessments of their public universities.”
But national prestige in rankings, or the lack of it, can have drastic consequences, the group said. Institutions have been obliged to merge with the sole purpose of moving up the ranking lists.
“Financial incentives have been made available by governments with the sole purpose of creating top universities and to promote them into the top of the league tables.
“The most outspoken initiative is probably the 5-100 initiative in Russia, where financial incentives are given to a selected number of institutions with the direct aim of bringing five Russian universities into the international top 100.”
Rankings also have financial consequences for universities in the way they affect international student recruitment, as foreign students, especially from Asia, largely base their choice of European university on the league tables, the CG said.
Counteract shortcomings
The paper said U-Multirank was created to counteract the shortcomings of the existing rankings, and sought to do so by allowing inclusion of comparable institutions only; focusing not just on research but also on teaching, knowledge exchange and internationalisation; allowing users to choose the indicators for their own purposes; and allowing comparison of both entire institutions and individual study fields (multi-level).
It went live in 2013 and published its third edition in 2015, nominally involving 1,212 universities in 82 countries – although in many cases the data listed is very limited, and only a few universities outside Europe are included – covering eight broad subject areas and seven study fields.
In preparatory meetings in 2010, the CG voiced concern that indicators were being chosen by public ballot rather than through “a sound a priori fundamental reflection”, and about half of its member universities abstain from active participation.
The League of European Research Universities also criticised the methodology – in particular the “imprecise proxies, the problematic data comparability and the potential for game playing” – and withdrew from any further collaboration as a group, the paper noted.
However, Federkeil told University World News that “contrary to what CG seems to think, we do not provide league tables; U-Multirank only allows comparisons per indicator to be selected by the user”.
Regarding the use of proxies, he said: “It is well known that, unfortunately, so far nobody has been able to come up with valid outcome indicators for 'teaching and learning'. U-Multirank is therefore forced to make use of proxy indicators; but it has also developed a new approach to teaching and learning quality by focusing on the subject level.”
Rising pressure
However, the CG complained that U-Multirank included many of its universities by using publicly available data from their websites and from the Web of Science. It also says U-Multirank, backed by the European Commission, which invested heavily in developing the product, is applying “rising pressure” on national rectors' conferences and individual institutions to participate.
In its paper, the CG complains that the main difficulty is a “lack of comparable definitions for several indicators in the framework of very different national systems and types of institutions".
“Even the meaning of basic concepts such as ‘number of faculty’ and ‘number of students’ clearly depends on national interpretations for some special categories. Not to speak of intrinsically vague items such as ‘regional joint publications’ and ‘spin-off companies’."
The requested data are often not available or difficult to come by, especially concerning graduate employment, the paper said.
“Because of the lack of transparency in some indicators, there is room for manipulation or at least for interpretations that lead to the best results for one’s own institutions,” the CG said. “There is a clear need for a supporting international body that provides more elaborate data and definitions with respect to different national systems and is better placed to check the validity of the data provided.
“The trust in U-Multirank’s definitions and collected data is not supported either by [virtue of] the fact that U-Multirank gives rise to some very surprising results. The fact that relatively unknown institutions emerge ahead of internationally reputable counterparts is regarded by many with mistrust in the system rather than with respect for emerging new leaders in the field,” the paper said.
“This mistrust will undoubtedly remain for as long as the robustness of U-Multirank is not beyond doubt."
Site visits
The CG said it was surprising that no effort is being made to include in U-Multirank the results of site visits by international panels of peers, conducted in the framework of national and international quality assurance and accreditation agencies.
Such agencies increasingly check on the quality of provision and publish qualitative judgments about teaching and learning, including outcomes, in the assessed study field.
“Such evaluations are common practice on the ‘Bolognised’ European educational scene, and are potentially so much more useful for a judgment on the quality of teaching and learning in a particular study field than a limited survey among the institution’s own students only, as performed by U-Multirank,” the CG said.
“However, the gulf between the quality assurance agencies and the rankers is clearly still too deep.”
There is a need for a European initiative to bring together the two groups, both supported by the European Commission, the CG argued.
Therefore, U-Multirank should take a step back and, instead of focusing on ranking tools, should concentrate on improving the database. But it should not do this on its own; it should work with the various agencies guarding quality assurance in higher education and with their international networks, such as the European Association for Quality Assurance in Higher Education, or ENQA, and the European Consortium for Accreditation in higher education, or ECA.
In conclusion, CG said U-Multirank’s basic concepts are considered valuable and the best feasible approach, but “to gain confidence and respect, U-Multirank must take a step back from its emphasis on ranking and divert its efforts towards improving and unlocking its database and switching to an Open Science approach that stimulates evolution of more sophisticated indicators and benchmark tools”.
CG added: “It should offer an alternative to commercial providers driven by business interests and one-dimensional, often simplistic and deforming definitions of academic quality and excellence.”
'Statistical misunderstandings'
For U-Multirank, Federkeil said: “We appreciate the Coimbra Group's position that U-Multirank is superior to any other existing ranking because it addresses the weaknesses of these other rankings and offers an innovative approach to making the diversity in higher education transparent.
“Unfortunately, there also appear to be some misunderstandings in the CG's policy paper, which are largely methodological and statistical in nature.”
He said these required time to address in detail, and that U-Multirank would clarify them in a position paper in the coming week.
The CG paper was drafted by an expert group comprising Guido Langouche (KU Leuven); Vera Št’astná (Charles University in Prague); Stanislaw Kistryn (Jagiellonian University, Krakow); and Jules van Rooij (University of Groningen).
The Coimbra Group, founded in 1985, is an association of several dozen long-established European multidisciplinary universities of high international standard. Its members include Aarhus University, the universities of Barcelona, Edinburgh, Geneva, Groningen, Istanbul, Prague and Saint Petersburg, and Trinity College Dublin.