The Europe-driven global benchmarking system U-Multirank, which allows universities to create personalised rankings using an array of indicators, was unveiled in Brussels last week. It is "a new user-driven, multi-dimensional and multi-level ranking tool in higher education and research," said its creators.
Because of these characteristics, U-Multirank differs substantially from existing rankings and meets the needs of various higher education stakeholders, said its designers Frans van Vught and Frank Ziegele in a presentation on Thursday.
They stressed that the new benchmarking system would only be finalised later this year, after further testing of its feasibility, fine-tuning and feedback from last week's seminar.
A student, a university administrator and a business representative demonstrated the personalised online ranking system at the seminar.
U-Multirank, a two-year project initiated and funded by the European Commission, has been developed by the Consortium for Higher Education and Research Performance Assessment, CHERPA, led by the Center for Higher Education Policy Studies at the University of Twente in the Netherlands and the Center for Higher Education Development in Germany.
It is seen as continental Europe's answer to the three major existing rankings produced by China's Shanghai Jiao Tong University, and Times Higher Education and QS in Britain. These rankings focus significantly on research and English-language publication, and are seen by many in Europe as providing a distorted picture of global university quality and purpose.
Van Vught and Ziegele contended that existing global rankings were "largely one-dimensional" in their orientation towards research, had triggered a "reputation race", were undermining higher education diversity and were increasing academic stratification.
Further, existing rankings were insufficiently responsive to the different needs of higher education stakeholders, misinformed policy-makers, focused on "what is measurable instead of what is relevant", influenced university strategies and contained methodological flaws and biases.
They argued that there was no such thing as an objective rankings system. Rankings should rather be based on the interests and priorities of their users - they should be user-driven - should reflect the multi-dimensionality and internal diversity of institutions, should compare only comparable institutions, and be methodologically sound.
Unveiling the results of a two-year feasibility study, Van Vught and Ziegele said U-Multirank adopted a consultative approach and two rankings levels, institutional and field-based. Five dimensions were employed - teaching and learning, research, knowledge transfer, international orientation and regional engagement - using nearly 40 indicators.
"There was a long list of indicators to be tested in the pilot project," they added. Data collection tools and processes were developed including questionnaires, definitions, frequently asked questions, and communication and feedback processes, along with methods for "building ranking groups instead of league tables".
A global sample of higher education institutions was gathered, on a voluntary basis, comprising 159 universities, two-thirds of them in Europe. There was a focus on two fields, business studies and engineering.
A total of 109 universities completed online questionnaires providing information about their institutions, departments and students, while bibliographic and patent information was gathered from international databases.
The exercise, said its designers, found that multi-dimensionality was "useful and attractive", that institutions scored quite differently on different dimensions and indicators and that "performance profiles became transparent".
The experience of institutional self-reporting was positive, although there were some problems with data availability and definitions and the workload required. In bibliometrics, four new and sophisticated indicators were developed. Processes for checking the plausibility of self-reported data were needed, and patent analysis was only feasible at institutional level.
Van Vught and Ziegele reported no technical doubts about upscaling U-Multirank. There had been a lot of interest from European institutions and "encouraging interest from beyond Europe" with the exceptions of America and China, where very few universities participated - five in the US and one in Hong Kong.
"A global 'roll-out' is possible, starting from Europe with an open approach," they said. "There are some gaps to close, and some further work on a few indicators - but in general all instruments and processes are described, tested and feasible."
Still needed were a flexible web tool allowing users to create their own rankings alongside "authoritative rankings by special groups", improvements to the system's user-friendliness, and its implementation.
Phil Baty, editor of the Times Higher Education World University Rankings, welcomed the new initiative and said global rankings were "here to stay because higher education increasingly operates in a global marketplace, and they fill a crucial information gap.
"U-Multirank's attempt to provide multi-dimensional rankings, reflecting specific university missions and strategies, instead of producing a single hierarchical list, is also welcome," he added, and was complementary to the THE system.
However, said Baty, it seemed that "a great deal of European taxpayers' money has been spent on an initiative that does not bring a great deal that is new or different to the table", and that the project had struggled to engage institutions outside Europe. There was a risk that it could be seen as "inward looking and self-serving".
* Professor Frans van Vught is a leading international higher education researcher and former rector of the University of Twente where he remains an honorary professor, and Professor Frank Ziegele is head of Germany's Centre for Higher Education Development.