Towards a fairer and more robust university ranking

In the paper International University Rankings: For good or ill?, published by the United Kingdom’s Higher Education Policy Institute (HEPI), Bahram Bekhradnia offers a thorough and convincing critique of university rankings and highlights significant concerns.

In his analysis, Bekhradnia identifies seven major flaws of global rankings. As he also recognises, however, there is a solution to all of these problems: U-Multirank, which HEPI describes as “fairer” and “more robust”.

Taking each of the seven issues in turn, this is how U-Multirank tackles them:
  • ‘Rankings mainly measure research (but pretend to represent overall performance)’: U-Multirank is multi-dimensional: indicators are shown separately for teaching and learning, research, knowledge transfer and regional engagement.

  • ‘There are no attempts to audit the quality of data submitted’: U-Multirank applies extensive quality assurance: statistical and plausibility checks, analysis of outliers and of changes over time, and individual communication with universities. If a data point remains unreliable, U-Multirank simply leaves a gap in the data set (which poses little problem for U-Multirank analyses, because no league table is created).

  • ‘In some rankings there is no size adjustment’: U-Multirank relies on size-normalised indicators.

  • ‘Reputation surveys are not reliable’: U-Multirank does not use reputation surveys (but includes methodologically sound student surveys to measure teaching and learning).

  • ‘League tables exaggerate differences’: U-Multirank uses an ordinal list with bands, just as the author suggests.

  • ‘League tables are sensitive to weights’: U-Multirank does not aggregate data into a composite score, so no weighting is needed; users choose which indicators to compare.

  • ‘Performance is not presented in a multidimensional way’: U-Multirank has developed a variety of visualisations to present performance profiles across the different dimensions, including its distinctive sunburst infographics, which represent diverse performance profiles at a glance.
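The banded, size-adjusted, weighting-free approach described in the points above can be sketched in a few lines of code. This is a hypothetical illustration, not U-Multirank’s actual methodology: the indicator, the per-staff normalisation, the band cut-offs and the example data are all invented for the sake of the sketch.

```python
# Hypothetical sketch (NOT U-Multirank's actual method): normalise a raw
# indicator by institution size, then assign each university to one of five
# bands per indicator, with no weighting and no composite score.

def size_normalise(raw_value, size):
    """Size adjustment: express a raw count per member of the institution."""
    return raw_value / size

def to_band(value, thresholds):
    """Assign a band 'A' (best) .. 'E' (weakest) using descending cut-offs."""
    for band, cutoff in zip("ABCD", thresholds):
        if value >= cutoff:
            return band
    return "E"

# Invented example data: (university, publications, academic staff)
universities = [
    ("Uni X", 4000, 2000),
    ("Uni Y", 900, 300),
    ("Uni Z", 150, 500),
]

thresholds = [2.5, 1.5, 1.0, 0.5]  # assumed cut-offs for bands A-D

for name, pubs, staff in universities:
    per_capita = size_normalise(pubs, staff)
    print(name, round(per_capita, 2), to_band(per_capita, thresholds))
```

Because each indicator yields its own band rather than feeding a weighted total, small differences in raw scores cannot be exaggerated into precise-looking league-table positions.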

In its paper, HEPI highlights the challenge for U-Multirank of encouraging as many universities as possible to participate, particularly in the UK. By March 2017, U-Multirank will feature 1,600 universities, making it the largest international data comparison of universities.

In many European countries, almost every university participates. Unfortunately, the UK has been slower than most in engaging with the alternative to what HEPI decries as “flawed rankings”.

However, for the 2017 release, U-Multirank will make better use of the available official UK databases, so UK universities will be part of U-Multirank, with results on the majority of the performance indicators, without having to provide data directly. This ‘prefilling’ will minimise the burden on universities in terms of the engagement required to achieve fair comparisons and benchmarking for their institution.

In future, through U-Multirank’s approach, it will be possible to have the benefits of fair ranking while at the same time embracing the diversity of global higher education that brings strength to education and research around the world.

Professor Frank Ziegele and Professor Frans van Vught are project leaders of U-Multirank.