
How to survive changes in ranking methodology

Ranking organisations have a serious problem with methodological changes. Rankers take pride in producing reliable, consistent and trusted league tables that can be used to compare departments and institutions and to check year-to-year progress.

It is sometimes necessary, however, to make changes in response to attempts to manipulate and influence the rankings, the availability of an expanding body of data, advice from bibliometric and scientometric experts or the needs of stakeholders.

Equally, methodological changes create serious problems for universities. How will a university leader explain to the public the university’s fall in the rankings when his or her institution has actually improved? Will the board, faculty and students blame the president or vice-chancellor rather than the changes in ranking methodology?

Changes in methodology on the agenda

Why have methodological changes suddenly become so important? The reason is two-fold. On the one hand, over the last decade or so there has been, thanks to massive digitalisation, a spectacular increase in available data. On the other, rankings have become more sophisticated.

Ranking organisations over the past decade have gained considerable experience. Now, facing criticism and competition, they have decided to go back to the drawing board and make improvements. The question is, what to change and how to change the ranking methodology?

Towards the end of last year, four major global rankings introduced changes in their methodology.

Quacquarelli Symonds, or QS, announced a major change in their citations per faculty indicator, introducing a moderate degree of normalisation: each of the five main subject groups is now treated as having the same impact on the total citation count. QS also extended the lifespan of unchanged survey responses, making the academic and employer survey indicators more stable, and stopped counting papers, mainly in physics, with more than 10 institutional affiliations.
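
One way to picture this kind of normalisation is with a short sketch. Everything in it, the group names, the equal weights and the world-share calculation, is an illustrative assumption rather than QS's published formula; the point is simply that a citation-heavy field such as medicine no longer swamps the total.

```python
# A minimal sketch of subject-group normalisation, assuming each of five
# broad groups carries an equal weight in the final citation score.
# Group names, weights and the share calculation are illustrative only.

SUBJECT_GROUPS = [
    "arts_humanities",
    "engineering_technology",
    "life_sciences_medicine",
    "natural_sciences",
    "social_sciences_management",
]

def normalised_citation_score(citations_by_group, world_citations_by_group):
    """Give each subject group the same influence on the total, so a
    university's share of a low-citation field counts as much as its
    share of a high-citation field."""
    score = 0.0
    for group in SUBJECT_GROUPS:
        world_total = world_citations_by_group.get(group, 0)
        if world_total == 0:
            continue
        share = citations_by_group.get(group, 0) / world_total
        score += share / len(SUBJECT_GROUPS)  # equal 20% weight per group
    return score

# A social-science-heavy institution is no longer swamped by fields such as
# medicine that generate far more citations overall.
university = {"social_sciences_management": 50_000, "life_sciences_medicine": 5_000}
world = {"social_sciences_management": 2_000_000, "life_sciences_medicine": 40_000_000}
print(normalised_citation_score(university, world))  # ≈ 0.005
```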

These moves helped universities strong in the social sciences, such as the London School of Economics and Political Science, or in engineering, such as Nanyang Technological University in Singapore. In Italy, the polytechnic universities of Milan and Turin rose several places in the rankings, while some venerable universities declined.

Times Higher Education, or THE, also confronted the problem of papers with huge numbers of authors, affiliated institutions and citations, but used a different approach: it simply stopped counting those with over a thousand authors.

They also switched from Thomson Reuters to Scopus as their data provider, bringing the analysis in-house, and made their academic survey more inclusive. Another change concerned the ‘regional modification’ of citations data, which rewarded universities located in countries with low citation counts. This year THE applied a 50% modification.
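
What a 50% modification can mean is easiest to see in a small worked sketch. The formula below, which divides a university's citation impact by the square root of its country's average and then blends adjusted and raw scores half and half, is an illustrative assumption, not THE's published specification.

```python
import math

def regionally_modified_score(university_impact, country_average_impact,
                              modification_share=0.5):
    """Blend a raw citation-impact score with a country-adjusted one.

    Assumption for illustration: the adjustment divides the university's
    impact by the square root of its country's average impact, and only
    `modification_share` of the indicator uses the adjusted value.
    """
    adjusted = university_impact / math.sqrt(country_average_impact)
    return (modification_share * adjusted
            + (1 - modification_share) * university_impact)

# A university with impact 0.8 in a country averaging 0.64:
#   full modification -> 0.8 / sqrt(0.64) = 1.0
#   50% modification  -> 0.5 * 1.0 + 0.5 * 0.8 = 0.9
print(regionally_modified_score(0.8, 0.64, 1.0))  # ≈ 1.0 (full modification)
print(regionally_modified_score(0.8, 0.64, 0.5))  # ≈ 0.9 (50% modification, as above)
```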

These changes had a huge effect on many universities. At the top, Oxford and Cambridge moved ahead of Harvard University. University College Dublin, the University of Twente, Moscow State University and the Karolinska Institute all rose, but many institutions in France, Japan, Korea and Turkey suffered ignominious falls.

At the same time US News added two new indicators, books and citations, to its Best Global Universities while the Shanghai Ranking Consultancy had to modify its Highly Cited Researchers indicator because of the compilation of a new list by Thomson Reuters.

The offside trap

The dramatic impact of such methodological changes provided the background to a high-level seminar at Ural Federal University in the Russian city of Ekaterinburg in February, attended by international experts representing, among others, Times Higher Education, Quacquarelli Symonds, the IREG Observatory on Academic Ranking and Excellence and the Perspektywy Education Foundation.

The hosting of the seminar by Russia reflected the interest the country has shown in improving international competitiveness and the standing of its universities in the international rankings. Not surprisingly, the seminar was organised by the Russian excellence initiative ‘5 Top 100’.

In designing this initiative, the Russians have chosen to use rankings as a yardstick to measure progress in the modernisation of higher education. Choosing a proper instrument to measure a university is never easy and the task becomes more complicated when rankings are involved. Rankings are subject to the ‘observer effect’, whereby the act of observing can influence the phenomenon observed.

A university determined to improve its standing in the global rankings will set its priorities in order to achieve its aims. An institution can, for example, focus on improving its internationalisation score by recruiting more international students or encourage its staff to increase publications involving international authors.

But what if the changes in methodology go another way? They can easily upset the university’s plans and render its efforts futile. Institutions have to be very careful to avoid the ‘offside trap’, to use soccer terminology.

Distortions in the measuring process are inevitable, but we must learn to live with them. Heraclitus, the Greek philosopher, maintained that “the only constant is change”. Academic rankings are no exception to the rule.

Changes in ranking methodologies can take a variety of forms: new indicators can be added or dropped or the weighting of the indicators changed. Equally important can be a change in the source of data (THE has moved from Web of Science to Scopus) or data collection procedures.

This does not mean, however, that ranking organisations should be allowed to do as they please. The success of rankings should bring responsibility. Hopefully, ranking organisations have begun to realise this.

The Ekaterinburg recommendations

Changes in the methodology of rankings need to be introduced in a civilised manner. They should not take universities by surprise and there should be rules of the game. Universities often act like soccer teams: to compete in the championships and advance in the league table, they want to know that they are playing on a level field and will not be caught in an offside trap.

A degree of fairness can be achieved by ranking organisations adhering to a set of voluntary rules, which some ranking organisations have already done. In order not to catch ranked universities off guard, changes in methodology should be announced to stakeholders beforehand.

Universities should know of the changes before they are asked to respond to the data requests sent out by rankers. At the same time, rankers should explain the methodological changes and the reason behind them.

Secondly, to be fair, the changes should be introduced gradually and should not be excessive. If the changes in methodology are too large, we have what is effectively a new ranking, and not a continuation of the existing one. This applies equally to global, regional and national university rankings.

The participants at the Ekaterinburg seminar were in favour of a set of principles regarding changes in ranking methodology, believing such principles can serve both universities and ranking organisations.

There is no way to avoid or ignore changes in academic rankings. Change, as noted above, is the only constant in life and Eric Schmidt of Google has declared that “every two days now, we create as much information as we did from the dawn of civilisation up until 2003”. The changes, however, must be done in a fair and orderly way.

Richard Holmes is editor of the University Ranking Watch blog and Waldemar Siwinski is vice-president of the IREG Observatory on Academic Ranking and Excellence.