International rankings: A poisoned choice

Many university presidents and vice-chancellors greet the publication of the various global and regional rankings with a mixture of excitement and trepidation, some referring to rankings as an unavoidable ‘poison’ that is potentially ‘fatal’ to some promising academic careers.

Rankings are a wonderfully accurate measure of progress when your university is rising in global prominence, but contain a criminally inaccurate set of non-representative criteria when you are heading in the other direction.

Global competition and institutions

Academics continue to debate the nature of rankings for higher education institutions, usually concentrating on the validity of the ranking criteria and, with a few exceptions, ignoring the question of whether ranking is of benefit to the global higher education sector.

Yet rankings already exert substantial influence on the long-term development of higher education across the world, with three ranking systems currently in positions of global dominance.

The oldest system, starting in 2003, is the Academic Ranking of World Universities (ARWU) prepared by Shanghai Jiao Tong University. It was followed in 2004 by the World University Rankings of Quacquarelli Symonds (QS) with Times Higher Education as media partners. In 2010 Times Higher Education (THE) published its own set of World University Rankings for the first time.

Rankings bodies acknowledge the growing impact of the global environment on higher education systems and institutions and the importance placed on some means of identifying institutional and regional excellence by prospective ‘consumers’.

While rankings might not always provide information about the particular strengths and weaknesses of the disciplines and departments encompassed within any given higher education institution, it is often the reputation and ranking of the institution that will encourage further investigation.

As students become more globally mobile, the reputation of any higher education institution or region, shaped in part by its ranking relative to others, will continue to grow in importance.

While no academic or ranking body would suggest that the criteria used for any or all of the big three ranking systems are perfect, most would agree with Professor Philip Altbach, director of the Centre for International Higher Education at Boston College, that they have arisen to meet a demand for more information from an increasingly wide range of consumers.

Altbach’s references to the major advantages for universities and systems located in the world’s traditional English-speaking knowledge centres are not in dispute, and the use of proxies such as faculty-student ratios is an inevitable consequence of trying to include some ‘measure’ of teaching quality in a global ranking system.

ARWU versus QS-WUR

It is the absence of any teaching criterion in the ARWU that makes it a tool for a specific purpose. Its criteria focus on research productivity, but this leads to a system where older universities with more established reputations are unduly favoured, and where there is very little movement in the top 200 universities.

The stability of the ARWU could be interpreted as a sign of face validity, but this ignores the possibility that there are ambitious younger universities which lie outside of the traditional ‘knowledge centres’ referred to by Altbach.

For these often world-class universities, many currently based in Asia, nothing is gained in the short- or medium-term by involvement in the ARWU. They are not yet at a stage where they can attract Nobel prize-winning researchers, not because they are inferior institutions but because they are young and relatively unknown on the global stage.

In comparison, the QS-WUR provides these institutions with a platform that allows them to compete with some of the more established players in higher education.

An analysis of this ranking system shows that as we move progressively down the scale from the top 50 institutions, there is an increase in volatility, providing ambitious, often younger, institutions with an opportunity to take a more prominent role on the global higher education stage.

There are numerous examples of this in Asia.

In the past four years South Korea has invested heavily in its higher education provision and actively promoted the benefits of internationalisation. Consequently, there are now five Korean universities featuring in the QS-WUR top 200, compared with just two five years ago.

Hong Kong now has five of its eight government-funded institutions in the QS top 200. Hong Kong University of Science and Technology and City University of Hong Kong have made notable progress, the latter moving from 198th to 110th over the eight-year period and the former now firmly established in the top 50.

Both universities are around 30 years old, but like the vibrant economy they are based in, they are competitively pursuing investment in excellent faculty and facilities to continuously improve the quality of their research, learning and teaching.

Times Higher Education WUR

In 2010, Times Higher Education produced its own new set of rankings and, like any new system, it is taking time to get it right. Perhaps the most problematic issue facing THE is the rankings’ incompleteness, with a number of universities choosing not to submit data until the criteria, scaling and weighting issues are more stable.

In the words of Kristi Fisher, director of the office of information management and analysis at the University of Texas, Austin: “…the Times Higher survey was using new methodology for the first time, and there was talk it might be suspect. The last thing we wanted to do was spend a lot of resources to participate in a survey that might have flawed methodology behind it.”

This was a view echoed in Asia, where two of the eight Hong Kong government-funded institutions decided not to submit data in 2010: Chinese University and City University of Hong Kong did not appear at all, while Hong Kong Baptist University was left ranked just outside the top 100.

Where are we now?

There is room for many types of ranking systems and criteria, but as interest in rankings rises around the globe, the stakes are raised for those charged with running universities.

Rankings do impact on global reputation, as well as attempt to measure it in one form or another, and few can ignore the potential impact of an institution’s reputation on a graduate’s ability to get a job or be accepted for postgraduate study at a top university.

An institution’s global ranking can also impact on its ability to lobby for funding, form strategic partnerships, recruit quality international faculty and attract internationally mobile students. So it is little wonder that so many heads of institutions take such an interest in both the annual results of, and methodology behind, the various rankings systems.

A look at the criteria and the results from the various 2011 exercises suggests that the THE system remains potentially more poisonous to some universities than the other two because (at present) it is less predictable and transparent, and will inevitably remain so for the time being as new universities join and leave and criteria, weighting and scaling are amended and adjusted.

In the case of the other two rankings systems, the ARWU with its exclusive research focus clearly favours older, well-established and research-intensive universities, while the QS-WUR remains more suited to young ambitious universities eager to establish their credentials on the global stage.

It seems consumers do have choice after all, not just in terms of parents and students trying to decide which university to entrust their futures to, but also in terms of universities as consumers themselves. Students, parents, presidents, vice-chancellors, ladies and gentlemen, please choose your poison carefully.

* Dr Kevin Downing is director, knowledge enterprise and analysis, at City University of Hong Kong.


Comment
I’m surprised that Downing, who is Chair of QS’s Middle East and Africa Professional Leaders in Education (MAPLE) academic conference committee, quotes Philip Altbach, but does not include perhaps his most decisive comment on the global university ranking systems he has made. Altbach’s recent article, The Globalisation of College and University Rankings, in Change magazine, Jan/Feb 2012, says: “The QS World University Rankings are the most problematical. From the beginning, the QS has relied on reputational indicators for half of its analysis… it probably accounts for the significant variability in the QS rankings over the years. In addition, QS queries employers, introducing even more variability and unreliability into the mix. Whether the QS rankings should be taken seriously by the higher education community is questionable.”
Phil Baty, Editor, Times Higher Education Rankings