AUSTRALIA: The power of rankings
Likewise, the Shanghai Jiao Tong University research rankings focus national governments' attention on policies designed to concentrate research activity in a small number of universities. At the same time, rankings encourage the flow of doctoral students, elite researchers and the philanthropic and corporate dollar into the top-ranked institutions at the expense of the rest.
Both the Jiao Tong and the THES rankings encourage individual universities to do anything and everything to lift their rankings position, though they use differing criteria and point universities in somewhat divergent directions. These rankings date only from 2003 and 2004 respectively but already they are everywhere in the sector and beyond. They set university reputations.
Rankings function as a meta-performance indicator. They do more than ‘reflect’ a university’s profile and quality. The criteria used to determine a university’s position in the ranking system become meta-outputs that every university feels duty-bound to prioritise. Rankings begin to define what quality means and, by shaping university and system behaviours, they begin to shape university mission and the balance of activity.
In the world according to the Shanghai Jiao Tong University rankings, higher education is about scientific research and Nobel Prizes. It is not about teaching, or community building, or solutions to local or global problems.
In the world according to the THES, higher education is primarily about building reputation as an end in itself, and about international marketing, because these are the metrics that drive the index. It is not about teaching, and not so much about research and scholarship, which together account for only 20% of the THES index.
Rankings as a meta-performance indicator have the potential to redefine and reify the core purposes of universities. They shape patterns of activity and priorities for development, as shown by the history of the US News rankings in the United States. They cut deeply into the authority of universities over mission and identity. Rankings can also be capricious and destructive.
There is much at stake. In 2004 the oldest public university in Malaysia, the University of Malaya, was ranked by the THES at 89. The newspapers in Kuala Lumpur celebrated. The vice-chancellor ordered huge banners declaring ‘UM a world’s top 100 university’, which were placed around the city and on the edge of the campus facing the main freeway to the airport where every foreign visitor to Malaysia would see them.
But the next year, in 2005, the classification of Chinese and Indian students at the University of Malaya was corrected from international to national, and there were shifts in other parts of the THES composite indicator. Malaya dropped from 89 to 169 and it seemed the university’s reputation abroad and at home was in free fall. The vice-chancellor was pilloried in the Malaysian media and when his position came up for renewal by the government in March 2006 he was replaced.
But it wasn’t just the vice-chancellor whose reputation had been trashed by the THES; it was also that of the University of Malaya, a long-established institution with real virtues and strengths, and one of the two strongest universities in an emerging knowledge economy. The University of Malaya had dropped 80 places without any decline in its real performance (aside from spending too much on hubristic banners).
In the drama of UM’s decline there was no positive relationship between performance, competition and outcome. Such a disconnection generates no useful incentives for better policy and management, or for better education or research provision. It is simply perverse. But, as in Kuala Lumpur, so in every other capital.
Higher education retains national and local dimensions but is now also a global system. No single nation can ignore global rankings, with the exception of the United States. Precisely because the United States is globally hegemonic in higher education, its institutions are focused solely on national, not global, rankings. Best in the US automatically means ‘best in the world’, and what happens in the US washes over the rest of the world even while most American institutions are indifferent to it.
Otherwise, only Europe with its multi-national Bologna process has enough combined critical mass to change the geo-politics of rankings. Significantly, it is the Europeans who are pioneering a new and very different approach to comparing university performance, that developed by the German Centre for Higher Education Development.
Rankings raise questions about the validity and utility of both the process of comparison and data used in that process. Given that all comparisons can only ever focus on some elements, not the whole university, are the elements being used for comparison the right ones to use? Are the hierarchies of institutions contained in league tables accurate and representative of the higher education sector?
League tables generate clear-cut winners and losers. Do we want to elevate these winners and downgrade these losers? Are the outcomes fair and regarded as fair across the higher education world? And are the rankings systems useful in terms of outcomes? Do league tables help knowledge economies to develop faster or better? Do they provide data helpful to students?
In many quarters there is a sense that all is not well with rankings. Perhaps this is inevitable given that in league tables there are few winners and many losers, but there is more at stake here than self-interest. Rankings change higher education and it is a question of what kind of global higher education system we want to have.
There are widespread desires to mitigate the downsides of university rankings, which tend to close off options, and to provide better data on teaching quality. In Europe these concerns underpin the wide support for the German approach to comparison, and for a typology of institutions with a diverse set of university missions, along the lines of the Carnegie classification in America.
Then there is the diversity issue. Systems and universities represent a broader range of national and educational traditions than those of Cambridge, Massachusetts and Cambridge, UK. How can this broader range of traditions be encompassed?
These issues are the cutting edge of discussion. There is a strong sense of reflexivity in the discussions about rankings, within the rankings community itself, especially the discussions conducted by Jiao Tong University, and in Europe at UNESCO and OECD/IMHE meetings.
The Jiao Tong Institute of Higher Education constantly fine-tunes its rankings and invites open collaboration; this openness is a strength of the academically rigorous and globally inclusive Jiao Tong approach. On the whole the THES has been less transparent and inclusive, though that process too has been opening up to some extent, which is welcome.
Global rankings have entered a ‘second stage’ in which systems and approaches are being criticised and alternative approaches are being canvassed. It is not yet clear where this reflexivity is taking universities. We could go further down the path mapped by the US News and World Report, the THES and/or the holistic university research performance measures used by Jiao Tong. Or these could be significantly modified.
We could emphasise discipline rankings rather than whole-of-institution rankings – a direction encouraged by Jiao Tong’s release of discipline-level data – or create more plural league tables, including lists of specialist institutions. Or we could move in a different direction entirely.
There is also the question of who decides the future of rankings: publishing and market research companies, governments, international agencies, universities, social science scholars? Or some mix of the above?
Most countries with large higher education systems have rankings of one kind or another. Countries with rankings devised by newspapers and magazines include China and Hong Kong, Japan, India, Ukraine, Romania, Poland, Portugal, Italy, Spain, Germany, Sweden, Switzerland, France, the UK, the US and Canada.
In Thailand, Malaysia, Pakistan, India, Kazakhstan, Korea, Tunisia, Nigeria, the Netherlands, the UK, Brazil and Argentina, rankings have been instigated by ministries of education, grants councils or accreditation agencies. In China, Japan, Australia, Kazakhstan, Slovakia, Romania, Russia, Ukraine, Germany, Spain, Switzerland, the UK and Canada, rankings have been initiated by universities, professional associations or other organisations.
For the most part national rankings consist of a single table, but in America and Canada higher education institutions have been divided into groups according to mission and other characteristics, creating a set of mini league tables within which the category of comprehensive research universities has the highest status.
Specialist rankings focus on characteristics ranging from research output, to student services, to MBA programs, to Yahoo Magazine’s ratings of ‘connectivity’, the university contribution to social diversity, and other features.
In America, the annual US News and World Report survey focuses on aspects of institutions seen to contribute to the quality of teaching and the student experience, rather than research and scholarship. The categories of institutions are drawn from the classification in 2000 by the Carnegie Foundation for the Advancement of Teaching.
* Simon Marginson is a professor of higher education at the University of Melbourne. This is an extract from a paper, Global university rankings: where to from here? he delivered at a conference of the Asia-Pacific Association for International Education in Singapore in March.
Full report on the University of Melbourne site