We have one simple argument: universities around the world, many more than will ever publicly admit it, are currently obsessed with gaining status in one or more national or global ranking of universities. They should quit now.
Although some may succeed in becoming ranked or may improve their numerical scores marginally, it is almost never worth the resources required or the substantial changes in mission or academic programmes involved. Indeed, most ‘gains’ are due to methodological changes, introduced by the various rankings to remain in the headlines, and thus commercially lucrative.
Our advice is particularly pertinent for mid-range national, regional and specialist universities and colleges, and their stakeholders and governments. Today, these institutions constitute the overwhelming majority of higher education institutions worldwide, due to a combination of demographic demand for participation in higher education, and societal and economic requirements for a more highly educated citizenry.
Indeed, projections suggest the number of students enrolled in higher education will rise from 99.4 million in 2000 to 414.2 million in 2030, an increase of roughly 317% – more than fourfold. Accommodating these additional students would require more than four major universities (of 30,000 students each) to open every week for the next 15 years.
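A quick back-of-envelope check of these figures (our own illustration, using only the enrolment numbers, the 30,000-student campus size and the 15-year window cited above):

```python
# Sanity-check the enrolment projection arithmetic cited in the article.
start, end = 99.4e6, 414.2e6            # students in 2000 and projected for 2030
extra = end - start                     # additional students to accommodate

pct_increase = extra / start * 100      # percentage growth over the period
per_week = extra / (15 * 52)            # new places needed weekly over 15 years
campuses_per_week = per_week / 30_000   # campuses of 30,000 students each

print(round(pct_increase), round(campuses_per_week, 1))  # → 317 13.5
```

The growth works out at about 317%, and spreading the additional students over 15 years implies well over four 30,000-student campuses opening each week.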
These higher education institutions are the real backbone of society and their locales. They serve as the anchor institution, the mainstay for social and economic growth and development. They will develop some research focus, but are unlikely to become globally prominent.
However, our advice extends even to those universities that adopt the mantle of ‘flagship’ – those at the top of the hierarchy in their country or state. This is because rankings pervert one of the main purposes of higher education: ensuring that students and graduates acquire the knowledge and skills needed for a successful, satisfying and active life over an increasingly long life span.
What global rankings measure – and don’t measure
It is by now well-known that the three main global rankings – Academic Ranking of World Universities or ARWU (also known as the Shanghai Ranking), the Times Higher Education or THE World University Rankings, and the QS World University Rankings – mainly assess two things: research productivity and (except for ARWU) reputation among peers, employers and students.
THE devotes 90% of its weighting and QS 70% to measuring research, while they assign 33% and 50% respectively to reputation. THE uses a subjective reputational survey to measure teaching quality, although it is unclear how anyone can rate teaching ability without being in the classroom. Internationalisation indicators incentivise quantity over quality, and often simply reflect a country’s geographic position; Switzerland is a good example.
U-Multirank, developed by the European Union, uses a broader set of indicators, but has struggled to gain wide acceptance, while others, such as the Leiden Ranking, are more narrowly focused in scope and coverage.
There is a growing number of national and specialist rankings, ranging from those produced by publications such as US News and World Report in the United States, Maclean’s in Canada, Der Spiegel in Germany and the Asahi Shimbun in Japan, to the Global MBA Rankings from the Financial Times and the GreenMetric World University Ranking from Indonesia. The national rankings have access to broader datasets, but all suffer from methodological problems.
Why universities should forget about rankings
There are more than 18,000 higher education institutions worldwide, according to the World Higher Education Database. However, only a small minority will ever appear in the rankings, no matter how hard they try or how many resources they devote to the task.
Indeed, the top 100 universities represent only 0.5% of higher education institutions or 0.4% of students worldwide. No doubt being ranked is itself an accomplishment, but maintaining position and even climbing in the rankings is not easy. There are rising expectations and slippage is a constant problem – bringing inevitable negative publicity.
This is because competition is fierce, and those in the upper reaches of the rankings have considerable resources, financial and human, to devote to the effort. Furthermore, rankings favour universities with strength in the sciences, engineering and medicine.
Newer and smaller universities, especially in developing economies, and institutions without these specialisations, have limited opportunities. At the same time, universities already at the top of the rankings continue to improve.
Thus, without massive financial and other resources, it is almost impossible for academic institutions to improve their ranking status.
Lessons from rankings
Rankings have had an outsized impact on higher education and policy. International evidence from the past decade and more shows how they influence decision-making, academic behaviour and resource allocation; research priorities and disciplinary practices, including publication in English-language and internationally ranked journals; recruitment and promotional criteria; and organisational structures and institutional mergers.
Today, many universities have a rankings strategy and institutional research units that benchmark rankings performance.
Because of the overemphasis on research, international experience highlights emerging tensions between a university’s mission and values and its efforts to enter and/or climb in the rankings.
Teaching and undergraduate students, as well as the arts, humanities and social sciences, often take a back seat when decisions are made or resources are allocated. Some universities report preferential attention and benefits being given to research ‘stars’ over longer-serving or domestic faculty.
Other examples show how universities have attempted to refocus student entry criteria and become more selective and exclusive to better meet outcome indicators such as completion rates, graduate employment or salary levels, alumni donations, etc. However, in making such changes, universities can significantly alter their mission and purpose.
Other examples highlight the substantial financial costs associated with attempting statistically insignificant changes in ranked order – costs that have led some institutions into heavy debt.
Focus on mission, not rankings
Our combined recent experiences highlight the fact that rankings have become a major factor influencing all higher education. Yale recently announced it can no longer ignore them – while a university in the midst of a war zone, concerned about its position in the rankings, recently approached one of the authors of this article.
This experience is not unique. At a time when universities seek to promote and protect academic autonomy from all kinds of interference, it is remarkable that some universities willingly allow their decisions to become vulnerable to an agenda set by others.
Prestige and reputation, rather than the pursuit of quality and student achievement, have become the dominant drivers, intensifying social stratification and reputational differentiation. There is a big assumption that the choice of indicators and their associated weightings are meaningful measures of quality, but there is no international research evidence that this is true.
The problem is particularly acute – and concerning – for the overwhelming majority of middle- and lower-ranked universities and colleges that have been caught up in the rankings maelstrom.
To these universities, and their governments, we say: concentrate on what matters – helping the majority of students earn credentials for sustainable living and employment – rather than ensuring that your institution matches criteria established by the various rankings. However much attention and resources are expended on the latter, the results are unlikely to be favourable.
Philip G Altbach is research professor and founding director of the Center for International Higher Education at Boston College, United States. Ellen Hazelkorn is policy advisor to the Higher Education Authority (Ireland), and emerita professor and director of the Higher Education Policy Research Unit, Dublin Institute of Technology, Ireland. This article draws on insights from M Yudkevich, P Altbach and L Rumbley (eds), The Global Academic Rankings Game (Routledge, 2016) and E Hazelkorn (ed), Global Rankings and the Geopolitics of Higher Education (Routledge, 2016).