EUROPE

University rankings schizophrenia? Europe impact study
It has been a decade since global rankings first burst onto the world stage in 2003. Reaction was swift; political and higher education leaders took immediate notice. While an original objective was to enhance student choice, nowadays their influence is more about geo-political positioning for nations and universities. In a simple and stark way, rankings have succeeded in setting the parameters for what constitutes quality.
The arrival of global rankings coincided with the inclusion of education as an internationally traded service within the World Trade Organization’s General Agreement on Trade in Services – GATS – negotiations.
By placing higher education within a wider comparative and international framework, rankings made the higher education world immediately and visibly more competitive and multi-polar.
As countries compete for a greater share of mobile capital and talent, the quality and status of universities have become a top policy and political issue. Today, university performance is measured using rankings developed by governmental and/or commercial agencies.
Governments across Europe, as across other world regions, have – to varying degrees – become enthralled with and concerned about global rankings. In a barely concealed reference to rankings, Europe 2020 – the European Union’s flagship strategy – argued for “more world-class universities” which can “raise skill levels and attract top talent from abroad”.
Earlier this year, the European Commission launched U-Multirank as a counter-ranking using a broader range of indicators than other rankings and drawing upon the benefits of interactive web technologies.
Given the importance now being attached to rankings, how much do we understand about how rankings are impacting on higher education? To what extent have institutional strategies or processes been affected or changed because of rankings?
To what extent have rankings influenced institutional priorities or activities or led to some areas being given more emphasis than others so as to improve an institution’s ranking position?
Impact on European universities
Rankings in Institutional Strategies and Processes, RISP, is the first pan-European study of the impact and influence of rankings on European higher education.
Co-funded by the European Commission and led by the European University Association, the project provides an insight into how rankings impact and influence European universities’ behaviour and decision-making.
What’s clear is that, despite high levels of criticism, European higher education institutions are also avid users of rankings. Of the 171 higher education institutions that responded to the survey, over 60% used rankings to inform strategic decision-making – specifically with regard to setting a strategic target.
Over 70% of respondents said they used rankings to inform strategic, organisational, managerial or academic actions.
This may involve giving a new focus to particular areas (26%), changing research priorities (23%), altering recruitment and promotional criteria (21%), informing resource allocation (14%), revising student entry criteria (9%) or closing or merging departments (8%).
Not surprisingly perhaps, 80% of respondents used rankings for publicity or marketing purposes.
Some 86% of respondents monitored their own position in rankings. In 54% of cases, this was undertaken by the rector or vice-chancellor.
In 33% of cases, monitoring was undertaken by a specialist unit within the institution, while 54% of respondents said they had several dedicated people at institutional level who undertook this role – often in addition to other strategic or institutional research activities.
A very significant majority (75%) used rankings to monitor the performance of peer or competitor universities, while 5% said they intended to do so. The overwhelming majority of respondents said they did so for benchmarking purposes, while others said rankings were useful for identifying potential collaborators.
According to respondents, prospective students, researchers, partner institutions, and government ministries or higher education authorities were among the top users of rankings. Institutional leaders and the academic community were also important users.
Of the students, international non-European students were the most likely users: 86% of those seeking admission to a masters-level programme and 81% of those entering a doctoral programme. Overall, students seeking a masters programme were the most likely to use rankings – a finding that provides a level of detail not previously widely known.
While local students were considered less likely to use rankings, 50% of respondents nonetheless identified local students seeking a masters programme as an important user-group.
What have we learned?
While higher education institutions regularly use and refer to rankings, they do so for a variety of reasons. Indeed, the term ‘ranking’ may be used to refer to any formal and public evaluation of higher education performance, especially when the results are published in an ordinal format.
Despite the significance of global rankings, national rankings retain importance, sometimes more so than global rankings because of the direct link to resources and student choice.
Rankings play a significant role with respect to defining strategy and goals, but they are one source of information among others.
The way in which higher education institutions study or reflect upon rankings is neither systematic nor coherent, and they often use ad hoc monitoring patterns in response to strategic needs. Hence, there is no clear pattern as to how institutions respond, and not all institutions – or all institutions with a similar profile – react in the same way.
Acknowledging that cross-national comparisons will inevitably increase over the years, the report urges institutions to develop their institutional and strategic research capacity. It concludes with a check-list to guide institutional responses to rankings.
The results are broadly in line with experiences internationally and highlight ambiguities and complexities around attitudes towards and uses of rankings.
Higher education institutions have a schizophrenic attitude towards rankings – not least because their main stakeholders use rankings. Effectively, the research shows that institutions are “learning to live with rankings” and to use them in a sophisticated way, often as part of a basket of indicators.
Yet, it is also clear that rankings are influencing the types of decisions that higher education institutions are making.
Given concerns about the meaningfulness of the indicators, these findings prompt further investigation. Likewise, the study preceded the launch of U-Multirank, and it would be worth exploring U-Multirank's influence, and attitudes towards it, as compared with other rankings.
* Ellen Hazelkorn is policy advisor to the Higher Education Authority, Ireland, and director of the Higher Education Policy Research Unit, or HEPRU, at the Dublin Institute of Technology, Ireland. She is co-author of the EUA report on rankings, "Rankings in Institutional Strategies and Processes".