Rankings create ‘perverse incentives’ – Hazelkorn
The Times Higher Education-QS World University Rankings soon followed, thus setting "the cat among the pigeons", says Ellen Hazelkorn, author of the recently released (March) second edition of Rankings and the Reshaping of Higher Education: The battle for world-class excellence.
Today, no fewer than 10 major outlets – most of them commercial – publish global rankings, and another 150 or so rankings focus on a subset of countries, institutions or disciplines.
As Hazelkorn says in her book, universities and governments increasingly use rankings to raise their status, attract foreign students, professors and investment and, in many cases, to set policy designed to improve their standing in the rankings. While rankings purport to measure quality, for all intents and purposes they largely capture institutional wealth, whether accumulated over time or reflected in socio-economic advantage, she says.
Hazelkorn, who is also a policy advisor to Ireland's Higher Education Authority and director of the Higher Education Policy Research Unit at the Dublin Institute of Technology in Ireland, spoke with University World News about how a preoccupation with rankings can create what she calls "a lot of perverse incentives".
Your survey found that more institutions were unhappy with their ranking in 2014 than they were in 2006 (83% vs 58%), that more of them monitor their peers worldwide (77% vs nearly 50%) and that an "overwhelming majority" use rankings to inform strategic decisions. What does this upswing across such measures tell you?
That the rankings phenomenon has just continued apace. It tells us the extent to which institutions are very tied into what the rankings tell them about themselves. It's about positioning and visibility.
It's not a huge database of institutions – 109 responded in 2014 – so I look at what it is saying in broad brushstrokes rather than specifics. The institutions that responded were ranked and ranked fairly high, and all [respondents] were quite pleased with the rankings. There's a lot of criticism [of rankings], don't get me wrong. But the institutions that tended to respond, most of them had positive views, most thought rankings were more helpful than unhelpful. My guess on that is, even if you [don't like where you're ranked], it's better to be noticed than not noticed. It's better to be invited to the party than be sitting at home.
Why should this preoccupation with rankings be of concern?
A few rankings are government-sponsored but by and large they're created by commercial organisations. Everyone says, 'I know this is very foolish, but...' And it's all the more remarkable that people are tied to a set of indicators that continue to evolve from year to year and over which they have no control. You look at all this and it's kind of remarkable as to the impact and influence that these rankings have had.
To flip this completely, rankings have managed to raise the focus on higher education quality and policy, focus attention on investment, and place higher education into an international and comparative perspective.
In some countries it's a useful mechanism – I say that with huge reservations because of questions about whether they actually measure quality. But particularly in societies or institutions that have not been open to peer or external review of any sort, the chill winds of competition have blown through them.
But are the lessons being taken out of it the right ones? And are people making appropriate decisions or are they simply seeking to move up the rankings? What happens when the rankings become the strategy as opposed to an outcome of your own strategy? Rankings serve some good, but they create a lot of perverse incentives.
Not everybody needs or wants to go to Harvard. We have to have a playing field where other institutions are equally, mutually seen as representing important options in people's lives. They're not second rate. I guess my argument is, if you have mutual respect for different types of institutions you'll have a different dynamic.
Speaking of Harvard, to what extent are the 74 US universities that show up on global rankings driving higher education policy in the rest of the world?
There's no doubt the United States dominates. There is lots that is really good about US higher education that other countries could learn from – its diversity, the breadth of its institutions, even setting aside the Harvards and Princetons and all of that crowd: the state universities, liberal arts colleges, community colleges, city colleges, the mobility across the system, the ability to transfer credits, to move from one part of the country to another. By and large that's all positive.
Many of the other countries, certainly in Europe, have got these very rigid binaries – it's either university or non-university.
But this is the other side of it: You've got this incredible top echelon and then you've got these incredible failures. US completion rates are shocking, attrition is shocking, the levels of student debt, the cost, affordability, it's shocking.
This is where this kind of focus on the narrow – we focus on this very narrow stratum and think this is great. The contradictions, the extremes are very extreme. The whole equity issue in the United States is just enormous.
As you note, the top 100 in the rankings capture a mere 0.5% of the 18,000 institutions that enrol students worldwide. You also calculate that those institutions as a group enrol about 0.4% of the world's 200 million college students. How do those statistics skew our understanding of institutions and students?
We're obsessed with this incredibly small percentage of students. What about the other 99.6%? No one is for poor standards or poor academic research, but what are we doing about everyone else? If we were ranking hospitals and only dealt with a few people who went to one hospital, it would be outlawed.
Is the approach taken by U-Multirank any better?
It's got some good attributes but it is full of contradictions. U-Multirank challenges the notion that there's one kind of excellence. However, it has many of the same difficulties. The indicators are methodologically problematic, not least because they mix quality with quantity.
It’s a crowd-sourcing tool – in other words any institution can be ranked just by providing data – which makes it very democratic. But this also means that the results are only as good as what's in the database.
It’s also clear its original ambitions have become diluted over time. Ultimately, I think as a European project, it should focus in the first instance on developing a tool that can enhance and sell European higher education; make that a success, and then expand the horizons.
How about the concept of world-class systems instead of world-class universities?
Universitas 21 is a move in that direction. Finland is an interesting country. It excels across the board in most indicators. And it's a very equitable society that [strives to deliver] a high-quality education across the country. They don't focus just on Helsinki.
Ireland has adopted a similar approach.
Some of these kinds of things are happening elsewhere as different societies begin to consider a much more managed approach for looking at all their institutions and the role that they all play.
So I see some promising things. But it's a real conundrum. I think we have yet to come up with the optimum approach. Certainly broadening out the range of options and opportunities is key to where we're going.
Ellen Hazelkorn’s Rankings and the Reshaping of Higher Education: The battle for world-class excellence, second edition, is published by Palgrave (March, 2015).