Concerns growing over ‘gaming’ in university rankings

Universities determined to rise up the international rankings are increasingly ‘playing’ the methodology, Shaun Curtis of the University of Exeter in the UK told the “Worldviews 2013” conference last week. One way is to seek support from colleagues in other institutions who are answering rankings questionnaires; another is to game the data.

Some universities, said Curtis, who is director of ‘International Exeter’, were encouraging people to support their institutions in reputation surveys. Recently he received an email from a colleague at a partner university reminding him that a rankings questionnaire was on the horizon.

“The colleague listed the university’s achievements in recent years and the trajectory it had travelled – and a quite useful link to the questionnaire was given as well. There was no direct approach, but you could see what was happening.”

It was also possible to play the data. “I was amazed to see an advert from an Australian university that was looking to employ rankings managers on incredibly high salaries. And why did they want to do that? Basically, you can play the rankings game.

“Perhaps a university can rise up the rankings because they have world-class data crunchers.”

Curtis was a panellist in a session on the relevance and rise of rankings, along with Bob Morse, director of data research at US News and World Report, Mary Dwyer, senior editor at Maclean’s magazine in Canada – they both produce national university and college rankings – and Phil Baty, editor of the Times Higher Education world rankings.

Curtis said that 5,000 of Exeter University’s 18,000 students were from outside the UK. “Rankings therefore play a very important role in what we do.” Rankings were explicit in the university’s strategy. While rankings had flaws, that was no excuse for poor performance.

National rankings, Curtis contended, were more influential than global rankings. “Context is key.” For students, parents and recruitment agencies, it was more important to understand how Exeter was doing in relation to other UK universities, than in relation to institutions in other countries that people did not know.

Domestic rankings were more data driven, international rankings more perception driven. However, international rankings had an important effect on prestige and so universities had to pay attention or risk being caught unawares.

British universities were starting to play the rankings game, sometimes quite blatantly, with attempts to exploit some of the methodologies. “And that’s especially true for the international rankings, which are more perception driven.”

Curtis was also concerned about rankings affecting policy, with governments apparently wanting to concentrate funding in rankings winners. “The media is influencing this policy debate.”

Obscenely powerful

Phil Baty of Times Higher Education, or THE, said, in a disembodied voice over an audio link from London, that rankings had become “obscenely powerful”.

Brazil was sending 100,000 students to study abroad only at ranked institutions, Russia was giving special recognition to degrees from top ranked universities, and India was only allowing in institutions that were globally ranked. Another “rising power” – Twitter – was using the international rankings to decide where it would set up a research centre.

One measure of the importance of rankings was the studies showing that they were the number one factor in students’ university choices – more important than fees and, remarkably, course content.

“This reflects the huge investments made.” If a student was spending six figures on a qualification, it was a bigger investment decision than buying a car or even a house. “It’s about a brand, a lifelong career.”

While all university rankings had serious limitations and their power was not justified, they nevertheless had a very important role to play – but only if they were transparent and honest about inherent weaknesses.

Baty argued that global rankings were “more responsible” than national ones. The thrust of his argument was that international rankings only tried to compare large, research-intensive universities around the world. “We are only interested in selecting the global elite.”

Therefore, international rankings avoided the pitfall of national rankings, which compared big and small institutions, putting diverse groups of institutions in the same hierarchical list, “condemning outstanding local institutions as failures or somehow inferior”.

The same fate befell THE’s competitors, which ranked large numbers of universities – 700 to 800 – rather than the 200 ranked by THE. This did sound a little as if THE was making a strength out of a weakness.

The national rankers

However, US News and Maclean’s said they had quickly realised that in a sector with diverse institutions, it was not useful to measure all of them against one another. So both introduced categories for different types of institutions, in which it was possible to make ‘apples for apples’ comparisons.

Mary Dwyer of Maclean’s said three categories had been created for different types and sizes of institutions in Canada. “What has changed is the amount of info available. Now every university has its own website and there is a wealth of data online from other organisations.”

But while there was a lot more information available, it was “clear that students and parents are still looking to the media and rankings” – although rankings tended to be a starting point for people in choosing where to study.

The media played a crucial role, said Bob Morse of US News. In the United States, the government would not undertake rankings, although research councils did comparisons, and “higher education would never rank itself”.

The media in America was seen as credible – this might not be the case in other countries. There were dangers when rankings were connected to governments. They could be seen as a direct tool to decide policies through the data. “That’s a different kind of process than rankings whose purpose is to serve consumers.”

Dwyer said that universities and governments tended to hold off from producing rankings because there were many different interests jostling to be served. Curtis agreed, voicing concern over the new international U-Multirank exercise being underwritten by a supranational government, the European Union, rather than being produced by the media.

Countering gaming

Regarding gaming by universities, Baty said rankings needed to be held up to scrutiny. THE’s reputation survey was not open; it was distributed in various languages, and care was taken to ensure that people in enough countries were invited to take part. “So we are working hard to iron out some of these biases.”

Dwyer said that Maclean’s got all its data from third-party sources, for example, research councils. When the reputation survey was sent to universities, they were all sent the same number. “With those types of controls, there is not enough data that universities can affect that much.”

Morse argued that rankings organisations should not believe their data were perfect. “This is not the position. We must be realistic in saying that it will be a battle to get the correct data and build a culture of data standards, because when the stakes are high people will make the effort.”

What everybody agreed on was that rankings were not going away.