GLOBAL RANKINGS: Thousands respond to THE survey

The opinions of more than 13,000 academics will be used to build a picture of the standard of teaching and research in the world's universities for the 2010 Times Higher Education World University Ranking.

Despite an increased sample size, the findings will account for 20% of final scores, compared with 40% under the methodology used from 2004 to 2009.

Meanwhile, its main rival, QS, is introducing a rating system intended to better reflect the diversity of institutions by measuring their broader missions.

The inclusion of both research and teaching means THE can claim its 2010 rankings will include the first worldwide reputation-based indicator of teaching quality.

Thomson Reuters, the exclusive data supplier for the rankings, confirmed that 13,388 people had responded to its Academic Reputation Survey launched in March.

"It is an excellent response in terms of volume," THE Editor Ann Mroz said. "But it is not just size that matters. The respondents were carefully targeted as experienced scholars by an invitation-only survey to ensure they are representative of their region and subject areas.

"We have a very high-quality sample that is much more representative and rigorous than anything the rankings have used before."

The largest share of respondents (38%) are from the Americas, with 30% from the Asia Pacific and Middle East regions and 28% from Europe. Engineering and technology supplied marginally more respondents (23%) than the physical sciences (21%) and social sciences (18%).

THE Deputy Editor Phil Baty told University World News the broad published methodology was a draft subject to detailed consultation and that the weightings given to individual indicators had yet to be set.

"We are consulting with a group of around 50 senior university staff and institutional researchers from around the world, including the THE editorial board," said Baty.

"The key feature of the new approach is that by using a far larger number of indicators (around 13 compared with the six used in the old methodology) we improve accuracy and increase stability. With the old system, some indicators of very limited value and with too much potential for manipulation by universities were given a very high weighting, increasing the instability of the tables," he said.

"The old system also gave too much weight to opinion surveys. We are planning to reduce the weighting given to subjective measures to around 20%."

Last May, THE and its former partner QS engaged in a numbers battle over the size of the sample used for the 2009 rankings, the last to be published before the two parted company. Baty claimed in The Straits Times that QS achieved only 3,500 responses, but the company's Managing Director, Nunzio Quacquarelli, immediately stated the rankings were based on 9,386.

"QS received statistically significant numbers of academic respondents from all major OECD countries," he said.

Ben Sowter, Head of the QS Intelligence Unit, said a record response had been received in the 2010 academic survey, including a "great many" from university presidents. "We will exceed the 9,386 academic respondents included in our 2009 ranking."

THE's reputation survey feeds into 13 separate performance indicators that will be used to compile the league tables for 2010 and beyond (see table). Six measures were used under the methodology employed between 2004 and 2009.

Richard Holmes, author of the influential blog University Ranking Watch, wrote: "The new methodology is less diverse than appears from a simple count of the number of indicators. It is heavily research orientated ... More than half of the weighting goes to a bundle of research indicators. However, economic activity/innovation is for this year nothing more than research income.

"Adding to the emphasis on research, the institutional indicators include the number of doctorates awarded and the ratio of doctorate to bachelor degrees awarded. Under institutional indicators there is a survey of teaching but the respondents are largely selected on the basis of their being authors of academic articles published in ISI indexed journals.

"There seems to be no evidence that the respondents do very much teaching and if Thomson Reuters includes researchers with non-university affiliations, of whom there are many in medicine and engineering, then it is likely that many of those called upon to evaluate teaching have never done any teaching at all. Meanwhile student to faculty ratio, a crude measure of teaching quality, has been removed.

"So, if you want rankings that emphasise research and funding then THE and Thomson Reuters may be heading, somewhat uncertainly, in the right direction but perhaps at the price of neglecting other aspects of university quality."

QS indicators and weightings remain unchanged for 2010. The company rejected a number of proposed rankings criteria - for example, financial metrics such as research income - because they cannot be independently validated and are subject to exchange rate and business cycle fluctuations.

Instead, its advisory board has consistently argued in favour of maintaining a strong emphasis on its academic reputation survey, which retains a weighting of 40%.

There are, however, a number of innovations for 2010, including QS Stars - a rating system the company says will better reflect the diversity of higher education institutions around the world by measuring the broader missions of universities, such as knowledge transfer, community activities, infrastructure and the presence of specialist centres of excellence.

I think it is important to clear up any confusion about the response rate to QS's 2009 survey. In 2009, QS collected around 3,500 responses, as I have correctly stated. But for the 2009 rankings, QS used 9,386 responses in total, as they aggregated three years' worth of responses. So I am correct in my criticism. It would be helpful if QS could publish a breakdown of their 2009 response numbers.

Phil Baty,
Deputy Editor,
Times Higher Education