ASIA: The new university rankings

The new Asian University Rankings, produced by QS Quacquarelli Symonds, will provoke heated discussion throughout Asia.

There are several surprises: not only are there changes from last year's Asian rankings but they are in some ways quite different from QS's World University Rankings, even though the two draw on overlapping data.

The National University of Singapore has risen dramatically from 10th to 3rd place while Peking University, which was on equal terms with NUS last year, has slipped out of the top 10 altogether.

There will surely be some complaints about the University of Hong Kong occupying the top spot, followed by the Hong Kong University of Science and Technology, whereas the University of Tokyo, fifth in these rankings, was the top Asian university in last year's THE-QS world rankings.

Overall, the rankings are dominated by Hong Kong and Japan with mainland China and Korea, surprisingly perhaps in view of their high research output, trailing behind. The Indian institutes of technology do quite well but the performance of universities in Thailand, Malaysia and the Philippines is generally mediocre.

For five years, QS produced world university rankings in collaboration with Times Higher Education. That partnership came to an acrimonious end and QS and THE are now producing independent world rankings of their own.

QS also seems set to develop a stable of regional league tables: the first Asian university rankings came out last year and the company is now preparing rankings for the Arab world and Latin America.

Publication of the second edition of the Asian university rankings will be welcomed by many in Asia. Rankings have their faults but they can provide a modicum of easily digestible information for students, administrators and other stakeholders.

Over the last few years, attitudes to university rankings have changed in Asia. The reluctance of many universities to participate contributed to the demise of the Asiaweek rankings of Asian universities in 2001.

Now we find that they are eagerly lining up for conferences organised by QS and monitoring their progress, or lack of it, in the rankings produced by QS, the Shanghai Jiao Tong University and, to a lesser extent, Webometrics.

The current Asian rankings are based on QS's world university rankings, using a broadly similar approach although with some significant differences.

Briefly, the differences are these: the weighting given to the survey of academic opinion, which QS insists on calling a peer review, has been reduced from 40% to 30%; two measures of internationalisation, inbound and outbound exchange students, have been added to supplement the usual two of international academics and international students; and two new measures of research excellence, papers per academic and citations per paper, have been introduced.

The first measures the output of the average academic and the second the significance of the average paper an institution produces. Together these two account for 30% of the total score, compared with the 20% given to citations per paper in the world rankings.
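
As a rough illustration of how such a weighting scheme combines normalised indicator scores into an overall result, here is a minimal sketch in Python. The 30% for the academic survey, the 10% for internationalisation and the 30% shared by the two research measures follow the figures above; the split between the remaining indicators, and the even 15/15 split of the research weight, are assumptions made purely for the example.

```python
# Illustrative sketch of a weighted composite score; not QS's actual method.
# Weights marked "assumed" are guesses for the purpose of the example.
WEIGHTS = {
    "academic_survey": 0.30,         # reduced from 40% in the world rankings
    "recruiter_review": 0.10,        # assumed
    "student_academic_ratio": 0.20,  # assumed
    "internationalisation": 0.10,    # academics, students, exchange in/out
    "papers_per_academic": 0.15,     # assumed even split of the 30% ...
    "citations_per_paper": 0.15,     # ... given to the two research measures
}

def composite(scores):
    """Combine normalised indicator scores (each 0-100) into an overall score."""
    return sum(WEIGHTS[name] * scores.get(name, 0.0) for name in WEIGHTS)

# A hypothetical university, strong on research but weak on internationalisation.
print(round(composite({
    "academic_survey": 100, "recruiter_review": 95,
    "student_academic_ratio": 90, "internationalisation": 40,
    "papers_per_academic": 95, "citations_per_paper": 85,
}), 1))
```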

One very welcome feature of these rankings is that the score for each indicator is given and also the score for 2009, which allows for some detailed analysis.

Many people will be perplexed by the University of Hong Kong being the best university in these Asian rankings while it trailed behind the University of Tokyo in the THE-QS world university rankings last year.

This apparent contradiction is what happens when the same data is used but benchmarked to different top scores. Basically, Tokyo is ahead of Hong Kong on the "academic peer review", the recruiter review, the student-academic ratio and papers per academic, slightly behind on citations per paper, but a long way behind on the internationalisation indicators.

The latter measures command only a 10th of the overall score, just as they did in the world rankings. But the effect of benchmarking to different top scores was to give them greater importance.

The result is that Hong Kong takes first place, not because of any real improvement since the end of last year, but because of a quirk in the normalisation system used by QS.
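
A toy example may make the effect clearer. In the sketch below, every figure is invented: two universities have the same raw indicator data in both rankings, but when each indicator is rescaled against the regional rather than the world top score, the indicator on which Asian universities spread out most, internationalisation, gains effective weight and the order flips.

```python
# Toy illustration of benchmarking identical raw data against different top
# scores; every number here is invented and bears no relation to real QS data.

def normalise(raw, tops):
    """Rescale each raw indicator value so the chosen top score maps to 100."""
    return {k: 100.0 * raw[k] / tops[k] for k in raw}

def overall(scores, weights):
    return sum(weights[k] * scores[k] for k in weights)

weights = {"survey": 0.9, "internationalisation": 0.1}  # illustrative only

tokyo     = {"survey": 90, "internationalisation": 5}
hong_kong = {"survey": 85, "internationalisation": 20}

world_tops = {"survey": 100, "internationalisation": 100}  # global leaders
asian_tops = {"survey": 90,  "internationalisation": 20}   # regional leaders

for label, tops in (("world benchmark", world_tops), ("Asian benchmark", asian_tops)):
    t = overall(normalise(tokyo, tops), weights)
    h = overall(normalise(hong_kong, tops), weights)
    print(f"{label}: Tokyo {t:.1f}, Hong Kong {h:.1f}")
# Under the world benchmark Tokyo is ahead; under the Asian benchmark Hong Kong is.
```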

Why did the National University of Singapore do so well this year? It is quite simple: in the 2009 and earlier world rankings and the Asian rankings it did as well as any Asian university in every indicator but one.

That was the student-academic ratio, where it scored a not very brilliant 46 out of 100 in the world university rankings and 51 in the Asian rankings. But in the 2010 Asian rankings, NUS went up to 92.7.

To simplify, what happened was that up to last year NUS reported a student-academic ratio of about 14. This year the number of academics used to calculate the ratio doubled because research staff were included in the total, and so the student-academic ratio came down to 6.25, one of the lowest in Asia.
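
A back-of-the-envelope sketch, with invented headcounts chosen only to show the mechanism: counting research-only staff as academics roughly doubles the denominator and halves the reported ratio, which then translates into a much higher score on this indicator.

```python
# Invented headcounts, used only to illustrate the arithmetic of the change.
students = 25000
teaching_staff = 2000
research_only_staff = 2000  # assumed roughly equal to the teaching staff

ratio_before = students / teaching_staff                          # 12.5
ratio_after = students / (teaching_staff + research_only_staff)   # 6.25

print(ratio_before, ratio_after)  # a lower ratio earns a higher indicator score
```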

It must be stressed that the university did nothing unethical: QS states quite clearly that research-only staff are to be included in the total. Still, NUS's rise this year had nothing to do with any real improvement, just as its relatively poor showing last year did not reflect any real deficiency.

Some indicators have been extremely volatile between 2009 and this year. Some schools, for example, have done well because of better scores for internationalisation. If these rankings are to be believed, there has been an extraordinary expansion of student exchange programs, both inbound and outbound, over the last year all over Asia but especially in Korea and Japan.

Looking at outbound exchange students, we can see that Daejin University's score for this indicator grew from 23.1 to 99.9, the University of Seoul's from 48 to 94.9 and Yonsei University's from 43.1 to 90.4. But these improvements pale beside that of Tokyo University of Agriculture and Technology for inbound students: from 3.9 to 100.

If there has indeed been such an extraordinary upsurge of exchange programmes one must ask whether, at a time of world recession, this is the best use of government funds or tuition payments.

There is no doubt that the remarkable growth in the reported numbers of exchange students in Asia is largely a result of their inclusion as an indicator in the QS rankings.

It is noticeable that Chinese universities do quite well in these rankings, but not outstandingly so. This seems surprising in the light of the much publicised expansion of Chinese research reported by Thomson Reuters, which now supplies the data for QS's former partner, THE.

But it looks as if China is following a bottom-up policy of research development, with an emphasis on the large-scale production of research that is not always of high quality.

There seems to be a scarcity of researchers of the highest calibre in China. The whole of mainland China has just four highly cited researchers listed by ISI, compared with 20 in Hong Kong.

In comparison, three universities in Saudi Arabia have 20 highly cited researchers between them. Accounts of plagiarism and claims that the quality of research is declining may be exaggerated, but it could be that China's research expansion is running out of steam and that this is reflected in its lacklustre showing in these rankings.

To conclude, these rankings have the flaws of the World University Rankings and are in many ways misleading. They should not be treated as impeccable measures of quality.

The publication and citation indicators are biased towards technological and medical disciplines while others are easily manipulated. The "peer review" and recruiter review are subjective and unrepresentative.

Methodological quirks can lead to rises and falls that do not reflect any real change. Nonetheless, these rankings remain, and will remain for the foreseeable future, the only source of comparative information covering more than a trivial number of Asian universities.

* Richard Holmes teaches at the Universiti Teknologi MARA, Shah Alam, Malaysia and produces the University Ranking Watch blog.