AUSTRALIA: Jiao Tong superior but not ideal
The Jiao Tong system has to be judged clearly superior to the THES system. In emphasising research it focuses on one of the essential functions of a university and, in contrast to the THES, which gives great weight to peer review, Jiao Tong is concerned with genuine criteria rather than mere symptoms of excellence; it also aims to confine itself to relatively objective criteria indicating demonstrable and measurable differences between universities.
While the Jiao Tong system was devised in a country currently viewed, at least in the education context, as a consumer of education, the THES system was developed in a country that exports it. Professor Simon Marginson has suggested the THES system may have been designed to promote British universities in relation to US institutions, with Australian universities an accidental beneficiary. Jiao Tong reflects the interests of those buying an education and, as such, aims to give an accurate picture of university quality.
Comparing the actual rankings of the THES with Jiao Tong, we find very similar results among the top 10 universities, with eight of Jiao Tong’s top 10 also in the THES top 10. But further down the lists, the two systems produce very divergent results.
This may be because the achievements of universities which are at the very top in terms of research productivity, and are therefore given top ranking by Jiao Tong, are also so famous that the THES’ peer review system consistently awards them top scores. Further down the order among the very large number of lesser-known but respected universities, peer review is not able to produce results that consistently indicate actual achievement.
That sharp divergences occur lower down is dramatically shown if we compare the different rankings given by Jiao Tong and THES to Australian universities which, in terms of the world order, belong mostly in the "lesser-known but respected" category. THES places seven Australian universities in the world's top 100 whereas Jiao Tong rates only two.
Why do Australia’s universities do much better on the THES than on the Jiao Tong ranking? One reason is the credit awarded by the THES for percentages of overseas students – and Australasian universities attract very large numbers of students from Asia. Another is that Jiao Tong emphasises scientific research and so a relatively moderate research output in the sciences brings Australian universities down.
But the most important reason is that the THES has a strong regional bias because its peer review assessors are not asked to rate universities across the world but only those in their own region. Australian universities probably benefit because the system does not require them to compete with the world’s best, only with those in the Asia-Pacific.
There is no such regional bias in the Jiao Tong, with the result – we are suggesting – that Australia’s universities do much better under the THES.
Though superior to the THES, Jiao Tong falls well short of an ideal ranking system. It shows a commendable reluctance to use subjective criteria of excellence and consistently attends to real features of excellence rather than mere symptoms. It avoids peer review and excludes criteria that give credit for reputation, “internationalisation”, and other unreliable indicators.
In focusing on research it attends to a fundamental mark of university excellence; its methods for assessing research performance are admirable in their objectivity and the way they attend to quantity and quality.
An ideal ranking system would give significant credit, as Jiao Tong does, for the number of academic articles a university's departments place in peer-reviewed academic journals. However, unlike Jiao Tong, it would not have a bias towards science or any other academic field.
For the system to be fair it would also have to find a way of evaluating the varying weight of articles (for example in terms of length and substance) and the varying contributions individuals make to a publication (recognising, for example, the difference between individual authorship and multi-authorship). It would also need to find a way of crediting the authorship of academic monographs – another form of academic output which falls through the cracks of the Jiao Tong system.
An ideal system would follow Jiao Tong in giving credit for high quality research, though perhaps giving less emphasis to rare forms of achievement such as winning Nobel and Fields prizes, while expanding its list of prestige journals to accommodate more than just two – Science and Nature. There are analogues of these journals in non-science fields which should also be acknowledged, and there are other highly prestigious science journals.
Again, an ideal system would follow Jiao Tong in crediting institutions for their highly cited researchers, since the presence of research leaders is obviously a distinctive feature of excellent universities. But a weakness of Jiao Tong is that it gives insufficient attention to per capita research output: while absolute output does contribute to excellence, per capita productivity indicates consistent research involvement – something that would be given recognition in an ideal ranking system.
Jiao Tong has been widely criticised for giving too little attention to teaching excellence. An ideal ranking system would address this. Liu and Cheng point out that Jiao Tong gives relatively little weight to teaching because it is difficult to find objective and internationally comparable measures of teaching quality.
But class sizes can be measured, and small class sizes give at least some indication of teaching quality. The THES recognises this by giving points for low staff-student ratios but it would be even better simply to credit universities directly for low average class sizes.
Another plausible indicator of students’ active participation in learning is their library borrowing practices – something that could be objectively measured. Further measurable criteria of teaching/learning activity need to be investigated.
One way or another, the fundamental question of teaching excellence would be given its due weight in an ideal ranking system, even if ingenuity is required in finding appropriate criteria.
Both THES and Jiao Tong produce lists of top universities, but both recognise the limited value of this and gesture towards improvement by providing more detailed information about particular areas of academic activity. Both systems are trying to build in more specific information on particular subject areas.
The THES has “top 50” ratings for universities in the areas of science, technology, biomedicine, arts and humanities, and the social sciences while the Jiao Tong compilers have promised to list the top universities in the areas of engineering, sciences, social sciences, life sciences and medicine, though this information has not yet appeared.
More detailed information along these lines would certainly add to the value of these ranking systems. Universities vary in strength from area to area and potential students are likely to be interested in excellence in their chosen area of study.
But even categories such as “science” and “arts and humanities” are too general to suit the needs of a student interested in a particular discipline, and at postgraduate level information on particular departments becomes crucial. An ideal ranking system would give scores for teaching and research output on a department-by-department basis.
A model for this already exists: Germany’s Centre for Higher Education Development has a system which deliberately avoids general rankings in favour of evaluations of specific university departments according to such features as “teaching” and “research reputation”.
You could ask whether we want ranking systems at all, but they are a fait accompli, and the real question is how best to make use of them. Essentially, they embody information, and their value depends on what that information is.
If their criteria capture useful information, then the systems themselves are useful. Even if a system is not ideal, it will have some value if used in an informed way. An uninformed use of a system is simply to take its rankings at face value. An informed approach is to look carefully at its criteria, and then read its rankings as a measurement of achievement according to those criteria in particular.
We know Jiao Tong measures research output and pays little attention to teaching, and we know it emphasises science at the expense of the humanities. Thus when it ranks a far larger number of Japanese universities than Australian universities among the world’s top 100, we may disagree with those rankings. But given its biases, it is still telling us something useful, namely, that Australian universities are probably relatively uncompetitive producers of scientific research measured in terms of absolute output.
Universities can do themselves harm if they uncritically accept a system’s criteria and aim simply for a higher ranking as an end in itself. In this way a system designed to measure performance dictates what that performance should be – a case of the tail wagging the dog.
A university trying to improve its performance according to Jiao Tong might channel funds into science at the expense of the humanities, when in fact it may not have the resources to become competitive in science research. If it was traditionally strong in the humanities, it might meanwhile have undermined one of its important assets.
Ranking systems should not dictate university policy, either at a national or institutional level, but should be used as a source of information for guiding policies that are decided according to the needs of the university’s own community, traditions, market niche and national role.
The kind of information provided by ranking systems can be grouped according to its value at an individual, institutional and national level. At the individual level the information is mainly a resource for prospective students, providing comparisons of institutional performance that facilitate the choice of a university.
At the institutional level academics can use ranking systems to compare the performances of their own departments with corresponding departments in other universities, and administrators can use them to judge their university’s comparative strengths and weaknesses in particular academic areas.
At the national level ranking systems can provide useful information to government and other higher education leaders and decision makers, and also to ordinary citizens wishing to assess the success or wisdom of state or national higher education policies.
* Paul Taylor is a research assistant and Associate Professor Richard Braddock is director of international relations in the Asia-Pacific Research Institute at Macquarie University in Sydney.
This is an edited extract from International University Ranking Systems and the Idea of University Excellence, published in the Australian Association for Tertiary Education Management’s Journal of Higher Education Policy and Management, Vol 29, No.3, November 2007.