RANKINGS 2: Research-oriented institutions will be favoured
After consultation with the sector, THE has now released its methodology and proposes to use 13 indicators, although the precise weighting of each is not yet determined. The broad breakdown is: four measures of institutional income, two measures of international diversity, three measures of student numbers, two measures of research performance and two reputational surveys.
The 11 quantitative measures are all capable of external verification even though the data are to be provided directly by universities. The data requests to institutions by Thomson Reuters are accompanied by detailed explanatory material. Institutions can be relied upon to closely monitor the data provided by rivals. Thomson Reuters will also conduct the surveys and supply the research citation data.
In the new rankings THE proposes to scale all the quantitative variables in some way, usually by academic staff numbers. Scaling represents a major departure from past practice. The Shanghai Jiao Tong rankings give only a 10% weight to scaled performance. Scaling shifts the measures to ones of productivity rather than of total contribution to research and teaching.
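The effect of scaling can be sketched with a small example. The institution names and figures below are invented for illustration, not THE data:

```python
# Scaling converts a total-output measure into a productivity measure.
papers = {"Uni A": 12000, "Uni B": 3000}   # total papers (illustrative)
staff = {"Uni A": 4000, "Uni B": 600}      # academic staff (illustrative)

papers_per_staff = {u: papers[u] / staff[u] for u in papers}
# Uni A leads on total output (12000 vs 3000), but Uni B leads on the
# scaled, per-staff measure (5.0 vs 3.0) - a different ranking entirely.
```

This is why the move matters: a large institution's total contribution and a small institution's productivity can point in opposite directions.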
There are also definitional problems in measuring staff numbers but THE is attempting to impose a common definition. Difficulties arise, for example, in the treatment of staff employed in affiliated research institutions or hospitals, and those teaching in offshore campuses.
Thomson Reuters' request that institutions at least match staff numbers to the scope of the other variables used is probably the best that can be done. In Australia, academic staff are classified as teaching-only, teaching-and-research, and research-only. Different measures of staff should be used for scaling performance in teaching and in research.
In work I did with Nina Van Dyke,** we found that the results from reputational surveys were more highly correlated with total performance measures than with those measures scaled by institutional size.
It is important that THE publish the data on staff numbers to enable this sort of analysis to be redone - and in the interests of transparency. In previous THE-QS rankings the correlation between citations per staff and the survey of academics was quite low.
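The kind of analysis described here can be sketched as follows; all figures are invented for illustration, and the comparison simply asks which measure tracks survey reputation more closely:

```python
import numpy as np

survey = np.array([90.0, 70.0, 55.0, 40.0])          # reputational survey scores
total_papers = np.array([12000.0, 6000.0, 3500.0, 1500.0])  # total output
staff = np.array([4000.0, 1500.0, 1200.0, 300.0])    # academic staff numbers
papers_per_staff = total_papers / staff              # scaled measure

# Pearson correlations of the survey with each version of the measure.
r_total = np.corrcoef(survey, total_papers)[0, 1]
r_scaled = np.corrcoef(survey, papers_per_staff)[0, 1]
# In this invented sample the survey tracks the total measure far more
# closely than the per-staff measure (r_total > r_scaled).
```

Publishing staff numbers would let anyone rerun exactly this kind of check on the real data.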
Turning to the individual indicators, the three research income measures will favour institutions with a science/medical bias. International diversity of staff is a slippery concept, fraught with issues of how nationality is defined. A measure foreshadowed for future consideration is the number of research papers co-authored with international partners. This is worth exploring.
The number of undergraduate entrants scaled by academic staff is presumably a way of obtaining a measure of student-staff ratios but it can obviously be distorted by national differences in the length of courses. The ratio of international to domestic students is retained from previous rankings.
Research performance is measured by the number of papers and citation impact. As in other international rankings, the omission of books and other forms of output necessarily imparts bias towards institutions that are strong in the sciences although at present the databases do not allow much else.
The reputational surveys now have much less weight than in the former THE-QS methodology. Separate surveys of teaching and research are being conducted. Measures of teaching performance will be keenly awaited as it is a notoriously difficult area within countries, let alone across countries.
The debatable areas include whose responses should be sought: if students are to be included, should they be surveyed while studying, immediately after graduation, or a few years after graduation? In the THE survey it seems that a measure of student performance will be obtained through questions asked of academics.
This is likely to place emphasis on the quality of students going on to postgraduate work. But, as a distinguishing feature of the best universities (other than the US liberal arts colleges) is a strong postgraduate programme, this bias is no bad thing.
THE recognises that discipline matters, specifically in the citation impact measure and the surveys. Discipline-specific information is being sought and we must wait to see how this has been used.
In principle, institutions should be evaluated on the basis of whether they are good at what they do. This implies that all indicators used in rankings should be done on a discipline basis and then aggregated up using appropriate weights, such as the shares of student or staff numbers in each discipline.
Although this makes ranking more complex, only then will specialised institutions, such as the London School of Economics, be ranked appropriately.
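The aggregation step can be sketched in a few lines. The discipline scores and student shares below are hypothetical, chosen to show how a specialised institution fares:

```python
# Evaluate each discipline separately, then aggregate with weights
# proportional to each discipline's share of student numbers.
discipline_scores = {"economics": 95.0, "law": 88.0, "science": 40.0}
student_shares = {"economics": 0.6, "law": 0.3, "science": 0.1}  # sum to 1

overall = sum(discipline_scores[d] * student_shares[d]
              for d in discipline_scores)
# -> 87.4: the institution is judged on what it actually teaches, so a
# weak score in a discipline with few students barely drags it down.
```

With equal weights across all disciplines the same institution would score only 74.3, which is the kind of penalty specialised institutions suffer under undifferentiated rankings.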
The rankings of Australian universities, with the possible exception of the Australian National University, are likely to fall. But the new methodology will provide a better indicator of international academic standing.
* Ross Williams is a professorial fellow and professor emeritus in the Melbourne Institute at the University of Melbourne and has compiled rankings of Australian universities
** Ross Williams and Nina Van Dyke (2007) "Measuring the International Standing of Universities with an Application to Australian Universities", in Higher Education, Vol 53, No 6.