AUSTRALIA: Protests force research policy change

The biggest change relates to how academic journals were assessed last year as part of the government's Excellence in Research for Australia scheme, introduced to shape the allocation of research infrastructure funds.
In the first round of assessments, academic journals were ranked by perceived 'quality' into four tiers: the top 5% of journals were designated 'A*', the next 15% 'A', and the 'B' and 'C' tiers took up the remaining 30% and 50% respectively.
As Dr Ian Dobson writes in our Commentary section in this issue, the underlying assumption of the journal ranking scheme was that the top-rated journals (A* and A) would contain scholarly papers of a higher standard than those ranked B and C. Dobson, one of the scheme's fiercest critics, notes there is not necessarily any link between the quality of a paper and the journal it is published in.
The volume and intensity of criticism this part of the assessment scheme aroused finally forced Innovation, Industry, Science and Research Minister Kim Carr to act. In a release explaining the change, he said the amended system would introduce a "journal quality profile" instead.
This would show the journals in which each unit of evaluation published most frequently, increasing the capacity to accommodate multi-disciplinary research by allowing articles with significant content from a given discipline to be assigned to that discipline, regardless of where they were published.
Carr said this method had been successfully trialled last year in the mathematical sciences. He added: "As with some other aspects of ERA, the rankings themselves were inherited from the discontinued Research Quality Framework process of the previous government, and were developed on the basis of expert bibliometric advice."
But he said "patterns of their utilisation" and detailed analysis of their performance last year made it clear the journal lists themselves were the key contributor to the judgements made, not the rankings within them.
"There is clear and consistent evidence the rankings were being deployed inappropriately within some quarters of the sector, in ways that could produce harmful outcomes, and based on a poor understanding of the actual role of the rankings. One common example was the setting of targets for publication in A and A* journals by institutional research managers.
"In light of these two factors - that [the research assessment exercise] could work perfectly well without the rankings and that their existence was focusing ill-informed, undesirable behaviour in the management of research - I have made the decision to remove the rankings, based on the ARC's expert advice."
He said the journal lists would still be of great importance, but the removal of rankings and the provision of a publication profile would ensure the lists were used descriptively rather than prescriptively.
The National Tertiary Education Union said Carr's decision would be welcomed by the majority of Australian researchers. NTEU National President Jeannie Rea said the union had informed Carr of the "very real concerns of its members" about various aspects of the ERA, in particular the journal rankings.
"Dropping the controversial ERA journal rankings will be a great relief to many researchers, especially those whose research has a strong interdisciplinary, indigenous, public policy or local community focus. Nonetheless, university staff will need to remain vigilant about other ways in which ERA has been inappropriately used," Rea said.