24 November 2014
AUSTRALIA: The new ERA of journal ranking
It was recently announced that the Excellence in Research for Australia (ERA) initiative will remain largely unchanged in the coming year, continuing as the instrument the Australian government uses to determine the level of research funding available to Australian universities. While there has been some unease about the ERA among academics, many seem resigned to the process.

Perhaps some have simply accepted the onset of the audit regime and have bunkered down. Others perhaps welcome the chance to operate within the competitive environment the ERA brings, having discarded (or perhaps never subscribed to) the older cultures of collegiality that, as we shall see, are hollowed out by cultures of audit.

Others may simply believe that the ERA provides a relatively neutral way to measure and determine quality, thus accepting the benign if somewhat unspecific assurances from Senator Kim Carr and Australian Research Council chief Professor Margaret Sheil that academics who stick to what they are good at will be supported by the ERA.

The ERA represents a full-scale transformation of Australian universities into a culture of audit. While aspects of auditing have been part of the Australian context for some time, Australian universities have not faced anything like, say, the UK situation, where intensive and complex research assessment exercises have been occurring for over two decades. Until now, that is; and a glance at the state of higher education in the UK ought to give pause.

Responding to the ERA requires more than tinkering with various criteria for measuring quality. Instead we suggest the need to return to 'basics' and discuss how any comprehensive auditing regime threatens to alter and in all likelihood undermine the capacity for universities to produce innovative research and critical thought. To say this is not to argue that these things will no longer exist, but that they will decline as careers, research decisions, cultures of academic debate and reading are distorted by the ERA.

The essential 'dysfunctionality' of the ERA for institutions and individual researchers is the focus of this article.

In discussing the pernicious impacts of auditing schemes, we are concerned in particular about the journal ranking process that forms a significant part of the ERA. While the ERA will eventually rank other research activities such as conferences and publishers, the specifics of that process remain uncertain; journals, by contrast, have already been ranked and remain the focal point of discussion concerning the ERA.

We have studied the arbitrary nature of any attempt to 'rank' journals and the critiques levelled at both metrics and peer review criteria. We also question the assumption that audit systems are here to stay and that the best option is therefore to remain attentive to the 'gaps' in techniques that measure academic research, redressing them where possible. Instead we have looked at how activities such as ranking journals are not only flawed but, more significantly, erode the very contexts that produce 'quality' research.

We argue that collegiality, networks of international research, the socio-cultural role of the academic journal, as well as the way academics research in the digital era, are either ignored or negatively affected by ranking exercises such as the ERA.

As an alternative we suggest relocating the question of research quality outside the auditing framework to a context once more governed by discourses of 'professionalism' and 'scholarly autonomy'.

In 2008 the Australian Labor Party introduced the ERA, replacing the previous government's Research Quality Framework (RQF), a scheme that relied upon a fairly labour-intensive process of peer review, the establishment of disciplinary clusters, panels of experts, extensive submission processes and the like.

In an article headlined "A new ERA for Australian research quality assessment" in 2008, Carr argued that the old scheme was "cumbersome and resource greedy", that it "lacked transparency" and failed to "win the confidence of the university sector".

Carr claimed that the ERA would be a more streamlined process that would "reflect world's best practice". Arguing that Australia's university researchers are "highly valued...and highly respected", Carr claimed that the ERA would enable researchers to be more recognised and have their achievements made more visible.

If we took Carr's statements about the ERA at face value we would expect the following. The ERA would value Australian researchers by making their achievements 'more visible'. The ERA would reflect 'world's best practice' and reveal 'how Australian university researchers stack up against the best in the world'. Finally, the ERA would gain the confidence of researchers by being a transparent process. All this would confer an appropriate degree of respect for what academics do.

However, our analysis explores why this is not the case.

In the concluding chapter of The Audit Explosion, Michael Power diagnosed a key problem resulting from the rise of audit culture: "We seem to have lost an ability to be publicly sceptical about the fashion for audit and quality assurance; they appear as 'natural' solutions to the problems we face."

Many academics remain privately sceptical about research auditing schemes, but are unwilling to openly challenge them. As Power observed 16 years ago, we lack the language to voice concerns about the audit culture's focus on quality and performance, despite the fact that the higher education sector has very strong professional and disciplinary understandings of how these terms relate to the work we do, understandings that are already 'benchmarked' internationally.

In light of this, and of the serious unintended outcomes that will stem from dysfunctional reactions to the ERA, we suggest that rather than trying to lobby for small changes or tinker with the auditing mechanism, academics in the humanities need to take ownership of their own positions and traditions around the ideas of professionalism and autonomy that inform existing understandings of research quality.

Reclaiming these terms means not merely adopting a discourse of opposition or concern about the impact of procedures such as the ERA (often placed alongside attempts to cooperate with the process), but also adopting a stance that might more effectively contribute to the very outcomes of quality and innovation that ministers and governments (as well as academics) desire.

Power's suggestion is that "concepts of trust and autonomy will need to be partially rehabilitated into managerial languages in some way", and we may well begin with a task such as this.

As Margit Osterloh and Bruno Frey have demonstrated, if academics are permitted to work according to their professional motivations - intrinsic curiosity, symbolic recognition via collegial networks, employment and promotion - governments will be more likely to find innovation and research that, in Kim Carr's words, you could be 'proud of'.

* Simon Cooper teaches in the School of Humanities, Communications & Social Sciences and Anna Poletti teaches in the School of English, Communications & Performance Studies at Monash University in Victoria, Australia.

* This is a much-abridged version of Simon Cooper and Anna Poletti's article 'The New ERA of Journal Ranking' in the current issue of the Australian Universities Review.
