Research elite warns against obsessive ‘bean-counting’ culture
Their concern has been highlighted by a report from the League of European Research Universities (LERU) authored by Dr Mary Philips, formerly director of research planning at University College London.
At a launch seminar in Brussels on 19 June she spelled out the various pitfalls of assessment, including the shortcomings of the peer review system, which is costly, time-consuming and subject to bias.
For governments and universities the task in straitened times is to “assess assessment” – that is, find what works in different environments and research cultures globally.
She added: “With few exceptions European universities don’t do as well in rankings when compared with counterparts in the US.”
But figures do not always paint an accurate picture, as the tools currently used to integrate differing information systems are unsatisfactory, she told the seminar.
However, Philips conceded that research assessment is now considered a vital aspect of any university’s activity, so much so that a number of external agencies are developing review methods that will allow the farming out of much research assessment.
Professor Wiljan van den Akker, dean of humanities at Utrecht University, said: “Peer review is the worst form of review – except all the other methods.
“And we should remember that in every kind of world there is some kind of misbehaviour. That includes the academic world.”
He reminded the audience that any kind of assessment – peer review, bibliometric data or other criteria – touches on questions of identity, and that language is the foremost identifier, the language of what has been researched and that of subsequent discourse.
His advice was to “leave language alone otherwise we are inflicting a lot of problems for ourselves”.
Bibliometric data also contains a great deal of "grey literature", a difficulty compounded by differing patterns and norms across the European Union.
All the speakers agreed on the need for higher standards of research assessment.
“We’re working with pretty lousy tools but we still want to win the battle with China, India and others,” Professor Paul Wouters of Leiden University said.
To that end, he said, European institutions urgently need to rid themselves of low-quality databases to clear the way for much higher standards of assessment, which need not come from commercial specialists.
“Research assessment has become a commercial enterprise,” he said.
“But we would be wise to maintain a sceptical attitude towards it. If it sounds too good to be true, it probably is.”
Dr David Sweeney, director for research, innovation and skills at the Higher Education Funding Council for England, said that “just as the [UK’s] National Health Service is primarily about patients (not doctors), universities should mainly be judged on their contributions to society”.
Higher learning is not just about researchers, he added, and professionals must try to escape “a research-centric mode”.
Universities should look at themselves from the outside to avoid a purely internal focus.
He questioned whether universities were being tempted to over-invest in bibliometric data as a means of research assessment, suggesting that "predicting the future impact of research is perhaps a foolish thing to do".
He also criticised a tendency to judge PhD theses by length rather than quality, noting that there are doubts about how the term "excellence" is being used.
"We're not looking for perfection but to be able to ask the right questions," Sweeney said.
LERU has recommended that all researchers be encouraged, or even compelled, to use a unique personal and institutional identifier when publishing, so that their work can be easily traced.
Moreover, researchers should deposit all their publications into a university’s publications database.
Administrators should ensure collected information can be used for multiple purposes, internal and external, to avoid duplication of effort.