SOUTH AFRICA: Researcher rating system should stay

A review of South Africa's long-established researcher rating system has found that it is credible and should be retained. But the review has called for ratings to be re-linked directly to funding for academics' self-initiated research, for aspects of the rating categories and processes to be changed, and for new tools to be developed to assess teams, innovation, multi-disciplinary work and management capacity. The board of the National Research Foundation, which operates the rating system, will decide on the system's future next month.

A system for evaluating and rating researchers was introduced by the predecessor of the National Research Foundation (NRF) in 1984, to recognise outstanding researchers and support their work with funding. There were three main objectives - to identify leading researchers and channel funds to their self-initiated research, to encourage the development of a new generation of researchers, and to counteract a serious brain drain.

South Africa uses a rating system similar to those in New Zealand and Mexico. Researchers apply to the NRF for a rating, and their research over the preceding seven years is evaluated by peer assessment panels. Successful applicants are placed in one of six categories, which cover researchers who are experienced, young, 'disadvantaged' or returning to academia after working elsewhere. The top category is 'A rated' - researchers who are world leaders in their fields.

An in-depth review of the rating system was convened by Higher Education South Africa (HESA), the vice-chancellors' association, and the NRF to investigate the system's purpose and utility. A 10-person review committee was appointed in September 2006, chaired by Professor Loyiso Nongxa, vice-chancellor of the University of the Witwatersrand.

The committee commissioned five studies: a historical review and analysis of the rating system, a study of its use by institutions, a study of its impact on scholarly productivity within specific disciplines, a comparative study of other national evaluation agencies, and a review of the processes used to manage the rating system over the past five years.

Based on the studies, the review committee compiled findings and recommendations that were presented to HESA and the NRF in late 2007 and published at the end of January 2008 in the Review of the NRF system for the evaluation and rating of individual researchers.

The NRF's response has been prepared and now requires endorsement by the NRF board "especially when key policies may be affected", said Dr Andrew M Kaniki, the NRF's executive director of knowledge management and evaluation. The NRF board will meet on 27 June and the response plus a list of actions to be taken will be published after that.

Some recommendations - such as linking ratings and funding - are already being implemented.

In its report, the committee pointed out that while South Africa's rating system was created to support self-initiated research, over time the NRF had shifted funds to other programmes. The budget allocation to support the rating system and funding of researchers was never adequate, and there were other problems such as "inability of universities to meet their end of the deal in terms of finance for C-rated, emerging and non-rated researchers, the continuing crisis with funding equipment, institutional change in a rapidly changing political climate etc."

The decoupling of rating from funding undermined the credibility and applicability of the rating system, and discouraged researchers from applying for ratings. Over time, it came to be seen as an honorific system - "a recognition of excellence".

But one of the reasons for introducing the system was to recognise the achievements of researchers and, the committee concluded, it had been successful in that regard "despite some criticism, scepticism and varied perceptions". The rating system is used by universities as a management tool for promotions, retentions, remuneration, awards and research funding. Industry uses it too, but not all science councils do, nor does government.

Also, one of the studies commissioned by the committee indicated "a positive relationship between rating and productivity and, moreover, a relationship between the level of rating and productivity, that is, the higher the rating, the higher the productivity in terms of outputs".

There is evidence that being rated has benefited the careers of researchers. Also, says the report: "Evidence indicates the number of rated researchers at universities has become one of the indicators of excellence of universities", and rating is used as a benchmarking tool.

One criticism of the system is its complexity; another is that being placed in a particular category, such as 'C', can feel demeaning. A possible resolution would be to give categories descriptors rather than letters - 'international leader', say, instead of 'A rated'. And although universities use the system, the number of academics who apply to be rated remains quite low.

A major problem has been ambivalence in the NRF itself about the system. Some big funding programmes run by the NRF have even ignored the ratings and some NRF facilities do not appear to use them. "The NRF board and the NRF executive therefore need to make a clear statement regarding the value of the NRF rating system to the organisation."

Also, the committee found: "There is not sufficient alignment between the rating and the Department of Education publication subsidy systems. Different elements of these systems address different objectives but they should not be contradictory to each other."

The committee said there was no reason to discontinue the evaluation and rating system, primarily because it was being used by universities and had a high degree of credibility among them. However, the system should be used for its intended purpose - "the rating of individual performance to determine the financial support for self-initiated research to be given to individual researchers".

Further, the NRF should "develop new appropriate tools to assess teams, innovation, multi-disciplinary work, management capability etc", disseminate information about the system to correct misconceptions about it, and integrate it with other NRF activities where appropriate.

One reason for de-linking the rating system from funding was political. From the 1990s onwards there was great pressure to create programmes that encouraged research among institutions and individuals disadvantaged under apartheid. But this eroded a rating system that also had a key role to play in promoting research in South Africa - throwing the baby out with the bathwater, as the saying goes.

It was critical, the review committee found, to re-link rating to funding. To this end, it urged the NRF and HESA to lobby for levels of funding sufficient to support a reformed rating system.

karen.macgregor@uw-news.com