Performance indicators for South African universities, developed by the Centre for Higher Education Transformation (CHET), are starting to be used to inform government decisions about the higher education system and about individual institutions, said Dr Ian Bunting of the national Department of Education. The indicators will be used to set different targets for universities – encouraging diversity in the sector – as well as to measure the performance of institutions over time and to hold them accountable.
Bunting and CHET director Dr Nico Cloete have been developing indicators for South African higher education since 2000. University councils can also use the information to ascertain the progress of the institutions they are entrusted to govern, Bunting said in a presentation to the CHET seminar titled Performance indicators for different purposes.
University profiles provide information on, among other things: actual and planned student enrolments by number, qualification type and field of study; student success and graduation rates; staff numbers, ratios, qualifications and research outputs; and university income and expenditure, income sources and costs per graduate.
Using the information, the government and institutions are able to establish a benchmark profile for each university and measure progress from there over time, based on mutually agreed targets.
“The government will assess a university relative to its target,” Bunting explained. “If a university does not reach its target, it will be asked why and adjustments will be made. Once an agreement has been reached, the government can hold an institution to account.
“Differentiation comes in with the setting of different targets for different institutions,” he said. For example, ‘traditional’ universities have higher ratios of research outputs than universities of technology, and this difference – or diversity – will be retained.
In his presentation, titled Diversity and differentiation in the changing South African higher education landscape, Cloete said: “We are now playing around with different uses for indicators – for planning, for governance, and for developing a ‘new pact’ for higher education.”
Bunting used a leading research institution, the University of Pretoria, to illustrate how performance indicators will be used to facilitate government steering of higher education and programme differentiation.
The Minister of Education has approved Pretoria’s planned enrolment of 54,000 students by 2010. The university will grow by 4% between 2006 and 2010 – above the planned systemic growth of 2.5%.
“Pretoria has been allowed to grow quickly because the government believes that it can cope,” said Bunting. “The targets differ sharply between institutions. Some have a negative growth agreement regarding student enrolment, and will be given a safety net of funding to enable them to move slowly down to their lower student number target”.
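The growth arithmetic can be sketched briefly. This is an illustrative calculation only: it assumes the 4% and 2.5% figures are annual rates (the article does not say), and works backwards from the approved 2010 target of 54,000 students; the implied 2006 base is a derived illustration, not a reported figure.

```python
# Illustrative sketch: assumes 4% and 2.5% are annual growth rates,
# which the article does not confirm. The 2006 base is inferred, not reported.
ANNUAL_GROWTH = 0.04      # Pretoria's approved rate (assumed annual)
SYSTEMIC_GROWTH = 0.025   # planned growth for the system as a whole
TARGET_2010 = 54_000
YEARS = 4                 # 2006 -> 2010

# Working backwards from the approved 2010 target:
implied_2006_base = TARGET_2010 / (1 + ANNUAL_GROWTH) ** YEARS
print(f"Implied 2006 enrolment: {implied_2006_base:,.0f}")

# The same base growing at the systemic rate would reach a lower 2010 figure,
# showing how much of the target depends on the above-systemic rate:
systemic_2010 = implied_2006_base * (1 + SYSTEMIC_GROWTH) ** YEARS
print(f"At the systemic rate instead: {systemic_2010:,.0f}")
```

The gap between the two 2010 figures is the practical meaning of an institution-specific target set above the systemic rate.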
Also approved are Pretoria’s plans to grow postgraduate student proportions to 29% of all enrolments, up from 19% in 2006, to increase enrolments in science and technology, and to improve student success rates from just over the national target of 80% in 2006 to 84% in 2010, among other things.
The University of Pretoria’s plan supports government goals to produce more graduates from quality programmes, especially in areas of skills shortage, and more postgraduates. “The targets are set and the Department of Education will annually monitor the university”, Bunting said.
University profiles are based on data extracted from the Department of Education’s national higher education management information system (HEMIS), which is itself obtained from the production databases of each institution.
Data can also be extracted to compare universities with their peers in three groupings: by institutional category, such as university, university of technology or ‘comprehensive’; by financial resources, using the percentage of private income as the classifier; and by notional competitors – institutions that regard each other as competitive or comparable.
The idea of peer groupings is to enable more refined targets and comparisons. The peer group analyses are based on six targets: student-to-staff ratios; staff qualification levels; average student success rates; average graduate output; staff research outputs, including postgraduate students per academic; and research publication units.
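A peer-group comparison of this kind can be sketched in a few lines. All institution names and figures below are invented for illustration; the sketch simply shows how two of the six targets – student-to-staff ratios and success rates – might be compared against a peer-group average rather than a fixed national norm.

```python
# Hypothetical peer group: names and figures are invented for this sketch.
peers = {
    "University A": {"students": 30_000, "academic_staff": 1_200, "success_rate": 0.82},
    "University B": {"students": 25_000, "academic_staff": 900,   "success_rate": 0.78},
    "University C": {"students": 40_000, "academic_staff": 1_400, "success_rate": 0.80},
}

for name, d in peers.items():
    ratio = d["students"] / d["academic_staff"]
    print(f"{name}: student:staff ratio {ratio:.1f}, success rate {d['success_rate']:.0%}")

# A relative norm benchmarks each institution against its peer-group
# average rather than a single across-the-board target:
avg_ratio = sum(d["students"] / d["academic_staff"] for d in peers.values()) / len(peers)
print(f"Peer-group average student:staff ratio: {avg_ratio:.1f}")
```

Comparing each institution to the peer-group average, rather than to one system-wide figure, is the kind of relative norm raised later in the seminar discussion.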
Concerns were expressed at the CHET seminar about whether the South African government’s steering of higher education through institutional plans and performance indicators would encourage differentiation. The University of Twente’s Frans van Vught worried that indicators might lead to less rather than more diversity “because they are all applied across the board, regardless of the character of the institution”.
In Hong Kong, he said, universities were provided with different sets of performance indicators that they could choose in accordance with their missions and profiles. The emphasis was on comparison not with each other but with other international institutions having similar missions or profiles. Institutions were thus offered different policy contexts.
Cloete argued that qualitative as well as quantitative indicators needed to be employed: “If it is not in HEMIS, it doesn’t exist. This is a serious problem in our context”.
He added: “How useful can this set of data really be, given that indicators are to a large extent self-referential and that targets are not internationally benchmarked? We are doing a comparative study of seven institutions in seven African countries: those statistics may be useful once we have them.
“We can’t expect the Department of Education, with its sophisticated data system, to cope with other kinds of data. So we need a broader discussion and to look at other surveys, like the household survey, to pick up other data on higher education such as student absorption.”
Professor Rolf Stumpf, outgoing vice-chancellor of Nelson Mandela Metropolitan University, suggested that the next level of sophistication for the indicator system “has to be relative and not absolute norms – relative norms for peer groups would make more sense.”
Cloete stressed that this first phase of indicators is developmental. The next phase would focus on peer groupings, and a third phase would enable higher levels of sophistication.