AFRICA

African Quality Rating Mechanism – Pilot results

The African Quality Rating Mechanism, or AQRM, was developed by the African Union Commission as part of the African Union’s strategy for harmonising higher education, and was adopted by the Conference of Ministers of Education of Africa in 2007.

Its aim is to revitalise and strengthen African higher education institutions to ensure that they are globally competitive and attractive while being locally relevant.

It is also intended as a tool to facilitate benchmarking of quality and to promote a culture of ongoing quality improvement in higher education.

The plan was also to help in selecting institutions to benefit from the Mwalimu Nyerere Scholarships and in the establishment of Pan African University networks.

The AQRM was launched in 2010. Based on a recent report* published by the African Union Commission (AUC), this article examines the results of that initiative and especially considers the challenges of extending it beyond the pilot phase.

Questionnaire

The survey instrument was an extensive, 37-page questionnaire comprising 15 parts and 80 items, which was sent to higher education institutions. Parts 1-13 of the questionnaire, covering 69 items, sought information and data on the institution and on its staff, students, funding, facilities, processes and so on.

The AUC report concentrates on parts 14 and 15, requiring each institution to undertake a self-rating of the following 11 clusters of criteria: institutional governance; infrastructure; finance; teaching and learning policies; research; community engagement; programme planning; curriculum development; learning materials; teaching assessment; and programme assessment.

There was clearly a strong emphasis on teaching, as six of the 11 clusters relate directly to it. For each criterion under the clusters, institutions rated themselves on a three-point scale: excellent (3), satisfactory (2) and unsatisfactory (1). The last part of the questionnaire asked each institution to rank its three best programmes using 15 specified criteria.

The questionnaire was disseminated in 2010, in English only, to all African higher education institutions, which were asked to complete and return it within 15 days.

The questionnaire was thus not targeted at a few specific institutions, and no mention was made that it was a pilot project. Institutions were informed that the questionnaire was to provide an indication of the status of their programmes and facilities, among other related issues.

Responses

There were 32 respondents from 11 countries, 21 of these being from Nigeria (9), Kenya (6) and South Africa (6). There were two institutions each from Ghana, Tanzania and Zimbabwe, and one each from Egypt, Ethiopia, Mauritius, Mozambique and Swaziland.

The respondents ranged from well-established, large public universities, such as the University of Cape Town in South Africa and Alexandria University in Egypt, to relatively recently established and small institutions, such as Achievers University (private) in Nigeria and Laikipia University College (public) in Kenya.

No response was received from any of the four universities hosting the Pan African University institutes in Algeria, Cameroon, Kenya and Nigeria.

The results from the 32 responding institutions were grouped by cluster. Of particular note is that no external validation of the questionnaires was undertaken and the results were based solely on the institutions’ self-assessment.

Ten of the 32 institutions did not respond to all of the clusters. One institution did not respond to seven clusters, another to six, and the remaining eight left between one and three clusters unanswered.

Many of the unanswered clusters are vital ones, such as infrastructure, finance, teaching and learning policies, and community engagement. These gaps make it difficult to compute a single average rating for each institution for comparative purposes, as the sketch below illustrates. It is not clear why those institutions chose not to complete the questionnaire in full.
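To illustrate the comparability problem, here is a minimal sketch in Python, using entirely hypothetical institutions and scores, of averaging cluster self-ratings on the report's three-point scale. It shows how unanswered clusters silently drop out of the mean:

```python
# Minimal sketch (hypothetical data) of averaging AQRM cluster self-ratings
# on the report's three-point scale: excellent = 3, satisfactory = 2,
# unsatisfactory = 1. The 11 cluster names follow the questionnaire; the
# institutions and scores below are invented purely for illustration.

CLUSTERS = [
    "institutional governance", "infrastructure", "finance",
    "teaching and learning policies", "research", "community engagement",
    "programme planning", "curriculum development", "learning materials",
    "teaching assessment", "programme assessment",
]

# None marks a cluster that an institution left unanswered.
responses = {
    "Institution A": {c: 3 for c in CLUSTERS},  # answered all 11 clusters
    "Institution B": {c: (None if c in ("finance", "infrastructure") else 2)
                      for c in CLUSTERS},       # two clusters unanswered
}

for name, scores in responses.items():
    answered = [s for s in scores.values() if s is not None]
    average = sum(answered) / len(answered)
    print(f"{name}: average {average:.2f} "
          f"over {len(answered)} of {len(CLUSTERS)} clusters")

# Institution B's average is taken over only nine clusters, so comparing it
# with Institution A's 11-cluster average is not like-for-like: vital but
# unanswered clusters (here, finance and infrastructure) simply vanish
# from the mean rather than counting against the institution.
```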

The clusters that the largest number of respondents assessed as ‘excellent’ were programme planning (22), curriculum development (20) and institutional governance (17). Those that the fewest respondents assessed as ‘excellent’ were programme assessment (4), teaching assessment (6) and finance (7).

Five institutions assessed at least one cluster as ‘unsatisfactory’; the clusters concerned (with the number of institutions in brackets) were infrastructure (2), finance (3), research (1) and community engagement (1). Eight of the institutions did not rank their three best programmes.

The summarised AUC report makes no mention of the very substantial amount of institutional data that must have been collected from the institutions’ responses to parts 1-13 of the questionnaire.

Analysis

It is extremely difficult to analyse the results and draw meaningful conclusions from them. One major constraint is that all the criteria under the clusters were purely qualitative.

Under finance, for example, institutions were asked to assess whether they had access to sufficient financial resources, had established procedures for attracting funding from industry and had clearly specified budgetary procedures in place. Under research, they were asked whether they had a policy or strategy for research, publications and intellectual property rights, and whether they had succeeded in attracting research grants.

The institutions must have replied according to their understanding of these criteria, and there could well have been some degree of arbitrariness in their responses.

This was not a ranking exercise, but for a rating to be effective it should allow some degree of comparability. Some quantitative criteria could have been established to differentiate between institutions given the same rating.

The results also gave rise to anomalies, best illustrated by the research cluster. Here, the University of Cape Town, the University of Swaziland and Michael Okpara University of Agriculture in Nigeria were all rated ‘excellent’.

If the exercise was meant to make the institutions more competitive and attractive, such a result is unhelpful, and could even be misleading, considering that the University of Cape Town consistently appears among the best in Africa in global rankings that use research as the main criterion.

Again from the point of view of attractiveness, with 11 cluster ratings and with the same institution very often having different ratings under different clusters, potential students would find it difficult and confusing to use the AQRM as a guide for choosing their institution.

There is no indication of the mechanism the institutions used to complete the questionnaire. If this was done in a consultative manner involving several sectors of the institution, there is a likelihood of follow-up on identified areas of weakness.

However, the strength of the AQRM in driving institutional quality improvement is limited compared to the institutional review methodology, or even the accreditation processes, used in implementing quality assurance.

Ranking of higher education institutions is usually carried out globally using a limited set of specific quantitative criteria. Rating of institutions against a multitude of criteria is usually undertaken by individual countries, because they can enforce participation, have access to institutional data and share a common higher education system.

In Africa, Nigeria rates its higher education institutions and Kenya is planning to do the same. Undertaking a meaningful rating of higher education institutions across a space as complex as Africa, which has so many diverse higher education systems and where institutional data are scarce, can be very challenging.

The AUC plans to extend the pilot phase with another launch of the revised questionnaire. In doing so, it should take the abovementioned issues into account.

The questionnaire should be translated into French and Arabic to encourage Francophone and Arabophone institutions to participate.

A mechanism for external validation of the responses should also be put in place, as proposed in the AUC report itself. This, however, would invariably require significant human and financial resources, especially if far more institutions respond.

* African Union Commission, African Quality Rating Mechanism: 2010 pilot self rating of higher education institutions, summarised report, April 2012.

Goolam Mohamedbhai is the former Secretary General of the Association of African Universities and former Vice-Chancellor of the University of Mauritius.