GLOBAL: A scorecard for higher education

The discussion on indicators for the management and control of higher education systems and institutions is by now a long-standing one. It appeared prominently in OECD countries in the 1980s within the context of higher education reforms conducted under the new public management concept.

Indicators have long been linked with the idea that the state should steer higher education institutions at a distance but that, in exchange, appropriate accountability instruments, above all indicator systems, needed to be set up to monitor policy and institutional performance.

Our research was produced under a joint project of two Unesco institutes, the International Institute for Educational Planning and the Unesco Institute for Statistics. The project was initiated to help statistical and planning units within ministries of education or higher education, mainly in developing countries, to create indicator systems for monitoring their higher education policies. It is based on the assumption that the development of indicator systems is not just a political endeavour but also a technical one.

The main thrust of the publication is thus a step-by-step methodology for constructing an indicator system. But the publication also takes care to place the topic within the context of public policy changes such as deregulation and the growing importance of performance monitoring as a counterpart to institutional autonomy. We therefore highlight the different uses of indicator systems, which may predominantly serve as instruments for public information, policy monitoring or management at system or institutional levels.

Unlike many other publications on the topic, our research does not take a critical stance towards indicator systems as such. We take it largely for granted that they respond to a justified demand for transparency from governments and stakeholders of higher education.

But the research alerts the reader to two important prerequisites. First, indicator systems can only be established if there is an operational information system in place in a country or within a higher education institution; in many developing countries, however, information systems are unable to provide the reliable and timely information from which an indicator system can be developed. Second, although we believe indicators are more useful when higher education systems and institutions have clear policy goals, we recognise this is often not the case in reality and many policy statements remain vague about their objectives.

The research presents a series of 10 steps to be followed by statisticians and planners. These include the identification of policy objectives and major issues from which a list of indicators will be derived. The methodology also comprises the listing, identification and location of data sources. Finally, it highlights the need for the calculation of indicators, verification of results and the presentation of indicators in a communicative format.
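To make the sequence concrete, here is a minimal sketch in Python of how such a pipeline might be organised in code. The names (Indicator, compute) and the graduation-rate example are our own hypothetical illustrations, not taken from the publication.

```python
# A minimal, hypothetical sketch of the indicator pipeline:
# objective -> indicator definition -> data sources -> calculation -> verification.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Indicator:
    name: str
    policy_objective: str                           # the objective it monitors
    data_sources: list[str]                         # where the raw figures come from
    formula: Callable[[Dict[str, float]], float]    # the calculation rule

def compute(ind: Indicator, data: Dict[str, float]) -> float:
    """Calculate the indicator and run a basic plausibility check."""
    value = ind.formula(data)
    # verification step: a rate should normally fall between 0 and 100%
    if not 0.0 <= value <= 100.0:
        raise ValueError(f"{ind.name}: implausible value {value:.1f}")
    return value

# Invented example: a gross graduation rate drawing on two data sources
graduation_rate = Indicator(
    name="Gross graduation rate (%)",
    policy_objective="Improve internal efficiency",
    data_sources=["graduate register", "population census"],
    formula=lambda d: 100.0 * d["graduates"] / d["population_of_graduation_age"],
)

print(compute(graduation_rate, {"graduates": 42_000,
                                "population_of_graduation_age": 400_000}))
```

Keeping the policy objective and data sources attached to each indicator mirrors the methodology's insistence that calculation comes only after objectives and sources have been identified.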

In addition to this practical road map towards the creation of an indicator system, the publication also discusses major angles of analysis that are commonly addressed, such as access, internal efficiency, relevance and external efficiency, the quality of higher education, professionalisation, capacity for research and innovation and, last but not least, equity. The meaning and modes of calculation of selected indicators related to these angles of analysis are also discussed.
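As an illustration of one such mode of calculation, the sketch below computes a standard access indicator, the gross enrolment ratio; the figures are invented for the example and the function name is ours.

```python
# Gross enrolment ratio (GER): total enrolment at all ages divided by
# the population of the official age group. Figures below are invented.
def gross_enrolment_ratio(enrolled: int, population_age_group: int) -> float:
    """GER (%) = total enrolment / population of the theoretical age group."""
    return 100.0 * enrolled / population_age_group

# e.g. 180,000 students enrolled, 1.2 million people in the 18-22 age group
print(f"GER = {gross_enrolment_ratio(180_000, 1_200_000):.1f}%")  # GER = 15.0%
```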

We also highlight some of the technical difficulties in calculating typical indicators, such as entry rates and indicators derived from financial data (higher education expenditure as a percentage of GDP), and the need to agree on clear definitions and acceptable data sources.
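The two calculations named above can be sketched as follows. The figures are invented; the point is that the result shifts with definitional choices (gross versus net entrants, which public and private spending counts), hence the need for agreed definitions.

```python
# Hypothetical sketches of the two indicators mentioned in the text.
def gross_entry_rate(new_entrants: int, population_entry_age: int) -> float:
    """New entrants of any age over the population of the typical entry age."""
    return 100.0 * new_entrants / population_entry_age

def expenditure_share_of_gdp(he_expenditure: float, gdp: float) -> float:
    """Higher education expenditure as a percentage of GDP."""
    return 100.0 * he_expenditure / gdp

print(f"{gross_entry_rate(60_000, 250_000):.1f}%")              # 24.0%
print(f"{expenditure_share_of_gdp(1.8e9, 120e9):.2f}% of GDP")  # 1.50% of GDP
```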

Taking into account the policy nature of indicator systems, we discuss the organisational structure and workflow of an indicator project. It is argued that an indicator project needs a structure for political oversight and stakeholder involvement as well as a management or operational structure to conduct the technical work involved.

Finally, we look at the use of indicators in international comparisons and the thorny issue of international rankings. International organisations such as the Unesco Institute for Statistics and the OECD prepare annual publications which may trigger policy debates on the performance of national higher education systems. This is even more the case with international rankings, which assign an ordinal position to higher education institutions or their departments. The report discusses rankings because indicators are used to construct them and because international ranking positions are used in national indicator systems.

While the report's many illustrations of existing indicator systems can at times make the reading a bit cumbersome, they add to its practical purpose. We would also have liked to give more space to the discussion of indicator systems at the institutional level. In general, however, the report represents an honest effort to detail the necessary process of developing an indicator system.

* Michaela Martin is programme specialist in governance and management at the International Institute for Educational Planning and Claude Sauvageot is Head of Sector for European and International Relations at the Directorate of Evaluation, Forecast and Performance in the French Ministry of Education.