
University rankings should measure what we truly value

What is the true value of research and knowledge? What do we mean by excellence? Is it the reputation of the journal or platform on which that knowledge is shared, or is it the relevance of that research to society’s needs?

Is it the way the research is created and debated through processes that bring diverse voices and ideas into the conversation, and that recognise the different shapes and forms that knowledge might take?

Is it the culture and environment within an institution that creates spaces for critical learning, connects its research to its teaching and sees its graduates as a vital part of its contribution to a knowledge system?

If excellence lies in these things, then we need better, more equitable ways of assessing and evaluating research.

Across the globe, publication remains the primary way in which the value of research is judged. Moreover, research tends to be judged not on whether it was published in the most appropriate place, to be most accessible to its intended readership, but on whether it was published in a ‘top’, ‘high impact’ or ‘international’ scientific journal.

Measuring the real-world impact of research is not easy. But it is easy (or at least easier) to count publications and citations of those publications, to construct metrics and to track readership digitally, showing how papers and journals perform against each other. These then become convenient proxies for what we’re really interested in – how much of a contribution is that research making in the world?

The result is that we judge the value of whole bodies of knowledge, and the careers of the researchers who produce them, by publication in a collection of scientific journals that are predominantly published in and by the North, with relatively little understanding of the environments in which knowledge is produced and used across the world.

Rankings

The distortions that a reliance on publication measures has created have been amplified by the incorporation of those measures into the global university rankings.

In the effort to measure the ‘excellence’ of universities as a whole, and to compare them with their peers, publishing data has become a significant part of the composite of metrics used to construct the league tables of the Times Higher Education World University Rankings and the ‘Shanghai’ Academic Ranking of World Universities.

While these have expanded in recent years to include universities across the world, the research that is counted is that which is captured by Northern citation databases. The knowledge produced and published in journals in and of the South, or published in other ways, is often rendered invisible and quite simply doesn’t count.

A recent ranking now seeks to measure contribution to the Sustainable Development Goals. On the surface this seems positive: a recognition of the broader social contributions universities can make. And while the overall order has been shaken up – the wealthy US and UK universities no longer occupy the top slots – those that rise to the top are still relatively well-funded universities in Europe, North America and Australia.

Although universities in Asia, the Pacific and Latin America are starting to become visible, the picture still suggests that doing well is significantly determined by existing privilege and wealth – and by language (English). The ideas and contributions of many universities in the South to their communities, their countries and the world at large are still rendered largely invisible.

Impact in the South

In our work with academics and researchers in the South, we bump into this problem on a daily basis: academics want to do work that is relevant to their societies, but also to progress in their careers, becoming visible to their peers and recognised by their institutions.

To do the former they may need to invest significant time in work with communities or groups of practitioners or policy-makers, or devote more energy to their teaching to develop the talents of the next generation; to do the latter they must publish in ‘top journals’ and do work that is sufficiently interesting to global science.

They are either forced to choose between the two or to straddle both uncomfortably, getting enough of their work into the right journals that it ‘counts’ while finding space to develop their teaching, or to engage with users alongside it, as we found in a recent study of experiences of and attitudes to open access in the South.

This is often felt more keenly by those who encounter further obstacles within the system: early-career researchers struggling to establish themselves; women juggling professional and personal responsibilities in institutions that favour male colleagues; researchers in smaller, rural universities with fewer connections to global networks and international partnerships, or with less robust systems for research training and mentorship; and researchers working in less popular or under-funded disciplines.

Rankings distort how knowledge is produced

That our systems for recognising and rewarding research don’t work as they should – and that new ways to assess and evaluate research are needed – has been widely recognised.

We could, of course, ignore the lists and the rankings – whether of top journals or top universities. The problem is the influence that these systems of assessment have on what research is conducted – the questions that are asked and the problems that are tackled, how that work is done, the engagement of local communities and the methods deployed to gather and analyse data – and the ways in which it is communicated and made available to those who could use it.

In turn, these pressures distort not just individual research projects, but research agendas and the trajectories of individual researchers. All too often, excellence simply means what is produced by the elite centres of research and by the people who work within them.

The rankings go further too – encouraging university leadership to invest in the things that are most likely to improve their ranking, and not necessarily in what will most benefit their staff, students and communities.

That might be a global problem, but it is particularly pertinent to scholars and researchers in the South, who to a large extent must perform within a system whose parameters and norms are set in the North.

Efforts to reform the system

The conversation is shifting – although it’s still heavily influenced by the more powerful research nations of the world. Initiatives such as DORA have pushed the global science system to ditch journal-based metrics and to rethink the way research is measured and assessed. The Global Research Council has published a new report urging research funders to rethink the ways in which they assess research.

The Research Quality Plus framework of the International Development Research Centre (IDRC) has demonstrated that when research is judged against criteria such as relevance to local knowledge needs and connections to local communities – as well as scientific rigour – work undertaken by Southern researchers outperforms work produced by academics in the North.

Reimagining it from the South

The problem is that Southern researchers and Southern institutions are being judged and are judging themselves against metrics and systems of assessment that have been developed in the North. Although they are claimed to be universal measures of quality and excellence, they are deeply rooted in the research economies of the North and reward institutions with the resources to invest in the types of research and knowledge that the North judges to be valuable.

Erika Kraemer-Mbula and colleagues turned the excellence debate on its head to ask instead what excellence might look like if it was defined from the South. As they say, “What the South does not lack is scientific talent”; what it needs are ways to recognise and value its talent. The word value is important here. Because at the root of all of this is what we judge to have value in the world.

New models of the university

Our models for what a university should look like, and what it should do and how, are dominated by particular types of institutions, developed in Europe and North America over several centuries.

But if knowledge is a product of the ecosystem, the community, the culture and the society in which it is produced, then it stands to reason that the institutions that produce it should be too. Our very idea of the university, of knowledge institutions, needs to shift, to be more inclusive of alternative ways of creating and communicating and making use of knowledge.

In Northern Uganda, Gulu University is on a mission to become a university for and of the community – to be rooted in the knowledge needs of the people it seeks to serve, and to involve them in the very processes of teaching, learning, researching and debating ideas.

There are plenty of other examples too – of universities seeking to become less elite spaces, more open to different forms of knowledge and to people beyond academia and more explicit in their work to create and curate knowledge which is valued and can be used by those outside of academic communities.

In Mexico, the Intercultural University of Veracruz has a mission to equip young people to serve the needs of indigenous, rural communities under-served by the institutions of capital cities, while universities like Uganda Martyrs University are trying to cultivate young people with a particular ethic so that they can bring new approaches and new leadership into business, government, or social and community sector organisations.

Rethinking value, to attract investment

Of course, these are also questions of funding and investment. But making the argument for that investment will require rethinking how knowledge should be valued, designing better ways of making that value visible and, where needed, building new assessment processes.

What we need are opportunities for national research systems in the South to define their own frameworks for assessing and judging the value of research, and for assembling the evidence to demonstrate that value and contribution – not as improvements on the existing system, or as ‘adjustments’ that start from the same metrics, but by starting with what they want from research and knowledge, and from the people and institutions that produce and communicate it, and then deciding what to measure and evaluate, and how.

Perhaps that might even be done with an understanding that, in a world of complex, cross-community and cross-border challenges, we ought to encourage and reward collaboration, not just competition.

This could be done by a single science council, or by a group of Southern science agencies acting together. Importantly, they would need to work with experts in measurement and evaluation – to avoid the traps and failures of existing approaches – and to bring together a diverse group of users as well as producers to develop something that really addresses the social value of research and knowledge.

We won’t get to a more equitable knowledge ecosystem if we don’t have a better way to recognise and reward what we most need from research, and what we really ought to value more.

Jon Harle is director of programmes at INASP, an international development charity working with a global network of partners to improve access, production and use of research information and knowledge, so that countries are equipped to solve their development challenges.