Are the SDGs being used to rank impact or monetise data?
Education’s potential for providing people and their communities with the intellectual, economic and societal skills and tools essential for personal opportunity – and national development – has engendered a growing focus on valuing, demonstrating, assessing and measuring impact.
Impact has tended to be measured in terms of papers and citations – or rather interactions between academics. But there is a need to go beyond benefit for the scientific community and include issues relating to public good, public value and social contract.
Whether it involves accounting for or measuring research impact, community engagement or the contribution to learning outcomes, most countries have policies and/or practices – obligatory and voluntary, and sometimes aligned with funding.
Over the years, global university rankings have entered this hotly contested and competitive space.
Evolution of impact rankings
There is a growing cluster of rankings and frameworks that focus on engagement themes or emphasise higher education’s social, cultural and economic impact and value for local communities. More recently, rankings have begun to focus on sustainability.
Among the most well-known is UI GreenMetric World University Rankings. An initiative of Universitas Indonesia, it was launched in 2010.
There is also the Sustainability Tracking, Assessment and Rating System (STARS). Dating from 2006, it is a self-reporting framework, operated by the US-based Association for the Advancement of Sustainability in Higher Education, through which colleges and universities measure their sustainability performance.
More recently, the World’s Universities with Real Impact (WURI) ranking was launched in 2021, covering industrial application, entrepreneurial spirit, ethical value and student mobility and openness.
Also, QS, in partnership with Elsevier SciVal, introduced an SDG filter to its university rankings as of 2021.
Times Higher Education (THE) originally planned to focus primarily on economic impact. However, this idea was dropped following extensive consultation at and after its inaugural Innovation and Impact Summit in 2016.
In 2018, THE announced that the UN Sustainable Development Goals (SDGs) were a more suitable framework. The decision is not surprising. The SDGs have become a powerful policy and strategic influence on governments and institutions, as well as on research and other agencies. They have transformed policy agendas and practices everywhere.
THE Impact Rankings 2022
The fourth edition of the THE Impact Rankings was released on Wednesday 27 April.
Of more than 20,000 officially accredited or recognised higher education institutions worldwide, according to the IAU World Higher Education Database, the THE Impact Rankings for 2022 includes 1,406 universities from 106 countries or regions. This compares with 450 universities in the inaugural 2019 ranking.
The methodology assesses activity across four areas: research, stewardship, outreach and teaching. With the exception of data from Elsevier, universities submit evidence, examples and data against at least four of the SDGs.
The overall score is calculated on the basis of the three best SDG scores, plus performance against SDG 17. The latter aims to “strengthen the means of implementation and revitalise the global partnerships for sustainable development”.
A genuine effort to measure impact?
Most welcome is the inclusion of lesser-known universities and emerging countries among the top-ranked institutions compared with more traditional global rankings. Overall, 19 universities from 14 countries and regions achieve number one positions across the rankings.
While the United Kingdom has the most universities in the top 100, Pakistan is the third most-represented nation overall and Turkey is the fourth. Taiwan, India, Brazil, Malaysia, Indonesia, Thailand and Saudi Arabia also feature highly.
The absence of the world’s elite is both striking and a relief. However, this may not be due to their poor(er) performance but rather their choice not to participate. Participation is, after all, voluntary – and absence may simply be a statement about the rankings’ perceived status and value.
The 2022 edition is the fourth iteration of the methodology. THE describes the format as having a low entry threshold because it places less emphasis on data and more on evidence. While research still accounts for 27% of the score per SDG, THE argues this represents only 7% of the overall score.
This may seem a significant improvement on conventional rankings, but it means the process is wholly reliant on self-reported and interpreted data.
Another vulnerability is that THE itself evaluates each submission. Not only is gathering the material for submission a lot of work – especially for smaller and emerging institutions and countries – but it is unlikely that THE can control or validate the accuracy of the information provided by the universities.
THE Methodology 2022 only says that “points are assigned according to the answer”. Unlike comparable public competitive processes, everything is conducted behind closed doors.
Nor is it clear that THE has the expertise for this type of work.
Anyone familiar with evaluating large-scale projects will understand the magnitude of the work involved and the integrity and transparency the process requires. Having an unnamed advisory board is not equivalent to having a wholly independent third party conduct the assessments.
The complexities of evaluation on this scale may explain the number of universities tied for position.
Rankings and Ukraine
The Russian Federation and Belarus are included in the final results but appear faded out. In fairness, these are challenging times for those keen to retain academic engagement, as argued in University World News by Philip Altbach, Hans de Wit and Jamil Salmi.
Richard Holmes of University Ranking Watch suggests that to redact Russian universities would deny access to “useful information about Russian scientific and research capabilities”.
On the other hand, both THE and QS have indicated their intentions to end business with Russia. THE said it would “take steps to ensure that Russian universities are given less prominence in the rankings, and that their university profiles are not available”, while QS has said it “will redact Russian and Belarussian entries in new QS university rankings”.
By trying to have it both ways, they seem to be admitting their rankings are an editorial construct.
Ranking impact or monetising data?
Rankings may have started life as an information tool for the global era of higher education. But today they are principally a mechanism to collect and monetise data.
As one ranker said to me: “Rankings themselves cannot make money; one has to find funding or make money to support ranking activities.” No wonder THE describes itself in its accompanying press releases as first and foremost a “trusted data partner to global higher education”.
Until recently, too little attention had been paid to rankings as a business – and to the increasing integration between rankings, publishing and data analytics, as detailed by George Chen and Leslie Chan.
The global higher education intelligence business has created “vast data lakes” containing “triple-digit billions of data elements”. Owning data-rich resources, as well as the smart tools to capture and interpret them, is where the real money and power lies.
Data is essential for good governance. But lacking comprehensive databases and analytic capacity of their own, governments and higher education have effectively allowed the collection, management, warehousing and analysis of higher education data to be privatised and monetised.
The best that can be said about THE Impact Rankings is that they highlight the importance of the SDGs and provide a tool for self-assessment which higher education institutions can then showcase to students and others. This may be enough for some people.
Ultimately, the real value of THE Impact Rankings is the lucrative treasure trove of institutional data it collects.
As Alex Usher, president of Higher Education Strategy Associates, says, the prize is getting “universities hooked on the rankings” so they can “sell their data division’s analytical services – which are geared to helping institutions understand how to improve themselves in those rankings…”
As the era of self-regulation for Big Tech nears an end, similar questions about data ownership, governance and regulation must be asked of rankings, publishing and data analytics conglomerates.
Ellen Hazelkorn is joint managing partner, BH Associates, and joint editor, Policy Reviews in Higher Education. She is co-editor of Research Handbook on University Rankings: Theory, methodology, influence and impact.