Are university rankings the tip of the iceberg?
In the process, rankings have become a profitable industry - replete with perceived conflicts of interest and self-interest, along with self-appointed auditors - all of which, in this post-global financial crisis age, would almost certainly provoke concern in other sectors.
By monetising educational data in different ways, these initiatives are tantamount to new product development or revitalising products in response to new market opportunities or consumer demand.
At the same time, there is a prolific academic market in rankings with almost 500,000 entries on Google Scholar and millions on Google - of which this piece is one.
No one could have predicted the game changer rankings would become.
They captured the attention of policy-makers, the academy and other stakeholders immediately - and have held it unceasingly ever since. Their choice of indicators has effectively become the international norm for what constitutes quality.
Their results are interpreted as a measure of global competitiveness and the increasingly multi-polar character of higher education. Their unrelenting focus on the 'Top 100' has transformed a concept of 'world-class' into a strategy and put a premium on the status of elite universities and the university brand.
As a result, they have underpinned a profound transformation of our higher education institutions and systems while also placing investment in higher education and research and development high on the political and policy agenda.
Like credit ratings agencies, rankings wield immense influence over governments, higher education and society at large - with positive and perverse effects.
No doubt, their crude simplicity is what makes rankings infectious.
By focusing on a limited set of attributes for which (internationally) comparable data are available, rankings promulgate a small set of indicators as a meaningful measure of quality - indicators which are, in turn, easily communicated.
Yet, quality is a complex concept.
Most of the indicators used are effectively measures of wealth and socio-economic advantage, and privilege the most resource-intensive institutions and-or countries. Indeed, the difficulties encountered by both U-Multirank and AHELO, the OECD benchmarking initiative, highlight these issues.
Context remains fundamentally important. National or global, public or private, high or low socio-economic characteristics of the student cohort and the learning environment: all these dimensions can radically affect the performance of institutions and render simple comparisons meaningless.
On the other hand, rankings have acted as a wake-up call for higher education, challenging self-perceptions of greatness, by nations, by institutions and by individual academics. In a global marketplace, international comparisons are inevitable, leaving no room for self-declaration.
At a time of growing demand for and on higher education and rising costs, there is an emphasis on measuring outcomes, impact and benefit.
By placing consideration of quality, performance and productivity within a wider comparative and global framework, rankings have taken the debate outside the traditional bailiwick of higher education and placed it firmly onto the public and policy agenda.
Would this have happened otherwise?
With the involvement of the European Union via U-Multirank and the OECD via AHELO - Assessment of Higher Education Learning Outcomes - quality assurance has moved to the supra-national level, providing further illustration of the extent to which higher education has lost its role as the primary guardian of quality.
Even the United States, traditionally comfortable with regional accreditation processes built upon strong institutional autonomy, has moved to introduce a rating system. In the face of mounting public concern about the relationship between affordability, quality and value, Barack Obama's rating system was probably unavoidable.
These developments have accelerated what the European Union has long called the 'modernisation agenda' in recognition of higher education's importance in creating competitive advantage.
It has also increased the urgency surrounding the international discussion about 'quality' as part of the wider debate around transparency and public disclosure of student and institutional performance. After all, the public has a right to know whether its institutions are capable of delivering better outcomes for society.
In response and reaction to rankings, alternative methodologies and new formats have emerged. There is growing interest in benchmarking and-or profiling tools to compare and improve performance and demonstrate distinctiveness.
Some governments, such as Ireland and Norway, have begun to use these tools as part of their system of (re)structuring and resourcing strategies. Other governments, most notably Australia and the United Kingdom, have put institutional data online for easy accessibility and comparison.
Over time, the European Union will be able to provide a similar 'service' by effectively sucking institutional data into a single database operated by Eurostat, following initiatives already in train by Thomson Reuters and Shanghai Jiao Tong University.
Rate-my-professor sites may be scorned, but social media is likely to become higher education's 'TripAdvisor'.
These developments are part of an evolution towards a common data set.
These formats will put information directly into the hands of students, employers, peers and the general public. Who defines the criteria, who controls the data and how it is managed: these questions are part of the new educational battleground.
We may scorn rankings, but in retrospect they are likely to be only the tip of the iceberg.
* Professor Ellen Hazelkorn is director of the Higher Education Policy Research Unit, or HEPRU, at Dublin Institute of Technology and policy advisor to the Higher Education Authority, Ireland. Email: Ellen.firstname.lastname@example.org. This article is based on "Reflections on a Decade of Global Rankings: What we've learned and outstanding issues", European Journal of Education, Special Issue: Global University Rankings. A Critical Assessment, Volume 49, Issue 1, pages 12-28, March 2014.