Study raises doubts over classifications in subject rankings

A new study argues that inaccuracies in Elsevier’s mapping of journal subject classifications, together with the narrow subject fields used by two major global university ranking organisations, have distorted institutional scores in both of their subject rankings.

The study, “On the credibility of QS and the Times Higher Education ranking by subject area: Misalignment of subject mapping to academic disciplines”, published in Scientometrics on 7 April, claims that the misclassification of some publications by discipline has affected the calculation of research output and citation scores for institutions in both rankings.

Authored by Hussam Alshraideh and Mohamed Abdelgawad at the American University of Sharjah in the United Arab Emirates, the study cited as an example of erroneous subject mapping the fact that publications on fuel technology, nuclear engineering, and all other energy-related studies are classified under civil engineering in the THE ranking but under electrical and electronics engineering in the QS ranking.

“This is completely unfair, as many of these studies are conducted by researchers in mechanical or chemical engineering disciplines,” the study stated.

To demonstrate the effect of this erroneous mapping on the final ranking, the study obtained publication data for 13 of the top 20 institutions in the Arab World from 2017 to 2021, together with their citations up to mid-2022, as indexed in Scopus.

Following the QS and THE subject ranking methodologies, the study then re-ranked these institutions on the citations-per-paper and h-index indicators, using a modified subject mapping suggested by a sample of 12 faculty members from six different engineering departments at the authors’ institution.
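To make the two indicators concrete, the sketch below shows, in Python, how a re-ranking of this kind works. The institutions, papers, citation counts and subject mappings are entirely hypothetical; this illustrates the general approach rather than reproducing the authors’ exact procedure.

```python
# Illustrative sketch: re-ranking institutions on citations per paper
# and h-index under two different journal-to-discipline mappings.
# All data below are hypothetical, not figures from the study.

def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
    return h

# Each paper: (institution, journal category, citation count)
papers = [
    ("Uni A", "Energy", 40), ("Uni A", "Energy", 10),
    ("Uni B", "Energy", 25), ("Uni B", "Civil Eng", 5),
]

# Two competing journal-to-discipline mappings (both invented here)
ranker_map = {"Energy": "Civil Eng", "Civil Eng": "Civil Eng"}
revised_map = {"Energy": "Mechanical Eng", "Civil Eng": "Civil Eng"}

def rank(papers, mapping, subject):
    """Rank institutions within one subject by (citations/paper, h-index)."""
    per_inst = {}
    for inst, category, cites in papers:
        if mapping[category] == subject:
            per_inst.setdefault(inst, []).append(cites)
    scores = {inst: (sum(c) / len(c), h_index(c)) for inst, c in per_inst.items()}
    return sorted(scores, key=scores.get, reverse=True)

# The same papers yield different civil engineering rankings under each mapping.
print(rank(papers, ranker_map, "Civil Eng"))   # ['Uni A', 'Uni B']
print(rank(papers, revised_map, "Civil Eng"))  # ['Uni B']
```

Because the energy papers move between disciplines, the same institutions come out in a different order, or drop out of a subject altogether, under each mapping; this is the effect the study measured at scale.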

The study found that the new ranking differed considerably from the one calculated by QS and THE based on their controversial subject mapping.

“Many institutions (sometimes 10 out of 13) had their rank change in some subject areas, with the rank of some institutions dropping six ranks out of 13 in some cases!” the study pointed out.

Limitations of rankings

“We believe this study sheds light on the inaccuracies in subject rankings and the importance of coming up with a unified subject mapping to be used by the different ranking bodies,” the study argued.

Abdelgawad, co-author of the study, told University World News: “We found that the analysis behind certain components of university rankings is not as rigorous as commonly perceived. Specifically, some of the subject mapping methodologies used by QS and THE university rankings by subject contained significant inconsistencies, which ultimately led to inaccurate ranking results.

“Our study demonstrated that university subject rankings should not be viewed as absolute measures. They simply reflect the particular details of the indicators being used. Unfortunately, these details aren’t always accurate, and more concerning is that they’re typically not transparent to the general public.

“The takeaway message from the study to policymakers in higher education is that universities and educational programmes shouldn’t be evaluated solely based on rankings.”

He stressed that a university’s contribution to society was far more comprehensive than what could be captured by a limited set of indicators.

“It’s troubling to observe institutions actively seeking ways to climb rankings without making substantive improvements to their societal contributions. If such practices continue, rankings will lose their meaning,” he said, referring to Goodhart’s Law, which states: “When a measure becomes a target, it ceases to be a good measure.”

Subject classification: A ‘complex’ issue

University World News reached out to QS World University Rankings and THE World University Rankings for their views on the study’s arguments.

William Barbieri, communications manager at QS Quacquarelli Symonds (QS), told University World News: “While we have not had the opportunity to review the full paper, we appreciate any research that contributes to the ongoing discussion about university rankings and their methodologies.

“At QS, we take the accuracy and credibility of our rankings very seriously. Our subject rankings use a balanced set of indicators, including academic reputation, employer reputation, research citations, and other relevant metrics to offer meaningful insights into institutional performance across disciplines.”

Barbieri stated: “Mapping subject research in an increasingly interdisciplinary landscape carries inherent challenges. While no system can capture every nuance, QS remains committed to providing the fairest and most useful benchmarks for students, institutions, and policymakers.”

Barbieri noted that QS recognises the complexity of subject classification in academic publishing, including concerns about journal-to-discipline mapping that can affect citation-based metrics, but said that citations were “just one part of QS’ methodology, and we intentionally use multiple indicators for balanced assessment”.

He said that QS operates in partnership with Elsevier but that its analysis and definitions are shaped by wider academic and industry perspectives to ensure relevance and usefulness to students and stakeholders.

“This can lead to classifications that differ from those used in purely bibliometric systems,” Barbieri said. “We welcome dialogue with researchers and universities to help improve and standardise classification systems. Our aim is to reflect – not oversimplify – the diversity of global higher education,” he added.

A THE spokesperson told University World News it was important to note that THE ranked universities in 11 broad subject areas, with engineering in its entirety treated as a single subject ranking.

“We do not rank sub-disciplines of engineering separately,” the THE spokesperson said.

“Our subject rankings also use 18 separate performance metrics spanning well beyond bibliometric indicators to cover research, the teaching environment, international outlook and industry links.

“Research publications often span multiple disciplines, and any mapping between classification systems inherently involves approximations and requires ongoing review,” the THE spokesperson said.

“We are committed to collaborating with the academic community to continually assess all aspects of our methodology, including subject mappings, and we always welcome constructive feedback,” the spokesperson added.

Harmonising classifications

Angel Calderon, a global higher education expert and director of strategic insights at RMIT University in Australia, told University World News he was not surprised by the study’s findings, as there is no “universally accepted typology” that maps journal subject classifications to narrow fields of study.

“The various subject areas determined by rankings tend to be based on natural groupings, and there will be many instances where mismatches become evident,” Calderon said.

“As the authors highlight, the subject classification typologies used by the various ranking schemas do not correspond to any other existing classification standards,” said Calderon, who is the author of a 2023 study titled “Sustainability Rankings: What they are about and how to make them meaningful”.

Calderon said that while it was desirable to have one universal classification, or a correspondence between multiple and diverging classifications, harmonisation was in practice almost impossible to achieve, as there are many nuances and peculiarities across disciplines.

“At the same time, it is important to recognise that institutional structures vary between and across national systems. The subject classification that may be considered appropriate for one institution or national system will be irrelevant to another.

“While I believe it will be useful to harmonise subject classifications, I also believe we are a few years away from having one, as it will be necessary to have global consultation with the scientific community, ranking and bibliometric organisations, and government organisations,” he stated.

At the same time, Calderon said, it was necessary to accept that there were many competing standards.

“We are likely not the first to want to have one universal standard. However, the more we seek to harmonise standards, the more standards we end up having,” he stated.

Dmitry Kochetkov, associate professor in the Department of Probability Theory and Cybersecurity at RUDN University in Russia and deputy director for innovations in scientific communication at Scientific Electronic Library LLC (eLIBRARY.RU), told University World News the study demonstrated the “arbitrariness of the classification system through the mislabelling of research areas”.

“The dramatic change in the re-ranking of 13 Arab institutions based on a modified subject mapping shows that the rankings rely on subjective decisions rather than objective measures of excellence,” said Kochetkov, who is the author of a 2024 study titled “University rankings in the context of research evaluation: A state-of-the-art review”.

Kochetkov said that such inconsistencies “distort not only the numbers, but also the perception of institutional advantage, funding allocation, student admissions, and policy priorities”.

Referring to the authors’ call for the adoption of standardised classification systems, or for the use of artificial intelligence to analyse individual articles rather than relying on journal labels, Kochetkov said: “Journal classification, which is detached from the actual content of research, perpetuates inaccuracies.”

Multifactor labelling

“Being a deputy CEO of the academic electronic library eLIBRARY.RU, I strongly support the idea of article-level thematic classification. Currently, we use multifactor article labelling (vectorisation, co-citation, bibliographic coupling, etcetera),” he noted.
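By way of illustration, the minimal sketch below classifies an article by vectorising its text and comparing it with labelled discipline profiles, one of the multifactor signals Kochetkov mentions. The disciplines, example texts and nearest-profile rule here are hypothetical simplifications, not eLIBRARY.RU’s actual pipeline.

```python
# Illustrative sketch of article-level subject classification by text
# vectorisation. Discipline profiles and the abstract are invented;
# real systems also use co-citation and bibliographic coupling signals.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical term profiles per discipline
profiles = {
    "Mechanical Eng": "heat transfer combustion turbine fuel efficiency",
    "Civil Eng": "concrete structural load bridge seismic design",
    "Electrical Eng": "power grid converter voltage semiconductor circuit",
}

abstract = "combustion of alternative fuels for turbine efficiency"

# Vectorise the profiles and the new article in one TF-IDF space
vectoriser = TfidfVectorizer()
matrix = vectoriser.fit_transform(list(profiles.values()) + [abstract])

# Similarity of the article (last row) to each discipline profile
sims = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
label, score = max(zip(profiles, sims), key=lambda pair: pair[1])
print(label, round(score, 2))  # the label comes from the article's own text
```

The point survives even at this toy scale: the label is derived from what the article says, not from the journal it happens to appear in.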

Kochetkov said: “The journal subject labels are also employed, but we plan to replace them with Large Language Models.

“However, these shortcomings point to a deeper problem: the rankings rely on artificial frameworks rather than the actual interdisciplinary nature of scientific research.

“Thus, this study supports the criticism that university rankings lack validity and should be avoided in favour of assessments that take into account the complexity and context of scientific research.”