Ranking’s research impact indicator is skewed

The 2012 Times Higher Education World University Rankings comprised 13 indicators grouped into five clusters. One of these clusters consists of a single indicator, research impact, which is measured by normalised citations and which THE considers the flagship of the rankings.

It is noticeable that this indicator, prepared for THE by Thomson Reuters, gives some universities implausibly high scores. I have calculated the world's top universities for research impact according to this indicator.

Since it accounts for 30% of the total ranking, it can clearly have a significant effect on overall scores. The data are from the profiles, which can be accessed by clicking on each of the top 400 universities. Here are the top 20:

1. Rice University
1. Moscow State Engineering Physics Institute (MEPhI)
3. University of California Santa Cruz
3. MIT
5. Princeton
6. Caltech
7. University of California Santa Barbara
7. Stanford
9. University of California Berkeley
10. Harvard
11. Royal Holloway London
12. Chicago
13. Northwestern
14. Tokyo Metropolitan University
14. University of Colorado Boulder
16. University of Washington Seattle
16. Duke
18. University of California San Diego
18. University of Pennsylvania
18. Cambridge

There are some surprises here, such as Rice University in joint top place, second-tier University of California campuses at Santa Cruz (equal third with MIT) and Santa Barbara (equal seventh with Stanford) placed ahead of Berkeley and Los Angeles, Northwestern almost surpassing Chicago, and Tokyo Metropolitan University ahead of Tokyo and Kyoto universities and everywhere else in Asia.

It is not totally implausible that Duke and the University of Pennsylvania might be overtaking Cambridge and Oxford for research impact, but Royal Holloway and Tokyo Metropolitan University?

These are surprising, but Moscow State Engineering Physics Institute (MEPhI) as joint best in the world is a definite head-scratcher.

Other oddities

Going down a bit in this indicator we find more oddities.

According to Thomson Reuters, the top 200 universities in the world for research impact include Notre Dame, Carleton, William and Mary College, Gottingen, Boston College, University of East Anglia, Iceland, Crete, Koc University, Portsmouth, Florida Institute of Technology and the University of the Andes.

On the other hand, when we get down to the 300s we find that Tel Aviv, National Central University Taiwan, São Paulo, Texas A&M and Lomonosov Moscow State University are assigned surprisingly low places. Lomonosov is actually in 400th place for research impact among the top 400.

It would be interesting to hear what academics in Russia think about an indicator that puts MEPhI in first place in the world for research impact and Lomonosov Moscow State University in 400th place.

I wonder too about the Russian reaction to MEPhI as overall second among Russian and Eastern European universities. See here, here and here for national university rankings and here and here for web-based rankings.

Déjà vu

We have been here or somewhere near here before.

In 2010 the first edition of the THE rankings placed Alexandria University in the world's top 200 and fourth for research impact. This was the result of a flawed methodology combined with diligent self-citation and cross-citation by a writer whose lack of scientific credibility has been confirmed by a British court.

Supposedly the methodology was fixed last year. But now we have an indicator as strange as in 2010, perhaps even more so.

So how did MEPhI end up as world's joint number one for research impact? It should be emphasised that this is something different from the case of Alexandria. MEPhI is, by all accounts, a leading institution in Russian science. It is, however, very specialised and fairly small and its scientific output is relatively modest.

First, let us take a look at another source, the Scimago World Report, which gives MEPhI a rank of 1,722 for total publications between 2006 and 2010, the same period that Thomson Reuters counts.

Admittedly, that includes a few non-university institutions. MEPhI has 25.9% of its publications in the top quartile of journals. It has a score of 8.8% for excellence – that is, the proportion of its publications among the most highly cited 10% of publications. It gets a score of 1.0 for ‘normalised impact’, which means that it gets exactly the world average for citations adjusted by field, publication type and period of publication.

Moving on to Thomson Reuters’ database at the Web of Science, MEPhI has had only 930 publications listed under that name between 2006 and 2010, although there were some more under other name variants that pushed it over the 200 papers per year threshold to be included in the rankings.

It is true that MEPhI can claim three Nobel prize winners, but they received awards in 1958 and 1964 and one of them was for work done in the 1930s.

So how could anyone think that an institution with a modest and specialised output of publications, and a citation record that, according to Scimago, does not seem significantly different from the international average (Scimago uses the somewhat larger Scopus database; Thomson Reuters uses the more selective ISI Web of Science), could emerge at the top of Thomson Reuters' research impact indicator?

Furthermore, MEPhI has no publications listed in the ISI Social Science Citation Index and exactly one (uncited) paper in the Arts and Humanities Index on oppositional politics in Central Russia in the 1920s.

There are, however, four publications assigned to MEPhI authors in the Web of Science that are listed as being on the literature of the British Isles, none of which seem to have anything to do with literature or the British Isles or any other isles, but which have a healthy handful of citations that would yield much higher values if they were classified under literature rather than physics.


Briefly, the essence of Thomson Reuters' counting of citations is that a university's citations are compared to the average for a field in a particular period after publication.

So if the average for a field is 10 citations per paper one year after publication, then a single paper with 300 citations one year after publication would receive a normalised score of 30. If the average for the field were one citation, the same paper would score 300.

To get a high score in the Thomson Reuters research impact indicator, it helps to get citations soon after publication, preferably in a field where citations are low or middling, rather than simply getting many citations.
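The normalisation described above can be sketched in a few lines of Python. This is a toy illustration, not Thomson Reuters' actual algorithm; the function name and figures are my own:

```python
# Toy sketch of field-normalised citation scoring (illustrative only).

def normalised_score(citations, field_average):
    """One paper's citations divided by the average citation count
    for its field in the same period after publication."""
    return citations / field_average

# 300 citations in a field averaging 10 per paper -> normalised score of 30
print(normalised_score(300, 10))   # 30.0
# The same 300 citations in a field averaging 1 -> normalised score of 300
print(normalised_score(300, 1))    # 300.0
```

The same raw citation count is worth ten times more, in normalised terms, in a field where citations are scarce.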

The main cause of MEPhI's research impact supremacy would appear to be a biennial review that summarises research over two years in particle physics and is routinely referred to in the literature review of research papers in the field.

The life span of each review is short since it is superseded after two years by the next review so that the many citations are jammed into a two-year period, which could produce a massive score for ‘journal impact factor’.

It could also produce a massive score on the citations indicator in the THE rankings. In addition, Thomson Reuters weights the indicator by country: institutions in countries where citations are generally low receive some extra value added.

The 2006 “Review of Particle Physics” published in Journal of Physics G, received a total of 3,662 citations, mostly in 2007 and 2008. The 2008 review published in Physics Letters B had 3,841 citations, mostly in 2009 and 2010, and the 2010 review, also published in Journal of Physics G, had 2,592 citations, nearly all in 2011. Someone from MEPhI was listed as co-author of the 2008 and 2010 reviews.

It is not the total number of citations that matters, but the number of citations that occur soon after publication. So the 2008 review received 1,278 citations in 2009, but the average number of citations in 2009 to other papers published in Physics Letters B for 2008 was 4.4.

So the 2008 review received nearly 300 times as many citations in the year after publication as the mean for that journal. Add the extra weighting for Russia and there is a very large boost to MEPhI's score from just a single publication. Note that these are reviews of research so it is likely that there had already been citations to the research publications that are reviewed. Among the highlights of the 2010 review are 551 new papers and 108 mostly revised or new reviews.
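The arithmetic can be checked in a couple of lines, using the figures quoted above (an illustrative sketch, not part of any ranking code):

```python
# Citations to the 2008 Review of Particle Physics in 2009, set against
# the mean 2009 citation rate for other 2008 papers in Physics Letters B
# (both figures as quoted in the text).
review_citations_2009 = 1278
journal_mean_2009 = 4.4

ratio = review_citations_2009 / journal_mean_2009
print(round(ratio))  # roughly 290 -- "nearly 300 times" the journal mean
```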

If the publications had a single author or just a few authors from MEPhI then this would perhaps suggest that the institute had produced or made a major contribution to two publications of exceptional merit. The 2008 review in fact had 173 co-authors. The 2010 review listed 176 members of the Particle Data Group who are referred to as contributors.

It seems then that MEPhI was declared the joint best university for research impact largely because of two multi-author (or contributor) publications to which it had made a fractional contribution. Those four papers assigned to literature may also have helped.

As we go through the other anomalies in the indicator, we find that the reviews of particle physics contributed to other high research impact scores. Tokyo Metropolitan University, Royal Holloway London, the University of California at Santa Cruz, Santa Barbara and San Diego, Notre Dame, William and Mary, Carleton and Pisa also made contributions to the reviews.

This was not the whole story. Tokyo Metropolitan University benefited from many citations to a paper about new genetic analysis software and Santa Cruz had contributed to a massively cited multi-author human haplotype map.

Number of authors

This brings us to the total number of publications. The reviews had more than 100 authors or contributors, yet for some of their institutions the citations had no discernible effect on the indicator, and for others very little.

Why the difference? Here size really does matter and small really is beautiful.

MEPhI has relatively few publications overall. It only just managed to cross the 200 publications per year threshold to get into the rankings. This means that the massive and early citation of the reviews was averaged out over a small number of publications. For others the citations were absorbed by many thousands of publications.
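The size effect can be seen in a toy calculation in which one paper scores 290 times the field average and every other paper scores exactly the average. All of the figures here are invented for illustration:

```python
# Toy illustration of why size matters (all figures invented).
# One outlier paper scores 290; every other paper scores the
# field average of 1.0.

def mean_normalised_impact(outlier_score, n_papers, baseline=1.0):
    """Average normalised score across n_papers, one of which is an outlier."""
    return (outlier_score + (n_papers - 1) * baseline) / n_papers

# An institution with ~1,000 papers over five years (a MEPhI-sized output)
print(round(mean_normalised_impact(290, 1_000), 3))   # 1.289
# An institution with ~50,000 papers over the same period
print(round(mean_normalised_impact(290, 50_000), 3))  # 1.006
```

The same outlier lifts the small institution's average by nearly 30%, and the large one's by barely half a percent.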

These anomalies and others could have been avoided by a few simple and obvious measures. After the business with Alexandria in 2010 Thomson Reuters did tweak its system, but evidently this was not enough.

First, it would help if Thomson Reuters scrutinised the criteria by which specialised institutions are included in the rankings. If we are talking about how well universities spread knowledge and ideas, it is questionable whether we should count institutions that do research in one or only a few fields.

There are many methods by which research impact can be evaluated; the full menu can be found on the Leiden Ranking site. Use a variety of them, especially measures like the h-index that are specifically designed to work around outliers and extreme cases.

It would be sensible to increase the threshold of publications for inclusion in the rankings; the Leiden Ranking excludes universities with fewer than 500 publications a year. If a publication has multiple authors, divide the number of citations by the number of authors. If that is too complex, then start dividing once the number of authors reaches 10 or a dozen.
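Fractional counting of this kind is simple to state. Using the figures quoted earlier for the 2008 review (the function is my own illustration, not any ranker's formula):

```python
# Fractional counting: divide a paper's citations by its number of
# authors before normalising (illustrative sketch only).

def fractional_citations(citations, n_authors):
    """Each author's institution is credited with an equal share."""
    return citations / n_authors

# The 2008 review: 1,278 early citations shared among 173 co-authors
print(round(fractional_citations(1278, 173), 1))  # 7.4 citations per author
```

A fractional share of 7.4 citations would still be well above the journal mean of 4.4, but nowhere near 290 times it.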

Do not count reviews, summaries, compilations or other publications that refer to papers that may have already been cited, or at least put them in a separate publication type. Do not count self-citations. Even better, do not count citations within the same institution or the same journal.

Most importantly, calculate the indicator for the six subject groups and then aggregate them. If you think that a fractional contribution to two publications justifies putting MEPhI at the top for research impact in physics, go ahead and give them 100 for natural sciences or physical sciences. But is it reasonable to give the institution any more than zero for arts and humanities, the social sciences and so on?
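A toy sketch of that aggregation, with subject-group names and scores invented for a MEPhI-like institution:

```python
# Sketch of the suggested fix: score each subject group separately,
# then average, so a single field cannot carry the whole indicator.
# Group names and scores are invented for illustration.

subject_scores = {
    "physical sciences": 100.0,  # a MEPhI-like strength in physics
    "engineering": 30.0,
    "life sciences": 0.0,
    "clinical and health": 0.0,
    "social sciences": 0.0,
    "arts and humanities": 0.0,
}

overall = sum(subject_scores.values()) / len(subject_scores)
print(round(overall, 1))  # 21.7 -- far from a world-leading score
```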

So we have a ranking indicator that has again yielded some very odd results.

In 2010 Thomson Reuters asserted that it had a method that was basically robust, transparent and sophisticated but which had a few outliers and statistical anomalies about which they would be happy to debate.

It is beginning to look as though outliers and anomalies are here to stay and there could well be more on the way.

It will be interesting to see if Thomson Reuters will try to defend this indicator again.

* Richard Holmes is a lecturer at Universiti Teknologi MARA in Malaysia and author of the University Ranking Watch blog.

Apology for factual error

In an earlier version of this article in University World News I made a claim that Times Higher Education, in their 2012 World University Rankings, had introduced a methodological change that substantially affected the overall ranking scores. I acknowledge that this claim was without factual foundation. I withdraw the claim and apologise without reservation to Phil Baty and Times Higher Education.

Richard Holmes
