
Must academic evaluation be so citation-data driven?

For the past quarter-century, I have reviewed cases for academic tenure and promotion in many disciplines in many countries. Usually what is required is an evaluation of the candidate’s research record. Teaching and, increasingly, public engagement are also mentioned as factors to weigh. Grant income often appears as a consideration, but not so as to give the impression that I am being asked to judge the candidate’s suitability to make partner in a law firm.

Of course, the dossier is normally prefaced by the candidate’s own ‘statement of purpose’, but it is made clear that the candidate will not be judged primarily on his or her own terms.

The most striking common requirement of these reviews is that the candidate should be judged against people ‘at a comparable stage in their career’. And lest such comparisons seem too invidious, the university helpfully supplies the candidate’s ‘citation data’ – that is, the number of times that the candidate’s work is cited in the academic literature. And to make my task still easier, I am also often provided with such data for other people at a similar career stage in the same field at ‘peer institutions’.

Implied in all this is that the candidate’s success will depend on a competition in an imaginary academic labour market, as constructed by the university administrators soliciting my judgement.

This marks a subtle yet profound change to the norms of academic conduct. After all, academia has traditionally differed from the business world by the absence of a generalised incentive to seek out what rivals are doing in order to get one step ahead of them.

Indeed, as Max Weber famously counselled graduate students a hundred years ago, ‘science as a vocation’ is simply about following the path of inquiry wherever it may lead, even if no one else follows. To be sure, this norm of intellectual independence, or ‘academic freedom’, also helps to explain why much – if not most – of what academics publish remains in a state of benign neglect.

Enter the Web of Science

While academic rivalries have certainly existed, especially in the natural sciences, they have tended to be quite focused and relatively self-organised, often turning on a clash of personalities. The race to decipher the structure of DNA is iconic in all these respects.

In contrast, when I am asked to review a tenure and promotion case, the competitive field is understood to be potentially vast and already set in place. Moreover, that is not simply because the commissioning university has imagined it that way.

That way of looking at things has been made possible by the existence of the Web of Science, formerly known as the Science Citation Index, a United States-based commercial enterprise that began early in the Cold War with some start-up money from the National Science Foundation to map the geopolitics of knowledge production.

The Web of Science is founded on the principle that regardless of whether academics cite someone whom they regard as good, bad or indifferent, the very act of citation acknowledges that the cited party exerts some power over the field of play. The name of the game for the academic, then, is to become what is commonly called a ‘strong market attractor’.

When the classical languages were still studied, the word used was ‘cynosure’, which comes from the Greek for the constellation that contains the polestar. The Greeks gave the constellation this name because they thought it looked like a dog’s tail, which is what ‘cynosure’ literally means.

In that spirit, we might wish to consider whether the tail is wagging the dog in the case of academia’s fixation on citations as a measure of intellectual value.

The H-index: a measure of consistency?

A citation-based metric that many universities favour these days is the H-index, named for Jorge Hirsch, a physicist at the University of California, San Diego, who in 2005 proposed that neither the number of publications nor the number of citations was by itself a reliable measure of a researcher’s market value.

On the one hand, one might produce many publications that no one cites; on the other, one might produce only one publication that everyone cites. Neither looks good as a long-term investment prospect, which, after all, is what academic tenure and promotion are all about.

Instead, what you want – and what the H-index purports to provide – is a measure of the researcher’s capacity to command the attention of fellow academics with anything that he or she publishes.

The intuition informing the H-index is simple and plausible. If your publications are ranked in descending order of the number of citations they have received, then your H value is the rank of the last publication whose citation count is at least as high as its rank.

Take two researchers with five publications each. The citations of the first are 100, 50, 12, 2, 2 and those of the second are 50, 20, 10, 7, 2. The first has an H value of 3, since the top three of the five papers each have at least three citations while the fourth has fewer than four; the second has an H value of 4 – even though the first researcher’s total citation count is nearly double that of the second.
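
For readers who want the rule spelled out, here is a minimal Python sketch of the calculation just described. The two citation lists reproduce the hypothetical researchers above; the function is only an illustration of the general definition, not the implementation used by any particular citation database.

    def h_index(citations):
        """Return the largest h such that h publications have at least h citations."""
        ranked = sorted(citations, reverse=True)        # order papers by citation count, highest first
        h = 0
        for rank, cites in enumerate(ranked, start=1):  # rank runs 1, 2, 3, ...
            if cites >= rank:
                h = rank                                # this paper's citations still match or exceed its rank
            else:
                break                                   # from here on, citations fall below rank
        return h

    print(h_index([100, 50, 12, 2, 2]))  # first researcher -> 3
    print(h_index([50, 20, 10, 7, 2]))   # second researcher -> 4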

Of course, the first researcher may have earned so many citations because he or she had an unrepeatable ‘genius moment’, as Einstein did in 1905 when he published three articles that changed the face of physics. Nevertheless, the second researcher is arguably more consistent, staying just ahead of the field’s frontier and seeing what others are already looking for shortly before they do. And that may be what universities are looking for these days.

This situation is ripe for analysis on several fronts. For example, one might wonder how academics managed to get themselves into this situation in the first place. Such a question is the stock in trade of what is nowadays called ‘critique’. But to those who don’t see the value in crying over spilt milk, there remains the question of where academics go from here.

The H-index poses an interesting challenge because it takes seriously two worthy but normally countervailing ideas: that inquiry is in the first instance self-determined, yet progress, in the final instance, is made collectively. When the two are taken together, as the H-index takes them, the desirability of becoming a ‘strong market attractor’ starts to look reasonable. But this conclusion involves several background assumptions that may or may not be so reasonable.

One elementary assumption of the H-index is that you must be prolific in order for your H value to reach world-class levels. Hirsch himself suggested that a value of 45 is needed to become a member of the US National Academy of Sciences, and that among living scientists the ones with the highest H values are in the 150-200 range.

To be sure, the universities that ask me to judge the H-index of a candidate for tenure or promotion always stipulate a realistic reference class for the candidate, but a clear direction of travel is presumed for the candidate’s career – namely, an ever-increasing H value.

How to play the game

What sort of working conditions enable a researcher to publish the number of citable works that corresponds to those very high target H values? Co-authors are invariably involved, and sometimes the lead researcher may not be aware of everything – appropriate or otherwise – that the co-authors are doing.

This was made abundantly clear when the Nobel Prize-winning cancer researcher David Baltimore, who has one of the very highest H values in the life sciences, was caught up in a scientific misconduct investigation of one of his co-authors in the 1990s, resulting in his resignation from the presidency of Rockefeller University.

But we can delve deeper. The idea that researchers with high H values are ‘strong market attractors’ presupposes that researchers are free to cite each other, in which case the rank order of researchers’ H values is ‘spontaneously generated’. But that’s not really how academic peer review works.

Editors often require researchers to insert citations to favoured authors as a condition of publication. Indeed, more experienced researchers normally cite such authors pre-emptively, so as not to have to be told.

In effect, editors impose a toll on researchers who want to enter the citation market by forcing them to credit people in their articles who may not have intellectually contributed to whatever is claimed to have been done. The practice is typically justified in terms of the need to acknowledge established precedents in order to legitimise one’s own line of research.

Whatever one makes of this justification, the practice itself casts serious doubts on whether the academic citation market is sufficiently ‘free’ to inspire confidence that the rank order of researchers’ H values truly reveals whose work has substantively mattered the most to the research community.

In this respect, the recent turn to ‘Altmetrics’, which are largely internet-based citations that lie beyond the official reach of academic peer review, is a positive step toward what the H-index is trying to achieve.

One might even look forward to a day when academics take seriously Google’s capacity to allow readers of an academic article to engage intellectually with it at their own level without the training wheels provided by citations. In that utopian world, it will become generally recognised that a formal citation is tantamount to no more than virtue signalling.

Steve Fuller is Auguste Comte Chair in Social Epistemology at the department of sociology, University of Warwick, United Kingdom. He will be leading a seminar hosted by the International Centre for Higher Education Management titled ‘Professors versus Robots: Is there value in being human in an automated university?’ on Thursday 4 October 2018 at the University of Bath, UK.