Over the summer, the United States National Institutes of Health, or NIH, announced that it was abandoning its plan to cap the number of grants a researcher can hold, based on an index that takes into account the total amount of funding obtained by a principal investigator.
This came as a surprise to many observers, for it followed just a month after the NIH had explained its decision to limit grant support on the basis of a detailed quantitative analysis of research inputs and outputs, which confirmed that there were diminishing returns (in terms of citation-based scientific impact) on the money invested once a certain maximum had been passed.
It was thus considered rational to use the money freed up to offer grants to younger researchers, who were far from that maximum and would thus gain more in terms of productivity and impact.
That there are diminishing returns in science funding should surprise no one concerned with research evaluation, as such laws govern economic phenomena generally. For who can seriously believe that concentrating resources on a small number of people (and thus of ideas) maximises the probability of generating really new ideas and arriving at scientific breakthroughs? The evidence clearly showed that the NIH policy was sound.
By contrast, its critics essentially pointed to anecdotes and individual cases in their argument about the damage the new policy might do. Even in the hardest of sciences it is always possible to quibble about accuracy and reproducibility when one does not like certain results.
However, the diminishing returns case seems clear and a policy’s aims are based on overall benefits and cannot take into account every ‘molecule’ of the system.
Encouraging greater system-efficiency is bound to affect a few individual researchers. It was estimated that the new funding policy would affect about 6% of NIH investigators but would create about 1,600 new awards.
Remote vs face-to-face meetings
An analogous crisis had emerged a year earlier, when the Canadian Institutes of Health Research, or CIHR, announced that it would replace face-to-face reviewers' meetings in Ottawa with remote meetings, on the grounds that evaluations made before the meetings changed little during them, while the meetings cost millions of dollars to organise.
Scientists were quick to react and even asked the Minister of Health to intervene to block the decision. Again, anecdotes were offered about the role that discussions between members played in the decision-making process. But the idea that those face-to-face meetings could have their own biases – for instance, when a dominant individual imposed a view that went against earlier, more cool-headed, individual evaluations – was not entertained.
The protests, amplified by the media, drowned out a more thorough quantitative analysis showing that, in general, the meetings did not provide better assessments of grant applications than those made by individual reviewers working from home. But under this pressure, the CIHR abandoned its new policy.
These two cases suggest that whereas scientists like to promote so-called ‘evidence-based’ policies in all areas, they seem curiously less keen (or able) to do the same when it comes to science funding. This lack of reflexivity is disturbing.
It is true that science is probably the only system in which funding is limitless in the sense that ‘more research’ on any topic is always welcome to occupy new PhDs and postdocs. But input is not the same as efficiency.
This state of affairs is all the more surprising given that most scientists are eager to use pseudo-indicators like journal impact factors to evaluate articles, whether to boost their own position or to berate their competitors.
But whereas most bibliometric indicators are dubious when applied to individual researchers, they provide important information when used at the aggregate level.
Much as Boyle’s law says nothing about the individual atoms but provides precious information about the relation between pressure, volume and temperature of a gas, policy-makers need to better explain that they work at the level of the research system and use tools relevant at that scale. Anecdotal evidence does not create a valid case.
However, it is also true that even the most robust evidence cannot always defeat vested interests and ideologies, such as the rhetoric of 'excellence' that justifies the irrational concentration of resources on a very small number of supposedly 'excellent' people and research programmes that have already passed their peak efficiency.
As the saying goes, some have more money than brains. The concentration of research funding in a few hands not only goes against the data on diminishing returns; it also goes against the logic and common sense that tell us never to put all our eggs in one basket.
Yves Gingras is scientific director of the Observatoire des sciences et des technologies and Canada Research Chair in History and Sociology of Science at the Université du Québec à Montréal, Canada. Email: firstname.lastname@example.org