RUSSIA

Dangers from the pressure to publish

My argument in this article is simple – that attempts to use publication indicators as a measure of academic performance are to a considerable degree responsible for the miserable state of Russian scholarly periodicals.

The move to publish in international publications as an alternative has been largely a gesture of despair on the part of academic administrators – an attempt to transfer the evaluation function which Russian journals were unable to perform properly to presumably more reliable international editions.

The problem arising at this point is that this change of tactics puts the latter under the same pressure which has previously corrupted the former. One wonders whether a similar process of decay could be repeated now on a global scale.

Russia can arguably be considered the country where quantitative performance indicators based on academic publications were invented. University professors were obliged to publish an article every year as early as the 1830s and the members of the Petrine Academy of Sciences faced such a requirement even earlier.

The reason why publication indicators were so readily adopted in this country is not hard to fathom. From the very beginning Russian universities were a part of the state apparatus, a cog in the bureaucratic machinery.

Thus, according to the 1803 law that led to the emergence of the national system of higher education, every university was responsible for education matters in a certain territory (uchebny okrug). Its professors supervised gymnasiums, employed school teachers and advised parish schools on their choice of textbooks. The academic profession was fully submerged in the state bureaucracy.

As in other parts of the bureaucratic apparatus, this meant that bureaucrats were ultimately responsible for promoting the most worthy. Defining who is worthy in the academic sphere is, however, a problematic task. By default, academic achievements belong to an area that is impenetrable to outsiders, including one’s superiors in the state bureaucracy.

The bureaucrats could delegate the judgment to an academic’s immediate scholarly peers, but that would run counter to some of the strongest instincts of the Russian bureaucrats – to retain as much centralised control as possible.

To a certain degree at least, these instincts were based on an obsessive fear of collusion between the judges and those whom they were to judge. Throughout Russian academic history, senior bureaucrats were deeply suspicious of the prospect that someone entrusted with the responsibility of judging their peers would use this evaluative power to promote relatives (nepotism) or to trade a positive evaluation for other benefits (such as direct payments).

Different explanations can be given for this all-pervasive tendency. In bureaucrats’ defence it needs to be said that their fears were not totally ungrounded – Russia witnessed a long history of academic corruption, although it is not always easy to decipher cause and effect (see below). All in all, while delegating expertise to academic insiders could not be fully avoided, ministerial bureaucrats preferred to retain a certain degree of control.

This explains the importance of publications. First, publications are easily recognisable entities, even for a complete outsider. Second, they are highly visible to the members of a disciplinary community: if their quality is low, academics in a given field can be expected to make this fact public. Indeed, the contents of academic periodicals were widely discussed in the 19th-century Russian press.

Credible seals of approval

When academic quality is evaluated by counting the number of publications, collusion is still possible, but the network of collaborators involved would have to be much wider, including the editor-in-chief and probably a considerable part of the editorial board as well as peer reviewers if there are any.

Such a network is much more difficult to build, especially given the need to keep the affair secret. Moreover, an editor may prefer publishing better pieces and, other things being equal, be disinclined to enter into collusion.

Publications can be trusted as credible seals of approval from the disciplinary community. This was the line of reasoning that led Russian ministerial bureaucrats of the 19th century – as well as many subsequent academic administrators – to use publications as a reliable signal of academic merit. The usual practice experimented with in the early 19th century was either to require a certain number of publications from those occupying a certain position or to pay bonuses for each article on a piecemeal basis.

The problem was that while publications had certain in-built protection mechanisms against collusion, these mechanisms were not strong enough, especially since rewarding publications multiplied the incentive to publish.

To increase the chances of seeing their name in print, individual scholars might do more research – and this was the reaction the bureaucrats hoped for. They could, however, also try to get articles of lower quality into print, or seek to collude with editors and publishers.

Ironically, instead of destroying collusion between academic employers and employees, counting publications produced even wider and more extensive collusion without gaining much in terms of quality control.

Moreover, promoting publication led to many unexpected consequences, including the degradation of academic periodicals themselves.

Academic publications perform two main functions. They serve as vehicles for communicating ideas and as filters signalling which ideas are worth communicating. In the latter sense, they also suggest which individuals have valuable ideas. Overly intensive use of academic periodicals to fulfil this signalling function, however, may undermine their ability to perform both functions.

Most obviously, it creates an overload – everybody publishes as much as possible and, if circumstances permit, recycles their own work (self-plagiarism). This decreases the average quality of publications while greatly increasing their quantity, which inhibits navigation through the literature and creates a general feeling that 'anything goes' as far as publications are concerned.

Equally damaging is the fact that it may incentivise collusion between authors and editors, with editors trading publication space for some kind of benefits.

Guaranteed publication

Ties with journals may be sought for apparently benign reasons that, however, can also lead to the deterioration of the journal system as a whole. With one’s career prospects and a significant share of one’s income depending on publications, an academic is interested in making his or her article’s path into print as smooth and predictable as possible.

On the brighter side, that may result in a good match between journals and authors with authors submitting their texts to the journals that are most likely to accept them. In an ideal case, this matching helps to maintain the thematic profiles of journals and to create a hierarchy of quality.

There are dangers, however, as well. Under pressure to publish, authors prefer journals that can give them guarantees that their texts will be published on time. This preference is clearly incompatible with the very idea of blind or double-blind peer-review which is by its nature a highly unpredictable process.

Publication pressures make the cost of a matching process based on blind peer review enormous. Institutions seek to cut these costs by getting editors to invite academics to publish. For authors this means a guarantee that their text will be accepted. For the scholarly community, however, the fact that decisions on content come down to the taste of editors, rather than the advice of anonymous reviewers, creates a risk of dependency on the idiosyncratic whims of powerful individuals who may also be tempted to use their position to strengthen their own patronage networks.

Negative consequences

Returning to the history of Russian academia, it is not clear if the negative consequences of administrative stimulation of publication activity were already visible then. By the end of the Imperial period, Russian science was flourishing in any event, but that situation gradually changed over the course of the 20th century and the Soviet successors of the imperial administrators were much less happy with the achievements of Russian scholars.

By 1975 the falling quality of academic expertise was widely discussed; dissertation defence procedures were made more stringent and a new emphasis was put on journals. This was the period when the concept of peer-reviewed journals entered Russian legislation. The hope was that a working editorial board and reviewers would reduce an editor’s discretion and limit their power.

The USSR did not survive to see the results of these experiments. In the laissez-faire atmosphere of the 1990s the performance requirements survived – anyone holding an academic job was required to produce a certain number of publications – but control over their implementation was all but abandoned.

Instead of challenging the Moscow authorities, universities demonstrated compliance by starting periodicals called “Proceedings of university X” (Vestnik universiteta), subsidised by the university’s budget and publishing material from that university’s faculty only. Suggestions to publish these more widely met with unanimous objections.

It was also common to regard such editions as maintained for the benefit of the faculty of particular institutions. Outsiders, if they wanted to submit an article, were either rejected or requested to pay a sizable fee.

In addition to such institutional journals, some periodicals were printed by commercial publishers ready to accept anything for a fee. At least 90% of all periodicals existing in the first half of the 2000s belonged to one of these two categories. Along with them, a handful of mostly Moscow-based periodicals with a wider readership existed. They were ruled by autocratic editors and mainly published articles by members of their close circle – an inevitable result of the practice of soliciting papers.

International rankings

The period from the mid-2000s on was marked by renewed attempts by the ministerial bureaucrats to regain Russia’s intellectual leadership on the international scene and particularly to get the best Russian universities into international rankings.

As a part of these attempts, requirements for all those aspiring to academic degrees were made more stringent and included publishing the results of their work in peer-reviewed journals.

Since 2006, the ministry has issued, on an approximately yearly basis, lists of journals it considers to have a full-fledged peer-review procedure. Such journals are subject to surprise inspections: in 2014, for instance, they were asked to produce for inspectors from Moscow the anonymous reviews they had collected over several years.

These policies have had a mixed effect at best. The journals complied, but usually by imitating what the ministry wanted to see. Thus, it was not uncommon for the editor of a Vestnik to ask a prospective author to submit not only an article, but also reviews from supposedly anonymous reviewers.

Generally, the control policies greatly increased the complexity of the transactions needed to get a paper into print. Overall, they seem to have strengthened the networks involved in demonstrating compliance rather than destroying them. What is more, they produced an enormous workload that even the most virtuous academics could not carry.

Scandals included an episode when a journal that the ministry approved of and which was published by one of the most prestigious universities accepted a machine-generated text submitted by a prankster.

Partly in recognition of this failure, the government turned to international science as a source of unbiased judgment. Journals indexed by Web of Science and Scopus were included in the ministerial lists and publications in them were specifically rewarded.

There were even rumours of making international publication a necessary condition for obtaining a degree, a policy which was implemented in Kazakhstan some time ago.

Obviously, the government’s attempts to internationalise Russian science had many causes, with the desire to get Russian universities into the international rankings probably the most important. But the ministry demonstrated a preference for foreign experts even before positions in international rankings were adopted as a leading indicator of success, and in situations with no direct connection to ranking indicators.

The reason given by senior officials behind the scenes was that the ministry wanted to capitalise on the continuing isolation of post-Soviet science: while collusion was a problem within Russia, few cliques had the international connections needed to collude with foreigners.

Universities reacted to these new publication pressures from the ministry, and demand skyrocketed for the services of various international journals of negotiable selectivity “considered for inclusion in Scopus”. In some universities such publications were semi-officially recommended and professors who disregarded them were reprimanded. Publications regarded as accessible and potentially helpful were courted.

Faculty having contacts with these publications could request special resources from their universities to cultivate their relationships. In one of the most notorious incidents, international scholars were notified that their travel and accommodation would be paid for if they promised to refer to the host university favourably in the next reputation survey.

Exporting collusion

Overall, the most visible reaction to the ministerial bureaucrats’ attempts to internationalise Russian science was an effort by academics to export the circle of collusion beyond Russia.

Was this prospect realistic? There is some good news and some bad news. The good news is that Russian academics are too few and not resourceful enough to make a difference globally. Collusion requires offering something in return for compromising academic integrity – and here Russians simply do not have much to offer beyond paying the expenses of a handful of academic tourists in return for a favourable rating.

This group, together with the publishers of various predatory journals, has so far been the only set of international agents seriously affected by the government’s attempts to increase academic performance.

As far as publications are concerned, there are a few documented cases of academics establishing partnerships with editors of important journals to bypass the more unpredictable regular submission process, but these cannot be considered to have had a major impact on the system.

Unless the Russian academic market becomes much more important globally, it is hardly a major threat to international academic virtue. The bad news is that similar pressure is exerted on scholars globally, and while Russia may have the dubious honour of being the first to experience its consequences, it will probably not be alone.

Mikhail Sokolov is professor in the department of political science and sociology at the European University at St Petersburg, Russia. An edited version of this article was first published in the current edition of Higher Education in Russia and Beyond.