Does science publishing rely too much on subjective decisions and personal networks?

Probably the most important scientific discovery of the 20th century – James D Watson and Francis Crick's description of the double-helix structure of DNA – was not peer reviewed before it was published in Nature in 1953.

Successfully publishing research results depends on more than the quality of your work. It depends at times on happenstance. Watson and Crick's paper appeared in Nature because an editor thought it was important.

The selection process journals use is particularly opaque at two points. When processes are not clear, open and transparent, then prejudice comes easily into play. When that happens, our confidence in the system is weakened and the results are more likely to be unfair.

In-house triage

Submissions to the biggest and most influential journals are sorted in-house for peer review.

The Lancet, for example, reports sending out approximately 30% of submissions for review, and Nature reportedly sends out 40%. Yet I recently heard editorial representatives from each of these journals suggest that they send out only about 20%.

How do journals decide which papers progress to peer review? The staff of these journals include scientific editors who read and grade submissions. But what criteria do they use?

It is hard to know. The otherwise extremely detailed ‘information for authors’ at The Lancet provides no insight whatsoever into this part of the process.

I have had the opportunity to ask editors of some of the most prestigious journals in science about this process. Of course, there are fairly formalistic issues – like conformity with required methodologies or the proper treatment of subjects and data – that can lead to early dismissal.

But editors also repeatedly appeal to impact or – as they sometimes more accurately put it – anticipated impact.

The in-house team identifies articles they consider likely to have high impact and sends them out for review. Those deemed likely to have low impact are rejected immediately, without peer review.

While I question neither the qualifications nor the competence of the editorial teams, speculating on which articles will have high or low impact seems at best deeply conservative and at worst dangerous.

Such a process is indisputably hidden behind a cloak of intuition and experience. It surely is one of the key opportunities for making scientific publishing more understandable, transparent and fair.

Selecting reviewers

Articles that do advance to the stage of peer review are then subjected to the second murky area in this process – namely, the selection of reviewers.

The opinions of peer reviewers are, of course, crucial to the final determination of whether an article gets published. All editors take care to avoid conflicts of interest when they select reviewers.

But once we get beyond that, how does this process work? Of course, editors try to identify suitable experts based on publishing records, activity levels of various research groups and references from other researchers.

Nonetheless, the personal networks of editors are important in the selection of reviewers. When professional activities are built on personal networks, we cannot be confident that the results are of the highest possible quality.

Moving forward

Systems that rely on intuition and networks are unlikely to be fair. In scientific publishing, the systems for initial screening and reviewer selection are opaque; we do not know what criteria are used, and networks play too big a role.

There are steps that can be taken to improve these parts of the process; it may be, however, that the real solution requires radical change in our models for dissemination and quality control.

The two challenges I've mentioned here are not the only ones faced by scientific publishing today. Lack of transparency also has a gender-equality dimension; it is one of the reasons women in science do not want to work in universities. And, of course, there will be disruptive effects once social media and publishing are better integrated.

Scientific publication has a hallowed status. At times, it seems deserved. But at other times, it looks like just one more old boys’ network. When that happens, we have to find new models and new approaches. Our work is too important not to.

* Curt Rice is pro rector (vice president) for research and development at the University of Tromsø in Norway. He blogs on topics related to university leadership. Join him on 2 May for a free webinar on “How to get more women professors”.