UNITED KINGDOM-GLOBAL

‘Sort by relevance’? Algorithms may bias literature searches

A new report shows that algorithms can bias searches of academic literature in favour of authors who are white, Western and male – and that many researchers are unaware of how widespread this is. Academics need to learn what they can do about it.

It has never been easier to search for academic literature, with vast online bibliographic databases available at our fingertips. However, as the amount of scholarly literature available online has grown, effectively searching it has become more challenging, and there is a risk that the results you are most likely to see will exacerbate diversity issues in academia.

This is because it is increasingly common for bibliographic databases to present search results sorted ‘by relevance’ – often as the default way of sorting the results. However, ‘relevance’ prioritises results based on more than just your search terms.

A recent report that I co-authored for the Society for Research in Higher Education explored this issue, how far academics are aware of it and what we can do about it.

The publicly available definition of how ranking works in Google Scholar states that content is ranked by relevance “the way researchers do”. The determining factors include the document’s content, the journal it was published in, its author and how recently it has been cited “in other scholarly literature”.

Many of these factors have been shown to be biased, favouring authors who are white, Western and male – and combining them risks accentuating this bias. As a result, the first few pages of search results will likely give you the ‘greatest hits’ of the most established scholars in the field.

Work by women, scholars of colour, early career researchers or those from the Global South is more likely to be further down the ranking.

Articles with a female first author have been shown to receive significantly fewer citations than those with a male first author, while the number of articles published in ‘international’ journals remains heavily weighted towards Western scholars.
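To make the mechanism concrete, here is a deliberately simplified sketch in Python of how a composite ‘relevance’ score might behave. The weights, papers and scoring formula are invented for illustration – this is not Google Scholar’s or any other database’s actual algorithm – but it shows how mixing citation counts and journal prestige with text match can push a heavily cited classic above a closer topical match.

    import math
    from dataclasses import dataclass

    @dataclass
    class Paper:
        title: str
        text_match: float    # 0-1: how well the paper matches the query terms
        citations: int       # citation count - a signal known to favour established scholars
        venue_weight: float  # 0-1: 'prestige' of the journal - another biased signal

    def relevance(p: Paper) -> float:
        # Invented weights: even a modest citation/venue component
        # can outweigh a better topical match.
        return 0.5 * p.text_match + 0.3 * math.log1p(p.citations) / 10 + 0.2 * p.venue_weight

    papers = [
        Paper("Highly cited classic, loosely on topic", text_match=0.6, citations=5000, venue_weight=0.9),
        Paper("Recent, closely on-topic, little-cited study", text_match=0.9, citations=12, venue_weight=0.4),
    ]

    for p in sorted(papers, key=relevance, reverse=True):
        print(f"{relevance(p):.2f}  {p.title}")

Run with these made-up numbers, the loosely on-topic but heavily cited paper scores higher than the closer match – the compounding effect described above.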

While Google Scholar is probably the best known example, it is not alone. In our recent study of bibliographic databases, a colleague and I examined whether other databases also use ‘relevance’, and how it is defined.

Of the 14 databases we looked at – which included JSTOR, PubMed, Scopus, Semantic Scholar and Web of Science, among others – ‘sort by relevance’ was the default setting in all but two. Only half provided a definition of how relevance ranking worked, and the definitions that were provided varied in depth and were often extremely vague.

Without clear definitions, academics who use these sources may unintentionally reproduce these biases when working with their search results.

Awareness of bias

But to what extent are academics aware of this? And how does it affect the way that they work with the literature? In the study, in addition to looking at how platforms define relevance, we surveyed academics about how they use search platforms and their assumptions about how ranking works.

For Google Scholar, there was widespread recognition that an opaque algorithm was at play – “algorithmic magic”, as one participant put it – and some caution about it. Many acknowledged that the platform uses an algorithm but said they did not know how it works.

“It’s a total black box,” said one. The number of citations a paper had received was often perceived to play a part, but academics often didn’t make the link to the biases this introduces.

Participants in the study often described using multiple platforms, rather than relying on Google Scholar alone, in order to compensate for this. However, when we asked how ranking works on other platforms, “algorithmic magic” wasn’t perceived to be an issue – even though most of these platforms now rank by relevance too.

Positive action to tackle bias

What can be done to address this? Positive action could be taken at several levels: by the platforms themselves, by institutions and by individual academics.

At a minimum, databases should be more transparent about the use of ranking algorithms, making clear the risk of bias. Developers should also carefully consider whether ranking by relevance is really necessary at all.

Universities and individuals can also take positive action to raise awareness and counter biases in the academic literature. For example, various resources and guidelines for ‘positive citation practices’ have been developed to help researchers diversify their sources of literature.

There is scope to integrate this type of approach more fully into academic practice. Journals could require reviewers to consider the diversity of the reference list in submissions. Universities could highlight the biases in ranking by relevance, and what can be done about them, through staff development sessions.

At an individual level, the good news is that most databases do allow you to control how your results are presented – so next time you search for literature, look to see if the default is delivering your results ‘by relevance’, and change it to sort by something else. Although this won’t fix the biases in academic publishing, it will remove the uncertainty of ‘relevance’ in your literature search, and you may find something new.
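As a purely illustrative sketch – the field names and values below are invented, and in practice this is usually just a dropdown on the platform rather than anything you script yourself – swapping the sort key is all it takes to change which work you see first:

    results = [
        {"title": "Highly cited classic, loosely on topic", "year": 2004, "relevance": 0.74},
        {"title": "Recent, closely on-topic, little-cited study", "year": 2023, "relevance": 0.61},
    ]

    by_relevance = sorted(results, key=lambda r: r["relevance"], reverse=True)
    by_date = sorted(results, key=lambda r: r["year"], reverse=True)

    print([r["title"] for r in by_relevance])  # the well-cited classic comes first
    print([r["title"] for r in by_date])       # the recent, closely on-topic study comes first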

Katy Jordan is a senior research associate in the faculty of education at the University of Cambridge. The report from the project “‘Sort by relevance’: Exploring assumptions about algorithm-mediated academic literature searches”, funded by the Society for Research in Higher Education, can be downloaded now from their website.