GLOBAL

When rating impact, don’t forget social networks – Study

A recent study from the University of Cambridge suggests that official impact assessment systems may not be sufficiently geared towards measuring and acknowledging the impact and public value of research that socially networked academics increasingly share across a multitude of social media platforms.

The use of social media by academics to increase the impact of their research is increasingly common and actively encouraged. But are the contributions of social media being adequately acknowledged in official impact assessments?

A recently published paper by University of Cambridge Faculty of Education researcher Dr Katy Jordan suggests that academics’ own perceptions of what constitutes research impact through the use of various social media platforms do not always align with official impact assessments – such as the Research Excellence Framework (REF) in the United Kingdom – and that some of the public value of research shared over these networks is being overlooked.

“The official language presents impact as a top-down, outward flow from universities to a waiting public, but this is an outdated characterisation – if it was ever valid at all. Ask researchers about their most impactful interactions on social media, and you’ll get a much wider range of examples than the REF covers,” said Jordan in a university press statement to announce the paper’s publication late last month.

“You could argue that this means too many researchers are misunderstanding what impact is; but it’s also potentially evidence that times have changed. There’s a huge amount to be said for asking universities to demonstrate their value to wider society, but it may be time to rethink how we measure this.”

Distribution of funding

This link between impact and the use of social media for research purposes has important implications for academics because, as the official UK system for auditing and measuring research quality, the REF has a bearing on the distribution of national research funding.

Jordan’s study, published in Learning, Media and Technology, argues that the existing system of measurement, which informed the REF results published on 12 May 2022, fails to account for the increasingly “socially networked” nature of a significant amount of current research and has not kept pace with the academics conducting it.

“Given that the use of social media is increasingly part of an academic’s work – and is certainly important in how academics view impact – it’s appropriate for this to be acknowledged within evaluations,” she told University World News via email.

However, she acknowledged that not all academics were equally socially networked.

“It isn’t going to be the same for all academics and will depend on the nature of their research and the communities they engage with,” she said.

Feedback loops

Jordan’s analysis of 209 examples, provided by 107 academics from 15 countries (but mainly from the UK), of how they discuss and encourage uptake of their work on social media showed that academics are often engaged in ongoing “feedback loops” with organisations, community groups, policy actors and other publics during a project’s lifetime.

These lead to opportunities to collaborate and share expertise while the research is still underway, often in ways that the REF is unlikely to cover.

When academics were invited to provide examples of strong impact they had achieved through social media, fewer than half of the examples submitted related to cases in which research had been disseminated “outwards” to the public, as products, in the way the REF presumes, according to the press release.

In these cases, academics had tended to use social platforms to share their research with a wider audience, to stimulate discussions with colleagues, or to generate evidence of positive engagement with the research.

However, about 56% of the responses referred to impacts arising from exchanges that were not merely uni-directional, with social media being used to test research ideas, report interim findings, crowdsource information and data, or advertise for research participants.

“These discussions appear to have generated more than just ‘impact’ in the official sense. As a result of the exchanges, researchers were invited to give public lectures, participate in panel discussions, give evidence and advice to organisations, or run training sessions,” Jordan notes in the press release.

In one case, Jordan notes, a post on social media led to a senior civil servant from the Cabinet Office visiting an entire group of academic colleagues to explore how their work as a whole might inform and shape policy.

Institutional definitions

Jordan’s analysis of the responses revealed that the forms of interaction identified by participants did include activities falling within institutional definitions of impact: helping to facilitate changes and benefits for a wider range of stakeholders (UK Research and Innovation), and having both academic and social impacts (Vitae).

There were also examples which suggest that social media can be useful in terms of showing the reach of an impact, which is a key component of REF impact case studies.

However, the “significance” part was “less easily illustrated through metrics alone” – a finding that, Jordan’s paper argues, highlights a “need for academics to receive training in the terms and concepts associated with the impact agenda – to be informed about the need for both demonstrating reach and significance, and actively look to find evidence for both in their interactions”.

In an email to University World News, Jordan explained that the way in which social media-derived impact can be included in the REF at present is through the impact case studies.

“These do not specifically ask about social media, but case study authors may opt to include reference to social media in their case study narratives – for example, within the 2014 REF impact case studies, some form of social media was mentioned in approximately 25% of them,” she said.

In the paper, Jordan notes that the REF measures impact through two principal dimensions: ‘significance’ (the meaningful difference a project makes) and ‘reach’ (the quantifiable extent to which it does so).

“The definition of impact beyond this is very open-ended, varies across disciplines, and is often considered ambiguous,” said Jordan in the study’s press release.

The REF also makes a distinction between “research impact” and “public engagement” and explicitly states that the act of engaging the public with research “does not count as impact”.

As Jordan’s study notes, the advice on public engagement is confusing, encouraging it “in general but discouraging it in the assessment metrics”.

According to official REF guidance quoted by the study, the “act of engaging the public with research does not count as impact. Impact is what happens when people interact with the research, take it up, react or respond to it.”

Jordan argues that social media is blurring the distinction between “impact” and “public engagement”.

“As information flows into academic projects – from people, companies and organisations who are contributing ideas, questions and feedback through social platforms – so these generate both formal and informal opportunities for ‘outward’ exchange. This circuit of interaction seems to be influencing and benefiting society in multiple ways not tracked by the REF,” she said.

More nuance and flexibility

The challenge, she concedes, is that these more nuanced impacts are difficult to measure.

“One solution may be to amend the assessment so that it asks universities not just to provide evidence of research outcomes, but to explain the research process across a project’s lifetime,” Jordan said.

“This isn’t a call for yet more ambiguity about what impact is, but for more open-mindedness about what researchers achieve. In an increasingly complex, socially networked culture, this would help to ensure that the broader effects of their work are not forgotten,” she said in the accompanying press release.

Jordan told University World News that some “nuance” and “flexibility” around what impact means in different contexts is necessary.

“In terms of measuring impact, concepts such as ‘significance’ and ‘reach’ are useful guidelines where more specificity is required but there is risk in reducing impact to just these dimensions, or metrics as proxies for them (eg, follower counts).

“To an extent, some ambiguity around the definition of impact is helpful in allowing academics in different fields to describe what impact means to them,” she said.