UNITED STATES

Russian trolls stoke public discord on vaccine science
Social media ‘bots’ and Russian ‘trolls’ have sown discord and spread “unverified and erroneous” information about vaccines on social media, disrupting science communication to the public and posing a threat to public health, according to new research led by the George Washington University in the United States. Using tactics similar to those at work during the 2016 United States presidential election, these Twitter accounts entered vaccine debates months before the election season was underway.
The study, "Weaponized Health Communication: Twitter bots and Russian trolls amplify the vaccine debate", published on 23 August in the American Journal of Public Health or AJPH, concludes that “health communications have become ‘weaponised’: public health issues, such as vaccination, are included in attempts to spread misinformation and disinformation by foreign powers” .
Much of this misinformation may be pushed by ‘bots’ (accounts that automate content promotion) or ‘trolls’ (individuals who misrepresent their identities), with the aim of promoting discord, the study explains.
“One commonly used online disinformation strategy, amplification, seeks to create impressions of false equivalence or consensus through the use of bots and trolls,” the study says.
Exposure to a vaccine debate in which it falsely appears – due to the work of bots and trolls – that many people support opposing sides of the argument may suggest there is no scientific consensus, shaking confidence in vaccination.
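To make the amplification mechanism concrete, here is a minimal back-of-the-envelope simulation in Python. It is not drawn from the study; every figure in it – the user counts, tweet volumes and anti-vaccine rates – is an invented assumption for illustration. It shows how a comparatively small pool of high-volume accounts posting on both sides can make a fringe position look like one half of an even debate.

```python
import random

random.seed(42)

# Hypothetical population: genuine users overwhelmingly favour vaccination,
# mirroring the article's point that most Americans believe vaccines are safe.
GENUINE_USERS = 10_000
GENUINE_ANTI_RATE = 0.10     # assumed: 10% of genuine tweets are anti-vaccine

# A much smaller pool of bot/troll accounts that tweet far more often and
# split their output across both sides to manufacture the appearance of debate.
BOT_ACCOUNTS = 300
TWEETS_PER_BOT = 20          # assumed amplification factor
BOT_ANTI_RATE = 0.50         # trolls post pro and anti content roughly equally

# True marks an anti-vaccine tweet; False marks a pro-vaccine tweet.
genuine_tweets = [random.random() < GENUINE_ANTI_RATE for _ in range(GENUINE_USERS)]
bot_tweets = [random.random() < BOT_ANTI_RATE
              for _ in range(BOT_ACCOUNTS * TWEETS_PER_BOT)]

def anti_share(tweets):
    """Fraction of tweets in the sample that are anti-vaccine."""
    return sum(tweets) / len(tweets)

print(f"Anti-vaccine share, genuine users only: {anti_share(genuine_tweets):.1%}")
print(f"Anti-vaccine share, with bot traffic:   "
      f"{anti_share(genuine_tweets + bot_tweets):.1%}")
```

Under these invented numbers, 6,000 bot tweets lift the apparent anti-vaccine share from roughly 10% to roughly 25% – the minority view looks far more contested than it is, which is the false-consensus effect the study describes.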
The proliferation of this type of content is associated with vaccine hesitancy and delay; vaccine-hesitant parents are in turn more likely to turn to the internet for information and less likely to trust health care providers and public health experts on the subject, the study says.
“Recent resurgences of measles, mumps and pertussis (whooping cough) and increased mortality from vaccine-preventable diseases such as influenza and viral pneumonia underscore the importance of combating online misinformation about vaccines,” the study says.
The team behind the study, which also includes researchers from the University of Maryland and Johns Hopkins University, examined thousands of tweets sent between July 2014 and September 2017.
They discovered that several accounts now known to belong to the same Russian trolls who interfered in the US election, as well as marketing and malware bots, tweeted about vaccines and skewed online health communications.
David Broniatowski, an assistant professor in George Washington’s School of Engineering and Applied Science, said: "The vast majority of Americans believe vaccines are safe and effective, but looking at Twitter gives the impression that there is a lot of debate. It turns out that many anti-vaccine tweets come from accounts whose provenance is unclear. These might be bots, human users or 'cyborgs' – hacked accounts that are sometimes taken over by bots.
“Although it's impossible to know exactly how many tweets were generated by bots and trolls, our findings suggest that a significant portion of the online discourse about vaccines may be generated by malicious actors with a range of hidden agendas," he said in a statement published by the George Washington University.
The researchers found that Twitter bots distributing malware and spam masquerade as human users to spread anti-vaccine messages.
These "content polluters" shared anti-vaccination messages 75% more than average Twitter users.
“A full 93% of tweets about vaccines are generated by accounts whose provenance can be verified as neither bots nor human users yet who exhibit malicious behaviours,” the AJPH article says. “These unidentified accounts preferentially tweet anti-vaccine misinformation.”
Russian trolls and more sophisticated bot accounts used a different tactic, posting equal amounts of pro- and anti-vaccination tweets.
"These trolls seem to be using vaccination as a wedge issue, promoting discord in American society," Mark Dredze, a team member and professor of computer science at Johns Hopkins University, said. "However, by playing both sides, they erode public trust in vaccination, exposing us all to the risk of infectious diseases. Viruses don't respect national boundaries."
In an additional qualitative study, Broniatowski's team reviewed more than 250 vaccine-related tweets using the hashtag #VaccinateUS, sent by accounts linked to the Internet Research Agency, a Russian government-backed company recently indicted by a US grand jury for its attempts to interfere in the 2016 US elections, the AJPH article says.
The researchers found that the tweets used polarising language linking vaccination to controversial issues in American society, such as racial and economic disparities. Some were linked to religious and populist or anti-‘elite’ messages.
Anti-vaccine messages included “VaccinateUS mandatory #vaccines infringe on constitutionally protected religious freedom”, “Don’t get #vaccines, illuminati are behind it” and “Did you know vaccines caused autism?”
Pro-vaccine messages included: “Do you still treat your kids with leaves? No? And why don’t you #vaccinate them? It’s medicine!”, “vaccines are a parent’s choice. Choice of a color of a little coffin #VaccinateUS” and “#VaccinateUS You can’t fix stupidity. Let them die from measles, and I’m for vaccination!”
Bots modify human content
The researchers found that bots and trolls frequently retweet or modify content from human users.
“Thus well-intentioned posts containing pro-vaccine content may have the unintended effect of ‘feeding the trolls’, giving the false impression of legitimacy to both sides,” they said, “especially if this content directly engages with the anti-vaccination discourse.”
They said the fact that bots spreading malware and spam are more likely to push anti-vaccine messages suggests that anti-vaccine advocates may be using pre-existing bot network infrastructures to promote their agenda. These actors may also deliberately use emotive messages to pull in followers, drive up advertising revenue and expose users to malware.
Sandra Crouse Quinn, a research team member and professor in the School of Public Health at the University of Maryland, said: "Content polluters seem to use anti-vaccine messages as bait to entice their followers to click on advertisements and links to malicious websites."
The researchers warn that “more research is needed to determine how best to combat bot-driven content”. But they also say that responses might include highlighting that a significant share of anti-vaccine messages are “organised ‘astroturf’ – not grassroots responses”.
They suggest that public health communication officials should combat bot- and troll-driven disinformation by stressing that the credibility of the source is dubious and that users exposed to such content may be more likely to encounter malware.
“Anti-vaccine content may increase the risks of infection by both computer and biological viruses,” they say in their article.