Russian trolls, Twitter bots stoked vaccine debate by spreading misinformation, study finds
Social media bots and Russian trolls sowed discord and spread false information about vaccines on Twitter, according to a new study conducted by researchers at Johns Hopkins University and other institutions.
Using tactics similar to those at work during the 2016 United States presidential election, these Twitter accounts entered the vaccine debate months before election season was underway. The study, “Weaponized Health Communication: Twitter Bots and Russian Trolls Amplify the Vaccine Debate,” was published Thursday in the American Journal of Public Health.
The research team included Mark Dredze, an associate professor in the Department of Computer Science at Johns Hopkins and a pioneer in the collection and study of social media data to monitor flu cases, mental illness trends, and other health concerns. Hopkins computer science graduate students Tao Chen and Adrian Benton, as well as researchers from George Washington University and the University of Maryland, also contributed to the study.
The team examined thousands of tweets sent between July 2014 and September 2017 and discovered that several accounts—now known to belong to the same Russian trolls who interfered in U.S. elections in 2016, as well as marketing and malware bots—tweeted about vaccines and skewed online health communications.
“The vast majority of Americans believe vaccines are safe and effective, but looking at Twitter gives the impression that there is a lot of debate,” says David Broniatowski, an assistant professor in GW’s School of Engineering and Applied Science. “It turns out that many anti-vaccine tweets come from accounts whose provenance is unclear. These might be bots, human users or ‘cyborgs’—hacked accounts that are sometimes taken over by bots. Although it’s impossible to know exactly how many tweets were generated by bots and trolls, our findings suggest that a significant portion of the online discourse about vaccines may be generated by malicious actors with a range of hidden agendas.”
For example, the researchers found that “content polluters”—bot accounts that distribute malware, unsolicited commercial content, and disruptive materials—shared anti-vaccination messages 75 percent more often than average Twitter users did.
Russian trolls and more sophisticated bot accounts used a different tactic, posting equal amounts of pro- and anti-vaccination tweets.
Broniatowski’s team reviewed more than 250 tweets about vaccination sent by accounts linked to the Internet Research Agency, a Russian government-backed company recently indicted by a U.S. grand jury for its attempts to interfere in the 2016 U.S. elections. The researchers found that the tweets used polarizing language linking vaccination to controversial issues in American society, such as racial and economic disparities.
“These trolls seem to be using vaccination as a wedge issue, promoting discord in American society,” says Dredze. “However, by playing both sides, they erode public trust in vaccination, exposing us all to the risk of infectious diseases. Viruses don’t respect national boundaries.”