Banner image: The headquarters of the Internet Research Agency, a troll farm, in St. Petersburg | Credit: VOA
Facebook users flipping through their feeds in the fall of 2016 faced a minefield of targeted advertisements pitting blacks against police, Southern whites against immigrants, gun owners against Obama supporters and the LGBTQ community against the conservative right.
Placed by distant Russian trolls, they didn't necessarily aim to prop up one candidate or cause, but to turn Americans against one another.
The ads were cheaply made and full of threatening, vulgar language.
And, according to a sweeping new analysis of more than 2,500 of the ads, they were remarkably effective, eliciting clickthrough rates as much as nine times higher than what is typical in digital advertising.
"We found that fear and anger appeals work really well in getting people to engage," said lead author Chris Vargo, an assistant professor of Advertising, Public Relations and Media Design at the University of Colorado Boulder.
The study, published this week in Journalism and Mass Communication Quarterly, is the first to take a comprehensive look at ads placed by the infamous Russian propaganda machine known as the Internet Research Agency (IRA) and ask: How effective were they? And what makes people click on them?
While focused on ads that ran in 2016, the study's findings resonate in the age of COVID-19 and the run-up to the 2020 election, the authors say.
"As consumers continue to see ads that contain false claims and are intentionally designed to use their emotions to manipulate them, it's important for them to have cool heads and understand the motives behind them," said Vargo.
How the study worked
For the study, Vargo and assistant professor of advertising Toby Hopp scoured 2,517 Facebook and Instagram ads downloaded from the U.S. House of Representatives Permanent Select Committee on Intelligence. The committee made the ads publicly available in 2018 after concluding that the IRA had been creating fake U.S. personas, setting up fake social media pages, and using targeted paid advertising to "sow discord" among U.S. residents.
Using computational tools and manual coding, Vargo and Hopp analyzed every ad, looking for inflammatory, obscene or threatening words and for language hostile to a particular group's ethnic, religious or sexual identity. They also looked at which groups each ad targeted, how many clicks the ad got, and how much the IRA paid.
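The study's actual pipeline isn't described in detail here, but the kind of keyword-based flagging it mentions can be sketched in a few lines. This is a minimal illustration, not the authors' method; the word list is hypothetical, drawn only from the terms quoted later in this article.

```python
# Illustrative sketch (NOT the study's pipeline): flag ads whose text
# contains inflammatory terms. The term list below is hypothetical,
# taken from examples quoted in the article.
INFLAMMATORY_TERMS = {"sissy", "idiot", "psychopath", "terrorist"}

def flag_inflammatory(ad_text: str) -> bool:
    """Return True if the ad text contains any flagged term."""
    # Normalize: lowercase each word and strip common punctuation.
    words = {w.strip(".,!?\"'").lower() for w in ad_text.split()}
    return not INFLAMMATORY_TERMS.isdisjoint(words)

print(flag_inflammatory("Only a psychopath would vote for him!"))  # True
print(flag_inflammatory("Take care of our vets; not illegals"))    # False
```

In practice a study like this would combine such automated screening with manual coding to catch context the keyword match misses.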
Collectively, the IRA spent about $75,000 to generate about 40.5 million impressions, with about 3.7 million users clicking on them, a clickthrough rate of 9.2%.
That compares to between 0.9% and 1.8% for a typical digital ad.
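The figures above can be checked with back-of-the-envelope arithmetic, using the rounded spend, impression and click counts reported in the article:

```python
# Sanity-check the study's reported clickthrough rate using the
# rounded figures quoted in the article.
spend = 75_000            # approximate total IRA ad spend, USD
impressions = 40_500_000  # total ad impressions
clicks = 3_700_000        # total user clicks

ctr = clicks / impressions        # clickthrough rate
cost_per_click = spend / clicks   # dollars the IRA paid per click

print(f"CTR: {ctr:.1%}")                       # ~9.1% from rounded inputs
print(f"Cost per click: ${cost_per_click:.3f}")
```

With these rounded inputs the rate comes out near 9.1%, consistent with the 9.2% the study reports from exact counts, and the cost works out to roughly two cents per click.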
While ads using blatantly racist language didn't do well, those using cuss words and inflammatory words (like "sissy," "idiot," "psychopath" and "terrorist") or posing a potential threat did. Ads that evoked fear and anger did the best.
One IRA advertisement targeting users with an interest in the Black Lives Matter movement stated: "They killed an unarmed guy again! We MUST make the cops stop thinking that they are above the law!" Another shouted: "White supremacists are planning to raise the racist flag again!" Meanwhile, ads targeting people who sympathized with white conservative groups read "Take care of our vets; not illegals" or joked "If you voted for Obama: We don't want your business because you are too stupid to own a firearm."
Only 110 of the 2,000-plus ads mentioned Donald Trump.
"This wasn't about electing one candidate or another," said Vargo. "It was essentially a make-Americans-hate-each-other campaign."
The ads were often unsophisticated, with spelling or grammatical errors and poorly photoshopped images. Yet at only a few cents per click to distribute, the IRA got an impressive rate of return.
"I was shocked at how effective these appeals were," said Vargo.
COVID-19 a new opportunity for trolls
The authors say they have no doubt such troll farms are still at work.
According to some news reports, Russian trolls are already engaged in disinformation campaigns around COVID-19.
"I think with any major story, you are going to see this kind of disinformation circulated," said Hopp. "There are bad actors out there who have goals that are counter to the aspirational goals of American democracy, and there are plenty of opportunities for them to take advantage of the current structure of social media."
Ultimately, the authors believe better monitoring, via both machine algorithms and human reviewers, could help stem the tide of disinformation.
"We as a society need to start seriously talking about what role the platforms and government should play in times like the 2020 election or during COVID-19, when we have a compelling need for high-quality, accurate information to be distributed," said Hopp.