The US Congress is investigating how the Russians used Facebook, Twitter and Google to wage information warfare during the 2016 presidential campaign, and a special counsel is looking into whether the Trump campaign colluded with the Kremlin in its digital strategies.
There is also growing concern that the business models and algorithms that drive social media companies are fuelling extreme partisanship and opening up the floodgates to phony news.
But America isn’t alone. Challenges to democracy posed by social media surfaced last year in European elections. And the use of the internet to wage information warfare was perfected by the Russians in the Baltic states and Ukraine, according to the NATO StratCom Centre of Excellence in Latvia.
The centre has tracked Russian information operations over the years, and according to its director Janis Sarts, “a lot of our knowledge is very relevant to what is happening in [the] US because obviously, for anyone who has been watching, the methodologies have been more or less there for some time.”
Sarts says the use of social media as a military tactic is part of the so-called Gerasimov doctrine, named after the current chief of Russia’s General Staff. The goal is permanent unrest and chaos within an enemy state. Achieving that through information operations rather than military engagement is a preferred way to win.
“It gives you a plausible deniability. And it was quite a significant change… in dynamics, because typically Russian theoretical thought was more about traditional military conflict,” Sarts says.
“This was where you first saw the troll factories running the shifts of people whose task is using social media to micro-target people on specific messaging and spreading fake news. And then in different countries, they tend to look at where the vulnerability is. Is it minority, is it migration, is it corruption, is it social inequality. And then you go and exploit it. And increasingly the shift is towards the robotisation of the trolling.”
The use of automated social media accounts, what are known as bots, has become more advanced in recent years. At a workshop in Riga sponsored by the Atlantic Council’s Digital Forensic Research Lab last September, journalists and researchers learned how bots are being used to manipulate the public and spread disinformation.
Ben Nimmo, the information defence fellow for the Council, specialises in tracking bots on Twitter. “At the very basic level a bot is a social media account which is preprogrammed to do stuff without people pressing the button,” Nimmo says. “There are plenty of good bots out there, like there are bots that share poetry. What we focus on particularly is malicious bots which are bots used to distort conversations online.”
Botnets are groups of fake accounts that can be automated to tweet, like and post together.
Nimmo says that while a lot of bot activity is coming out of Russia today, he’s also seen “a lot of botnets from the US. For example, there are botnets which exclusively post far-right content in the US.”
A bot can be identified by what Nimmo calls the three A’s: activity, anonymity and amplification.
He found an account called dreamofdust that was 15 months old and had 319,000 tweets. “So, there you have the activity, that’s a ridiculously high figure,” Nimmo says.
In terms of anonymity, the account’s “avatar picture was a cartoon image, not a real person. And the biography was a collection of pro-Trump phrases. And if you look at the amplification, all you see is it sharing stories to its 2,600 followers.”
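Nimmo’s three A’s amount to a simple heuristic, which can be sketched in a few lines of code. The thresholds and field names below are illustrative assumptions, not his published detection criteria:

```python
# A toy scorer for the "three A's" heuristic: activity, anonymity
# and amplification. All thresholds are illustrative assumptions.

def bot_suspicion_score(account):
    score = 0

    # Activity: average tweets per day over the account's lifetime.
    tweets_per_day = account["tweet_count"] / max(account["age_days"], 1)
    if tweets_per_day > 72:  # an assumed "ridiculously high" cut-off
        score += 1

    # Anonymity: no real photo and no personal biography.
    if not account["has_real_photo"] and account["bio_is_generic"]:
        score += 1

    # Amplification: resharing dominates original posting.
    if account["retweet_ratio"] > 0.9:
        score += 1

    return score  # 3 = all three warning signs present

# The dreamofdust example from the text: 319,000 tweets in roughly
# 15 months, a cartoon avatar, a slogan biography, pure resharing.
dreamofdust = {
    "tweet_count": 319_000,
    "age_days": 15 * 30,
    "has_real_photo": False,
    "bio_is_generic": True,
    "retweet_ratio": 0.98,
}
print(bot_suspicion_score(dreamofdust))  # prints 3
```

At roughly 700 tweets a day, dreamofdust clears any plausible activity threshold on its own; the point of combining all three signals is that no single one is conclusive.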
Nimmo says there are commercial botnets with tens of thousands of fake accounts that are automated to act together. “And you quite often find that what these are doing is drowning out the middle ground. They are trying to normalise a situation in which hatred and abuse are just the standard way of going and the danger is real users will get sucked into that emotional swirl and join in.”
Bots are also used to try to silence political opponents and journalists. But using bots to get hashtags to trend on Twitter is one of their most effective uses for those who want to spread disinformation. In recent elections, this tactic was used to support Donald Trump, Brexit, Marine Le Pen in France and the far-right Alternative for Germany party.
“A group of maybe a dozen people can create the impression of anything between 20,000 and 40,000 tweets in an hour. They can then push that hashtag into the trending lists. It throws a smoke screen over the whole idea of one man, one vote. Somebody who controls 10,000 bots or 100,000 bots, they are controlling 100,000 voices and they distort the debate,” says Nimmo.
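The arithmetic behind Nimmo’s figure is straightforward. The accounts-per-operator and posting-rate numbers below are assumptions chosen to show how easily a dozen people reach the 20,000–40,000 range:

```python
# Back-of-the-envelope arithmetic for Nimmo's claim that a dozen
# operators can simulate 20,000-40,000 tweets in an hour. The
# botnet size and posting rate per account are assumptions.

operators = 12
accounts_per_operator = 300   # assumed size of each operator's botnet
tweets_per_account_hour = 8   # assumed posting rate per fake account

tweets_per_hour = operators * accounts_per_operator * tweets_per_account_hour
print(tweets_per_hour)  # prints 28800, inside the 20,000-40,000 range
```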
The disturbing use of Twitter to mislead the public was demonstrated after a mass shooting in Las Vegas last year, and another at a Texas church. False claims that the shooters were Muslim converts, members of Antifa, a radical antifascist group, and progressive Democrats were spread on Twitter and Facebook and showed up as prioritised stories on Google search.
According to Sam Woolley, who heads the Digital Intelligence Lab at the Institute for the Future, social media portals “are intimately connected in terms of how information flows occur all across the platforms. And bots are able to sort of manipulate this flow of information.”
In a recent paper published by the Computational Propaganda Research Project at Oxford, Woolley and his colleague Douglas Guilbeault demonstrated that botnets, particularly those tweeting Trump-related hashtags, played an influential role in the 2016 election.
“What we found was that they were successful in spreading their messages to people at the highest levels of power in the United States,” Woolley says. “So people from major news networks, journalists, politicians, other pundits, all kinds of people like this were tweeting and messaging bot content.” Both Hillary Clinton and Trump re-tweeted content that bots were sharing online.
It’s been reported that close to half of Donald Trump’s Twitter followers are bots, and according to Woolley “they’re particularly active. He’s an incredibly active Twitter user, bots are used to spread those tweets and they’re used to amplify those tweets.”
During the campaign, bots were also used to spread damaging information the Russians hacked from the Democratic National Committee. And a study showed they generated 20 percent of all the political traffic on Twitter.
“During the first, second, and third debate, bots were overwhelmingly used at a sort of rate of 5 to 1 in support of Donald Trump compared to Hillary Clinton,” Woolley says. “When it came to swing states, we saw that bot usage was at an all-time high. Bots were being used to spread fake news stories in ways that we’d never really seen before.”
While the use of bots on Twitter is the focus of most analysis, Woolley says that bots are a problem on Facebook as well. “Facebook has to come clean. For instance, during the election, there were actually real protests that happened because of fake profile pages that were automated on Facebook.”
“We saw a bunch of fake grassroots groups that were built by the Russians in Texas and Florida. You have the bots seeding and fertilising conversation around this group, and the group becomes real, people really join it and you have hundreds of thousands of people in a group and often times it actually translates into offline action.”
Seth Turin’s software is one of many turn-key systems to pop up in recent years that make it easy to build and use bots on social media sites. A version of his platform can be purchased for as little as $300.
“Web bots used to be the domain of programmers, very technical people. But we make it really user-friendly in a way that anybody can just jump right in,” says the founder of UBot studio.
UBot can be used for hundreds of applications, Turin says, from advertising jobs to mining research data, from pumping up celebrity popularity to promoting company brands. It can also be used to spread and amplify a political message, real or fake.
While he warns users against it, Turin is well aware that his software can be used for so-called black hat purposes – tasks that violate a platform’s terms of service or are illegal.
On the program’s practice “playground”, where there are clones of common sites and fictional accounts to experiment with, Turin demonstrated how UBot might be used to spread fake news on Facebook.
On the fictional account of a user named Tyrone, Turin posted a fake story about how corgis, a type of dog, are really “genetically modified sausages.” He then programmed Tyrone’s account with UBot to send out friend requests, potentially exposing other users to the disinformation.
“These people when they get that friend message they’re going to come back to Tyrone’s profile and are going to try to figure out who it is,” Turin says. “And when they do that they’re going to see that message that corgis are genetically modified sausages…. They’re going to start talking about it and even possibly believe it to be true.”
You can use a bot to attract followers to a Facebook page in a similar way, Turin says. “Instead of saying corgis are sausages we might just post a link to the page or the group that we want to advertise for and there’s a chance that they’re going to click it and there’s a chance that they’re going to like it.”
It is also possible to enhance the appeal and credibility of pages and disinformation using a wide range of products offered by black hat bot operators. Turin is not in the business, but he showed us online sites where you can buy fake Facebook accounts “with personal photos, cover, avatars, timeline posts for $5 a piece.” Other sites offered Facebook likes for as little as $1 per 1,000; interactions on Twitter, known as “retweets”, for $1.50 per thousand; and YouTube shares.
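At the black-market prices quoted above, outfitting a modest fake-engagement campaign is strikingly cheap. The campaign sizes in this sketch are assumptions; only the unit prices come from the text:

```python
# Rough cost of a fake-engagement campaign at the quoted black-market
# rates: $5 per fake Facebook account, $1 per 1,000 likes, $1.50 per
# 1,000 retweets. The quantities below are illustrative assumptions.

fake_accounts = 1_000
likes = 100_000
retweets = 100_000

cost = (fake_accounts * 5
        + likes / 1_000 * 1.00
        + retweets / 1_000 * 1.50)
print(f"${cost:,.2f}")  # prints $5,250.00
```

A thousand ready-made personas plus a hundred thousand likes and retweets for just over $5,000 helps explain why, as one bot builder puts it later in the piece, the market feels like the Wild West.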
“In the words of one bot builder that I spoke to, it’s like the Wild West,” Woolley says. “They said that they can do anything in terms of political outreach, political columns, political advertising and it’s really hard for them to get caught.”
In the US, some of the most skilled users of bots for political purposes are extreme, conservative white nationalists, the so-called alt-right.
“I think that the alt-right was very successful at using these tools,” Woolley says. “There’s been whole handbooks written by the alt-right on how to use social media to manipulate public opinion. And those are all born out of a lot of the traditions that we’ve seen in Russia.”
The handbooks, posted to online forums like 4Chan, encourage alt-right activists to steal people’s online pictures and impersonate them. There are instructions on how to use bots to spread fake stories and mislead journalists.
“The alt-right has a playbook on this stuff,” Woolley says. “And they’ve no hesitancy to use the most manipulative, nefarious tools to get their message across to people.”
The Berkman Klein Center for Internet and Society at Harvard University recently published a study on the 2016 presidential campaign that shed light on the scope of far-right propaganda and disinformation.
The analysis was based on some 2 million stories that mentioned one of the candidates, found on more than 70,000 online news sites – ranging from blogs to mainstream media. A team, led by the Center’s co-director Yochai Benkler, digitally mapped how often the stories were either linked to, shared, or tweeted on Facebook and Twitter.
The mapping revealed two separate media ecosystems in America, one grounded in the established mainstream media and the other in sites dispensing propaganda and fake news from the right. The result, Benkler says, “is a remarkably clear image with a very distinct insular right wing and a fairly mixed rest.”
Breitbart, the right-wing website that was run by Steve Bannon, Donald Trump’s campaign strategist and former White House advisor, has a central place in the mapping. The alt-right “played a significant role in generating memes, and ideas and framings,” Benkler says. Sites like Breitbart offer critical “translation services if you will, to generalise and normalise for the population.”
“So when you look at the stories, and you try to see how is Breitbart talking about immigration, in terms of terrorism, in terms of bringing diseases into the country, in terms of criminals. So, you have these extreme right framing but without the explicit white supremacy so they become in some sense more accepted.”
Benkler says the study also revealed that there is more fake news generated by the right than the left.
“Retweeting and a lot of Facebook sharing of these sites like Gateway Pundit, Truthfeed that are fairly systematically generating false claims and conspiracy theories are very prominent on the right. You don’t really have a parallel to this on the left.”
According to Benkler, social media is an effective tool for spreading propaganda and fake news because “credibility is gained from the fact that the same audience reads and retweets and Facebook shares the same set of sites.”
“While the centre and the centre-left and the left are all still organised roughly around traditional media organisation and science, the right seems to have created for itself its own universe of facts – a sort of tribal knowledge production system where everything is just ‘whatever my people say is true whatever those people say is false’. This is a profound break in our ability to have anything like reasoned policy and reasoned political engagement,” Benkler says.
Keegan Goudiss, who was the director of digital advertising for Senator Bernie Sanders’ 2016 presidential campaign, believes that “people who believe in reason and the truth and facts can win out at the end of the day.” And he argues that social media can play a key role in that triumph.
Social media was critical to the success of a socialist like Sanders in the Democratic party primary, Goudiss says. “Only 3 percent of Americans knew who he was and without social media, I don’t think people would know who he is now. So candidates such as Bernie don’t have to go through the traditional gatekeepers whether in the party or whether in the media and you’re able to build popular support in a way that has never been done before.”
Goudiss did not use bots in 2016, but believes they can play a positive role in campaigning. “I obviously disagree very strongly with the alt-right,” he says. “But they are using tactics that are successful in terms of propaganda, in terms of like conveying an emotional connection with their base. Now I’m someone who thinks we shouldn’t be purposely distorting facts and lying to people. But we have to learn from what they were doing and do it better.”
Woolley argues that it would be a mistake for the left to try and imitate the far-right playbook. “I think that if the left did start using it then what we’d see is just even more noise and spam,” he says. “We’ve got to figure out how to address the issue of bots and the issue of fake news and all of this junk content that’s spreading over social media without fighting fire with fire.”
“We’re going to need a broad spectrum of approaches to deal with the problem but at the end of the day, you need the population to actually decide that it doesn’t want to believe in the nonsense,” Benkler says.
In his view, the key to dealing with the threats social media pose for democracy is for the purveyors of disinformation to be defeated at the polls both in Europe and the US. “So long as Trump and the Republican party keep winning there’s a strong feedback cycle and these techniques prove themselves. Then you can see real destabilisation of the democratic system.”