Battling the Bots
by Elias Groll via stacey - Foreign Policy, Saturday, Dec 29 2018, 8:47am
To the casual observer, it’s not immediately evident how Kris Shaffer’s training as a musicologist prepared him for his job as an online detective.
Shaffer, an analyst with the company New Knowledge, tracks disinformation campaigns for a living—the kind that Russia waged against the United States during the 2016 presidential election. His title is a mouthful: senior computational disinformation analyst.
But before he took the job, Shaffer was a musicologist who wrote his dissertation on a Hungarian avant-garde composer, György Ligeti, whose claims about his music—not unlike the Kremlin’s propaganda—contained some “mistruths and half-truths,” he said.
Ligeti had composed a series of works that drew heavily on the giants of Western classical music—Beethoven and Mozart, among others—all the while insisting that his music remained entirely new, entirely avant-garde.
Shaffer encoded Ligeti’s music as a text file and then analyzed it using natural language processing, a subfield of artificial intelligence. He was able to demonstrate that Ligeti’s assertion of originality was “an easily falsifiable claim.”
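The article doesn’t spell out Shaffer’s exact pipeline, but the basic move is easy to sketch. In the toy Python below (the token streams are invented, not drawn from his dissertation), each score becomes a sequence of text tokens, and the originality claim is probed by asking how many of one composer’s short n-gram phrases also appear in another’s corpus:

```python
# A minimal sketch of the idea, not Shaffer's actual method: treat a score
# as a token sequence and measure n-gram overlap against a reference corpus.
from collections import Counter

def ngrams(tokens, n=3):
    """Return a Counter of every length-n window in a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def overlap_ratio(candidate, reference, n=3):
    """Fraction of the candidate's n-grams that also occur in the reference."""
    cand, ref = ngrams(candidate, n), ngrams(reference, n)
    shared = sum(count for gram, count in cand.items() if gram in ref)
    total = sum(cand.values())
    return shared / total if total else 0.0

# Invented pitch/duration tokens; a real pipeline would parse MIDI or MusicXML.
ligeti = ["C4q", "E4q", "G4h", "C5q", "E4q", "G4h"]
mozart = ["C4q", "E4q", "G4h", "C5q", "G4q", "E4q"]
print(f"shared 3-gram ratio: {overlap_ratio(ligeti, mozart):.0%}")
```

A high overlap ratio against Beethoven or Mozart would be exactly the kind of “easily falsifiable” evidence the quote describes.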
These days, Shaffer is one of a growing number of students of disinformation who are using similar tools of data science and artificial intelligence to examine how online propaganda campaigns work.
By working with massive datasets of tweets, Facebook posts, and online articles, he is able to map links between accounts, similarities in the messages they post, and shared computer infrastructure. The data allows him to identify networks of accounts that appear to be acting together to spread messages online.
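One common way to do that kind of mapping, sketched below with made-up accounts rather than New Knowledge’s actual data or tooling, is to vectorize each account’s posts, link accounts whose content is unusually similar, and treat the resulting graph clusters as candidate coordination networks:

```python
# A hedged sketch of message-similarity network detection (toy data only).
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical accounts and the text each posted over some window.
posts = {
    "acct_a": "us hypocrisy exposed sanctions backfire washington arrogance",
    "acct_b": "washington arrogance sanctions backfire us hypocrisy exposed",
    "acct_c": "local bake sale this saturday support the school band",
    "acct_d": "sanctions backfire us hypocrisy washington arrogance exposed",
}

names = list(posts)
tfidf = TfidfVectorizer().fit_transform([posts[n] for n in names])
sim = cosine_similarity(tfidf)

# Link any pair of accounts whose content similarity clears a threshold.
graph = nx.Graph()
graph.add_nodes_from(names)
THRESHOLD = 0.8  # arbitrary for this toy example; tuning it is the hard part
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if sim[i, j] >= THRESHOLD:
            graph.add_edge(names[i], names[j])

# Each multi-account component is a cluster worth a human analyst's review.
for component in nx.connected_components(graph):
    if len(component) > 1:
        print("possible coordination cluster:", sorted(component))
```

Real analyses layer in the other signals the article mentions, such as shared computer infrastructure and account-creation metadata, not just text.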
“It sounds like a big shift from computational musicology to computational disinformation, but I’m finally coming full circle,” Shaffer said in an interview. “As with musicology, when we take something to scale there are new questions that we can ask.”
This method of analysis is in its infancy, remains a fairly blunt instrument, and still requires human intervention. It sometimes mistakes real people who post anti-imperialist arguments about U.S. foreign policy for Kremlin trolls, for example.
The 2016 Kremlin campaign to boost Donald Trump’s U.S. presidential run illustrated the possibilities of an aggressive propaganda campaign and provoked a reckoning in Silicon Valley and in Washington over how to prevent further efforts to interfere in U.S. politics.
In the days following last Tuesday’s U.S. midterm elections, no evidence emerged of another massive slash-and-burn Kremlin campaign. Facebook announced that it took down more than 100 Facebook and Instagram accounts suspected of being linked to Russia’s infamous Internet Research Agency. In an online post purporting to be from the agency, an anonymous writer made extravagant claims about Russian bot and troll activity in an apparent effort to exaggerate Moscow’s online influence.
That activity paled in comparison to 2016, but the analysts who study such campaigns caution that it’s far from time to declare victory over the Kremlin’s propagandists. They say Moscow is still trying to influence U.S. politics but not with the same intensity as it was two years ago.
“What we see is that the Russians are still involved in trying to sway the political narrative in the United States,” said Priscilla Moriuchi, a former National Security Agency official and the head of strategic threat development at Recorded Future, a cybersecurity firm. “It works to their advantage that we don’t know as much about it this time around.”
And this time around, Russian trolls and propagandists appear to have switched up their tactics.
In September, New Knowledge began tracking a network of social media accounts that appeared to be coordinating messages to disseminate Russian propaganda and advance Russian interests, according to the company’s CEO, Jonathon Morgan. The network included about 100 Facebook pages and 1,400 Twitter accounts that posted between 50,000 and 60,000 times a day.
The data scientists at New Knowledge discovered the network by first identifying a set of accounts that appeared to be engaging in propaganda activity and then looking for similarities between those accounts and others. That’s where the machine learning algorithms came in handy. The company used them to take large amounts of unstructured data and identify patterns.
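A hedged sketch of that seed-and-expand step, again with invented data rather than the company’s: reduce each account to a numeric feature vector (here a hypothetical posting-hour histogram), then rank unlabeled accounts by similarity to the centroid of the known seed accounts:

```python
# Toy seed-expansion example; feature design and data are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Invented 24-dim features: each account's fraction of posts per hour of day.
# Seeds are biased toward a common "work shift" to mimic a coordinated group.
shift = np.ones(24)
shift[6:14] = 40.0                                   # heavy 06:00-14:00 posting
seeds = rng.dirichlet(shift, size=5)                 # known propaganda accounts
candidates = rng.dirichlet(np.ones(24), size=1000)   # unlabeled accounts

centroid = seeds.mean(axis=0)

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Rank every unlabeled account by how closely its posting rhythm matches
# the seed centroid; the top of the list goes to a human analyst.
scores = np.array([cosine(c, centroid) for c in candidates])
top = np.argsort(scores)[::-1][:10]
print("candidates most similar to the seed set:", top.tolist())
```

In practice the features would combine text, timing, and infrastructure signals, and, as the article notes, a human still has to vet the results to avoid flagging real people.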
The network that emerged from that data-crunching doesn’t look much like the previous Russian operation. The accounts identified by New Knowledge—a sample of which were shared with Foreign Policy—are promoting stories from outlets easily identifiable to the trained eye as Russian propaganda. But to the casual observer, they look a lot like any other online news outlet, providing a way to push pro-Russian news into the mainstream.
Many of the stories advanced by this network are not necessarily false. Instead, they appear to be promoting information that supports Russian messaging portraying the United States as arrogant and hypocritical.
“It was Goebbels who observed that Nazi dispatches should be as close to the truth as possible,” said Shaffer from his home in Colorado, where he does his work on a “big-ass shelf” he’s repurposed as a standing desk.
The network cannot be conclusively attributed to the Russian government, Shaffer said, but it is composed of thousands of accounts pushing stories aligned with Kremlin interests.
In recent weeks, for example, the network has promoted articles criticizing the U.S. response to the killing of Saudi journalist Jamal Khashoggi while continuing to support Saudi Arabia’s intervention in Yemen.
Other stories promoted by this network traffic in outright falsehoods, including claims about chemical weapons use in Syria. The network repeatedly promoted the false notion that chemical weapons attacks carried out by the Bashar al-Assad regime were in fact perpetrated by Syrian rebel groups.
Companies such as Facebook and Twitter have made progress in banning troll and fake accounts from their platforms, but they continue to play host to propaganda networks that are turning up in Shaffer’s mathematical models.
Copyright applies.
[Editor's note:
Readers should be aware that the political assertion of Russian interference in the 2016 presidential election is an unsubstantiated claim, as are many other political assertions in the article. Nevertheless, if the political propaganda claims are ignored, the article is useful in detailing some methods of data mining that use tailored algorithms for data analysis -- that is the only reason it has been re-posted here.]
https://foreignpolicy.com/2018/11/12/battling-the-bots-ai-russia-disinformation-fake-news/