In the months leading up to the 2016 election, Twitter, Facebook, and Instagram were flooded with false but provocative posts. A decades-old tabloid story accusing Bill Clinton of fathering a love child. A stolen coloring book image of a "Buff Bernie" Sanders flexing, accompanied by a fake quote from the illustrator. Social media accounts impersonating activists warned about creeping "sharia law" and the dangerous influence of Islamic rule in American cities. All were posted by accounts tied to Russia with the intent to fan the flames of partisan bickering and, according to the declassified assessments of the U.S. intelligence community, bolster the Kremlin's preferred candidate, Donald Trump. The ads, events, videos, and photos may have been shared hundreds of thousands of times, according to data provided by social media giants.
Twitter was an ideal platform for this disinformation campaign because of its openness and its lax policing of accounts that impersonate others or spew spam, propaganda, and lies, argued cybersecurity expert Thomas Rid in a November 2017 Motherboard op-ed. "Driven by ideology and the market," he wrote, "the most open and liberal social media platform has become a threat to open and liberal democracy."
It's easier than ever for news that is false, misleading, or lacking vital context to worm its way through the deep channels of the internet, reaching millions of people—partly because it's so easy to automate large numbers of accounts that don't belong to real humans, Rid points out. But don't lose hope, he says. Twitter is also a great place to fight back against falsehoods and interference, the new faculty member at the School of Advanced International Studies told me over coffee in downtown Washington, D.C., right down the street from the school. "As everybody is talking about disinformation, watching Trump's Twitter feed ... at the same time we have this content reaction" related to digital forensics, Rid explains.
A growing body of journalists, academics, students, and average Americans is dissecting information moments after it's released with a sharp, analytic eye, Rid suggests. They're detecting tiny bits of information left behind in image and document files to determine when and where they were made, by whom, and how many times they were edited. They're sifting through materials posted by WikiLeaks, trying to piece together leaked emails, primary documents, and technical information. In fact, there's practically a new industry dedicated to teaching readers how to fact-check in the digital age. The John S. and James L. Knight Foundation recently poured millions into fighting the public's distrust in news media. One method: automating the fact-checking process with digital bots to reduce human error. Facebook, dragged through the mud for failing to realize Russian accounts were posting divisive comments and event listings throughout the election, has mounted several new efforts to involve users in real-time fact-checking—including, as of January, prioritizing information from publishers that readers flag as their most trusted sources of news.
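The file-level sleuthing described above often rests on a simple forensic primitive: a cryptographic hash, which changes completely if even a single byte of an image or document is altered. The article doesn't name a specific tool, so this is only a minimal illustrative sketch (with made-up sample bytes) of how an investigator might fingerprint a file to check whether a recirculated copy is byte-identical to a known original:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest identifying this exact byte sequence."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical example: bytes of an "original" image vs. a tampered copy.
original = b"\xff\xd8\xff\xe0...JPEG image data..."
edited = original + b"\x00"  # even one extra byte yields a different digest

print(fingerprint(original) == fingerprint(original))  # True: same bytes, same hash
print(fingerprint(original) == fingerprint(edited))    # False: any edit breaks the match
```

A matching hash proves two files are identical; a mismatch only shows they differ, which is why forensic analysts pair hashing with metadata inspection to work out *how* a file was changed.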
Rid points to Twitter superuser Matt Tait, a former information security specialist for the Government Communications Headquarters, Britain's top digital spy agency; a former technology expert at Google; and now an academic at the University of Texas at Austin. Online, he is known as @pwnallthethings, and he tweets about almost every major political or national security event for his more than 100,000 followers. He performs surgery on public information by laying out individual facts and footnotes and checking them—whether it's about the investigation into how the Russians meddled in the presidential election or the latest stolen documents published by WikiLeaks.
"With social media and some cable TV channels, it's really easy for fake or partisan narratives to get legs," Tait wrote in a private message on Twitter. "So when I notice them, I tend to find it helpful to go back through my notes or documents and recap on what we actually know about the issue." No, he's not your average Twitter user, but he is emblematic of a bounce-back reaction to the sheer amount of pure garbage online these days, Rid says.
For his part, Rid has introduced digital forensics into his curriculum and says he sees students actively engaging with leaked documents, searching for secrets hidden in the text. In March 2017, WikiLeaks released documents about the Central Intelligence Agency's hacking tools—the techniques the agency uses to access foreign digital devices, including web browsers, smartphone operating systems, cars, and even smart TVs—and Rid's students were immediately on the trail as part of a class activity. "They were gripped by the urgency of the moment," Rid says. "It's like they're playing detective in real time."
And while intelligence officials and government executives are publicly angry when journalists expose internal secrets, Rid says investigators and the media are often on the same side, working toward the same goal of uncovering the truth. "We could have a happy hour with all these people and we wouldn't disagree on many things," Rid says. "We're after the same thing: accuracy, objectivity, getting to the bottom of things."