Q+A

Zeynep Tufekci on tech's powers and perils for democracy

Technosociologist joins author and innovator Alec Ross for "Democracy Dialogues"

Image caption: Zeynep Tufekci

Editor's note: The Democracy Dialogues event has been rescheduled for April 18. Further details will be shared on the Hub.

Today's world may appear to be a golden era for free speech, in which anyone can broadcast live, unfiltered thoughts via social media, but Zeynep Tufekci warns that we need to reframe how we think about censorship and suppression in the digital age.

"The most effective forms of censorship today involve meddling with trust and attention, not muzzling speech itself," the technosociologist writes in a recent Wired article. "As a result, they don't look much like the old forms of censorship at all."

Tufekci, a computer programmer turned academic, has become known in recent years for her sharp examinations of the social impacts of technology, especially social media. For her 2017 book, Twitter and Tear Gas: The Power and Fragility of Networked Protest, she analyzed the strengths and weaknesses of using digital tools to mobilize large numbers of people, looking at protest movements around the world, including the Arab Spring, Occupy Wall Street, and the Zapatista uprising in Mexico.

Digital technologies in general, and social media in particular, are altering the fabric of human organization and communication by changing how we find one another, what we see, and what we don't see.
Zeynep Tufekci

Tufekci, who is also an associate professor at UNC-Chapel Hill and a regular New York Times columnist, will visit Johns Hopkins University to talk about the complex intersection of technology and democracy. She joins another leading voice on these issues: former State Department official Alec Ross, an innovator, investor, and author of the 2016 bestseller The Industries of the Future.

The conversation, now set for April 18, was originally scheduled to be the first event in the Democracy Dialogues series, presented by the Stavros Niarchos Foundation Agora Institute and moderated by Johns Hopkins University President Ronald J. Daniels.

In advance of the event, Tufekci shared a few insights with the Hub via email.

What is the most encouraging evidence you've seen recently of technology's power to strengthen democracy—and vice versa, where have you seen dangerous or troubling impacts?

Digital technologies in general, and social media in particular, are altering the fabric of human organization and communication by changing how we find one another, what we see, and what we don't see. Rather than looking at it as good or bad, or as strengthening or weakening democracy, we should see it as a transformative event: just as cars weren't horseless carriages but a new form of transportation that brought about a striking and profound transformation in how we organize our cities, how we live, and how we travel, digital technologies are altering the way the public sphere operates.

One can certainly find things to celebrate, like when Egyptian dissidents use social media to find one another and people who'd otherwise find it very difficult to be heard can broadcast their concerns. But one can also find many things to worry deeply about, ranging from the spread of misinformation and hate speech to abuse and harassment online to even ethnic cleansing, as in the ongoing events in Myanmar, in which a UN report has implicated Facebook.

You've written that YouTube "may be one of the most powerful radicalizing instruments of the 21st century."

Perhaps the biggest issue at hand is the ad-financed business model online: right now the major platforms like Facebook and YouTube make money by keeping us engaged as long as possible and by micro-targeting us via the massive amounts of data they have collected on billions of people. In the case of YouTube, the recommendation algorithm, reportedly instituted in its current form in 2015 after employing the latest and shiniest AI research produced by Google, appears to have a tendency to surface and amplify hateful, racist, misogynistic, or conspiratorial content to people who did not search for such content but were just looking for a bit of entertainment, or a tutorial, or some random fact.

In my experience, the YouTube recommendation algorithm is an example of the real and immediate dangers of AI. Some people fear scenarios like AI escaping our control, and that may be something to consider in the long run, but the immediate threat is how AI is used, right now, by powerful corporations or states for profit, for social control, or for unethical goals. In the case of YouTube, the result appears to be a recommender algorithm that pushes people to more and more extreme content, a significant danger to peace around the world, since the spread of hate speech and misinformation can have tragic consequences.

How encouraged are you by YouTube's recent announcement that it will decrease recommendations of "borderline" content and misinformation?

It remains to be seen whether the recent announcement represents a fundamental enough shift in how YouTube operates. But I would say that, at the moment, real change would probably involve significant financial costs, because it would mean making the platform less "engaging" and also hiring large numbers of people around the world to intervene as necessary.