Faculty Spotlight

SIPA’s Tamar Mitts Discusses New Minerva Grant to Research Online Extremism

By Katherine Noel
Posted Aug 28 2024
Tamar Mitts


Tamar Mitts is an assistant professor of international and public affairs and a faculty member at the Saltzman Institute of War and Peace Studies, the Institute of Global Politics, and the Data Science Institute at Columbia University. Her research addresses emerging challenges at the intersection of technology, conflict, and security, including how militant and hate groups use digital platforms for mobilization and recruitment.

Earlier this month, Mitts and a team of researchers from Princeton University’s Empirical Studies of Conflict Project (ESOC) and Cardiff University in the UK were awarded a multi-year grant from the Department of Defense’s Minerva Research Initiative to study the evolution of foreign state information operations and their impact on political decision-making.

In this Q&A, Mitts discusses the role of social media in spreading mis- and disinformation, the threat AI poses to manipulating public opinion in elections, and her forthcoming book, Safe Havens for Hate: The Challenge of Moderating Online Extremism (Princeton University Press). This interview has been condensed and edited for clarity.

The grant you received will fund a three-year interdisciplinary research program, which will include projects on several different topics: information campaigns launched during the Israel-Hamas and Russia-Ukraine wars, online election interference, and tech companies’ countermeasures to combat influence operations. Can you tell us a bit more about the project and what you’ll be focusing on in these areas?

Our objective is to map what we know about ongoing influence operations and how they are evolving over time. Some projects will focus on understanding specific campaigns – such as those launched during the war between Hamas and Israel in Gaza and during the war between Russia and Ukraine, where both sides have been engaging in information campaigns on many different platforms and also in the mainstream media. Our goal is to understand the tactics and methods used in these campaigns and track how they are changing over time.

The second major focus is on elections. With a record number of elections taking place in 2024 and extending into 2025, part of the project involves understanding how influence operations are unfolding in these contexts and descriptively characterizing them.

A third project will investigate the effects of online information campaigns on voters. This is one of the hardest things to do – it is often empirically difficult to know whether somebody was exposed to an influence operation and how that exposure affected their subsequent attitudes and behavior. We will be launching multiple data-collection efforts to analyze how campaigns are targeting voters in different countries undergoing elections, and will evaluate exposure to this content through a panel survey.
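A minimal sketch of the kind of analysis this enables, assuming a simple two-wave panel with hypothetical column names and numbers (none of this reflects the project’s actual data or design):

```python
import pandas as pd

# Hypothetical two-wave panel: the same respondents interviewed before and after a campaign period.
survey = pd.DataFrame({
    "respondent_id": [1, 1, 2, 2, 3, 3],
    "wave":          [1, 2, 1, 2, 1, 2],
    "attitude":      [0.2, 0.6, 0.5, 0.5, 0.1, 0.4],  # e.g. agreement with a campaign narrative
})

# Hypothetical exposure records: respondents known to have seen campaign content between waves.
exposure = pd.DataFrame({"respondent_id": [1, 3], "exposed": [1, 1]})

# Reshape to one row per respondent with wave-1 and wave-2 attitudes side by side.
wide = survey.pivot(index="respondent_id", columns="wave", values="attitude").reset_index()
wide.columns = ["respondent_id", "attitude_w1", "attitude_w2"]
wide["change"] = wide["attitude_w2"] - wide["attitude_w1"]

# Mark exposure and compare average attitude change across exposed and unexposed respondents.
wide = wide.merge(exposure, on="respondent_id", how="left").fillna({"exposed": 0})
print(wide.groupby("exposed")["change"].mean())
```

In practice, exposure is measured only indirectly and the comparison requires careful adjustment for who chooses to consume such content in the first place, which is part of what makes this empirically hard.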

The final part of the project will evaluate countermeasures. In response to the growing concern about influence operations, governments and tech companies have intensified efforts to counteract their impact. Drawing on our previous work in this area, we will study which countermeasures are effective in combating these operations and which are less effective.

You study radicalization and have done extensive research on extremist groups such as ISIS and why their propaganda campaigns have been effective. What makes an extremist group’s online propaganda or recruitment efforts successful? How are these groups able to evade content moderation?

The nature of influence campaigns and their targeted audiences obviously vary from case to case. But a growing body of research in this area, including my own work on the influence campaigns of the Islamic State, shows that certain themes or messaging strategies are particularly effective at attracting audiences. These strategies often tap into vulnerabilities, grievances, passions, or interests of the target audience – themes that consistently drive engagement. One of our objectives in this new project, in the context of the war in Ukraine for example, will be to assess when an information campaign is launched, who is exposed to it, and whether there is any evidence of its influence.

And we’ll use the same approach to study online campaigns during the war between Hamas and Israel, where both sides have been actively engaging in information campaigns. Our goal is to characterize the themes of these campaigns and identify which ones are getting the most attention, engagement, and response from the audience.

One interesting trend we’ve observed in recent years is the increasing use of multi-platform strategies in information operations. By disseminating information across various online spaces, these actors increase the potential impact of their campaigns and become more resistant to countermeasures.

In your forthcoming book you discuss how content moderation shapes the activity of harmful content producers. You analyze data on the activity of more than a hundred extremist organizations across social media platforms, and find that differences in moderation standards on these platforms create “safe havens” where these actors are able to launch campaigns and gain support. Which platforms are these groups most often utilizing, and which companies are more adept at detecting and removing extremist content?

One of the most interesting findings in the book relates to the nature of the online information environment and the interaction between the incentives of tech companies, governments, and extremist organizations (or other harmful content creators).

The varying standards for content moderation across platforms often result from uneven pressure that tech companies receive from governments. As I show in the book, many companies do not invest in content moderation until government regulation forces them to do so. However, governments often focus on specific large platforms when designing regulations to address harmful content. This uneven pressure leads companies with large user bases to moderate more harmful content, while smaller companies tend to moderate less.

Extremist organizations, for their part, aim to maximize the impact of their online campaigns while avoiding content moderation. The variation in moderation standards across platforms of different sizes is what allows them to become resilient to countermeasures.

In the book, I show how these dynamics operate in specific cases, drawing on rich empirical data. Major platforms such as Facebook, Instagram, Twitter (before it became X), YouTube, and even TikTok, for example, face significant pressure from governments and are therefore more proactive in removing extremist campaigns. In contrast, smaller platforms like Telegram, Gab, Rumble, and Parler receive less attention from governments. The book shows how groups exploit these varying standards to launch sophisticated and often successful information campaigns across different platforms.

How can new data tools and AI technology be used to more swiftly identify extremist propaganda?

AI amplifies the dynamics we’ve already seen: on one hand, it enables propaganda creators to produce more convincing content more quickly; on the other hand, it also provides tools to detect and remove such content faster. While we haven’t yet seen these tools deployed to great effect, they very well could become effective over time – this is something we need to study more. Although automated tools for detecting propaganda, such as hash-sharing databases, have been around for nearly a decade, the rise of generative AI highlights the increasing importance of exploring how automation can improve propaganda detection.

So one really big question is how we can leverage these technologies to identify both traditional and AI-generated propaganda. At the moment, I don’t have empirical data to share, as this is an area of active research that we are exploring. But the generative AI frontier is a major focus for many in this field, both in terms of understanding how state and non-state actors are using it to manipulate public opinion, and in developing countermeasures.
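For context on the hash-sharing databases mentioned above, the sketch below illustrates the basic idea only: platforms exchange fingerprints of previously flagged material rather than the material itself, and check new uploads against that shared list. The function names and example data are hypothetical, and real systems generally rely on perceptual hashes (so that lightly edited copies still match) rather than the exact cryptographic hash used here for simplicity.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Return a hexadecimal digest that can be shared in place of the content itself."""
    return hashlib.sha256(content).hexdigest()

# Hypothetical shared database of fingerprints of previously flagged material.
shared_hashes = {fingerprint(b"known propaganda video bytes")}

def is_known_propaganda(upload: bytes) -> bool:
    """Flag an upload if its fingerprint matches one already in the shared database."""
    return fingerprint(upload) in shared_hashes

print(is_known_propaganda(b"known propaganda video bytes"))  # True: exact match with a known item
print(is_known_propaganda(b"slightly edited video bytes"))   # False: exact hashing misses edited copies,
                                                             # which is why perceptual hashing is used in practice
```

One reason generative AI complicates this kind of detection is that freshly generated content has no prior fingerprint to match against.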

Turning to the use of generative AI to manipulate public opinion in the context of elections, are there particular trends you’ve noted in incidents of foreign actors using the tech to target elections?

The most successful instances of these tools being used in elections involve generating highly authentic-looking content that is then taken out of context to convey a message different from its original intent. How exactly are these tools being used? We’ve seen a lot of attempts, some of them more effective than others. One notable example is the deepfake video Russia created early in the war with Ukraine, which purportedly showed President Zelensky saying that Ukraine was not strong enough and needed to surrender to the Russian army. The video appeared quite convincing, but it was quickly debunked and removed. The deepfake frontier remains significant, especially on video-based platforms like YouTube and TikTok.

With the US election less than three months away, can you talk a bit about some of the threats that we're facing with foreign interference and AI, and how those threats have evolved since 2016?

I believe the main threat is that we seem to be taking this issue less seriously than before. Relative to past efforts, such as those during the 2020 election, tech companies have allocated fewer resources and placed less emphasis on content moderation.

As a result, we may see more people exposed to manipulated content. With advancements in technology making content manipulation more sophisticated than it was four years ago, we could be facing a surge of misinformation. This concern extends beyond the US elections and is relevant for other countries as well.