October 7, 2020

Misinformation
Lauren Walker / Truthout (CC BY-NC-ND 2.0)

While the rise of social media has increased connection and information sharing around the world, it has also created a breeding ground for political misinformation and disinformation. During long election cycles, this affects the public's assessment not only of political candidates and campaigns but of the truth itself. Indeed, it has proven to be a serious threat to global democracy.

While some online platforms, such as Twitter, have responded by banning political ads altogether, blanket content bans often hurt social advocacy organizations while merely prompting bad actors to change their advertising tactics. So what regulation of online ads should we advocate for?

A virtual panel discussion on September 29 welcomed analysts and activists to consider this question. Nathalie Maréchal, senior policy analyst at Ranking Digital Rights; Ann Ravel, former chair of the Federal Election Commission; Rebekah Tromble, director of the Institute for Data, Democracy, and Politics at George Washington University; and Jamal Watkins, VP of Civic Engagement for the NAACP, joined moderator Anya Schiffrin, director of SIPA's Technology, Media, and Communications specialization. The participants highlighted four practical solutions to curb the spread of false and inflammatory online political advertising.

Revive the Honest Ads Act

The bipartisan Honest Ads Act, introduced in October 2017 by Senators Amy Klobuchar (D-Minn.), Mark Warner (D-Va.), and John McCain (R-Ariz.), was the first congressional measure to address disinformation after the 2016 presidential election. While it didn't gain much traction, supporters remain committed to its goals: full disclosure of the funding and target audiences of online political ads; the maintenance of public files for political communications purchased for more than $500; and a ban on foreign actors purchasing ads that mention political candidates in the periods leading up to elections.

The Honest Ads Act is now being reintroduced with a number of amendments that address disinformation patterns that have become prevalent since 2017, such as expanding the regulated time periods it covers (since political messaging often begins a year or more before an election). It also broadens the definition of paid digital advertising to include messages tied to any spending on producing the ads, microtargeting them, or paying bots, algorithms, individuals, or groups to spread them.

Increase Transparency and Consistency of Ad Targeting Practices

During the panel, Schiffrin discussed her new report for the Roosevelt Institute, “Beyond Transparency: Regulating Online Political Advertising,” which cautions that blanket bans on microtargeting can inadvertently hurt small advocacy organizations. Instead, she argues, measures that make online ad targeting practices fully transparent can address the worst abuses of political ad technology while preserving the free speech that is critical to democracy. Specifically, Schiffrin says platforms should maintain a public file of all political communications purchased for more than $500. The files could include copies of the digital ads, a description of the targeted audience, funding sources, the number of views and revenue generated, the rates charged, and the buyer's contact information. And by permitting only certain types of verified targeting (such as lists of registered voters in a given district), tech companies can make their enforcement processes and outcomes more transparent, productive, and democratic.

Expand Federal Privacy Regulations

The Cambridge Analytica scandal exposed significant threats to democracy from the misuse of consumers' personal information. It made clear that the United States lacks the thoughtful, comprehensive federal privacy law needed to give users more control over how their information is collected and shared. Only with such regulation can social media users opt out of political advertising and of the data collection that enables it, putting the power to disrupt algorithms that amplify dangerous mis- and disinformation in users' hands. Of course, a privacy regulation is only as good as its enforcement, which is why data protection agencies should receive increased funding and resources.

Make High-Quality and Verified Information Free for All

Many of the harmful effects of disinformation could be averted if social media platforms published verified, publicly useful information in their news feeds to counteract bad actors' efforts. For example, Facebook and other networks could immediately serve verified, high-quality, informative ads from opposing campaigns to the same audience that has just received an inflammatory, false advertisement. The networks could also donate all revenue from political advertising to nonprofits and researchers focused on election integrity, or invest it in developing and improving their own election-integrity products that protect against false and inflammatory advertising.

The panelists concluded that online political ads should not be “cancelled” in and of themselves. Rather, we should look closely at their intentions and outcomes, while demanding the transparency needed to do just that, in order to regulate bad actors. As a self-proclaimed marketplace of ideas, the tech industry would do well to reflect and act on these emerging policy solutions.

— Christina Sewell MPA ’21

WATCH: REGULATORY FIXES FOR ONLINE POLITICAL ADVERTISING