News & Stories

Tackling Online Mis- and Disinformation: The Role of AI Startups

Posted Jul 28, 2022

SIPA News thanks Zachey Kliger MPA ’22, who submitted the following account of his recent Capstone workshop on Tackling Online Mis/Disinformation: The Role of AI Startups. His fellow team members were Hiba Bég, Juan Carlos Eyzaguirre, Tianyu Mao, Aditi Rukhaiyar, Kristen Saldarini, and Ojani Walthrust; their adviser was Anya Schiffrin.

The German Marshall Fund has since published a project summary and final report.

The COVID-19 pandemic and Russia’s invasion of Ukraine have both elevated the issue of online mis- and disinformation on the public policy agenda. According to a recent poll from the Pearson Institute, 95 percent of Americans think misinformation is a major problem.

While much attention has been paid to potential public-policy interventions to address disinformation, a niche private-sector market of artificial intelligence startups trying to fight online mis- and disinformation has emerged in recent years. As part of a Capstone project here at Columbia SIPA, I worked with a team of fellow students—under the guidance of our faculty adviser, Anya Schiffrin—to examine the current state of this market.

Our interviews with 20 companies in the space revealed a surprising development: There is less consumer demand for mis- and disinformation solutions than was expected just a few years ago.

In other words, Americans overwhelmingly acknowledge that misinformation is a problem, and say they want credible news that is free from false or misleading information. But it appears they are less eager to pay for tools that would help them be more discerning in their news consumption.

Information gathered by Crunchbase and confirmed in our interviews suggests there are a small number of well-capitalized startups in this space, and very few, if any, that are making money in the business-to-consumer (B2C) market. In search of reliable revenue streams, more than half of the companies we interviewed have moved primarily into the business-to-business (B2B) market. The firms we interviewed, including those that set out to sell their services to the general public, derive most of their revenue today from helping private companies track how they are being talked about online (what are known as “brand safety” services).

“We are yet to see a B2C scenario,” said Guyte McCord, COO of Graphika, a company that uses AI to create detailed maps of social media landscapes to discover how information flows within large networks. “There are consumer-facing applications, like fake news detection and news source ratings, but right now they are sold through B2B.”

Our interviews revealed two important reasons why AI startups fighting disinformation have largely pivoted away from selling their products directly to consumers. The first is a demand-side issue: Consumers are either unaware there is a problem, or don’t think they need to pay for a solution.

“Misinformation is a problem, but very few people think their source is the problem, and therefore don’t think they need a solution,” said Shouvik Banerjee, founder and CEO of Averpoint, a news app and browser extension that provides consumers with information about the news they consume.

Matt Skibinski, general manager at NewsGuard, a browser extension that rates websites’ credibility, shared a similar concern: “Most people who need NewsGuard don’t think they do. The real challenge is making the consumer see the benefit.”

In addition to the lack of consumer demand, a second reason many of the companies we spoke with no longer sell directly to consumers is the nature of their funding.

“The investment money that has fueled some of the space has created demands for short-term profitability which have diminished some of the value of the product,” Ben Decker, founder and CEO of Memetica, told us.

While it’s common for startup companies to have limited capital in their early years, nearly all of the companies we interviewed spoke of the unique challenge of raising money for these types of services from banks, angel investors, and venture capitalists.

“Initially, our goal was to release to the masses a free, downloadable application that anyone could use to capture images and verify their authenticity,” said Mounir Ibrahim, vice president of public affairs at Truepic, a photo and video verification platform. “But that was not really a sustainable business model. Banks needed frictionless ways to get the product into end users. So we pivoted from more consumer-facing to business-facing.”

Ibrahim added: “Most of our ‘societal’ work is done through grants, like the Bill & Melinda Gates Foundation, or heavily subsidized projects. We don’t profit from those at all. It’s the business clients that generate business.”

Despite the challenges of securing outside investment for B2C disinformation solutions, there are a handful of companies that still see a reliable B2C market developing. Karim Maassen, founder of Nwzer, a Netherlands-based, user-generated news agency whose algorithm enables readers to self-moderate, told us she is more interested in a B2C strategy in the long run: “Our short-term strategy is working with publishers. I think the B2C opportunity is more interesting in the long term. We want to connect all the interactions people have on any form of content they interact with, and combine that into one citizen journalism platform.”

As for the future of the market for AI solutions to disinformation, nearly all of our interviewees were adamant about the need for regulation. Jeff Koyen, founder and CEO of Pressland, suggested introducing legislation to prohibit algorithmically curated news on platforms. Mark Little, founder and CEO of Kinzen, threw his weight behind a public service broadcast model, one in which we have “shared facts and the incentives are pure.”

Some interviewees remain steadfast in their belief that consumers need to take greater responsibility. “Customers need to be more responsible and vigilant about their content consumption,” Tania Ajuha, founder and CEO of Nobias, said. “I do not see the big platforms changing.”

Danielle Deibler, cofounder and CEO of Marvelous AI, perhaps best captured the prevailing sentiment: “We see ourselves as part of an ecosystem. One group is not enough to fight misinformation. You need policy and regulation. You need the social media companies and journalists to not spread and propagate misinformation. And you need people to keep governments and media in check. We’re trying for a prophylactic against misinformation. We’re not ever going to cure the problem.”