Online disinformation has become all-pervasive during the COVID-19 pandemic. The volume of disinformation and the multitude of ways in which it spreads mean that no single solution can effectively curb it. This report builds on previous research (AI Startups and the Fight against Disinformation, Schiffrin and Goodman, 2019) by examining the current state of Artificial Intelligence (AI) solutions for fighting mis/disinformation, including the financial incentives undergirding the market for these solutions, the benefits and shortcomings of using this technology to limit the spread of harmful content online, and the latest innovations in the field.

AI will continue to play an important role in combating disinformation. But interviewees told the Capstone team that Big Tech is a tougher customer than expected, a sharp contrast to the hopes expressed by the firms interviewed in 2019. Moreover, the market for these technologies appears less lucrative than anticipated, with greater demand for brand management and consumer-facing products. Non-profits also operate in this field: some universities, for example, are developing public-interest tools to track and rein in disinformation, often with support from foundations.

There are no easy answers to governing human expression; only tradeoffs. The types of technologies that assess content at scale can be employed by authoritarian regimes seeking to suppress free expression. Thus, firms must be mindful of the environments in which they operate, especially volatile ones. AI technology, on its own, remains an insufficient tool for fighting disinformation, and manual content moderation, along with other forms of human intervention, continues to be necessary. Expanded internet regulations and other policy interventions, such as the Digital Services Act in the EU, may play an important role in setting global standards and providing business opportunities for European companies trying to identify potentially harmful mis/disinformation online.