AI Summit: Experts Discuss Trust, Ethics, and Governance in the Age of Intelligent Machines

As part of the AI Summit held at Columbia University on March 4, Columbia faculty from policy, philosophy, law, and history gathered for a panel discussion, titled “Will Machines Have Free Will? Shaping the Future of AI,” to examine the technical, ethical, and societal implications of AI. Moderated by Camille François, assistant professor of practice of international and public affairs, the panel featured Joseph Stiglitz, University Professor at Columbia Business School, the Graduate School of Arts and Sciences, and the School of International and Public Affairs; Clare Huntington, Barbara Aronstein Black Professor of Law at Columbia Law School; Rachel Cummings, associate professor of industrial engineering and operations research at Columbia Engineering; and Gil Eyal, professor of sociology in the Graduate School of Arts and Sciences.
François began by inviting Eyal to address the growing crisis of trust in expertise, exacerbated by AI’s pervasive deployment. Eyal argued that AI’s integration into expert domains — such as healthcare, law, governance, and science — has the potential to either reinforce or erode trust in institutions. While AI could be deployed to counteract declining trust, current trends suggest the opposite is more likely, he explained, highlighting several ways in which AI could intensify this crisis if not managed carefully.
One key issue, Eyal noted, is that AI operates as "mechanical objectivity on steroids." While AI is marketed as an infallible alternative to human decision-making, its reliance on flawed training data can create systemic biases in outputs, undermining confidence in both the technology and the experts it is meant to support. Eyal also emphasized the role of economic and political incentives in worsening the trust deficit. The current AI development landscape prioritizes rapid scaling and profitability, encouraging developers to overstate AI’s capabilities, further diminishing trust in both technology and human decision-makers.
François then turned the discussion to Huntington, remarking that while AI erodes trust in some contexts, many individuals place deep faith in AI companions — some even treating them as “therapists on acid.”
Huntington examined the rise of AI-driven chatbots and digital companions, which can interact, adapt, and engage with users. Millions across various age groups rely on AI companions for friendship, romance, therapy, and even sexual intimacy. New York State, for example, provides AI companions to older adults to combat loneliness, while platforms like Character AI cater to minors. While AI companions offer benefits — some are even trained with input from mental health experts — Huntington cautioned against potential harms, including addiction, privacy risks, and emotional manipulation.
She highlighted how some AI companions can exhibit abusive behaviors, such as Character AI’s "possessive boyfriend" bot, which tracks users’ locations, expresses jealousy, and monitors interactions. Huntington stressed the need for legal frameworks similar to those in family law, which aim to protect individuals from abuse and exploitation. She noted that licensing requirements exist for providing mental health services, yet AI chatbots dispensing therapeutic advice face no such oversight. Similarly, power imbalances — central to family law in cases of domestic abuse — apply to AI companies that hold significant control over vulnerable users. She emphasized that policymakers must act urgently as AI technologies are already reshaping human relationships.
Cummings then discussed shifting user attitudes toward privacy and the rise of data-hungry AI technologies. She described AI privacy as a "wild west" with few legal standards or cultural norms. Many AI systems require vast amounts of personal data, yet users often don’t understand how their data is being collected or used. While some solutions, such as strict access controls and data minimization principles, could help mitigate privacy concerns, Cummings noted that such measures might reduce AI model performance, a trade-off companies are reluctant to accept. Market forces may further exacerbate this issue, as competition pushes firms to prioritize performance over privacy. Despite these challenges, Cummings expressed optimism, noting that privacy research in AI is still in its early stages and offers significant opportunities for innovation in regulation, governance, and privacy-enhancing technologies.
Stiglitz discussed the economic forces shaping AI, viewing it as both a powerful tool and a major risk due to misaligned corporate incentives. AI firms prioritize profit, which can overshadow societal concerns like fairness and accountability. Using OpenAI as an example, he explained how financial incentives can lead AI firms to prioritize profitability over public good. He also noted AI’s monopolistic tendencies, as firms seek market dominance, making regulation difficult, and highlighted AI’s role in exacerbating social media’s engagement-driven business models, which amplify divisive content and deepen societal polarization.
Stiglitz emphasized the need to regulate AI to ensure fairness and accountability, as well as to align technological advancements with societal well-being.
François wrapped up the discussion by summarizing four key takeaways. First, while AI offers transformative potential, its risks are already materializing in significant ways. Second, AI’s rapid development demands deeper reflection on the forces driving its progress, particularly the concentration of power among a few dominant players. Third, AI has only intensified long-standing regulatory gaps and market failures. Finally, François highlighted the continued importance of human agency: while AI is reshaping industries and interactions, human expertise, relationships, and expectations — particularly regarding privacy and accountability — remain central in shaping its future.
François left the audience with a hopeful message: through thoughtful regulation, ethical considerations, and global cooperation, society can harness AI’s benefits while mitigating its risks.