AI in Government: A SIPA News Q&A with Political Anthropologist Eduardo Albrecht

In his new book, Political Automation: An Introduction to AI in Government and Its Impact on Citizens (Oxford University Press), Eduardo Albrecht, a SIPA professor and political anthropologist, examines how states deploy AI to make consequential decisions about their citizens. He proposes a revolutionary “Third House” of government—a virtual legislative chamber dedicated to AI oversight and governance.
Albrecht, who is also a senior fellow at the United Nations University’s Centre for Policy Research, draws on extensive fieldwork and ethnographic interviews with rights activists to document both the current state of algorithmic governance and emerging citizen responses.
In this SIPA News Q&A, Albrecht discusses the profound implications of AI-mediated governance, the risks of surrendering human judgment to automated systems, and his vision for preserving citizen agency through the creation of “digital citizens” who would represent citizens’ interests in an increasingly automated public sphere.
In your new book, you propose the revolutionary idea of a “Third House,” a chamber beyond the traditional houses of Congress, dedicated to legislating exclusively on AI in government decision-making. Why not use existing democratic institutions to govern AI?
Albrecht: These political institutions were developed before AI and before the data revolution, so they are not equipped to properly address how these political machines (AI-driven autonomous decision-making systems) operate in our society. In a way, they’re not operating at the same level, because representative institutions are quite slow. First of all, you have to vote for someone, that person has to go to Congress, and then that person has to deliberate along with other representatives. That pace of decision-making is so much slower than the pace of the AI tools used by government. So there’s a mismatch that requires this new institution.
What role, if any, would current institutions like Congress play?
Albrecht: Existing institutions would influence the shape and purpose of this potential Third House. At the end of the day, it’s a little bit like adding another layer to a sandwich. This additional layer would ensure democratic accountability for citizens. What makes this Third House different is that it would operate in a virtual space populated by our digital representatives—avatars that express our preferences and interact with millions of other citizen avatars. These digital citizens would serve as our emissaries, ensuring that AI systems deployed by governments remain accountable to the public. This model recognizes that the relationship between citizens and the state has fundamentally changed, with governments now having access to vast amounts of data about individuals. The Third House becomes a counterbalance, giving citizens a direct say in how their digital selves are governed.
If AI governance constantly nudges our behavior through algorithmic influence, at what point does persuasion become coercion? Where should the line be drawn?
Albrecht: Many of my colleagues argue that we have to stop AI’s nudging because it’s unethical or biased. They try to use existing institutions to regulate AI and make sure that it does not nudge. However, my position is quite different.
My position is that it is impossible to stop AI from nudging—nudging is exactly what AI does. There is no way we can write rules that will stop AI from trying to nudge our behavior one way or the other.
It’s a little bit like an avalanche—you can’t stop it by building a fence. So my argument is that in addition to using existing institutions to regulate AI, which we should do, we must also ride the avalanche. We must ride this wave so that it continues to do its job, which is—whether we like it or not—to nudge, but with the broadest possible citizen participation. That’s where the idea of the Third House of digital citizens comes in; it’s a way to ensure our participation in the inevitable nudging of behavior by AI systems.
You argue that legacy political institutions—elections, representative democracy—will become obsolete in an AI-driven world. Could these institutions evolve alongside AI, or are they fundamentally incompatible with the emerging reality?
Albrecht: I’m approaching this issue from an anthropological perspective, not wearing the hat of a political scientist looking at the next 10, 20, or 50 years. I’m trying to look at it wearing the hat of an anthropologist who looks at things in cycles of hundreds of years.
If we look at it from this anthropological perspective, with its longer cycles, it is possible that current representative democratic institutions, like parliaments, will one day become more ceremonial, similar to how the aristocracy has become largely ceremonial in many places in Europe. We still have a king in the UK, but it’s widely understood that the king’s role is not to make laws or govern everyday affairs. It is more of a ritualistic role, similar to that of the constitutional monarchy in Japan.
This will likely also happen to current democratic institutions. We might still have them, but they might play more or less ceremonial roles, or maybe they will make laws on certain things but not on others. Exactly how authority is divided between these different institutions will differ from place to place and will change over time. Some places might do away with representative democratic institutions entirely; other places might still give them a lot of power.
Some governments like China are embracing AI-driven governance, while others like the EU are emphasizing regulation and rights. Do you think a universal standard for AI governance is possible, or will we see further fragmentation of political systems?
Albrecht: There are competing frameworks emerging for AI. The United States is more of a “Wild West,” where innovation is less contained, while places like Europe are known for stronger regulatory frameworks. However, we have to be clear that most of these regulatory frameworks concern the use of AI in society by corporations.
I should emphasize that when we talk about AI governance, we’re usually talking about governing how Meta, Microsoft, or other AI companies operate. That’s government governing the private sector. My book, however, looks at a completely different topic. I’m examining not how governments are governing AI in the private space, but how governments themselves are using AI to govern—so instead of AI governance, it’s AI in governance.
There is a fundamental shift in the relationship between citizens and the state. And this shift is not so much because of AI itself—AI is just one part of this machinery. The shift is happening because so much data is now generated about citizens. Governments rely on all this information, which they simply did not have about their citizens before.
The way I like to describe it is: just like a driverless car requires information about its environment—sensors that can tell there’s a tree, there’s a bicycle, there’s a road—governments are the same. To have driverless government, you need to have information about the environment, including the people in it. Governments are now organizing this information and making decisions that impact the lives of real citizens by looking at their data. The AI tools are just ways to manage that information, but what’s changing is the relationship between the government and the citizen, because it’s now mediated by this new digital version of citizens. And this is happening all over the world.
You highlight policymakers’ struggle to understand AI’s decision-making process. Is this due to the policymakers’ capability issues, or more due to the inherent lack of transparency in how AI is used in governance?
Albrecht: It’s education, it’s transparency, but also a lot of confusion. Often regulators don’t really know where their governments are using AI. The use of AI tools is usually done at the bureaucratic or administrative level across many different bureaus, making it difficult to regulate this usage because it’s so widespread. A politician in government may simply not know how a local police agency is using AI—you can’t know everything.
The biggest challenge beyond education, accountability, transparency, and bias—which are all important—is the speed and ubiquity of AI systems. Remember the avalanche I talked about earlier? The avalanche is made of many, many snowballs in many different parts of government, making it hard to keep track of everything.
It’s impossible for one human being, like your representative from New York State, to understand where all these AI tools are being used and how they’re impacting citizens. That one politician you elect might be a genius or very educated, but will always be limited by being human, while these systems are just replicating and replicating.
My main argument is that we cannot have humans control something that is not human. We need to fight fire with fire. We need some kind of special weapon; otherwise, we as citizens will simply lose the balance of power between ourselves and the state.
You reference Fabian Hijlkema’s idea that AI will handle the “act” of politics while humans focus on its “art.” Could this separation create a more enlightened society?
Albrecht: I’m actually an optimist, and I hope that comes through in the book. I think that if we do this correctly, it will free up a lot of time for people to have meaningful conversations. Almost all of us now use AI daily, whether we’re writing emails or selecting what to watch on Netflix, and it makes our lives easier because it offloads a lot of the mental work.
I argue that the mental work of politics can, should, and will be offloaded to machines. There’s no reason we should be plowing fields with our hands if we have a tractor. Of course, some people enjoy that and can continue boutique farming in their garden, but that won’t be what the majority of people do.
The majority of us are going to use AI to enhance ourselves. If we do that in politics, what does it look like? That’s the theme of the book. I argue that if we’re going to use AI to enhance ourselves politically, then we’re going to need institutions to contain that.
What happens if the Third House runs independently and overrides human interests?
Albrecht: That’s where the role of what I call “digital citizens” is so important. The Third House is not empty—it contains an individual representative of each citizen. You would be in the Third House, not as a person but as a kind of avatar or persona.
The Third House is a virtual chamber that doesn’t exist anywhere physically—it’s in the cloud. Think of it like a video game such as Roblox or Second Life, where you have an avatar. Your avatar goes there to represent your interests. You drive that avatar; you can tell it what’s good and what’s bad, and then it interacts with millions of other avatars. Together, they decide what is right and what is wrong for the political machines that the Third House regulates.
You’ve mentioned concerns about AI creating new forms of social division. Could you expand on that?
Albrecht: The integration of AI into governance will likely create a new kind of class divide—not just based on economic factors, but on one’s relationship to AI systems. This division will emerge between those who understand and can influence AI and those who cannot. Politically, this is going to create more inequality between an upper class that understands AI, can direct it, and benefits from it, versus a lower class that is merely subject to AI’s decisions. This isn’t just about technical expertise—it’s about who retains agency in an AI-mediated world.
The traditional class struggles were primarily about economic power and access to resources. This new division is more insidious because it affects something more fundamental: how we think and make judgments. When governance is increasingly automated, those who understand these systems will have disproportionate influence, while others may simply receive the outputs of AI systems without understanding how decisions were reached or how to challenge them. This creates the possibility of a new kind of inequality, in which people can easily be led to believe certain narratives, including that their government is good and just, without really knowing where those impressions came from. Those impressions, of course, are being promoted by the AI systems and the people behind them.
What do you consider the most urgent risk as AI becomes more embedded in governance systems?
Albrecht: The introduction of AI in government, and indeed in everything else we do, is essentially the outsourcing of mental work, and the real risk is that we become mentally lazy. Just as automation in the Industrial Revolution made us physically lazy because machines made things for us, having machines think for us risks making us mentally lazy. By mentally lazy, I mean giving up judgment—giving up deciding what is right and what is wrong and just saying “let the AI decide.” By giving up judgment, we become very vulnerable to power, whether state power or corporate power.
Throughout history, governments have always been limited in how much they can force somebody to think a certain thing. But now with AI, governments have extra power because they can nudge, they can influence people to think a certain way. It’s making people less critical. The only way to counterbalance that is by using AI ourselves to amplify our thinking. If governments have this turbo on their power, we need to put a turbo on our power as citizens to find the balance again. And critically, this is something all people must have access to—not just technical experts or policy professionals, but everyone, by virtue of being citizens. This may sound radical, but it’s necessary: all people must be augmented via AI to counterbalance its use in governance.