
Is Artificial Intelligence Helping or Hurting the Poor?

The jury is still out on whether AI will accelerate development across the Global South or leave much of it behind. But Daniel Björkegren is among the world’s leading researchers seeking answers. He first took an interest in machine learning back in 2000 as an employee at Microsoft, coding apps for the company’s first smartphone. But it was during a summer volunteer trip to Los Angeles in 2002, working with homeless people, that he began grappling with the social impact of technology. He thought, If we make computers more capable, what does that mean for humanity? Now an assistant professor at SIPA, Björkegren is doing cutting-edge research to understand the potential of AI across the developing world.
The following conversation with SIPA Magazine has been condensed and edited for clarity.

Tell me how you got interested in applying economics to new technologies like artificial intelligence in parts of the world where data, resources, and infrastructure are scarce.
What was most impactful for me was working with development economists at Harvard and MIT. This is a field that uses math and data to better understand what policies can help the poor. But one perennial issue is that we don’t know much about the lives of the poor—they’re not interviewed in many surveys, and they don’t interact with many formal institutions.
Tell me about some of your early research work—for example, I know you did your dissertation research on mobile phones and network effects in Rwanda.
My work has been based on the observation that as information technology spreads, it documents—and changes—the lives of the poor. My thesis considered how we might learn about these changes from people’s interactions with mobile phones. I used a dataset of about 5.3 billion mobile phone calls placed in Rwanda to understand how phones grew from a tool used by the wealthy in cities to one used by the poor even in remote villages. Here the issue is that a phone is only useful if the people you want to call also have phones. When you start from having almost no phones in the country, there’s a chicken-and-egg problem. But if you can get one person to adopt, that may trigger others to adopt.
To understand adoption, you need to understand these linkages, which are called network effects. That’s also a central problem in social media and many of the other digital networks that increasingly define our lives. My thesis developed a method to disentangle what drove adoption of the mobile phone network. I found that simple policies could have further improved access. For example, a big concern in regulatory circles now is how to discipline digital networks. It’s easy to describe the advantages of competition, but networks can become less useful when they are split up. I developed a way to estimate these countervailing effects in the Rwandan mobile phone system and found that adding a competitor could have reduced prices and increased incentives to invest in rural towers. Overall, allowing competition could have increased welfare by the equivalent of 1 percent of GDP. That’s enormous—it demonstrates that good policy can have huge effects.
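To make the chicken-and-egg dynamic concrete, here is a minimal toy simulation in Python. It is not Björkegren’s structural model: the contact graph, the value of calling, the adoption rule, and the prices are all invented purely to illustrate how a price cut can tip a network from low to widespread adoption.

```python
# Toy illustration of network effects in phone adoption. This is not
# Björkegren's structural model: the contact graph, the value of calling,
# and the adoption rule are all invented purely to show the dynamic.
import numpy as np

rng = np.random.default_rng(0)
n = 500                                      # people
adj = rng.random((n, n)) < 0.02              # hypothetical social graph
adj = (adj | adj.T).astype(float)
np.fill_diagonal(adj, 0.0)
call_value = rng.uniform(0.0, 4.0, n)        # value of reaching one contact, per person

def adoption_share(price, n_seeds=50):
    """People adopt once the value of calling contacts who own phones exceeds the price."""
    adopted = np.zeros(n, dtype=bool)
    adopted[:n_seeds] = True                 # early adopters break the chicken-and-egg problem
    for _ in range(100):
        reachable = adj @ adopted            # how many of my contacts already own phones
        new = adopted | (call_value * reachable > price)
        if np.array_equal(new, adopted):
            break
        adopted = new
    return adopted.mean()

# A price cut (for example, from adding a competitor) can tip the network
# from almost no adoption to widespread adoption.
for price in (30.0, 10.0):
    print(f"price {price:4.0f}: adoption share {adoption_share(price):.2f}")
```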
How does AI factor into some of the similar research projects you’re working on now?
One that I’m excited about is a study of a chatbot for teachers in Sierra Leone made by an organization called Fab Data. It’s simple. Teachers already use WhatsApp to chat with their friends and family. So TheTeacher.AI links WhatsApp to GPT to let them chat with an AI. We find that they use it for planning lessons, creating content, and managing their classrooms. The questions they ask reveal that they are hungry for advice on better serving their students, advice that is hard to access in remote schools. We see teachers asking for advice on transitioning their discipline strategy away from corporal punishment. The chatbot offers other approaches and can respond to their specific needs. Once they start using it, the usage persists. Especially on small screens, it’s more natural to chat, and you can ask an AI things that you might be uncomfortable asking a person, who might be impatient or judge you.
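The plumbing behind such a tool is relatively light. The sketch below shows the general pattern of bridging a messaging app to a large language model; TheTeacher.AI’s actual implementation is not public, so the webhook payload fields, system prompt, model name, and the send_whatsapp_reply stub are hypothetical.

```python
# Generic sketch of a messaging-app-to-LLM bridge, in the spirit of TheTeacher.AI.
# The real system's design is not public: the payload shape, system prompt,
# model name, and send_whatsapp_reply stub here are all hypothetical.
from flask import Flask, request, jsonify
from openai import OpenAI

app = Flask(__name__)
llm = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a patient, practical assistant for teachers in low-resource schools. "
    "Help with lesson planning, content creation, and classroom management."
)

def send_whatsapp_reply(recipient: str, text: str) -> None:
    """Placeholder: in a real deployment, call a WhatsApp Business API provider here."""
    print(f"reply to {recipient}: {text[:80]}...")

@app.post("/webhook")
def incoming_message():
    payload = request.get_json(force=True)     # hypothetical inbound-message shape
    sender = payload.get("from", "unknown")
    question = payload.get("text", "")

    completion = llm.chat.completions.create(
        model="gpt-4o-mini",                   # any hosted chat model would work
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    send_whatsapp_reply(sender, completion.choices[0].message.content)
    return jsonify({"status": "received"})
```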
Last year you wrote an article in Foreign Affairs that called on institutions like the World Bank to do more “to build applications aimed at the poorest people.” What specifically should these institutions be doing to help the world’s poorest?
Tech firms are investing enormous resources in AI development, but those investments are targeted at the needs of wealthy countries. Many low-income countries have budgets that are too small to translate these innovations to local needs. So there is a need for crosscutting institutions to raise funds, research what works, and spread learning across places. I’m working with the World Bank on an AI for People initiative to improve the potential of these technologies to build human capital.
Tell me about the course you’re teaching at SIPA.
The idea is to get students to really understand machine learning from the ground up so they see how it works, how it can be useful, and how AI is likely to evolve over the coming years.
It’s a tough course: I ask students to write all the algorithms from scratch. That process forces them to grapple with the many decisions that go into developing these tools. That allows them as graduates to ask smart questions when they interact with data scientists or with tech firms. It’s easy to get tricked by fancy methods or buzzwords, but our students develop a deep conceptual understanding, allowing them to cut through the BS that is so common in these conversations. This foundation also allows them to understand when these tools may exacerbate—or reduce—biases, and how to develop systems better tuned to social needs.
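To give a sense of what “from scratch” means in practice, a typical exercise might look like the sketch below: logistic regression trained by gradient descent using only NumPy, on synthetic data. This is an illustration of the style of assignment, not the course’s actual coursework.

```python
# The kind of from-scratch exercise the course asks for: logistic regression
# trained by gradient descent using only NumPy (no scikit-learn). The data
# here is synthetic; the course's actual assignments may differ.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))                       # 200 examples, 3 features
true_w = np.array([1.5, -2.0, 0.5])
y = (X @ true_w + rng.normal(scale=0.5, size=200) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(3)
lr = 0.1
for step in range(1000):
    p = sigmoid(X @ w)                              # predicted probabilities
    grad = X.T @ (p - y) / len(y)                   # gradient of the log loss
    w -= lr * grad

accuracy = ((sigmoid(X @ w) > 0.5) == y).mean()
print(f"learned weights: {np.round(w, 2)}, training accuracy: {accuracy:.2f}")
```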
Students work on a final project, and it’s been exciting to see how they apply the technical concepts to the causes they care about. This year students applied machine learning to immigration court proceedings, US elections, and many other important policy topics.
As attention pivoted to AI last year, it quickly became clear that there were few people who deeply understood both the technology and its societal implications. We’re training graduates to have both—to help our societies prosper in this new age.