In this blog, Data2X unpacks key insights from Episode 6 of AI: Alternative Intelligence, a podcast from Data2X, featuring Dr. Savita Bailur and Isabelle Amazon-Brown. Listen to or watch the full conversation on Spotify or YouTube.
As the first season of the podcast unfolds, the series has moved steadily deeper into the ecosystem surrounding artificial intelligence in the social sector. Earlier conversations explored global development institutions, the technology industry, and community-driven approaches to AI. This episode turns toward the most intimate point in that chain: where technology meets the individual user. The discussion focuses on AI-driven chatbots—when they are useful, what new challenges they introduce, the opportunities they unlock, and crucially, when organizations should pause and consider whether an AI tool is truly serving people or potentially doing more harm than good.
The conversation features Dr. Savita Bailur, adjunct associate professor at Columbia SIPA and gender researcher, and Isabelle Amazon-Brown, an independent consultant who designs AI-powered chatbots for sexual and mental health contexts. Together, they explore how chatbots are being deployed in development and what equitable use of such tools requires.
AI Chatbots in the Development Sector
Chatbots have surged across development programs, particularly for providing information, guidance, or behavior-change support. As Amazon-Brown explains, most current services leverage commercial large language models or other AI systems integrated into purpose-built platforms. They are used for vaccine sensitization, sexual and reproductive health, maternal health, financial literacy, and psychosocial support. Some are public-facing tools; others serve as job aids for healthcare providers or teachers.
Chatbots themselves are not new—the sector has engaged with them since platforms like WhatsApp opened APIs in 2017—but generative AI has “supercharged” them. It enables personalization, richer responses, and a more conversational experience than rule-based systems. Their power lies in accessibility: anyone with a basic internet-enabled phone can engage. Behind the scenes, AI can also analyze conversation patterns and adapt content based on user behavior—a layer that often receives less attention yet holds enormous potential.
Challenges for Early Adopters
Despite growing enthusiasm, evidence about long-term impact remains limited. Much research is buried in organizations’ internal repositories, and randomized studies are only beginning to emerge. While early results are promising, especially for knowledge-building, both speakers note that generative AI adds unpredictability, making it more complex to measure sustained impact and prevent harm.
Bailur emphasizes that end users are far from homogeneous—chatbots may support farmers, patients, extension workers, or adolescents, and each context demands different design considerations. While AI can improve efficiency, it also creates dependency, risks becoming a “single source of truth,” and may diminish critical thinking.
Both guests stress that organizations should not feel compelled to adopt AI simply out of pressure or hype. In some cases, choosing not to use AI—or starting with non-AI chatbots—may be more responsible. Implementers must invest time, capacity, and ongoing maintenance, not just deployment. Ethical design, feedback loops, and clear pathways to human support remain essential. AI can help scale responses when demand far exceeds human capacity, but only when managed with intentional oversight.
Challenges for Women and Girls
For women and girls, inclusion remains a major challenge. As Amazon-Brown notes, many tools still do not reflect women’s lived experiences, resulting in mistrust. Gendered harms such as AI-enabled abuse or deepfakes intensify safety concerns. Even when access exists, social norms shape whether women can use voice-based tools privately and safely. If chatbots fail to account for these nuances, they risk widening digital divides rather than bridging them.
Looking Ahead
AI chatbots present real promise to connect people to information and support, but only when implemented with sensitivity to user context, gender dynamics, and ecosystem realities. As this field evolves, innovation must be paired with responsibility—and space to “lean back” when needed. The future of equitable AI depends not just on technological advances, but on thoughtful alignment with the needs, choices, and safety of the people these tools aim to serve.
