In this blog, Data2X unpacks key insights from Episode 1 of the AI: Alternative Intelligence podcast, featuring Dr. Emily Springer and Gratiana Fu. Listen to or watch the full conversation on Spotify or YouTube.

Artificial intelligence (AI) is transforming how we live, work, and make decisions — but whether it drives fairness or deepens inequality depends on the choices we make now.
That’s the focus of AI: Alternative Intelligence, a new podcast from Data2X that explores how we can build technology that works for everyone. In the debut episode, Responsible AI Consultant Dr. Emily Springer and Data Scientist Gratiana Fu — co-authors of Data2X’s research paper The Gender Data Foundations of Responsible and Equitable AI — unpack what “responsible AI” means and why gender data is the missing building block for fairness.
Rethinking What “Responsible AI” Means
Responsible AI isn’t just about algorithms—it’s about people. Too often, AI systems are designed in silos, led primarily by technical experts and overlooking those who understand people, social context, and power.
In this episode, Emily and Gratiana emphasize that developing responsible AI requires contributions from a broad set of disciplines.
- Social scientists reveal human behavior and power dynamics.
- Legal and policy experts shape governance frameworks and safeguard human rights.
- Civil society organizations elevate lived experiences, especially those of marginalized groups who are often excluded from data collection.
- Technologists translate these insights into ethical design, building tools that reflect human needs.
When these perspectives are missing, “default AI” risks replicating existing inequalities. Fairness, transparency, informed consent, and user safety don’t emerge automatically—they require intentional design choices and a diversity of perspectives. Without strong interdisciplinary collaboration, organizations that build and use AI risk prioritizing technological novelty over social good.
Why Better Data Builds Better AI Systems
As Gratiana puts it simply: “bad data equals bad AI.”
Across the global development field, many organizations still lack the baseline datasets needed for responsible analysis, let alone advanced AI. Investing in inclusive, well-documented datasets today means smarter, fairer AI tomorrow.
As they argue in the paper, gender data sits at the heart of this effort. It’s not just about collecting more information on women, girls, and marginalized groups — it’s about understanding how gender intersects with race, class, disability, and geography to paint a fuller picture of human reality. Inclusive data helps pinpoint gaps, shape better policies, and audit algorithms for bias — creating the foundation for AI that truly serves everyone.
A Call to Action: Building Responsible AI Requires All of Us
The episode closes with practical advice, making clear that responsible AI demands collective responsibility:
- Funders should invest not only in the technology but also in social science, community engagement, and interdisciplinary collaboration.
- Governments and civil society organizations should champion inclusive datasets and expand grassroots AI literacy.
- Technical experts need to bridge divides across disciplines—explaining their work clearly, listening to non-technical partners, and ensuring their innovations serve real-world needs.
As Emily reminds us, “We are so lucky to be alive at the moment humanity adopts AI. But with that comes great responsibility.”
The data choices we make today will shape the intelligence — and the equity — of tomorrow.
