De-Westernising AI Development: My Talk at London Product Design Week 2024

Joanna Brassett, CEO of Studio intO, shares insights from her talk at London Product Design Week 2024 on de-westernising AI development. Highlighting Studio intO’s Equitable AI Framework, she explores how integrating diverse cultural and linguistic perspectives into AI research can create more inclusive and impactful technology solutions.

Last week, I had the privilege of presenting at London Product Design Week 2024, during the UX Live Conference on the Google-sponsored AI stage. My talk, “De-Westernising the Development of AI: Ensuring Global Equity in AI UX Research,” was both a call to action and an invitation to shape a more inclusive digital future.

As the CEO and Founder of Studio intO, a global research agency, I’ve spent over 13 years working at the intersection of local insights and global strategies. Our mission has been clear from the start: to de-westernise innovation by making multi-regional collaboration accessible and impactful. This vision is particularly critical in AI development, where the acceleration of technology has the power to amplify both risks and opportunities on a global scale.

AI: A Double-Edged Sword

AI is transforming the way we live and work, but it often reflects the biases and values of dominant regions. The risks of this are profound:

1. Global Inequity: Excluding Billions from AI’s Benefits

Linguistic barriers significantly hinder equitable access to AI-powered tools. For instance, large language models like GPT-4 are predominantly trained on English and Mandarin, leaving speakers of less common languages with subpar functionality. This creates a cascade of challenges:

  • Economic Disparity: Businesses working in less-supported languages cannot leverage AI tools to the same extent as those working in dominant ones, deepening global inequality. Tools like Grammarly AI, which enhance productivity, are available only in English, so businesses in Poland, Italy, or Spain fall behind their English-speaking counterparts.
  • Language-Driven Exclusion: Many AI systems struggle with non-Latin scripts. ChatGPT, for example, often generates nonsensical outputs in languages such as Thai or Punjabi, limiting the utility of AI tools in these communities.
  • Feedback Loops of Inequity: AI systems rely on user feedback to improve. When that feedback comes predominantly from speakers of dominant languages, the systems become increasingly tailored to their needs while sidelining everyone else, widening the gap with each iteration (the toy sketch after this list illustrates the dynamic).
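
To make that loop concrete, here is a minimal, purely illustrative Python sketch. It is not code from the talk or from Studio intO: the two user groups, their feedback shares, and the single "quality" score are invented assumptions, used only to show how improvements allocated in proportion to feedback volume pull the groups further apart with every release.

```python
# Toy model of a feedback loop of inequity (illustrative assumptions only).
# Improvements are allocated in proportion to each group's share of feedback,
# so the group that is heard most improves fastest.

FEEDBACK_SHARE = {"dominant-language users": 0.95, "other-language users": 0.05}  # hypothetical shares
IMPROVEMENT_BUDGET = 10.0  # hypothetical quality points gained per release

quality = {group: 50.0 for group in FEEDBACK_SHARE}  # both groups start from the same baseline

for release in range(1, 6):
    for group, share in FEEDBACK_SHARE.items():
        quality[group] += IMPROVEMENT_BUDGET * share
    gap = quality["dominant-language users"] - quality["other-language users"]
    print(f"Release {release}: quality gap = {gap:.1f} points")
```

Real systems are vastly more complex, but the direction of travel is the same: whoever dominates the feedback data is served first and best.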

2. Perpetuated Cultural Bias: Narrow Worldviews in Global Systems

AI systems often reflect and amplify Western-centric perspectives. This occurs because training data disproportionately represents dominant cultures, leading to cultural biases embedded within these technologies.

  • Reinforcement of Stereotypes: AI-driven platforms like Google Translate default to male pronouns for professional roles and female pronouns for domestic ones, perpetuating traditional gender norms.
  • Cultural Erasure: Gendered languages, such as Polish, reveal further bias. For instance, translations often default to masculine forms, requiring women to spend additional time editing their work to accurately represent their identity. This creates inefficiencies and reinforces systemic disadvantages.
  • Limited Cultural Representation: Many AI systems fail to account for regional and cultural nuances, offering a one-size-fits-all approach that alienates users outside dominant cultural spheres.

3. Digital Ethnocide: The Risk of Losing Linguistic Diversity

Language is a cornerstone of culture and identity, yet AI development often prioritises standardised versions of dominant languages, marginalising minority languages and dialects.

  • Accelerated Language Extinction: With over 7,000 languages spoken globally, nine disappear each year—one every 40 days. Without deliberate intervention, over half could become extinct within a century.
  • Erasure of Regional Variants: For example, AI often defaults to Castilian Spanish when translating, disregarding Latin American variations. This not only marginalises regional dialects but also homogenises cultural expression.
  • Loss of Nuance in Research: At intO, we’ve seen firsthand how individuals can better articulate their needs and emotions when speaking in their native language. By ignoring linguistic diversity, AI risks erasing the depth and richness of global user perspectives.

Equitable AI Framework

At intO, we developed the Equitable AI Framework, a set of principles designed to address these inequities. It’s a tool to guide the research, development, and consumption of AI with equity at its core. One key principle, Diverse Participation, ensures that voices from varied linguistic and cultural backgrounds are included at every stage of AI UX research and design.

Why Language Matters

Language is not just a tool for communication: it shapes identity, culture, and cognition. Ignoring linguistic diversity risks erasing the rich tapestry of global perspectives. As noted above, AI often defaults to standardised varieties such as Castilian Spanish, sidelining the many regional forms of Spanish spoken across Latin America.

Without intervention, AI could exacerbate these inequalities, leaving billions of people behind. It’s our responsibility as UX leaders to ensure AI systems are designed with and for everyone.

How We Can Act

To build truly inclusive AI, we must ask:

  • Who is participating in design? Ensure diverse researchers, designers, and developers contribute to creating authentic, culturally specific solutions.
  • Who is providing the feedback? Develop AI-UX testing programmes built around multi-regional feedback loops.
  • What data is being used? Identify and mitigate biases in training datasets through specialised workshops (a minimal data-audit sketch follows this list).
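
As a small, hypothetical illustration of that last question, the sketch below (Python; not part of the Equitable AI Framework or the Ten-Point Checklist) counts how a sample of training records is spread across language tags. The records, field names, and language codes are invented for the example; the point is simply that a heavily skewed breakdown is an early warning sign worth surfacing before a model is trained.

```python
# Hypothetical data-audit sketch: how is a training corpus sample distributed across languages?
from collections import Counter

# Invented example records; in practice these would be drawn from the real dataset.
corpus_sample = [
    {"text": "an example sentence", "language": "en"},
    {"text": "another example sentence", "language": "en"},
    {"text": "yet another example sentence", "language": "en"},
    {"text": "przykładowe zdanie", "language": "pl"},
    {"text": "ประโยคตัวอย่าง", "language": "th"},
]

counts = Counter(record["language"] for record in corpus_sample)
total = sum(counts.values())

for language, count in counts.most_common():
    print(f"{language}: {count / total:.0%} of sampled records")
```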

Our Ten-Point Checklist for Equitable International UX Research, layered with the Equitable AI Framework, offers actionable steps to achieve this.

A Call to Action

Receiving the WIN Movers & Makers Award last month for the Equitable AI Framework was an honour, affirming the impact of our approach. It’s already helping partners like Google and YouTube design AI systems that are more inclusive and equitable.

But this is just the beginning. We must work together to build AI that reflects the diversity of our world. Let’s not turn a blind eye to the inequities in AI development—let’s confront them with actionable strategies.


If you’re as passionate about equitable AI as I am, let’s connect. Comment or DM me here and we can arrange a discussion. Together, we can create a future where AI works for ALL.

Connect with me on LinkedIn
