Making AI smarter, thanks to Indigenous knowledge

Philippe Lemonnier

AI has become the latest buzzword across sectors, from markets to research, private organisations to governments. It promises sweeping changes to our societies, communities, and lives, and it is already pervasive: it sharpens our smartphone photographs, answers ChatGPT requests for school homework or a letter at work, and drives recommendations in online shopping. The real game-changer, however, might be AGI (Artificial General Intelligence), where AI becomes as versatile and adaptable as humans. That prospect has sparked considerable discussion and concern.

As AI becomes more integrated into our daily lives, making significant decisions for organisations and communities, crucial questions arise. What happens when AI makes life-or-death decisions, such as choosing between the safety of a car's occupant and a pedestrian? Or when it addresses climate change by targeting the root cause of pollution: humans? Experts often downplay these fears. Stuart Russell, for instance, argues in "Human Compatible" that the focus should be on enabling AI to learn what humans value rather than programming it with fixed moral values. That reassurance, however, may not be entirely comforting.

A major concern is that most AI developers share similar backgrounds, cultures, and moral frameworks, which inevitably shape how AI is built and envisioned. In addition, AI training relies predominantly on internet content, produced mainly in developed countries and carrying a strong American and European influence. This raises a question: is AI poised to impose a homogenised worldview on all of us? Unlike historical conquests by force, this influence could subtly shape our decisions, nudging us toward a single, median understanding of the world.

Someone recently told me that "losing diversity means losing parts of the solutions to our problems". Humanity's rich diversity of cultures, values, and worldviews has always been a strength and a key to our ability to adapt. If these diverse perspectives are not integrated into AI development, we risk overlooking significant solutions. AI should therefore not merely reflect mainstream American or European values; it should incorporate diverse perspectives, including different cultural understandings of the relationship between humans and nature, of time, and more. Such diversity would broaden AI's horizon enough for everyone to benefit from it rather than suffer from it, instead of leaving it confined to the median knowledge of the internet, which often lacks wisdom and open-mindedness.

Efforts to include diverse perspectives in AI have begun. The Indigenous AI initiative in Hawaii, for instance, shows how Indigenous knowledge can inform AI development. But the industry needs to act with far more intentionality: empathetic, visionary leadership from organisations and supportive government policies are crucial. The case of Timnit Gebru, the AI ethics researcher whose departure from Google followed her warnings about bias in large language models, highlights how urgent inclusivity in AI has become. Ensuring that Indigenous and marginalised voices are part of AI development is imperative.

I asked ChatGPT about including Indigenous perspectives in AI: it acknowledged the benefits, yet also recognised the current lack of action by AI creators such as Sam Altman. Even the AI seems to understand the need for inclusivity and breadth in this field better than its creators do. The journey toward a more inclusive AI development process will be long, so the time to start is now.

Philippe Lemonnier is a Senior Learning and Organisational Development Advisor at Fisher & Paykel Healthcare, New Zealand.