Closing gaps in governance to protect Indigenous rights in AI and neuroscience

On April 21–23, 2026, the thirteenth session of the Expert Mechanism on the Right to Development (EMRTD) was held at United Nations Headquarters in New York, USA. The session marked the 40th anniversary of the United Nations Declaration on the Right to Development and included thematic discussions on contemporary challenges and cross-cutting themes for the operationalization of the right to development.

Representing the IBI Crosscultural Working Group, Dr. Melissa L. Perreault delivered a statement on existing gaps in governance to protect rights to development in AI, neurotechnologies, and neuroscience.

Read below for Dr. Perreault’s full statement:

Thank you for this invitation. To introduce myself, I am a citizen of the Métis Nation with historical ties to the Nipissing First Nation in Ontario, Canada. I am also a member of the Crosscultural Working Group of the International Brain Initiative. 

One of the objectives of this group is to work towards the meaningful involvement of Indigenous persons and communities in research and new technologies, specifically those related to the brain and mind. At a recent UN side event in Geneva, my colleagues spoke about progress across human rights frameworks in the context of Indigenous Peoples’ right to consent and data sovereignty.

Building on these efforts, today I speak to the existing gaps in governance to protect our rights to development in AI, neurotechnologies, and neuroscience as a whole. While speaking about these gaps in governance, we should keep in mind what many scholars now identify as technological colonialism. This refers to the extension of colonial power through data extraction, algorithmic design, and the imposition of external frameworks of knowledge, often without the consent, participation, or benefit of Indigenous Peoples.

AI governance in neuroscience has received little attention, yet this is perhaps the field with the greatest potential for harm to those most vulnerable. I offer the example of mental health and addiction. The prevalence of mental health and substance use disorders is disproportionately high in Indigenous communities around the world as a result of the transgenerational impacts of colonization and the ongoing impacts of colonialism, which involve inequality, exploitation, and cultural dissolution.

Although we have all heard of the overt dangers associated with AI chatbots or AI therapists, the impacts of AI on Indigenous communities may also be much more insidious, leading to the further entrenchment of colonial practices that may challenge, suppress, or even extinguish the core wholistic aspects of Indigenous well-being. And so even when tools are developed with authentic intentions, such as AI tools for therapeutic interventions, there is great potential for harm.

As we rapidly progress in the development of these tools for use in Indigenous communities, I offer a few key questions to consider. 

Who will define brain health? Resilience? Or wellness?

For us, brain health does not stand alone but exists in balance with other aspects of ourselves. One cannot treat one aspect without considering the being as a whole. How will this be considered?

What about neurodiversity? Or disability? Indigenous scholars today are calling for better integration of their lived experience into Euro-Western medical practices and theoretical models of disability. It is not simply abled or disabled; rather, it is the wholeness of existence, considering communities, nature, family, past and present.

And finally, how will AI interpret Indigenous experiences? Euro-Western diagnostic categories have historically misclassified Indigenous experiences and concepts of health, and so there is great risk that AI mental health tools will magnify these attitudes and ideologies. 

Importantly, the risk is not only in how data are used—but in what data do not exist. Many Indigenous cultures, languages, and understandings of health remain underrepresented or entirely absent from the datasets used to train AI systems. This absence is not benign. It renders Indigenous realities invisible to algorithmic systems or forces them to be approximated through non-Indigenous proxies.

In this way, absence itself becomes a form of harm—producing systems that either ignore Indigenous Peoples altogether or misrepresent them through externally imposed categories.

There are existing Indigenous governance frameworks that offer some guidance. For example, in Canada there are the First Nations principles of OCAP® (ownership, control, access, and possession), and of course, there is the United Nations Declaration on the Rights of Indigenous Peoples. However, we argue that in this era of AI and rapid technological advancement, adaptations to these governance models are required.

Our Crosscultural Working Group will begin the process of adapting existing governance models by using a Two-Eyed Seeing¹ approach, first developed by Mi'kmaw Elder Albert Marshall of eastern Canada. Two-Eyed Seeing requires cultural humility and allyship; it is a concept that integrates both Indigenous and Euro-Western ways of knowing and doing.

This approach is critical when considering AI tools because AI alignment, the process that determines how systems interpret values, norms, and desirable outcomes, is often overlooked. If Indigenous perspectives are absent from these processes, AI systems will inevitably be aligned with dominant cultural assumptions. This creates a structural risk: systems that appear neutral but are in fact calibrated to prioritize non-Indigenous ways of understanding the world.

In this sense, misalignment is not accidental. It is a predictable outcome.

Given this work and conclusions to date, I leave you with several considerations.

First, data governance must move beyond inclusion toward co-elaboration. Indigenous Peoples must co-define research questions, interpretative models, and indicators of development. 

Second, data sovereignty must include epistemic authority, namely the power to define meaning, not simply to authorize access. 

And finally, there should be a mechanism in place for each individual or community to challenge how their data are used, as well as how their experiences are defined and interpreted.  

To conclude, technological colonialism does not require intent. It emerges wherever systems extract data without governance, interpret without consent, and define without accountability. If the gaps I just described remain unaddressed, we risk enabling a future in which Indigenous Peoples are not only underrepresented in data but overdetermined by systems that do not understand them. The question before us is whether AI and neuroscience will advance in ways that respect Indigenous sovereignty, knowledge systems, and futures.

The implementation of the Durban Declaration and Programme of Action is more relevant than ever. We need to learn from the past 40 years to better guide the next 40 years of AI development, ensuring it promotes justice and equity for affected populations.

¹ Illes J*, Perreault ML*, Bassil K, Bjaalie JG, Taylor-Bragge RL, Chneiweiss H, Gregory TR, Kumar BN, Matshabane OP, Svalastog AL, Velarde MR. 2025. Two-Eyed Seeing and other Indigenous perspectives for neuroscience. Nature, 638: 58-68.