What Ifá, an Indigenous binary knowledge system, can teach us about AI


AI is presented as a disruptive innovation, particularly in the fields of decision-making and learning. However, as Uyiosa Omoregie argues, Indigenous knowledge systems, such as Ifá, provide an alternative decolonial lens through which to understand AI computation and its impact on society.

Published on the London School of Economics and Political Science (LSE) Impact blog

22 August 2025


Image: Contour map of Africa overlaid with an electronic circuit.

Artificial intelligence is often described as new, Western, and unprecedented. But what if its underlying logic (binary code, probabilistic reasoning, and decision-making models) has been around for centuries, hidden in plain sight within African knowledge systems?

Ifá, a traditional Yoruba divination system from West Africa, operates with an uncanny resemblance to modern-day AI. It uses binary code, an 8-bit structure, and a vast oral corpus to generate predictive insights. Despite this, it’s rarely acknowledged in the broader discourse around technology and computation.

Here’s how it works: a practitioner casts an instrument called the opele—eight palm nuts or cowries—each falling into one of two positions, open or closed. These generate an 8-bit sequence that corresponds to one of 256 possible outcomes, known as odù. Each odù maps to a specific collection of oral verses, making up a database of over 430,000 interpretive entries.

The result is a decision-support system: a coded output mapped to narrative-based data, interpreted by a trained practitioner to address human questions or dilemmas. It’s not mystical—it’s algorithmic. It functions with structure, precision, and probabilistic input. In essence, it’s an Indigenous version of a predictive model.
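To make the computational analogy concrete, here is a minimal sketch in Python of the lookup structure described above: eight binary casts folded into an index between 0 and 255, which addresses a collection of verses. The function names, ordering convention, and placeholder corpus are illustrative assumptions, not a representation of the actual oral tradition, and the final interpretive step remains a human one.

```python
# Minimal sketch of the binary lookup structure described above.
# All names, the cast-ordering convention, and the sample "verses" are
# hypothetical placeholders; the real corpus is an oral tradition of
# hundreds of thousands of verses held in a practitioner's memory.

from typing import List

OPEN, CLOSED = 0, 1  # each of the eight casts lands in one of two positions

def cast_to_index(casts: List[int]) -> int:
    """Fold an 8-cast sequence into an integer between 0 and 255,
    one of the 256 possible odù (ordering here is an assumed convention)."""
    if len(casts) != 8 or any(c not in (OPEN, CLOSED) for c in casts):
        raise ValueError("expected eight binary casts")
    index = 0
    for c in casts:
        index = (index << 1) | c  # classic binary positional encoding
    return index

# Hypothetical stand-in for the oral corpus: each odù index maps to a
# collection of verses that a practitioner would interpret in context.
verse_corpus = {i: [f"verse collection for odù {i}"] for i in range(256)}

def consult(casts: List[int]) -> List[str]:
    """Return the verses addressed by a cast; interpreting what they mean
    for the client remains a human, dialogic step outside this code."""
    return verse_corpus[cast_to_index(casts)]

if __name__ == "__main__":
    example_cast = [1, 0, 1, 1, 0, 0, 1, 0]  # eight open/closed positions
    print(cast_to_index(example_cast))       # 178
    print(consult(example_cast))
```

The sketch stops where the system's real work begins: as described above, the retrieved verses are material for dialogue and interpretation, not a verdict.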


Modern AI systems like ChatGPT or automated decision-making tools are often described as “black boxes.” Their logic is inaccessible, their output opaque. Yet Ifá too was once dismissed as mystical, simply because it wasn’t written down, peer-reviewed, or formalised in Western academic terms.

Clapperton Mavhunga describes this oversight as an “asymmetry of definitional power”: the ability of dominant cultures to determine what counts as science or technology. Because Ifá is mostly oral, spiritual, and community-based, it has often been excluded from narratives about innovation and computation.

But the computational logic is undeniable. Olu Longe has shown how Ifá uses binary representation, Boolean algebra, and memory addressing—all foundational aspects of computer science. Ron Eglash has even traced a conceptual lineage from African binary systems through Arab geomancy and European alchemy to the work of Gottfried Leibniz, the 17th-century mathematician often credited with formalising binary logic.

Understanding Ifá as a form of Indigenous algorithmic knowledge opens space for broader, more inclusive definitions of what intelligence and computation can be. It also invites a conversation about epistemic justice and the recognition of how knowledge systems outside the West have shaped, and continue to shape, digital technology.

This is the heart of decolonising knowledge: questioning who gets to define valid ways of knowing, reasoning, and computing. The marginalisation of Indigenous systems like Ifá is not just a historical oversight—it reflects a deeper imbalance in the global knowledge economy. Western systems of thought are often positioned as universal and scientific, while Indigenous knowledge is framed as cultural or spiritual, and therefore unscientific. This binary framing reinforces epistemic injustice.

AI ethics discussions often centre on bias, fairness, transparency, and accountability—essential principles, but frequently filtered through Western legal and philosophical frameworks. What gets lost is the possibility that non-Western knowledge systems might offer radically different ethical structures. For example, Ifá embeds community participation, interpretive plurality, and moral responsibility in its very process of decision-making. There is no algorithm without interpretation, no output without context. The model demands human wisdom, not blind automation.


When a babaláwo (priest) casts the opele or palm nuts, the resulting odù (the binary result) only opens a reference point: it doesn't dictate a decision. The client and community are drawn into dialogue about what the verses mean in their particular context. For example, the same odù might appear for a farmer worried about a harvest and for a parent worried about their child's future. The meaning emerges through conversation and alignment with the person's life circumstances.

This contrasts with automated AI, which produces outputs with no community mediation: algorithmic decision-making in policing, hiring, or credit scoring often produces harmful, decontextualised outcomes such as bias and discrimination. Ifá insists on social interpretation, embedding consultation and dialogue rather than replacing them. It demonstrates that an algorithm can never stand alone: the Ifá algorithm is incomplete without interpretation, dialogue, and moral responsibility. By design, Ifá integrates human wisdom into its computational process. It doesn't outsource human responsibility to the system. It deepens it.

This makes it a powerful counter-model for today’s AI, which often seeks to automate decision-making without context or accountability.

Recognising the Ifá system not only challenges Western assumptions about what counts as science—it also proposes a relational and narrative form of ethics. Rather than seeking singular, automated answers, Ifá encourages deliberation, dialogue, and context-sensitive reasoning. These are precisely the capacities that many argue today’s AI systems lack.

The inclusion of Indigenous epistemologies in AI discourse is not a call for token representation. It's a call for structural change in how we define intelligence, design technologies, and evaluate truth. Decolonising AI means centring voices, values, and systems that have long been excluded—not to replace modern AI, but to question its foundations and broaden its horizons.


Today, as concerns grow around algorithmic bias, AI explainability, and ethical governance, Indigenous systems like Ifá offer models of collective interpretation, transparency through narrative, and culturally grounded reasoning. They remind us that the future of AI must not only be about better models—it must also be about better frameworks for understanding knowledge.

Ifá is being digitised. Apps are emerging. Developers are exploring AI models trained on odù verses. This isn't a nostalgia project—it's a new hybrid future. And recognising the value of Ifá isn't just historical correction—it's a guide for rethinking the ethics, logic, and purpose of the digital age.

Artificial intelligence models have been described as cultural and social technologies, and a future is envisioned when “different perspectives, encoded in different large models, debate each other and potentially cross-fertilize to create hybrid perspectives.” This is an apt description of how an Indigenous cultural and social technology like Ifá could engage with modern large language models.



Image Credit: Adapted from Toluaye, A divination tray on which cowrie shells rests, as are used for Ifá divination, (Public domain), via Wikimedia Commons.


About the author

Uyiosa Omoregie

Uyiosa Omoregie is an online content analyst and researcher in AI history, with a focus on African knowledge systems. He is the co-author with Kirsti Ryall of Misinformation Matters: Online Content and Quality Analysis (Taylor & Francis, 2023), and co-founder/principal analyst of Avram Turing, a startup focused on digital epistemology and content analysis.