About Us

Avram Turing develops scalable approaches for detecting misinformation and embedding ethical reasoning into algorithmic design.

We believe the future of technology depends not only on smarter, more efficient systems but also on wiser ones. Our work helps digital platforms and policymakers align computational efficiency with moral and cultural responsibility.

Our approach helps social media platforms transition from engagement-based algorithmic ranking of content to a ranking method that prioritizes information quality. This would give users 'algorithmic choice': each user decides how content in their newsfeed is organized.
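
As a rough illustration of what algorithmic choice could look like, here is a minimal Python sketch; the field names, modes and weights are invented for this example and do not describe any platform's actual system:

    from dataclasses import dataclass

    @dataclass
    class Post:
        text: str
        engagement: float  # normalized engagement signal (clicks, shares)
        quality: float     # normalized information-quality score

    # Hypothetical ranking modes a user could choose between.
    RANKING_MODES = {
        "engagement": lambda post: post.engagement,
        "quality": lambda post: post.quality,
        "blended": lambda post: 0.5 * post.engagement + 0.5 * post.quality,
    }

    def rank_feed(posts, mode="quality"):
        # Order the same feed by whichever signal the user selected.
        return sorted(posts, key=RANKING_MODES[mode], reverse=True)

The point of the sketch is the single user-facing parameter: switching the mode from "engagement" to "quality" reorders the same feed without removing any content.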

We also advocate drawing on centuries-old ethical principles from African knowledge systems to inform AI design in general.

We were pleasantly surprised to find that our approach to online content rating and ranking is in the same spirit as the solution Dr Stephen Wolfram proposed in 2019 to the U.S. Senate Subcommittee Hearing on Communications, Technology, Innovation and the Internet.

This expert testimony validates our approach to online content ranking.

 

Our Work

We specialize in two intersecting areas of digital innovation:

  • Online Information Quality: Tools and frameworks to assess and elevate the factual integrity of algorithmically ranked content.

  • Ethical AI Design: Integrating principles from African knowledge systems—such as Ifá—to inspire more interpretive, context-sensitive forms of computation.

Together, these areas support the creation of technologies that promote clarity over confusion, wisdom over automation, and dialogue over noise.

 

Why It Matters

In a world governed by algorithms, information quality and ethics are inseparable.

Our approach equips organizations to design systems that understand not only what works but why it matters—building trust, accountability, and resilience into digital ecosystems.


Avram Turing is a non-partisan, research-driven organization of analysts who summarize, label and rate the information quality of online written content.

Our hypothesis-driven online intervention aims to add friction to low-quality information and to amplify high-quality online content.

We also help online content consumers engage with the World Wide Web using more analytical thinking. We can work with educators, schools and other institutions of learning, and we can provide advisory services on online content engagement to civil society, public communication platforms (tech companies) and governments.

 

Avram Turing is currently developing and promoting its seed-stage startup, ContentQual®: a consumer-facing, content-focused website, app and Web browser extension.

Through our research, in 2021 we coined two new concepts, “non-information” and “off-information”, and introduced them to the international research community and the information quality body of knowledge.

In short, we exist to help limit misinformation online and to improve the quality of online content engagement while respecting freedom of speech.


Analysts at Avram Turing are led by the two co-founders, with guidance from our advisor.


Message

Kirsti Ryall

Principal Researcher/Co-Founder

Increasingly, people believe one point of view over another instead of checking the facts when they encounter an article online. A simple rule we all need: read the whole piece before judging its content, and seek further information when necessary.

Avram Turing's ContentQual® uses an analytical content checklist that is descriptive: it does not endorse or condemn particular articles online. It upholds freedom of speech. The rating produced is objective, and readers are given a more informed basis on which to agree or disagree with content.

History is mutable: ‘facts’ from the past can be discounted, or broadened to include further research, by people many years later. Our analytical checklist system is not prescriptive: it is not a ruling demanding that readers accept or reject an online article we have rated. Nor is the checklist's scoring beyond reconsideration. It can and should be revisited: new information could reveal how truthful the stated ‘facts’ within an article are, and the scoring (and perhaps the label too) issued previously would be reconsidered. Based on feedback from the public, our system has been adjusted to reflect this. (A small illustrative sketch of this kind of revisable scoring follows this message.)

Apparent ‘truths’ and opinions change over longer periods of time. What was once viewed as general, normal and the consensus can change as we learn more and society progresses.

Kirsti 🙂
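
To make the checklist idea in Kirsti's message concrete, here is a minimal Python sketch; the criteria, answers and labels are hypothetical illustrations, not ContentQual®'s actual rubric:

    # Hypothetical descriptive criteria, invented for this example.
    CHECKLIST = [
        "Claims are attributed to identifiable sources",
        "The headline matches the body of the article",
        "Facts are clearly distinguished from opinion",
        "Key claims can be independently verified",
    ]

    def score_article(criteria_met):
        # Descriptive score: the fraction of checklist criteria met.
        # It informs readers; it does not endorse or condemn an article.
        return sum(criteria_met) / len(criteria_met)

    # Scoring is revisable by design: when new information emerges, the
    # answers are updated and the score is simply recomputed.
    initial_score = score_article([True, True, False, True])   # 0.75
    revised_score = score_article([True, True, False, False])  # 0.50

Because the score is a pure function of the checklist answers, reconsidering an article is simply a matter of re-running the checklist with updated answers, exactly as the message above describes.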

 

Uyi Omoregie

Principal Analyst/Co-Founder

Uyi is a member of the Association of Internet Researchers (AoIR) and the Canadian Association for Information Science (CAIS). He is a referee ('expert reviewer') for the Journal of Information Science (SAGE Publishing) and Misinformation Review (Harvard University). He is the co-author of the book Misinformation Matters: Online Content and Quality Analysis (Taylor & Francis, 2023).

Paul Miller, PhD

Advisor, Analytical Thinking Methods

Paul is currently an associate professor at Brandeis University. He has published papers in scholarly journals on the modeling of cognitive processes. He is also the author of the textbook An Introductory Course in Computational Neuroscience (MIT Press, 2018). He is a graduate of the University of Cambridge, UK (first class honours) and the University of Bristol, UK (PhD in theoretical physics). He has taught high-school students in Malawi, East Africa (working for Voluntary Service Overseas, UK).

 

Oju Olatunji

Research Analyst

Ojuolape Olatunji is a techie: an IT product designer and researcher. She is passionate about user interface/user experience (UI/UX) research and design. She has also worked in branding, advertising and as a social media influencer.

 

Muhammad Saiful Islam

Saif is an educator, researcher, translator, poet and literary critic. He holds an MA from the Department of English and Theatre Studies at the University of Guelph, Canada.


Guiding Philosophy for Our Approach

 

The work of Avram Turing is shaped by five guiding philosophical traditions. Together, they provide a rigorous foundation for analysing online content, algorithmic systems, and AI-driven information environments — while upholding freedom of expression, intellectual pluralism, and human dignity.

These philosophies operate at different layers: facts, perception, belief, ethics, and social responsibility.

 

1. Wittgensteinian: A commitment to facts, clarity, and meaningful claims

Our analytical approach is grounded in the early philosophy of Ludwig Wittgenstein, particularly the Tractatus Logico-Philosophicus (1922). Wittgenstein argued that the world is the totality of facts, not of things, and that meaningful statements must be logically structured and grounded in reality. Claims that cannot be verified, clarified, or meaningfully articulated should be treated with caution in public discourse.

This perspective underpins our emphasis on:

  • evidence-based claims

  • clarity and intelligibility

  • logical coherence

  • clear distinctions between facts, opinions, and speculation

In content analysis, this means prioritising what can be responsibly stated, evaluated, and understood — rather than amplifying ambiguity, emotional appeal, or engagement-driven distortion.

 

2. Kantian: How shared reality is formed — and fractured — in digital systems

While Wittgenstein focuses on what can meaningfully be said, Immanuel Kant explains how reality is experienced in the first place. In the Critique of Pure Reason, Kant argues that humans do not encounter the world “as it is,” but only as it appears through underlying mental structures that make experience possible, such as causality, possibility, unity, and necessity.

These structures are not learned beliefs or opinions; they are the conditions that make sense-making possible.

In digital environments, algorithmic systems increasingly influence these structures of appearance by shaping what users see, how information is ordered, and which patterns are reinforced. Over time, this can fragment shared reality, producing radically different interpretations of the same events across communities.

From a Kantian perspective, misinformation is not merely false content. It is often the result of divergent conditions of perception. Addressing it therefore requires attention to how platforms shape shared sense-making — not only what information they distribute.

 

3. Ubuntu: Ethical technology rooted in collective human flourishing

Ubuntu is an African ethical philosophy commonly expressed as “I am because we are.” It emphasises interdependence, mutual responsibility, and the idea that individual wellbeing is inseparable from the wellbeing of others.

This philosophy informs our commitment to:

  • human-centred technology

  • social responsibility in algorithmic design

  • collective outcomes over narrow optimisation

  • the inclusion of non-Western knowledge systems in AI ethics

Our engagement with Indigenous African knowledge systems — including the Ifá computational tradition — reflects the Ubuntu principle that wisdom is distributed across cultures, histories, and communities, and that ethical technology must draw from global intellectual heritage.

 

4. The Harm Principle: Protecting liberty while preventing real-world harm

Drawing on John Stuart Mill’s On Liberty (1859), we uphold the Harm Principle: the idea that speech or action should be restricted only to prevent harm to others. This principle provides a crucial safeguard against overreach, censorship, and authoritarian information control.

In practice, this means:

  • resisting content suppression based solely on disagreement or offence

  • distinguishing harmful impact from unpopular expression

  • prioritising proportional, evidence-based interventions

For information governance, the Harm Principle ensures that responses to misinformation respect freedom of expression while still addressing demonstrable risks to individuals, communities, and democratic institutions.

 

5. Ramseyan: Belief, uncertainty, and decision-making under imperfect information

Finally, our treatment of belief and truth is informed by Frank Ramsey's pragmatic philosophy and decision theory. We understand that while 'facts' are objective and somewhat universal, 'truth' and 'belief' are subjective (not necessarily universal). Ramsey recognised that human reasoning often operates under uncertainty, where beliefs guide action even in the absence of complete information. A real belief, or a truth accepted by an individual, is one that can lead to knowledge, decision or action. (Ramsey suggested that the level of belief a person has in any given piece of information is equal to their level of willingness to act on it.)
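
One common way to make that last suggestion precise (following the betting formulation associated with Ramsey's 'Truth and Probability'; the stake S and the notation here are our own illustration) is to identify a person's degree of belief p in a claim A with the price p·S they would pay for a bet that returns a stake S if A turns out to be true. At exactly that price, the act of betting breaks even in expectation:

    E[net gain] = p(S - pS) + (1 - p)(0 - pS) = pS - p^2 S - pS + p^2 S = 0

So the more strongly someone believes A, the higher the price at which they are still willing to act, which is precisely the link between belief and willingness to act described above.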

Why this framework matters

Together, these philosophies allow us to analyse digital information systems at multiple levels:

  • Facts (Wittgenstein)

  • Perception and shared reality (Kant)

  • Belief and decision-making (Ramsey)

  • Ethical limits (Mill)

  • Collective human impact (Ubuntu)

 

This layered approach enables more effective, humane, and intellectually rigorous responses to misinformation, algorithmic harm, and AI-driven information disorder — without sacrificing freedom of expression or cultural plurality.


The clearest expression of our approach and our work is our book Misinformation Matters: Online Content and Quality Analysis (Taylor & Francis).

Our book can be purchased at a discount from our publisher's website: Routledge.