How Kant Can Make Sense of Misinformation


The term misinformation inherently suggests a problem grounded in the quality of information itself. Uyiosa Omoregie and Kirsti Ryall argue that by drawing on Kantian notions of appearances and approaching misinformation as an issue of interpretation, policymakers and platforms can create the conditions for a better public sphere.

Six years after the January 6 Capitol riots in the United States of America, the events remain deeply contested: not merely in their political interpretation, but in their very meaning. For some, January 6 was an insurrection driven by misinformation; for others, it was a justified response to a stolen election; for others still, it was a misunderstood protest.

What is striking is not simply disagreement over facts, but the persistence of incompatible realities, sustained long after extensive investigations, court rulings, and fact-checking efforts. Online misinformation often appears to be a problem of content: false claims, misleading narratives, low-quality sources. This is true. But what this deficit model misses is the deeper problem: misinformation persists even when high-quality, verified information is available.

Even the most meticulously fact-checked content cannot reach people who do not perceive the world through the same basic structure of experience. As Stephanie Hare notes in Technology Is Not Neutral, one of today’s greatest metaphysical challenges is that many people “do not have a shared sense of reality.” Without that shared reality, platforms and policymakers are trying to fix, at the level of information, a problem that originates at the level of perception. To understand why this fragmentation is happening (and why conventional misinformation responses keep failing), we must turn to an unlikely guide: Immanuel Kant.

Misinformation matters

As we worked on our recent book on misinformation, we developed a structured, evidence-based model for evaluating online information quality, focusing on criteria including:

evidence-based claims
explainable ranking
user choice (“algorithmic choice”)
credibility signals rather than engagement signals

By focusing on the quality of information, we argued, social platforms could shift away from engagement-optimised systems and towards designs that elevate the quality and intelligibility of information. However, while continuing to examine real-world misinformation case studies following the publication of our book, we bumped up against a recurring puzzle: even when the informational environment improves, the interpretive environment does not. People exposed to the same content still come away with radically different understandings, depending on how they perceive reality. This revealed a missing piece, one that Kant helps us illuminate.
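To make the flavour of this model concrete, here is a minimal sketch, in Python, of ranking by credibility signals rather than engagement signals, with an explainable per-signal breakdown. The signal names, weights, and data structures are illustrative assumptions for this post, not the book’s actual implementation or any existing platform’s API.

```python
# Illustrative sketch only: hypothetical signal names and weights, not a real
# platform API or the model's actual implementation.

from dataclasses import dataclass

@dataclass
class Item:
    title: str
    engagement: float          # clicks/shares; recorded but never used in ranking
    cites_evidence: bool       # does the item link to verifiable sources?
    source_reliability: float  # 0..1, e.g. from an external credibility index
    issues_corrections: bool   # does the source have a record of correcting errors?

# Assumed weights for each credibility signal (illustrative, not empirical).
WEIGHTS = {"cites_evidence": 0.4, "source_reliability": 0.4, "issues_corrections": 0.2}

def credibility_score(item: Item) -> tuple[float, dict]:
    """Score an item and return a per-signal breakdown, so the ranking is explainable."""
    breakdown = {
        "cites_evidence": WEIGHTS["cites_evidence"] * float(item.cites_evidence),
        "source_reliability": WEIGHTS["source_reliability"] * item.source_reliability,
        "issues_corrections": WEIGHTS["issues_corrections"] * float(item.issues_corrections),
    }
    return sum(breakdown.values()), breakdown

def rank(feed: list[Item]) -> list[tuple[Item, float, dict]]:
    """Order a feed by credibility alone; engagement plays no part."""
    return sorted(((i, *credibility_score(i)) for i in feed),
                  key=lambda t: t[1], reverse=True)

feed = [
    Item("Viral rumour", engagement=0.95, cites_evidence=False,
         source_reliability=0.2, issues_corrections=False),
    Item("Sourced report", engagement=0.30, cites_evidence=True,
         source_reliability=0.9, issues_corrections=True),
]
for item, score, why in rank(feed):
    print(f"{item.title}: {score:.2f} {why}")
```

The design point is that engagement is recorded but never enters the ordering, and every ranking decision decomposes into named signals that a user could, in principle, inspect or re-weight, gesturing at the kind of “algorithmic choice” the model envisages.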

We do not experience the world “as it is,” but as it appears through mental structures

In the Critique of Pure Reason, Kant argues that humans never encounter the world directly. We only access the world as it appears to us, structured through mental categories such as:

causality
unity
plurality
possibility
necessity

These are not ideas we “learn.” They are the conditions that make experience possible in the first place. This insight is crucial for understanding modern misinformation.

If two people use different conceptual structures to interpret the same event, they inhabit different worlds of appearance. This is not simply an epistemological disagreement (“You believe X; I believe Y”). It is a metaphysical divergence (“You and I inhabit different constructions of reality”).

Digital environments amplify this divergence dramatically.

Even in the years since the “COVID-19 infodemic” (out of which our book developed), the way misinformation spreads online has not only become faster, making it harder to counteract in real time; it has also become more “realistic”. Even those of us who spend our time investigating how and why misinformation is disseminated online are fooled by artificial intelligence more often than we would like to admit. Put bluntly, AI’s capacity to fool us is becoming ever more sophisticated, and humanity is struggling to keep up, let alone counteract it with any real success. AI is creating different realities.

How digital platforms create different realities

The design choices of contemporary platforms (personalised feeds, opaque ranking systems, influencer-driven interpretation) shape the categories through which users make sense of the world. Personalised feeds construct personalised realities. Two people can scroll through the same platform and encounter entirely different representations of the world. As a result, their structures of appearance slowly drift apart.

Conspiracy worldviews replace basic categories like evidence and probability with intention and hidden design. For a conspiracy-inclined mind:

randomness becomes impossible
coincidence becomes evidence
complexity becomes orchestration

This is a metaphysical frame, not simply a mistaken belief.

The breakdown of shared public institutions erodes shared metaphysical baselines.

Without agreed-upon authorities for interpreting reality (public broadcasters, scientific institutions, independent journalism), individuals rely on communities that supply alternative categories of meaning. The result is a digital public sphere in which people no longer merely disagree about facts: they disagree about the kind of world in which they believe they live.

Why fact-checking and moderation often fail

Most misinformation strategies treat the problem as one of incorrect knowledge. But if the underlying issue is divergent structures of appearance, then content-level interventions are bound to fail. By the time a fact-check appears, the user’s metaphysical framework has already shaped the way the original claim was experienced. In Kantian terms: once the categories differ, shared truth is impossible. This reframes misinformation not as an informational failure but as a metaphysical fragmentation of the digital public sphere.

A metaphysical approach for policymakers and platforms

If misinformation emerges from fractured structures of appearance, then improving the digital information ecosystem requires addressing deeper elements of how shared reality is formed:

Rebuild shared conditions of sense-making.
Platforms and public institutions must strengthen the minimal conceptual scaffolding for interpreting events: evidence, probability, transparency, causality. These are not ideological positions but the preconditions for a shared world.

Misinformation regulation must become worldview-aware.
Legislation that focuses solely on removal or correction misses the deeper cognitive and cultural forces shaping online interpretation.

Fact-checking needs reframing.
Instead of just presenting “objective truth” to communities living in different epistemic worlds, analysts should target the assumptions behind interpretation.

Platforms must identify epistemic divergence as a structural harm.
Content personalisation amplifies the fragmentation of shared reality. Platform responsibility must extend beyond content labelling to understanding how algorithmic design reshapes cognition. (A toy sketch of how such divergence might be measured follows this list.)

Digital literacy must include meta-cognition.
Users need tools not just to spot falsehoods but to understand how they know what they think they know.
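As flagged under the fourth point above, here is a toy sketch of how a platform might begin to quantify epistemic divergence, by measuring how little two users’ personalised feeds overlap. The Jaccard measure below is an assumed, deliberately simple proxy, not an established audit standard.

```python
# Toy illustration: epistemic divergence as (1 - feed overlap). Jaccard
# similarity is an assumed, simple proxy, not an established audit standard.

def feed_overlap(feed_a: set[str], feed_b: set[str]) -> float:
    """Jaccard similarity of two feeds: 1.0 = identical, 0.0 = disjoint."""
    if not feed_a and not feed_b:
        return 1.0
    return len(feed_a & feed_b) / len(feed_a | feed_b)

def divergence(feed_a: set[str], feed_b: set[str]) -> float:
    """Higher divergence means less shared reality between the two feeds."""
    return 1.0 - feed_overlap(feed_a, feed_b)

# Two users on the "same" platform, shown almost entirely different worlds.
user_a = {"story-1", "story-2", "story-3", "story-4"}
user_b = {"story-4", "story-9", "story-10", "story-11"}

print(f"overlap:    {feed_overlap(user_a, user_b):.2f}")  # 0.14
print(f"divergence: {divergence(user_a, user_b):.2f}")    # 0.86
```

Tracked across many user pairs over time, rising average divergence would be a structural signal in its own right, independent of whether any individual item in either feed is true or false.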

The fight for a shared world

Revisiting our work through the lens of Kantian philosophy shows why misinformation is so entrenched. We are trying to repair a broken digital public sphere at the informational level, when the deeper fracture lies at the metaphysical level: in the loss of shared structures through which reality appears. The real challenge is not “How do we correct false claims?” but “How do we restore a common world of shared appearance in which truth is once again possible?” Until we rebuild the metaphysical basis for public sense-making, misinformation will continue to flourish: not because falsehood is persuasive, but because we no longer inhabit the same world long enough to recognise truth together.

link to original on the LSE Impact Blog