Take me to the moon (if you can, and if it really is the moon)
How to think clearly about conspiracy theories… without believing them or debunking them
The problem I didn’t realise I was solving
A few years ago I wrote an essay with a deliberately provocative title: 9/11, Apollo, COVID — Lies, Lies, Lies.
It came out of a very human experience. You notice an anomaly, pull on the thread, and suddenly you’re facing a much larger question than the one you started with. Not just what happened, but how am I supposed to know what happened?
If you’ve been through that process, you’ll recognise the feeling. It isn’t just intellectual; it’s visceral. Once the mismatch appears, you can’t unsee it. For a while the world feels oddly unstable, as if the ground under consensus reality has developed hairline cracks. The official story seems too neat, the alternative stories too wild, and you’re stuck in the middle wanting the one thing that seems unavailable: certainty.
That earlier essay was honest about the emotional reality of the journey. But it didn’t solve the deeper problem. It described what it feels like to be pulled into contested domains, not how to get back out again.
I didn’t realise that was what I needed until this morning.
What happened this morning
I spent several hours debating three ‘hot topics’ with an AI system: the Holocaust, the Rwandan genocide, and HIV/AIDS.
I’m not going to re-litigate any of them here. That’s not the point.
The point is that these topics sit in the highest-stakes category of public discourse: morally loaded, politically weaponised, surrounded by taboo, and defended with enormous institutional force. They’re also the kind of topics where AI systems have strong guardrails.
What struck me wasn’t simply that the AI refused certain framings. It was something more revealing.
The system could summarise official narratives, repeat consensus positions, and warn me about misinformation. But it couldn’t do the one thing I actually needed if I was going to stay sane in these domains: help me distinguish between what is solid, what is plausible but uncheckable, what is speculative, and what is simply unknown.
In other words, it couldn’t help me reach a sane stopping point.
Once I saw that, I realised this wasn’t just an AI limitation. It’s the same failure mode that derails most conspiracy debates. We keep trying to extract certainty from domains where certainty is structurally unavailable — and then we turn the discomfort into ideology.
Why I stepped sideways into Apollo
At that point I stepped sideways, deliberately, into Apollo.
Apollo is unusually well suited as neutral ground: prestigious, technically complex, deeply secretive, embedded in Cold War power, and backed by a vast public record that still contains conspicuous gaps. Crucially, it’s distant enough that I can analyse it without the emotional heat overwhelming clear thought.
If I could build a way of thinking that reached a stable stopping point on Apollo, I could probably generalise it to almost anything else.
Apollo became my laboratory.
The goal I discovered: certainty about uncertainty
Most conspiracy debates assume the goal is to reach a final verdict. True or false. Real or fake. Lie or fact.
But in domains involving states, war, intelligence, institutional power, or historical trauma, final truth may simply not be accessible. Archives degrade. Raw records disappear. Chains of custody go dark. Key decisions are never recorded in auditable form. Public narratives harden into something closer to civic religion.
In those conditions, demanding certainty is a category error. It pushes the mind toward blind trust on one side or infinite suspicion on the other.
Both feel comforting. Both are traps.
The goal I needed was different: epistemic closure — a stable understanding of what can be known, what cannot, and why. Not certainty about the event, but certainty about the shape of uncertainty.
The differentiator the AI didn’t have
The AI had plenty of knowledge. That wasn’t the issue.
What it lacked was differentiation. It couldn’t separate kinds of claims from kinds of support without collapsing everything into a binary fight. It couldn’t comfortably say that some claims rest on inspectable evidence, others on coherence with constraints, others on institutional authority, and still others on speculation — and that these differences matter.
When a topic is morally charged, people crave a decisive ending. If one can’t be had, they reach for a tribe or a posture instead.
I wanted a third option.
The model Apollo forced me to build
Working through Apollo forced me to build a model: not a complicated one, just a disciplined one.
First, I separated the claim-space into layers.
Apollo isn’t one claim; it’s a stack. At the base are baseline facts that are extremely hard to dispute: the US ran a major space program, Saturn V launched, Apollo was a massive Cold War operation. Above that are event claims, representation claims, capability claims, and strategic claims about why the story was shaped as it was. Then there’s the outer ring: anomaly claims — possible in principle, but easy to contaminate and hard to verify.
These layers are logically independent. You can have a real program with a managed public story. Humans could have landed even if footage was edited. Cold War deception can exist without a hoax.
Separating layers prevents everything from rising or falling together.
Second, I examined how belief in each claim is actually supported. Inspectable evidence, coherence with known constraints, institutional authority, narrative force — these supports differ fundamentally, and mixing them is what fuels endless argument.
Third, I looked for termination points: where knowing stops.
For each claim, I asked what independent verification would require, and whether that evidence exists in a clean, accessible form. Apollo makes this uncomfortably clear. Despite the huge record, there are hard boundaries: missing raw materials, broken chains of custody, archives that are simply gone. The public posture often feels more checkable than it really is.
That doesn’t prove fraud. It shows the record was never designed to be fully auditable.
Once you see that boundary, you can stop pushing past it.
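To see all three moves in one place, here is a minimal sketch in code, written in Python only because it is compact. It is an illustration, not a method: the layer names, the support types, and the stopping_point helper are labels I invented for this essay, nothing official.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Layer(Enum):
    """First move: the claim-space is a stack, not one claim."""
    BASELINE = auto()        # e.g. the US ran a major space program
    EVENT = auto()           # e.g. crews landed on these dates
    REPRESENTATION = auto()  # e.g. the footage shows what it claims to show
    CAPABILITY = auto()      # e.g. the hardware could do what was claimed
    STRATEGIC = auto()       # e.g. why the story was shaped as it was
    ANOMALY = auto()         # possible in principle, easy to contaminate

class Support(Enum):
    """Second move: how a belief is actually held up."""
    INSPECTABLE = auto()     # evidence anyone can examine directly
    COHERENCE = auto()       # fits known physical and logistical constraints
    AUTHORITY = auto()       # rests on institutional say-so
    SPECULATION = auto()     # narrative force only

@dataclass
class Claim:
    text: str
    layer: Layer
    support: Support
    independently_checkable: bool  # does clean, accessible evidence exist?

def stopping_point(claim: Claim) -> str:
    """Third move: a verdict about knowability, not about truth."""
    if claim.support is Support.INSPECTABLE and claim.independently_checkable:
        return "solid"
    if claim.support is Support.COHERENCE:
        return "fits the constraints, but cannot be independently checked"
    if claim.support is Support.AUTHORITY:
        return "rests on authority; stop here unless something new appears"
    return "speculation; acknowledge it and set it aside"

# A baseline fact rises or falls on its own, not with the rest of the stack.
saturn_v = Claim("Saturn V launched", Layer.BASELINE,
                 Support.INSPECTABLE, independently_checkable=True)
print(stopping_point(saturn_v))  # -> solid
```

Writing it down this way makes the first discipline mechanical: each claim carries its own layer and its own support, so nothing inherits a verdict from its neighbours.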
Silence is a signal, not a verdict
Apollo also taught me how to treat silence.
Silence can look like missing archives, erased tapes, institutional refusal to engage, or adversaries declining to challenge something they might be expected to attack. In personal disputes, silence often means ignorance. In state systems, silence is often functional.
With Apollo, Soviet silence matters. It cuts against claims of wholesale fabrication: the USSR had some ability to observe the missions and did not call them a hoax. But that observability was coarse. It constrains the space of possibilities without settling the details.
That’s the discipline: silence narrows the space of explanations; it does not select one.
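A toy sketch of that narrowing, with hypothesis labels invented purely to show the shape:

```python
# Toy sketch: a constraint prunes explanations; it never selects the winner.
# The hypothesis labels are illustrative, not an exhaustive taxonomy.
hypotheses = {
    "wholesale fabrication":             {"survives_soviet_silence": False},
    "real landings, managed story":      {"survives_soviet_silence": True},
    "real landings, transparent record": {"survives_soviet_silence": True},
}

# Soviet observability was real but coarse: it removes some explanations...
surviving = [name for name, traits in hypotheses.items()
             if traits["survives_soviet_silence"]]

# ...yet more than one survives, so the silence cannot settle the details.
print(surviving)
# -> ['real landings, managed story', 'real landings, transparent record']
```

That is all a silence can ever do here: shrink the list.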
Where Apollo finally lands
Once I applied this model, Apollo lost its mythic glow — not because it became small, but because it became sane.
The most conservative, defensible picture looks like this: a real, strategically vital Cold War program; a managed public story; and a bounded archive whose losses were partly accidental and partly convenient. On this picture the greatest epistemic risk is not fantasy but overconfidence: mistaking authority and coherence for fully inspectable proof.
What this picture leaves open is the deeper possibility space that people are really fighting about, even when they pretend they’re arguing about photographs. Under a conservative reading, Apollo still allows that the public narrative was a simplified interface to a more complex reality — and that some of what was learned was never meant to become part of public knowledge.
Concretely, it leaves open possibilities such as:
that Apollo had classified intelligence and military objectives beyond “beating the Soviets”;
that there were discoveries about the Moon’s environment or properties that were strategically sensitive;
that certain technical capabilities demonstrated (or implied) by Apollo were intentionally obscured;
that some mission outcomes were kept secret because they affected how decision-makers understood national vulnerability; and
at the outer edge, that there were observations suggesting the lunar domain was not as empty, inert, or uninteresting as the public story implies.
None of these possibilities automatically overturn the core achievement claim. They simply imply that the public-facing story was never the complete operational truth, which is exactly what you would expect of a Cold War state, even one telling the truth about the headline.
That is the residual possibility space. It is bounded, not infinite. And it does not need to be resolved in order to be acknowledged.
That isn’t debunking. It isn’t blind belief.
It’s epistemic closure.
Why this matters beyond Apollo
Once you see Apollo this way, you can’t unsee it.
Every major conspiracy debate contains the same traps: mixed layers, weak claims explaining strong ones, authority treated as proof, scepticism treated as pathology, and no stopping rule.
This way of thinking doesn’t tell you what to believe. It tells you how to avoid being forced into belief or disbelief.
You can say, calmly and honestly: this part is solid; this part fits but can’t be independently checked; this part relies on authority; this part is speculation; this is where I stop unless something genuinely new appears.
The real end state
Many people start this journey the way I did: they notice anomalies and want certainty. They assume the job is to decide what really happened.
But in the domains where so-called “conspiracy theories” thrive, the best you can often get is something else: certainty about where certainty ends.
Apollo was just the training ground.
Epistemic closure isn’t about settling scores.
It’s about reclaiming your mind from the pull of needing to know.