simulation theory and the power of waking up
what it means to be human in a world that feels increasingly generated
Simulation theory is something I think and write about often, but I rarely share these thoughts publicly because they never feel “finished” and they tend to spiral in non-linear ways. But lately, I’ve been preoccupied with the idea of what it means to live on autopilot and move through life unconsciously, repeating patterns without realizing it. What some might call being an NPC. The sense that many people (and at times, myself) are moving through life in a kind of non-player state. Unaware, reactive, and detached. And every time I try to explore that feeling, I end up circling back to sim theory. Not just as a metaphor, but as a way to think about agency, attention, consciousness, and the structure of reality itself.
So I figured I’d start sharing more of it. It’s not meant to be comprehensive or conclusive by any means—it’s just my first attempt at articulating the themes I find myself returning to most often: The Matrix as a cultural anchor, real-world simulation technologies, computational theory of mind, and math as both structure and beauty. There will be more to come as I keep developing this thread, but here’s where I’ll begin. Consider this the beginning of a longer conversation.
Before simulation theory had a name, it had a mood.
It was the feeling of déjà vu. Of watching a stranger repeat your words. Of scrolling through the same five takes dressed in different fonts. It was the slow realization that something about modern life felt pre-written. Predictable. Like choreography you didn’t agree to, but performed anyway.
And in 1999, that feeling got a visual language: The Matrix.
When people talk about The Matrix now, it’s usually as a cultural reference point—something vaguely sci-fi, vaguely political, and entirely overquoted. But when you strip away the noise and watch it on its own terms, what the movie offers is much more interesting. It isn’t just a story about technology or rebellion. It’s a study in simulated perception. And more specifically, in how people begin to notice that their reality has been constructed for them.
In the film, Neo is a computer hacker living a gray, repetitive life—working a corporate job by day, writing code and following breadcrumbs by night. He has a nagging sense that something isn’t right, but he can’t quite articulate it. Eventually, he’s contacted by Morpheus, a guide of sorts, who offers him a choice: take the blue pill, and return to your normal life, questions unresolved. Or take the red pill, and see the truth of what your world actually is.
Neo chooses the red pill.
The scene that follows is jarring. His perception begins to glitch. His body rejects the simulated input. And then he wakes up—literally—inside a pod, in a vast field of identical human bodies, all suspended in fluid and plugged into a neural interface. The life he thought was real turns out to have been a simulation created by machines. Every sensation, every interaction, every choice he believed was his own was actually part of a program designed to keep him docile.
Once Neo escapes the pod and enters the real world, the story shifts. He joins a small group of humans who’ve also broken free from the simulation. But what’s striking is that their goal isn’t just to destroy the system—they also need to learn how to re-enter it without being consumed by it again. They train their minds to see through the illusion. They learn to bend the simulated world to their will, not because they’ve escaped it permanently, but because they understand what it is.
That’s the part I keep coming back to.
The real message of The Matrix isn’t that we’re all trapped and need to fight our way out. It’s that most of us are walking through life asleep. We’re following scripts, responding to cues, completing sequences. The simulation in the film is literal, but it mirrors something psychological. A person can live for years doing what’s expected, saying what’s rewarded, and never stopping to ask whether any of it is truly chosen.
In that sense, the red pill is less of a plot device and more of a metaphor for awareness. Taking it means asking harder questions. It means feeling disoriented, even betrayed, by the realization that much of what you took for granted might not be real in the way you thought it was. And once you’ve seen that—really seen it—you can’t go back.
You’re forced to participate differently, even if you’re still inside the same system.
What’s powerful about that framework is that you don’t have to physically escape anything to experience it. You don’t need to overthrow a government or join a rebellion. You just have to start noticing. You have to start interrupting your own loops. Whether that means questioning a belief you inherited, breaking a habit that no longer serves you, or simply pausing before reacting, the moment you do it, something changes. You’re no longer just playing the role. You’re aware that it is a role. And from that place, you gain the ability to rewrite it.
That’s what the movie teaches, beneath the action and visual effects. Not that you’re in a simulation and should panic, but that you’re already inside a system that’s been shaping your choices—and you can either sleepwalk through it or wake up and engage with it on your own terms.
There’s another film that cracks open the simulation metaphor in a softer, but still revealing way: Wreck-It Ralph. Ralph is a video game character who’s literally coded to wreck things, to be the antagonist in someone else’s victory narrative. He’s not trying to escape the game entirely. He just doesn’t want to be defined solely by the function he was programmed to serve.
The brilliance of the film is that it doesn’t pretend he can break the system. He can’t. But he can glitch it. He can bend the logic. He can find a new purpose inside the same environment by subverting his role without abandoning the rules. And there’s something quietly profound in that: not transcending the code, but repurposing it. Not escaping the simulation, but reclaiming authorship from within.
So, what is simulation theory?
Long before it had a foothold in culture, the basic premise had been explored in sci-fi and philosophy for decades. But in 2003, philosopher Nick Bostrom gave it formal structure. In a now-famous paper, “Are You Living in a Computer Simulation?”, he argued that at least one of the following three statements must be true:
1. Civilizations never reach a level of technology capable of simulating reality.
2. Civilizations do reach that level—but lose interest in running such simulations.
3. Civilizations do run simulations—and we’re probably inside one.
The logic is clean, if a little uncomfortable. If it’s ever possible to simulate sentient beings, and if there’s any incentive to do so—scientific, historical, entertainment-based—then the number of simulations would vastly outnumber base realities. Statistically, that would make it far more likely we’re living inside one of those simulations than at the very top of the stack.
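Bostrom’s arithmetic can be made concrete with a toy count. The numbers below are entirely invented; the point is only that once simulations exist at all, simulated observers swamp the unsimulated ones:

```python
# Toy version of Bostrom's counting argument, with made-up numbers.
base_civilizations = 1          # hypothetical: one base reality
sims_per_civilization = 1000    # hypothetical: each runs 1,000 simulations
observers_per_world = 10**10    # hypothetical: ~10 billion observers per world

simulated = base_civilizations * sims_per_civilization * observers_per_world
unsimulated = base_civilizations * observers_per_world

# If you're a randomly chosen observer, the odds you're in a simulation:
p_simulated = simulated / (simulated + unsimulated)
print(f"Chance a random observer is simulated: {p_simulated:.4f}")  # ~0.9990
```

Even with one base reality and only a thousand simulations, the simulated observers outnumber the real ones a thousand to one. That ratio, not any claim about our technology today, is what carries the argument.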
It’s a framework that’s been picked apart, memed to death, and stretched far beyond what Bostrom originally intended. But it’s still useful. Because at its core, it raises a question that goes deeper than philosophy or technology:
If our world is simulated, would we know? And if we started to suspect it, what patterns would give it away?
This is where things get really interesting to me—not because I think we’re living inside a cartoonish video game, but because our universe behaves eerily like code.
From the Fibonacci sequence in sunflowers to the hexagonal storms on Saturn, from the spiral of a seashell to the web of a galaxy, the physical world is saturated with repeating structures. Mathematics doesn’t just describe reality—it seems to generate it. Whether you’re looking through a telescope or a microscope, you’ll find the same geometries echoed across scale. Symmetry. Fractals. Ratios. Logarithmic spirals. These aren’t just beautiful. They’re recursive. Self-similar. Optimized.
They are literal rules—laws applied across scale and space, like a rendering engine applying physics to everything it touches, with such consistency that it’s hard not to see them as part of a designed system.
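One way to see how a single rule can generate those natural spirals is Vogel’s model of phyllotaxis, a standard construction (the sketch below is mine): each new seed is rotated by the golden angle—the angle tied to the ratio that consecutive Fibonacci numbers converge toward—and pushed slightly outward. That one repeated rule produces the interlocking spirals of a sunflower head.

```python
import math

# The golden angle, ~137.5 degrees, derived from the golden ratio—the
# same ratio the Fibonacci sequence converges to.
GOLDEN_ANGLE = math.pi * (3 - math.sqrt(5))

def seed_positions(n):
    """Vogel's model: seed k sits at angle k * golden_angle, radius sqrt(k)."""
    points = []
    for k in range(n):
        theta = k * GOLDEN_ANGLE
        r = math.sqrt(k)
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# 500 seeds, placed by one rule, form the familiar sunflower pattern.
seeds = seed_positions(500)
```

Plot those points and the spirals appear on their own. Nothing in the code draws a spiral; it emerges from the repetition of the rule, which is exactly the quality the paragraph above is pointing at.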
At first glance, it sounds like science fiction. But it becomes harder to dismiss once you start paying attention to how we build and use models of the world ourselves. Climate simulations, economic models, weather forecasting, protein folding, high-speed physics—all of them rely on simulations. Not because we’re indulging in fantasy, but because it’s the only way to understand complexity at scale. We can’t control the full system, so we approximate it. We test how it behaves under different inputs. We let it run and see what emerges. And as computing power increases, our simulations become more accurate, more responsive, and more real.
That’s the part that starts to feel really strange. If we’re already building simulations that mirror reality, what makes us so sure we’re not already inside one?
Because we *do* already live inside simulations of our own making.
Not just metaphorically, but quite literally—through the tools we use to model everything from weather systems to pandemics. These aren’t artistic renderings or abstract guesses. They are probabilistic engines, built to mirror the complexity of reality so closely that we can test futures against them. When you see a hurricane projected to make landfall at 2:37 p.m. in four days, that’s not a psychic vision. It’s a simulation—fed by millions of data points, parsed through mathematical rules, and adjusted by real-time feedback.
We simulate because we have to. The systems we study—climate, biology, economics, geophysics—are too complex to observe in totality. So instead, we build representations. We craft mini-universes inside machines. We tune the inputs, run the clock forward, and see what unfolds.
And the better our simulations get, the more they begin to blur with reality. When a weather model predicts an atmospheric river days before it forms, and when that model is updated second-by-second based on satellite telemetry and ground-level sensors, it starts to feel less like a guess and more like an extension of the system itself. The boundary between map and territory gets fuzzy. We act on the model as if it were real—because increasingly, it is.
It wasn’t until I read Andrew Pontzen’s The Universe in a Box that I started to understand how real simulations work. The book gets into how these models are actually built, and how it’s not a matter of just mapping inputs and watching a visual appear. It takes staggering amounts of processing power to simulate even a few days of global weather, and the margin of error compounds quickly. Still, the models work—not because they perfectly replicate nature, but because the patterns they do capture are consistent enough to forecast the near future.
That matters because it tells us something important. You don’t need to simulate every atom of a system to get useful behavior. You just need to approximate it well enough at the scale that matters. You simplify the terrain, compress the variables, and run the system forward. What you get isn’t reality, but something close enough to act like it.
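A minimal sketch of that idea, using a one-dimensional heat-diffusion model I made up for illustration: a metal bar becomes eleven numbers, and “running the clock forward” is just repeated local averaging. No molecules, no quantum mechanics—just coarse cells and a rule.

```python
# A toy forward model: 1-D heat diffusion on a coarse grid.
def step(temps, alpha=0.25):
    """One time step: each interior cell drifts toward its neighbors."""
    new = temps[:]
    for i in range(1, len(temps) - 1):
        new[i] = temps[i] + alpha * (temps[i - 1] - 2 * temps[i] + temps[i + 1])
    return new

# A hot spot in the middle of a cold bar, run forward 50 steps.
state = [0.0] * 5 + [100.0] + [0.0] * 5
for _ in range(50):
    state = step(state)
# The spike has spread into a smooth bump: wrong in every microscopic
# detail, yet right about the behavior that matters at this scale.
```

The model ignores almost everything about real heat, and it still tells you where the warmth will be. That is the trade every working simulation makes.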
If we take that seriously, then we can start to imagine what our own simulation might look like. Because if this world is a simulation, it doesn’t have to be flawless. It only has to be convincing at the level we can perceive. The world doesn’t need to render the interior of a building you’ll never walk into. It doesn’t need to simulate the memories of people you’ll never meet. It just needs to give you enough data to maintain continuity. Maybe it drops resolution on the parts of your environment you’re not looking at. Maybe memory itself is sparsely stored, and only rebuilt when accessed—something we already suspect happens in the brain.
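That “render only what’s observed” idea is easy to sketch. Everything below is hypothetical—the class, the regions—but it shows the core trick: the world is just a seed plus a deterministic function, and a region’s detail is computed the first time someone looks, then cached so it stays consistent on every revisit.

```python
import hashlib

# A toy render-on-demand world: nothing exists until it's observed.
class LazyWorld:
    def __init__(self, seed):
        self.seed = seed
        self.rendered = {}  # only the regions someone has looked at

    def observe(self, region):
        if region not in self.rendered:
            # Deterministic generation: same seed + region => same detail,
            # so nothing needs to be stored in advance.
            digest = hashlib.sha256(f"{self.seed}:{region}".encode()).hexdigest()
            self.rendered[region] = digest[:8]
        return self.rendered[region]

world = LazyWorld(seed=42)
detail = world.observe("kitchen")
assert world.observe("kitchen") == detail  # continuity on revisit
assert len(world.rendered) == 1            # unvisited rooms were never built
```

Procedurally generated games already work this way, which is part of why the idea feels less like speculation than like engineering we recognize.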
None of this proves anything. But it adds weight to the feeling that experience doesn’t behave like something static. It behaves like something generated.
Which brings us back to the strange loop of simulation theory: if the tools we build to understand the world are simulations, and the behaviors of those simulations mirror the physical world so closely that they shape our decisions… then how do we draw the line between the simulation and the thing being simulated?
And what, exactly, is the original?
And that’s just the physical layer.
If the outside world follows programmable laws—symmetry, recurrence, scale invariance—what about the inside world? What if the mind, too, is running on something similar to code?