
Neural Networks, Physics, and the Path to Modifying Reality

A Theoretical Timeline and Challenges

Jhonatan Serna
June 15, 2025
12 min read

There is a premise I keep returning to: the universe is a structure that represents itself, and the human brain is the instrument through which it attempts to make sense of its own existence. If that sounds like philosophy dressed in a lab coat, it is, partially. But the interesting part is that several lines of hard research are now converging in ways that make this premise less metaphorical and more operational.

What follows is a theoretical timeline. It is speculative by nature, but each step is anchored in work that either already exists or is credibly underway. The question it tries to answer: what would it take for neural networks, artificial or biological, to not just model reality, but to modify it?

Step 1: The Scaling Era (2020–2030)

We are living inside this step right now. The dominant story of AI in this decade has been scaling: more parameters, more data, more compute. And to be fair, it has worked remarkably well. Large language models went from curiosities to infrastructure in under five years. The AI Species channel has been tracking this progression with unusual clarity, documenting how capability benchmarks keep falling faster than even optimistic forecasts predicted. The emerging framing is not just artificial general intelligence, but AGI as a service: cognitive labour abstracted into an API, available on demand, priced per token. If that sounds mundane, it is worth remembering that mundane is how transformative technologies actually arrive.

But the honest question is whether scaling alone gets us to something qualitatively different. METR's research on AI capabilities suggests we are approaching systems that can autonomously conduct multi-step research. Yann LeCun argues we are missing something fundamental about world models. Both positions might be correct simultaneously. The scaling era is necessary but probably not sufficient.

In his conversation with Lex Fridman, LeCun argued that autoregressive language models lack a world model. They predict tokens, not physics. His proposed alternative, the Joint Embedding Predictive Architecture (JEPA), aims to learn representations of how the world actually works rather than how language describes it. Whether JEPA specifically is the answer matters less than the underlying insight: we need architectures that model causality, not just correlation.

Meanwhile, the AI research community itself is divided on timelines. 80,000 Hours aggregates expert forecasts suggesting a median expectation for transformative AI around 2040–2050, while Katja Grace et al.'s survey of AI researchers found a 50% probability assigned to human-level AI by 2059. There is genuine uncertainty here, which is itself informative. As roon put it rather well: "We don't have the language yet for what's happening. We just keep being surprised in the same direction."

The challenge for this decade is not just to scale, but to figure out what the next paradigm actually looks like. Spiking neural networks, neuromorphic hardware, energy-based models: the candidates exist. What is missing is the conceptual breakthrough that ties them together.

Step 2: Mapping and Embodying the Brain (2024–2040)

This is where things get genuinely interesting, because this step is no longer hypothetical. It is happening now.

In October 2024, a team published the complete connectome of the adult Drosophila brain in Nature. 139,255 neurons, 54.5 million synaptic connections, fully mapped. This is the most complex complete brain wiring diagram ever produced. It is a fruit fly, not a human, but the methodological leap is enormous. We went from C. elegans (302 neurons, mapped in 1986) to this in roughly four decades. The curve is not linear.

But what happened next is what changes the narrative. EON Systems took that connectome and placed it inside a virtual body, in a simulated 3D environment. Sensory input flows into the neural network, propagates through the connectome, and produces motor output. A closed loop. The virtual fly behaves with roughly 95% accuracy compared to its biological counterpart.
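The loop described here (sense, propagate, act) can be caricatured in a few lines. What follows is a toy rate model on a small random sparse network, not the FlyWire connectome and not EON's actual pipeline; the network size, connectivity, dynamics, and stimulus are all illustrative assumptions, chosen only to show the shape of a closed sensorimotor loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a connectome: N neurons with sparse signed weights.
# (The real FlyWire connectome has 139,255 neurons; this is a caricature.)
N, N_SENSORY, N_MOTOR = 200, 20, 10
W = rng.normal(0, 1.0, (N, N)) * (rng.random((N, N)) < 0.05)  # ~5% connectivity

sensory = np.arange(N_SENSORY)        # neurons receiving external input
motor = np.arange(N - N_MOTOR, N)     # neurons read out as motor commands

def step(rates, stimulus, dt=0.1, tau=1.0):
    """One Euler step of a rate model: tau * dr/dt = -r + tanh(W r + input)."""
    drive = W @ rates
    drive[sensory] += stimulus
    return rates + dt / tau * (-rates + np.tanh(drive))

# Closed loop: stimulus flows in, propagates through the wiring, motor output falls out.
rates = np.zeros(N)
for t in range(100):
    stimulus = np.sin(0.1 * t) * np.ones(N_SENSORY)  # placeholder sensory signal
    rates = step(rates, stimulus)

motor_out = rates[motor]
print(motor_out.shape)  # (10,)
```

The point of the sketch is structural: once the wiring diagram is fixed, "behaviour" is just what falls out of running input through it, which is exactly the loop the virtual fly closes.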

Let that sit for a moment. We mapped a brain, gave it a body, and watched it behave. Not a simplified model. The actual wiring, running in simulation, producing recognisable behaviour. This is whole-brain emulation, achieved for the first time, in 2025.

The human brain, of course, is a different order of magnitude: approximately 86 billion neurons and 100 trillion synapses. But the pattern established by EON (map, embody, simulate, validate) is now a proven methodology, not a theoretical aspiration. The question has shifted from "can we do this?" to "how long until we scale it?"

Ray Kurzweil would point out that this is precisely the kind of exponential progression he has been predicting for decades. In The Singularity Is Nearer, he maintains that full brain emulation becomes feasible by the late 2030s, contingent on continued advances in scanning resolution and computational capacity. One can argue with his specific dates. It is harder to argue with the trajectory.

Step 2a: Dreams as Reality Simulators

There is a related thread worth pulling here. Every night, the human brain constructs immersive, multi-sensory environments from scratch, with no external input whatsoever. We call them dreams, and we tend to dismiss them as cognitive noise. But from an engineering perspective, dreaming is real-time reality generation running on approximately 20 watts.

If the brain can build convincing realities internally, then the machinery for reality construction already exists in biological neural networks. We do not need to invent it. We need to understand it, and eventually, to interface with it.

This is not as far-fetched as it sounds. Brain-computer interfaces are advancing rapidly, and the work on neural decoding (reconstructing images and even rough video from fMRI signals) suggests that reading the brain's internal simulations is an engineering problem, not a philosophical impossibility.
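A minimal caricature of how such decoders work: treat voxel activity as a noisy linear mixture of latent stimulus features, then invert the mixture with ridge regression. The data below is synthetic and the model is the simplest possible; real fMRI decoding pipelines are far more elaborate, but the core move (learn a map from brain signals back to stimulus space) is the same.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: voxels are noisy linear mixtures of latent image features.
n_trials, n_voxels, n_features = 500, 300, 10
true_map = rng.normal(size=(n_features, n_voxels))
features = rng.normal(size=(n_trials, n_features))  # latent codes of the "stimuli"
voxels = features @ true_map + 0.5 * rng.normal(size=(n_trials, n_voxels))

# Closed-form ridge regression: W = (X^T X + lam * I)^-1 X^T Y
lam = 10.0
X_train, Y_train = voxels[:400], features[:400]
W = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_voxels), X_train.T @ Y_train)

# Decode held-out trials and check how well the features are recovered.
pred = voxels[400:] @ W
r = np.corrcoef(pred.ravel(), features[400:].ravel())[0, 1]
print(f"held-out correlation: {r:.2f}")
```

Even this linear toy recovers the held-out features well, which is the engineering-problem framing in miniature: the internal signal is structured, so a learned inverse map can read it.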

Step 3: Quantum-Enhanced Computation (2035–2050)

Quantum computing enters the timeline not as a magic accelerator, but as a qualitatively different kind of information processing. The principles of superposition and entanglement allow quantum systems to explore solution spaces in ways that classical computers fundamentally cannot. For simulating quantum-mechanical phenomena, which includes, at the lowest level, all of physics, this is not just faster. It is the right tool for the job.

The practical challenges remain significant. Quantum decoherence, error correction, and the sheer difficulty of scaling qubit counts are all unsolved at production scale. But the trajectory here, too, is encouraging. IBM, Google, and several less visible players are making steady progress on fault-tolerant quantum architectures.

The intersection that matters for this timeline is quantum-enhanced neural networks: systems that combine the pattern-recognition strengths of neural architectures with the computational properties of quantum mechanics. This hybrid could enable simulations of physical reality at resolutions that are currently unthinkable. Not just modelling particle interactions, but modelling the substrate of reality itself.

Step 4: Consciousness as Interface (2045–2060)

This is where the hard problem lives. We can map neurons, simulate brains, and build quantum computers, but none of that explains why there is something it is like to be conscious. And yet, if we want to modify reality, not just simulate it, consciousness is almost certainly the key variable.

Here, psychedelics offer something that no other research tool currently provides: a reproducible method for radically altering the parameters of conscious experience. Psilocybin, LSD, and DMT do not just change what you perceive. They change the structure of perception itself. The boundaries between self and world become fluid. Time distorts. Novel geometric and spatial experiences emerge that have no correlate in ordinary waking life.

The Qualia Research Institute and researchers like Andrés Gómez Emilsson are attempting to build a rigorous science around these experiences. They are mapping the geometry of conscious states, developing mathematical frameworks for qualia, and treating subjective experience as data rather than noise. This is early-stage work, but it addresses a gap that mainstream neuroscience has largely avoided: what is the formal structure of experience, and can we manipulate it systematically?

The premise here is that consciousness is not epiphenomenal, not just along for the ride. If the brain is a reality-generating engine (as dreaming suggests), and if consciousness is the interface through which that reality is experienced and shaped, then understanding consciousness is not a philosophical luxury. It is an engineering requirement.

"The universe is not only queerer than we suppose, but queerer than we can suppose." — J.B.S. Haldane

Step 5: Modifying Reality (2060 and Beyond)

The final step is the most speculative, and I want to be honest about that. But it follows logically from the preceding steps. If we can map the brain completely, simulate it faithfully, enhance its computational substrate with quantum processing, and understand consciousness as a formal system, then we are describing a closed loop in which the universe, through us, gains the capacity to rewrite its own parameters.

What does "modifying reality" actually mean in practice? I am not entirely sure, and I think anyone who claims certainty here is selling something. But there is a useful thread that connects back to the dreaming work in Step 2a. If consciousness already generates reality simulations every night, indistinguishable from waking experience while they last, then "modifying reality" might not require rewriting physics. It might require making the dream substrate controllable, persistent, and shareable.

David Pearce and the paradise engineering tradition have been thinking about this for decades: the idea that subjective experience is not a fixed given but a design space. Pearce's work, alongside the Qualia Research Institute's formal models of consciousness, suggests that the quality and structure of experience can be deliberately engineered. Not metaphorically. Architecturally. If QRI's symmetry theory of valence is even partially correct, then states of consciousness have a geometry, and that geometry is modifiable.

Making dreams real. That is the simplest way to describe what Step 5 amounts to. Not escaping reality, but expanding what reality includes.

What I do think is that the feedback loop (universe models itself through brain, brain models itself through AI, AI models reality through physics, physics reveals the programmability of the substrate) is not circular reasoning. It is a spiral, and each pass brings us closer to the centre.

The Honest Caveats

None of this is guaranteed. The timelines could be wildly optimistic. Consciousness might resist formalisation entirely. Quantum computing might plateau. The ethical implications of reality modification, if it becomes possible, are staggering and largely unexamined.

But the direction of travel is clear. The fly brain is mapped. The virtual body works. AI capabilities are doubling on seven-month cycles. Psychedelic research is producing reproducible data on altered states of consciousness. These are not speculations. They are publications.

The question is not whether these threads will converge. It is whether we will be ready when they do.