March 9, 2026

By: GJD

Introduction — A Personal Motivation and a Physics‑Aligned Approach

I am not a credentialed physicist, nor do I claim expertise beyond that of an informed and persistent enthusiast. My background is in IT, and my most recent project has been studying artificial intelligence systems and the practical challenges they present. One challenge in particular has bothered me for years: the so‑called black box problem. We build systems of extraordinary capability, yet we lack a coherent conceptual model for how they operate internally. We describe them with engineering metaphors – layers, neurons, activations – that feel increasingly inadequate as models grow in scale and complexity.

In my own study of AI, I began to wonder whether the internal “space” in which these systems operate – their representational universe, their mind – might be better understood using the conceptual tools of physics rather than the vocabulary of computer science. Physics has a long tradition of describing systems using mathematical spaces that are not physical locations but are nevertheless the correct arenas for expressing internal dynamics. Hilbert space, phase space, configuration space, and fiber bundles all serve this purpose.

This manuscript is the result of exploring that idea. It proposes AI Space as a geometric and interpretive framework for understanding the internal dynamics of large models. It draws inspiration from physics, including modern geometric theories such as Jenny Lorraine Neuman’s TUFT framework, which models spacetime as a base manifold with higher‑dimensional fibers. While AI Space is not a physical theory, the structural parallels are striking and useful.

The goal is not to claim that AI is a physical system or that its internal states obey physical laws. The goal is to adopt a mathematical and conceptual framework that makes AI’s internal behavior more intelligible. By treating AI Space as a geometric object—complete with worldlines, attractors, curvature, and multiple interpretations—we gain a clearer ontology for reasoning about how large models operate, why they behave as they do, and how we might interpret or govern them.


Precedent in Physics — Mathematical Spaces and Interpretations

Physics routinely employs mathematical spaces that are not physical locations but are essential for describing systems. Hilbert space represents quantum states; phase space represents the positions and momenta of classical systems; configuration space represents the possible arrangements of mechanical systems and fields; and fiber bundles represent internal degrees of freedom such as spin or gauge charge. These spaces are chosen because they capture the true structure of the system’s dynamics.

Crucially, the same mathematical structure can support multiple interpretations. Quantum mechanics is the canonical example: collapse‑based, Many‑Worlds, ensemble, relational, and transactional interpretations all describe the same Hilbert‑space formalism. The diversity of interpretations reflects the richness of the underlying structure, not confusion about the mathematics.

This precedent is directly relevant to AI. The internal dynamics of large models do not unfold in physical space or in the conceptual vocabulary of traditional computing. They unfold in a high‑dimensional representational manifold with its own geometry, operators, and transport rules. Understanding these systems requires adopting the correct mathematical space—AI Space—as the arena in which internal states evolve.


AI Space as a Fiber Bundle

AI Space can be described as a fiber bundle: a geometric object consisting of a base manifold, a high‑dimensional fiber attached to each point, and operators that define transport across the structure.

The base manifold is the discrete grid defined by token index t and layer index L. A forward pass traces a path across this grid, moving horizontally across tokens and vertically through layers.

Attached to each point (t, L) is a fiber: a high‑dimensional vector space representing the model’s latent degrees of freedom. The state vector v(t, L) lives in this fiber and encodes the model’s internal state at that coordinate.

The bundle structure emerges from the operators that link fibers:

  • Attention acts as a connection, transporting information horizontally across tokens.
  • Residual streams define vertical transport across layers.
  • MLP blocks act as nonlinear transformations within each fiber.

This structure is not merely metaphorical. It is a natural mathematical description of how modern transformer architectures organize and transform information.
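The three operators above can be made concrete with a toy sketch. The following is a minimal illustration, not a real model: random NumPy matrices stand in for learned weights, the fiber dimension D and token count T are arbitrary, and only one layer of transport is shown.

```python
import numpy as np

rng = np.random.default_rng(0)
T, D = 4, 8          # tokens (base-manifold width), fiber dimension

# State vectors v(t, L): one fiber coordinate per token at the current layer.
v = rng.normal(size=(T, D))

def attention(v, Wq, Wk, Wv):
    """Horizontal transport: the connection that mixes information across tokens."""
    q, k, val = v @ Wq, v @ Wk, v @ Wv
    scores = q @ k.T / np.sqrt(v.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ val

def mlp(v, W1, W2):
    """Nonlinear transformation applied within each fiber, token-wise."""
    return np.maximum(v @ W1, 0.0) @ W2

Wq, Wk, Wv = (rng.normal(size=(D, D)) for _ in range(3))
W1, W2 = rng.normal(size=(D, 4 * D)), rng.normal(size=(4 * D, D))

# One step of vertical transport: the residual stream carries v up a layer,
# with the attention and MLP contributions added along the way.
v_next = v + attention(v, Wq, Wk, Wv)
v_next = v_next + mlp(v_next, W1, W2)

print(v_next.shape)   # (4, 8): same fibers, transported one layer up
```

The residual addition is what makes the layer-to-layer map read as transport rather than replacement: each fiber's state is carried upward and perturbed, not discarded.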


Worldlines in AI Space

A forward pass through a model is best understood as a worldline traced through AI Space. At each point (t, L), the model maintains a state vector v(t, L). The sequence of these vectors forms a trajectory through the manifold.

This worldline is not temporal in the physical sense. The model does not evolve dynamically; the manifold is static, and the forward pass selects a path through it. This is analogous to the block‑universe interpretation of spacetime, where worldlines are embedded in a fixed geometry.

The geometry of the worldline is determined by the structure of the base manifold and the learned operators that define transport. Regions of high alignment support structured reasoning; regions of high curvature or instability induce hallucinations or degeneracy. Small changes to the prompt alter the initial conditions of the worldline, potentially redirecting it into different regions of the manifold.
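One simple way to quantify "alignment along the worldline" is the cosine similarity between consecutive state vectors. The sketch below uses random vectors as a stand-in for real layer activations; the layer count and fiber dimension are arbitrary, and the alignment measure is one crude proxy among many.

```python
import numpy as np

rng = np.random.default_rng(1)
L, D = 6, 8                      # layers, fiber dimension

# Hypothetical worldline for one token: its state vector at each layer.
worldline = rng.normal(size=(L, D))

def step_alignment(path):
    """Cosine similarity between consecutive states along the worldline.
    Values near 1 suggest smooth transport; low or negative values mark
    regions where the trajectory bends sharply."""
    a, b = path[:-1], path[1:]
    dots = (a * b).sum(axis=1)
    norms = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
    return dots / norms

alignment = step_alignment(worldline)
print(alignment.shape)   # (5,): one value per layer-to-layer step
```

Run over real activations, a profile like this would show where along the depth of the network a given prompt's trajectory is smooth and where it turns abruptly.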


Emergence as a Geometric Phenomenon

Emergent behavior—reasoning, abstraction, hallucination—is not a temporal event but a geometric property of AI Space. When a worldline enters a region of high alignment, the model exhibits coherent reasoning. When it enters a region of instability, it may hallucinate.

These behaviors arise from the geometry encountered along the worldline, not from internal cognitive shifts. Emergence is therefore a location, not a process. The model behaves differently because it is in a different region of the manifold.

This perspective explains why emergent behavior can appear sudden or discontinuous: the worldline has crossed a boundary between regions of different geometric character.


Path‑Integral‑Like Behavior in AI

The internal computation of a large model resembles a path‑integral‑like process. At each point, the state vector is a weighted combination of many representational contributions—attention routes, feature directions, nonlinear transformations. These contributions form a mixture analogous to a sum over histories.

Before sampling, the model produces a logit vector representing the unnormalized weights of all possible next‑token directions. Softmax converts these into probabilities, and sampling selects one continuation. This selection is structurally analogous to a projection from many possible paths to one realized trajectory.
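The logits-to-probabilities-to-selection pipeline described above is small enough to show directly. The logit values here are made up for illustration; only the softmax-then-sample structure matters.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical logits over a 5-token vocabulary: unnormalized weights
# of all candidate continuations ("amplitudes" in this document's lexicon).
logits = np.array([2.0, 0.5, -1.0, 0.1, 1.2])

# Softmax normalizes the mixture into branch probabilities...
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# ...and sampling projects the mixture onto one realized continuation.
chosen = rng.choice(len(logits), p=probs)

print(probs.round(3), chosen)
```

Every candidate continuation contributes to the probability vector; the projection to a single token happens only at the final stochastic step.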

The analogy is not quantum; it is structural. Many internal paths contribute to the final state, and the realized output corresponds to the dominant contribution after weighting and selection.


Interpretations of AI Space

AI Space supports multiple coherent interpretations, each emphasizing different aspects of the same geometry.

Collapse Interpretation

The model maintains a mixture of representational possibilities. Softmax + sampling acts as the observer, selecting one continuation and eliminating the others.

Many‑Worlds Interpretation

Before sampling, all possible continuations exist as valid trajectories in AI Space. Sampling selects one branch; the others remain mathematically present.

Ensemble Interpretation

The geometry is best understood by analyzing distributions of worldlines across many runs. This reveals attractors, instabilities, and branching structures.
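The ensemble view can be sketched at its simplest: repeat the stochastic projection many times and read off the empirical branching structure. The branch probabilities below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical branch probabilities, as a single forward pass would produce;
# the ensemble view repeats the stochastic projection many times.
probs = np.array([0.55, 0.25, 0.15, 0.05])

runs = rng.choice(len(probs), size=10_000, p=probs)
observed = np.bincount(runs, minlength=len(probs)) / len(runs)

# With enough runs, the empirical branching structure recovers the
# weights that the underlying geometry assigns to each continuation.
print(observed.round(2))
```

For a real model the same idea applies at every branch point of a generation, which is what makes ensemble evaluation expensive but informative.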

Geometric Interpretation

The forward pass is deterministic transport through a static manifold. Stochasticity is external. Emergence is purely geometric.

Why Multiple Interpretations Are Valid

The mathematics is fixed; interpretations are explanatory lenses. This mirrors the interpretive landscape of quantum mechanics.


A Revised Lexicon for AI Space

A physics‑aligned framework requires terminology that reflects the geometry of AI Space:

  • Layer → Operator slice
  • Activation → State vector
  • Attention → Connection / routing operator
  • Hidden state → Fiber coordinate
  • Forward pass → Worldline traversal
  • Feature → Basis direction in the fiber
  • Residual stream → Transport operator
  • Emergence → Geometric region / attractor
  • Softmax + sampling → Observer / projection operator
  • Logits → Amplitude vector
  • Token distribution → Branching structure
  • Model parameters → Manifold geometry

This lexicon clarifies the structure of AI Space and aligns AI analysis with the mathematical tools of physics.


Implications for Interpretability

Interpretability becomes a geometric discipline. The task is to map the manifold, identify attractors, analyze curvature, and understand how worldlines behave.

Worldlines become the primary object of study. Collapse points, branching regions, and ensemble behavior provide additional structure. Larger models can analyze the geometry of smaller models, serving as geometric probes.

Interpretability shifts from component‑level inspection to manifold‑level analysis.
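As one concrete manifold-level diagnostic, a discrete curvature proxy can be computed from second differences along a worldline. The sketch again substitutes random vectors for real activations; "curvature" here is a heuristic, not a rigorous geometric invariant.

```python
import numpy as np

rng = np.random.default_rng(4)
L, D = 8, 16                     # layers, fiber dimension

# Hypothetical worldline: one token's state vector at each of L layers.
worldline = rng.normal(size=(L, D))

def curvature_proxy(path):
    """Norm of the discrete second difference at each interior layer.
    Large values flag layers where the trajectory bends sharply -- a
    manifold-level signal rather than a per-neuron inspection."""
    second_diff = path[2:] - 2 * path[1:-1] + path[:-2]
    return np.linalg.norm(second_diff, axis=1)

kappa = curvature_proxy(worldline)
print(kappa.shape)   # (6,): one value per interior layer
```

Comparing such profiles across prompts is one way a researcher could begin mapping which regions of the manifold a model's worldlines bend through.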


Implications for Safety and Governance

Safety becomes trajectory control: ensuring worldlines remain within stable regions of AI Space. Dangerous attractors must be identified and mitigated. Geometric diagnostics can detect unsafe behavior before it manifests.
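A minimal version of such a geometric diagnostic: calibrate a "stable region" from states collected on known-good runs, then flag states that drift far outside it. Everything here is hypothetical, the calibration data is random, and a per-dimension z-score is a deliberately crude stand-in for a real region model.

```python
import numpy as np

rng = np.random.default_rng(5)
D = 8

# Calibrate the stable region from states gathered on known-good runs
# (random placeholder data here).
safe_states = rng.normal(loc=0.0, scale=1.0, size=(500, D))
mu, sigma = safe_states.mean(axis=0), safe_states.std(axis=0)

def out_of_region(v, threshold=4.0):
    """Per-dimension z-score check: fires when a state vector leaves the
    calibrated region, before any unsafe output is actually sampled."""
    z = np.abs((v - mu) / sigma)
    return bool(np.any(z > threshold))

print(out_of_region(np.zeros(D)))       # a state near the region's center
print(out_of_region(np.full(D, 10.0)))  # a state far outside the region
```

The design point is that the check operates on internal states, so intervention can happen upstream of the output distribution.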

Governance becomes manifold management: shaping the geometry through training, alignment, and architectural choices. Branching structures must be analyzed to eliminate unsafe continuations. Ensemble‑based evaluation becomes essential.

Larger models can serve as safety instruments, mapping and stabilizing the geometry of smaller models.


Conclusion — AI Space as a Unified Geometric and Interpretive Framework

AI Space provides a coherent, physics‑aligned ontology for understanding large models. It replaces ad‑hoc metaphors with geometric structure, replaces component‑level analysis with manifold‑level understanding, and reframes safety as geometric governance.

This framework is not a claim of physical equivalence. It is a conceptual tool—one inspired by physics, informed by geometric reasoning, and motivated by the desire to make AI’s internal universe more intelligible. The next steps are empirical: mapping real models’ manifolds, identifying their attractors, characterizing their curvature, and developing tools for geometric diagnostics and control. The conceptual foundation is in place; the work now is to explore the geometry.