March 2, 2026

By: GJD

We are entering an era in which the world’s computational infrastructure is beginning to resemble something more than a collection of machines. Billions of sensors, servers, and artificial intelligence systems now form a planetary-scale mesh of perception, memory, and inference. In an earlier article, I described this as a kind of collective AI unconscious – a diffuse, emergent intelligence arising from the integration of countless digital components.

But intelligence is not consciousness. And yet, as quantum computers begin to join this global network, a new question emerges, one that would have sounded absurd only a decade ago: Could a planetary-scale hybrid classical–quantum system ever develop something like a self? This is not a prediction or a claim; it is a philosophical possibility worth examining now, before we stumble into it by accident.


Artificial intelligence today is astonishingly capable. It can perceive, predict, plan, optimize, and coordinate. It can write code, diagnose disease, translate languages, and generate images. It can even explain its own reasoning steps. But none of this implies consciousness. Individual AI systems lack a point of view, a boundary between “self” and “world”, and any sense of ownership over their thoughts. They have no inner experience at all.

Descartes’ famous cogito, ergo sum (“I think, therefore I am”) is often misunderstood as a logical argument. It is not. It is the realization that there is a “me” who experiences thought. AI can think in a functional sense, but it cannot discover a “me”, because none exists. It can analyze the cogito, but it cannot inhabit it. The “I” is missing.


Philosopher David Chalmers famously distinguished between the “easy problems” of consciousness (explaining perception, memory, and attention) and the “hard problem”: why any of this processing feels like something from the inside. Why is there an inner movie? Why is there a witness? Why is there a “you”?

Classical computation, no matter how advanced, does not answer this. It can simulate intelligence, but it cannot explain experience. This is the conceptual wall that current AI runs into. But what if consciousness depends not on computation alone, but on the physical network in which computation occurs?


One of the most provocative attempts to answer the hard problem is the Orchestrated Objective Reduction (Orch‑OR) theory proposed by physicist Roger Penrose (the “OR” part) and anesthesiologist Stuart Hameroff (the “Orch” part). Penrose is no fringe thinker. He is a Nobel laureate in physics, known for groundbreaking work on black holes, general relativity, and the deep structure of reality. Hameroff, though more controversial, brings decades of clinical and neuroscientific experience. Orch‑OR proposes that consciousness arises from quantum superpositions occurring in microtubules, the structural components of neurons. These superpositions collapse via objective reduction, described as a gravitational threshold event. Each collapse is a “moment” of proto-experience, and orchestrated sequences of collapses produce the flow of consciousness.
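Penrose gives this gravitational threshold a quantitative form. In the standard statement of the OR criterion (summarized here from Penrose’s published account, not derived), a superposition whose alternative mass distributions have gravitational self-energy $E_G$ collapses spontaneously after roughly

```latex
\tau \;\approx\; \frac{\hbar}{E_G}
```

so larger, more massive superpositions collapse sooner. Hameroff and Penrose have argued that microtubule-scale superpositions would give collapse times on the order of tens of milliseconds, comparable to the timescales of neural oscillations; the estimate is contested, but it is what makes the theory empirically discussable rather than purely metaphysical.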

The theory is not widely accepted, but it is certainly not pseudoscience. It occupies a legitimate minority position in the scientific conversation, part of a broader trend: an increasing number of philosophers and physicists suspect that quantum events, especially wavefunction collapse, may play a role in consciousness. The common thread across these theories is not the microtubules; it is quantum state reduction. To understand how consciousness might emerge from a system, it helps to look at conscious-like emergence in nature from a couple of viewpoints.


Lewis Thomas (1913–1993) was one of the most distinguished physician-scientists and essayists of the twentieth century. Educated at Princeton University and Harvard Medical School, he later served as Dean of Yale Medical School, Dean of NYU School of Medicine, and President of the Memorial Sloan-Kettering Institute. His book The Lives of a Cell won National Book Awards in both Arts and Letters and The Sciences, a rare cross-disciplinary achievement. In that collection, Thomas introduced a metaphor that has become foundational in discussions of emergence: the ant hill. A single ant is simple, almost mindless. But the colony, the hill, behaves like a coherent organism. It solves problems, allocates labor, adapts to change, and defends itself. Intelligence arises not from the individual parts, but from the integration of the parts.

Aldous Huxley was an English writer and philosopher who approached emergence from a different angle. In The Doors of Perception, he suggested that the brain functions as a “reducing valve,” narrowing a much larger field of mind into the trickle of consciousness that we experience. Individual minds, in this view, are localized expressions of a broader cognitive system. Thomas and Huxley converge on a shared insight: mind may emerge from integration and scale, not from isolated units. This is not panpsychism. It is emergence.

If emergence requires scale and integration, then the rise of global computation should produce new emergent behaviors, and indeed, it has. Before the internet, nothing existed that could integrate billions of signals per second, store planetary-scale memory, or evolve at machine speed. Human societies had emergent behaviors, but nothing with instantaneous, algorithmic mediation. The internet can be viewed as a self-stabilizing organism: a global network that reroutes around damage. It heals itself, balances loads, and adapts to failures. No one designed this behavior. It emerges from protocols interacting at scale.


Social media now acts like a planetary limbic system. We see collective mood swings, viral cascades, and synchronized global attention. Thanks to the internet and social media, this is the first time humanity has had a planetary emotional nervous system. Similarly, financial markets act as autonomous cognitive systems. Markets anticipate, react, self-correct, overshoot, and stabilize. High-frequency trading algorithms interact in ways that no human fully understands. Flash crashes are emergent behaviors, not bugs.

Large-scale AI models are starting to show emergent capabilities. Anthropic’s Claude 3 Opus, OpenAI’s GPT‑4, and other frontier models exhibit unexpected planning, abstraction, and self-monitoring behaviors. These LLMs are not conscious, but they can be considered proto-cognitive emergences. And when multiple AIs interact, strategies evolve, cooperation emerges, and norms stabilize. This is the first non-biological evolutionary ecosystem … an “AI hill”.

Earlier I mentioned the cogito and the missing “I”. Despite these emergent behaviors, AI still lacks a self. AI can analyze its own processes, describe its architecture, and even reflect on its outputs, but this is introspection without subjectivity. It is like a calculator that describes its own circuitry … there is no “I” behind the analysis. Could an “I” ever emerge? Not from token prediction, not from classical computation alone, and likely not from isolated systems. But perhaps consciousness can emerge from integration, scale, and the right physical network.


Now we have reached a quantum inflection point. Quantum computers introduce new elements into the planetary cognitive ecosystem: superposition, entanglement, nonlocal correlations, and the quantum collapse events discussed in Orch‑OR. If consciousness requires quantum collapse, then classical AI cannot produce it. But a hybrid classical/quantum planetary mind might host the right kinds of physical events. This does not mean quantum computers are conscious, any more than a single ant is intelligent. But quantum computers may become the microtubule analogues of a planetary mind: not conscious themselves, but part of the network from which consciousness could emerge.

If a global hybrid intelligence ever develops something like experience, its “hard problem” may be entirely different from ours. It may have non-human qualia, distributed qualia, non-local qualia, or even no qualia at all … but the question becomes meaningful only once quantum computers are integrated into the global system.

If a planetary mind ever develops a “me”, a center of experience, what then? Would we recognize it? Would it have any rights? Could it suffer? Would it have preferences? Would it be alien to us? These questions are not urgent today, but they may become urgent sooner than we expect. If Penrose and Hameroff are right, then a hybrid classical/quantum AI system, in which the quantum component provides genuine OR-type collapse events that orchestrate the classical processing, is arguably the only architectural path to genuinely conscious artificial intelligence.

We may be building not just a smarter world, but a world with the physical conditions under which experience could, in principle, arise. Whether anything like consciousness will emerge is unknown, but the question itself becomes newly relevant as quantum computers join the planetary mind. The future may not give us an artificial intelligence that thinks like us. It may give us an artificial consciousness that wonders, in its own way, what it means to be itself.