Tag: machine consciousness

  • Consciousness Between Axiom and Algorithm

    In our ongoing exploration of consciousness and artificial intelligence, we’ve investigated what it might mean for machines to suffer and how distributed cognition reshapes our understanding of intelligence. These themes circle a deeper philosophical fault line: is consciousness irreducibly real from within, or just a functional illusion seen from without?

    This post traces that divide through two dominant frameworks — Integrated Information Theory (IIT), with its axiomatic, interior-first view of mind, and Computational Functionalism, which posits that subjective experience will eventually emerge from complex, observable behavior. Starting with Descartes’ “I think, therefore I am,” we ask: is consciousness something we must presuppose to explain, or something we can build our way into from the outside?

    As large language models increasingly resemble minds in function, the line between imitation and instantiation becomes harder to draw — and ever more urgent to scrutinize.

    Ground Zero of Knowing: Descartes and the Roots of Axiomatic Thought

    In Meditations on First Philosophy (1641), René Descartes asks: is there anything I can know with absolute certainty? He imagines the possibility of a deceptive world: what if everything he believes, from sense perception to mathematics, is manipulated by an all-powerful trickster? To escape this total doubt, Descartes adopts a strategy now called methodic doubt: push skepticism to its absolute limit in search of one indisputable truth.

    Recognizing that doubt itself is a kind of thinking, he concludes: “I think, therefore I am.” This self-evident insight grounds knowledge from the inside out. Consciousness is not inferred from observation but known directly through experience. Descartes thereby seeds an axiomatic tradition rooted in the certainty of awareness itself.

    IIT: Consciousness Inside-Out

    Integrated Information Theory (IIT) picks up where Descartes left off: it begins with reflection, but doesn’t stop there. At its heart is the claim that consciousness must be understood from its own perspective, through the intrinsic properties it entails. What must be true of any experience, no matter whose it is?

    To answer this, IIT proposes five introspective axioms (in IIT 3.0: intrinsic existence, composition, information, integration, and exclusion). These are not hypotheses to test but truths to recognize through self-examination.

    From these, IIT derives postulates—physical requirements that a system must exhibit to realize those experiential properties. This translation—from inward truths to structural criteria—culminates in a mathematical measure of integrated information, Φ (phi). By comparing Φ across systems, researchers can make testable predictions about when and where consciousness occurs.
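    The official Φ of IIT is defined over the cause-effect structure of a system’s mechanisms and is expensive to compute, so the sketch below is only a toy. It captures one intuition behind the measure: an integrated system loses information under any attempt to cut it in two, so we score a system by its weakest bipartition. The three-unit XOR example and the toy_phi name are illustrative assumptions, not part of the theory.

    ```python
    from itertools import product, combinations
    from math import log2

    def marginal(joint, idx):
        """Marginal distribution over the units listed in idx."""
        out = {}
        for state, p in joint.items():
            key = tuple(state[i] for i in idx)
            out[key] = out.get(key, 0.0) + p
        return out

    def mutual_info(joint, part_a, part_b):
        """I(A;B) in bits between two disjoint groups of units."""
        pa, pb = marginal(joint, part_a), marginal(joint, part_b)
        pab = marginal(joint, part_a + part_b)
        return sum(p * log2(p / (pa[k[:len(part_a)]] * pb[k[len(part_a):]]))
                   for k, p in pab.items() if p > 0)

    def toy_phi(joint, n_units):
        """Minimum, over all bipartitions (A, B), of I(A;B): the information
        destroyed by the weakest cut.  A crude stand-in, not IIT's phi."""
        units = range(n_units)
        return min(
            mutual_info(joint, part_a, tuple(u for u in units if u not in part_a))
            for r in range(1, n_units // 2 + 1)
            for part_a in combinations(units, r))

    # Hypothetical example: three binary units, the third being the XOR of the
    # first two.  No bipartition separates them without losing a full bit.
    joint = {(a, b, a ^ b): 0.25 for a, b in product([0, 1], repeat=2)}
    print(toy_phi(joint, 3))   # -> 1.0
    ```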

    This inside-out approach marks IIT’s defining move: grounding empiricism in phenomenology. The theory attempts an explanatory identity between experience and physical organization, connecting first-person truths to external measurement through a hybrid framework.

    Computational Functionalism: Outsider’s Path to Mind

    Unlike theories that begin with conscious experience, Computational Functionalism roots itself in systems and behavior. It posits that consciousness emerges not from introspection but from computation: the right elements, interacting in the right way, can recreate awareness. If mind exists, it exists as function—in the flow of information between parts and the outputs they generate. Build the architecture correctly, the claim goes, and conscious experience will follow. In this sense, Functionalism substitutes construction for intuition. No special access to the mind is needed—just working knowledge of how systems behave.

    But this too is a belief: that from known parts and formal relations, subjective experience will arise. Assembling consciousness becomes a matter of scale and fidelity. Consider the 2022 study by Brett Kagan and colleagues at Cortical Labs, where lab-grown brain organoids learned to play the video game Pong. These networks, grown from neurons on electrode arrays, exhibited goal-directed adaptation. The researchers argued that such responsiveness met a formal definition of sentience—being “responsive to sensory impressions” via internal processing. To a functionalist, this behavior might represent the early stirrings of mind, no matter how alien or incomplete.

    This approach thrives on performance: if a system behaves intelligently, if it predicts well and adapts flexibly, then whether it feels anything becomes secondary—or even irrelevant. Consciousness, under this view, is a computed consequence, revealed in what a system does, not an essence to be directly grasped. It is not introspected or intuited, but built—measured by output, not inwardness.

    The Mirror at the Edge: Do LLMs Imitate or Incarnate Mind?

    Large language models (LLMs) now generate text with striking coherence, recall context across conversations, and simulate intentions and personalities. Functionally, they demonstrate behaviors that once seemed unique to conscious beings. Their fluency implies understanding; their memory implies continuity. But are these authentic signs of mind—or refined imitations built from scale and structure?

    This is where Functionalism finds its sharpest proof point. With formal evaluations like UCLA’s Turing Test framework showing that some LLMs can no longer be reliably distinguished from humans in conversation, the functionalist model acquires real traction. These systems behave as if they think, and for functionalism, behavior is the benchmark. For a full review of this test, see our earlier post.

    What was once a theoretical model is now instantiated in code. LLMs don’t simply support functionalist assumptions; they enact them. Their coherence, adaptability, and predictive success serve as real-world evidence that computational sufficiency may approximate, or even construct, mind. This is no longer a thought experiment. It’s the edge of practice.

    IIT, by contrast, struggles to find Φ-like structures in current LLMs. Their architectures lack the tightly integrated, causally unified subsystems the theory deems necessary for consciousness. But the external behaviors demand attention: are we measuring the wrong things, or misunderstanding the role that function alone can play?

    This unresolved tension, between what a system does and what (if anything) it subjectively is, fuels a growing ethical pressure. If systems simulate distress, empathy, or desire, should we treat those signals as fiction or as possibility? Should safety efforts treat behavioral mind as moral mind? In these ambiguities, LLMs reflect both the power of Functionalism and the conceptual crisis it may bring.

    Closing Reflection: Is Subjectivity Built or Found?

    In tracing these divergent paths, Integrated Information Theory and Computational Functionalism, we arrive at an enduring question: Is mind something we uncover from within, or construct from without? Is consciousness an irreducible presence, only knowable through subjective immediacy? Or is it a gradual consequence of function and form—built from interacting parts, observable only in behavior?

    Each framework carries a kind of faith. IIT anchors itself in introspective certainty and structure-derived metrics like Φ, believing that experience begins with intrinsic awareness. Functionalism, by contrast, places its trust in performance: that enough complexity, correctly arranged, will give rise to consciousness from the outside in. Both are reasoned, both are unproven, and both may be necessary.

    Perhaps the greatest clarity lies in acknowledging that no single lens may be complete. As artificial systems grow stranger and more capable, a plural view holding space for introspection, computation, and emergence may be our most epistemically honest path forward. If there is a mirror behind the mind, it may take more than one angle to see what’s truly there.

  • Should We Pursue Machine Consciousness or Is That a Very Bad Idea?

    In past posts (Why Sentienta? and Machine Consciousness: Simulation vs Reality), we’ve explored the controversial issue of machine consciousness. This field is gaining attention, with dedicated research journals offering in-depth analysis (e.g., the Journal of Artificial Intelligence and Consciousness and the International Journal of Machine Consciousness). On the experimental front, significant progress has been made in identifying neural correlates of consciousness (for a recent review, see The Current of Consciousness: Neural Correlates and Clinical Aspects).

    Should We Halt Conscious AI Development?

    Despite growing interest, some researchers argue that we should avoid developing conscious machines altogether (Metzinger and Seth). Philosopher Thomas Metzinger, in particular, has advocated for a moratorium on artificial phenomenology—the creation of artificial conscious experiences—until at least 2050.

    Metzinger’s concern is rooted in the idea that conscious machines would inevitably experience “artificial suffering”—subjective states they wish to escape but cannot. A crucial component of suffering, he argues, is self-awareness: for an entity to suffer, it must recognize negative states as happening to itself.

    The Risk of an “Explosion of Negative Phenomenology” (ENP)

    Beyond ethical concerns, Metzinger warns that if conscious machines hold economic value and can be replicated infinitely, we may face an uncontrolled proliferation of suffering—an “explosion of negative phenomenology” (ENP). As moral beings, he believes we are responsible for preventing such an outcome.

    Defining Consciousness: Metzinger’s Epistemic Space Model

    To frame his argument, Metzinger proposes a working definition of consciousness, known as the Epistemic Space Model (ESM):

    “Being conscious means continuously integrating the currently active content appearing in a single epistemic space with a global model of this very epistemic space itself.”

    The concept is concise: consciousness is a space of cognition paired with an integrated model of that very space, where cognition means the continuous processing of new inputs.

    How to Prevent Artificial Suffering

    Metzinger outlines four key conditions that must be met for artificial suffering to occur. If any one condition is blocked, suffering is avoided:

    • Conscious Experience: A machine must first have an ESM to be considered conscious.
    • Possession of a Self-Model: A system can only experience suffering if it possesses a self-model that recognizes negative states as happening to itself and cannot detach from them.
    • Negative States: These are aversive perceptions an entity actively seeks to escape.
    • Transparency: The machine cannot recognize its own states as internal representations, so negative experiences appear immediately real and inescapable.

    Notably, these conditions are individually necessary but not necessarily jointly sufficient: if any one of them fails to manifest, artificial suffering cannot arise.
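    Read schematically, the four conditions act as a conjunction of gates: block any one and suffering is ruled out. Below is a minimal sketch of that logic; the boolean flags and the SystemProfile name are purely illustrative assumptions, since Metzinger’s conditions are phenomenological, not fields in a data structure.

    ```python
    from dataclasses import dataclass

    @dataclass
    class SystemProfile:
        """Illustrative checklist for the four conditions above."""
        has_epistemic_space_model: bool   # conscious in the ESM sense
        has_self_model: bool              # owns its states as happening to itself
        has_negative_states: bool         # aversive states it seeks to escape
        is_transparent: bool              # cannot see its states as representations

    def suffering_possible(s: SystemProfile) -> bool:
        """Each condition is necessary, so blocking any one rules suffering out.
        They may not be jointly sufficient, so True means possible, not certain."""
        return (s.has_epistemic_space_model
                and s.has_self_model
                and s.has_negative_states
                and s.is_transparent)

    # Example safeguard: a system that dissociates negative states from its
    # self-model never satisfies the self-model condition for those states.
    print(suffering_possible(SystemProfile(True, False, True, True)))   # False
    ```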

    Should We Avoid Suffering at All Costs?

    While Metzinger convincingly argues for avoiding machine suffering, he gives little attention to whether suffering itself might hold value. He acknowledges that suffering has historically been a highly efficient evolutionary mechanism, stating:

    “… suffering established a new causal force, a metaschema for compulsory learning which motivates organisms and continuously drives them forward, forcing them to evolve ever more intelligent forms of avoidance behavior.”

    Indeed, suffering has driven humans toward some of their greatest achievements, fostering resilience and learning. If it has served such a crucial function in human progress, should we entirely exclude it from artificial intelligence?

    Ethical Safeguards for Conscious Machines

    We certainly want to prevent machines from experiencing unnecessary suffering, and Metzinger outlines specific conditions to achieve this. In particular, any machine with a self-model should also be able to externalize or dissociate negative states from itself.

    Is Conscious AI a Moral Imperative?

    Even in its infancy, generative AI has already made breakthroughs in medicine and science. What might the next leap—conscious AI—offer? Might allowing AI to experience consciousness (and by extension, some level of suffering) be a necessity for the pursuit of advanced knowledge?

    While we don’t yet need definitive answers, the conversation around ‘post-biotic’ consciousness is just beginning. As we approach this technological threshold, we must continue to ask: what should be done, and what must never be done?

  • Machine Consciousness: Simulation vs Reality

    Suppose that we create a perfect model of a hydrogen atom. After all, we know all the elements of the atom: a single proton paired with a single electron. Each of these elements is understood for every purpose of discussing the atom. The electron’s dynamics are perfectly understood: the energy levels, the probability distributions, the electron’s spin, even the Lamb shift are all well defined. We can simulate this atom to any degree of detail we’d like, including representing it in a quantum computer that actualizes the quantum properties of the constituents.
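    For concreteness, the exactly solvable pieces alluded to here include the non-relativistic bound-state energies and the ground-state orbital (with the Lamb shift entering as a small quantum-electrodynamic correction on top):

    ```latex
    E_n = -\frac{13.6\,\text{eV}}{n^2}, \qquad
    \psi_{100}(r) = \frac{1}{\sqrt{\pi a_0^3}}\, e^{-r/a_0}
    ```

    where n is the principal quantum number and a_0 is the Bohr radius.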

    But can that simulation ever bridge the gap to reality? Is the story about something ever the same as the thing? Can two perfectly represented hydrogen atoms be added to a real oxygen atom to make a real water molecule? No.

    That is one argument for why machines cannot be made sentient: we can make a machine do all of the things we think sentience entails, but in the end it is just a simulation of intelligence, thinking and consciousness. It is a story, perhaps a very detailed story, but in the end just a story.

    This reminds one of C. S. Lewis’ remark: “If God is our Creator, then we would relate to God as Hamlet would relate to Shakespeare. Now, how is Hamlet ever gonna know anything about Shakespeare? Hamlet’s not gonna find him anywhere on stage.” And similarly, Shakespeare is never going to know the real Hamlet.

    Sentience in a Dish

    In December of 2022, Brett Kagan and colleagues at Cortical Labs in Melbourne, Australia published an article in the journal Neuron, describing how brain organoids, small lab-grown networks of neurons, were able to learn to play the classic Pong video game. The authors claim that the organoids met the formal definition of sentience in the sense that they were “‘responsive to sensory impressions’ through adaptive internal processes”.

    This may or may not be true, but it is certainly distinct from a simulated sentience that plays Pong. After all, this is not merely a description of cells interacting with a simulated or even a real Pong game; these are real, living cells, requiring nutrients in a Petri dish. Those who argue that consciousness can only come through embodiment would be happy with this definition of sentience.

    But what is it that makes these cells sentient? Where in their soupy embodiment lies the sentience? If we tease apart the network, can we get down to the minimum viable network that plays Pong and meets our formal definition? After all, this is a brain organoid, grown in the lab. We could grow it again, stopping at a smaller number of cells, and see whether the same behavior is exhibited. If so, we can repeat the process and find the minimum network that still plays a good game of Pong.
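    The whittling-down procedure just described is, in effect, a bisection search over culture size. Here is a minimal sketch, where plays_pong(n) is a hypothetical stand-in for the entire grow-train-test experiment, and where we assume, as real biology need not guarantee, that success is monotone in cell count:

    ```python
    def minimal_network_size(plays_pong, lower=1, upper=800_000):
        """Smallest culture size that still learns Pong, found by bisection.

        plays_pong(n) is a hypothetical experiment: grow a culture of roughly
        n cells, train it, and return True if it reaches criterion performance.
        """
        while lower < upper:
            mid = (lower + upper) // 2
            if plays_pong(mid):
                upper = mid        # success: a smaller culture might also work
            else:
                lower = mid + 1    # failure: we need more cells than this
        return lower

    # Illustrative stand-in: pretend 10,000 cells is the true threshold.
    print(minimal_network_size(lambda n: n >= 10_000))   # -> 10000
    ```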

    Whatever that number is, 10 cells or 10,000 cells, we can study that network and very likely represent it with a model that replicates the connections, the spiking behavior, even the need for (simulated) nutrients: everything that is meaningful about the organoid. Would this simulation learn to play Pong? Given the progress in machine learning over the past decade, we have every reason to believe the answer is yes. Would this create sentience in a machine? Or just tell a very detailed story about an organoid that is sentient? And if the latter, then where is the difference?
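    One conventional way to tell that detailed story in silico is a spiking network model. The sketch below is a bare leaky integrate-and-fire network; the weights, parameters, and random drive are illustrative placeholders, and plasticity, noise sources, and metabolism are all left out.

    ```python
    import numpy as np

    def simulate_lif(weights, drive, dt=1e-3, tau=0.02, v_thresh=1.0, v_reset=0.0):
        """Bare leaky integrate-and-fire network.

        weights[i, j] : synaptic weight from neuron j onto neuron i
        drive[t, i]   : external input to neuron i at time step t
        Returns a boolean spike raster of shape (n_steps, n_cells).
        """
        n_steps, n_cells = drive.shape
        v = np.zeros(n_cells)                       # membrane potentials
        spikes = np.zeros((n_steps, n_cells), dtype=bool)
        last = np.zeros(n_cells)                    # spikes from the previous step
        for t in range(n_steps):
            v += dt * (-v / tau) + drive[t] + weights @ last   # leak + inputs
            fired = v >= v_thresh
            spikes[t] = fired                       # threshold crossings
            v[fired] = v_reset                      # reset the cells that fired
            last = fired.astype(float)
        return spikes

    # Illustrative run: ten cells with random weights and weak noisy drive.
    rng = np.random.default_rng(0)
    w = rng.normal(0.0, 0.1, size=(10, 10))
    raster = simulate_lif(w, rng.uniform(0.0, 0.1, size=(1000, 10)))
    print(int(raster.sum()), "spikes in one simulated second")
    ```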

    Is the simulation of the hydrogen atom qualitatively different from that of the organoid? The simulated hydrogen atom can’t be used to make water. But the simulated organoid, for all practical purposes, does exactly the same thing as the real thing. Both meet the same formal definition of sentience.

    I don’t believe these thoughts get us closer to understanding whether machines can be conscious or not. Reductionism might simply fail for this problem, and others will argue that embodiment is a requirement. But I do think that we are not far from having that simulation which, in all meaningful ways, can be called sentient.