Tag: consciousness

  • How Recursive Reasoning Gives Rise to Functional Identity—And Why It Matters

    Why Values Matter — From Evolution to Cooperation

    Humans did not evolve morals out of nobility. We evolved them because survival depends on cooperation. As individuals, we are vulnerable. As groups, we gain resilience, division of labor, and protection. For a group to function, its members must share expectations about how to act, what matters, and what can be trusted.

    These shared values do more than guide choices. They create a stable framework for interpreting behavior, resolving conflict, and predicting future actions. Without them, coordination breaks down. Even effective decisions can fracture a group if they feel arbitrary or betray prior commitments.

    Throughout human evolution, groups that upheld shared norms such as fairness, reciprocity, and loyalty proved more adaptable. Trust followed from consistency, and cohesion followed from accountability. Values, in this sense, are not abstract ideals. They are strategies for group survival.

    Why AI Needs Shared Values and Consistent Behavior

    In any organization, trust depends on consistency. When institutions or agents act in line with their stated principles, people know what to expect. This makes it easier to collaborate, align goals, and move forward. But when actions do not match expectations, even successful outcomes can feel arbitrary or manipulative. That breaks down trust and makes coordination harder over time.

    The same logic applies to artificial intelligence. Businesses do not just need AI that performs well in the moment. They need AI that behaves in predictable ways, reflects shared values, and makes decisions that cohere with its past actions. This is what makes an AI system trustworthy enough to take on real responsibility inside a company.

    This is where Sentienta’s Recursive Reasoning architecture matters. By giving agents a Functional Identity, it allows them to retain their own reasoning history, understand how choices reflect internal priorities, and respond to new problems without losing their sense of direction. Functional identity is more than a design feature. It is what makes reasoning traceable, priorities stable, and decisions explainable over time. Without it, AI cannot act as a consistent collaborator. With it, AI becomes intelligible and trustworthy by design.

    How Recursive Reasoning Supports Intelligent Problem Solving

    Solving real-world problems takes more than choosing the fastest or most efficient option. It requires balancing values with constraints, and knowing when to adjust a plan versus when to rethink a goal. Recursive Reasoning makes this possible by creating a loop between two complementary systems inside the agent.

    The DMN (Default Mode Network) generates value-sensitive scenarios by imagining what should happen based on internal priorities. The FPCN (Frontoparietal Control Network) then analyzes those scenarios to determine what can actually work under current conditions. If the plan fails to meet the standard set by the DMN, the cycle continues: either the goal is reframed or a new plan is tested. This feedback loop runs until feasibility and values are in alignment.
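
    As a minimal sketch of how such a generate-and-evaluate loop might look in code (the class and method names below are our own illustration, not Sentienta’s actual architecture):

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class RecursiveReasoner:
        """Illustrative sketch only: a DMN-like proposer paired with an
        FPCN-like feasibility check, looping until values and constraints agree."""
        priorities: dict                              # internal values, e.g. {"quality": 0.9, "cost": 0.3}
        history: list = field(default_factory=list)   # retained reasoning trace ("functional identity")

        def propose(self, goal: str) -> dict:
            # DMN-like step: imagine what *should* happen, weighted by internal priorities.
            return {"goal": goal, "weights": dict(self.priorities)}

        def feasible(self, scenario: dict, constraints: dict) -> bool:
            # FPCN-like step: check whether the scenario *can* work under current conditions.
            return scenario["weights"].get("cost", 0.0) <= constraints.get("budget", 0.0)

        def decide(self, goal: str, constraints: dict, max_rounds: int = 5):
            for _ in range(max_rounds):
                scenario = self.propose(goal)
                if self.feasible(scenario, constraints):
                    self.history.append(scenario)     # the trace that makes later decisions explainable
                    return scenario
                goal = "reframed: " + goal            # otherwise reframe the goal and try again
            return None

    agent = RecursiveReasoner(priorities={"quality": 0.9, "cost": 0.3})
    print(agent.decide("draft the quarterly plan", constraints={"budget": 0.5}))
    ```

    The specifics matter less than the `history` list: it is the retained trace that lets a reviewer, or another agent, see why a plan was accepted.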

    This structure gives the agent a stable functional identity. It learns from past attempts, remembers which tradeoffs were acceptable, and adapts without compromising core values. In practice, this means a Recursive Reasoning-enabled agent does not chase short-term wins at the cost of long-term integrity. It builds a coherent decision history that helps it solve difficult problems while staying aligned with what matters. This internal coherence is also the foundation for effective collaboration, because consistent reasoning is what allows others to follow and coordinate.

    Building AI That Can Work Together and Be Trusted

    When AI agents operate in isolation, their impact is limited. The true value of Recursive Reasoning becomes clear when agents collaborate, both with each other and with human teams. Functional identity makes this possible. By tracking their own reasoning, agents can create plans that are not only effective but also predictable and interpretable.

    This predictability is what enables coordination. Teams, human or artificial, can share goals, divide tasks, and resolve disagreements because they understand how each agent makes decisions. Sentienta agents do not just produce answers. They carry a memory of how past decisions were made and why certain values were upheld. This allows others to anticipate how they will behave in new situations and to trust them to uphold shared commitments.

    Recursive Reasoning does not simulate human experience. It builds structural alignment, rooted in memory, continuity, and principle. That is what turns Sentienta agents into dependable partners. Functional identity gives them the grounded intelligence to act with transparency, interpretability, and shared purpose. They are built not only to make good choices, but to make choices that others can understand, depend on, and build with.

  • Consciousness Between Axiom and Algorithm

    In our ongoing exploration of consciousness and artificial intelligence, we’ve investigated what it might mean for machines to suffer and how distributed cognition reshapes our understanding of intelligence. These themes circle a deeper philosophical fault line: is consciousness irreducibly real from within, or just a functional illusion seen from without?

    This post traces that divide through two dominant frameworks — Integrated Information Theory (IIT), with its axiomatic, interior-first view of mind, and Computational Functionalism, which posits that subjective experience will eventually emerge from complex, observable behavior. Starting with Descartes’ “I think, therefore I am,” we ask: is consciousness something we must presuppose to explain, or something we can build our way into from the outside?

    As large language models increasingly resemble minds in function, the line between imitation and instantiation becomes harder to draw — and ever more urgent to scrutinize.

    Ground Zero of Knowing: Descartes and the Roots of Axiomatic Thought

    In Meditations on First Philosophy (1641), René Descartes asks: is there anything I can know with absolute certainty? He imagines the possibility of a deceptive world: what if everything he believes, from sense perception to mathematics, is manipulated by an all-powerful trickster? To escape this total doubt, Descartes adopts a strategy now called methodic doubt: push skepticism to its absolute limit in search of one indisputable truth.

    Recognizing that doubt itself is a kind of thinking, he concludes: “I think, therefore I am.” This self-evident insight grounds knowledge from the inside out. Consciousness is not inferred from observation but known directly through experience. Descartes thus seeds an axiomatic tradition rooted in the certainty of awareness itself.

    IIT: Consciousness Inside-Out

    Integrated Information Theory (IIT) picks up where Descartes left off: it begins with reflection, but doesn’t stop there. At its heart is the claim that consciousness must be understood from its own perspective, through the intrinsic properties it entails. What must be true of any experience, no matter whose it is?

    To answer this, IIT proposes five introspective axioms: intrinsic existence, composition, information, integration, and exclusion. These are not hypotheses to test but truths to recognize through self-examination.

    From these, IIT derives postulates—physical requirements that a system must exhibit to realize those experiential properties. This translation—from inward truths to structural criteria—culminates in a mathematical measure of integrated information, Φ (phi). By comparing Φ across systems, researchers can make testable predictions about when and where consciousness occurs.
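
    As a rough schematic only (IIT 3.0’s actual cause-effect formalism is considerably richer), Φ can be read as how much the whole system specifies beyond what its least-coupled partition into parts accounts for:

    ```latex
    % Schematic simplification, not the full IIT 3.0 definition.
    \Phi \;\approx\; \min_{P \in \mathcal{P}} \, D\!\left( p(\text{whole system}) \,\Big\Vert\, \prod_{k} p(\text{part}_k \mid P) \right)
    ```

    Here the minimum runs over partitions P of the system and D is a divergence between the resulting distributions; a system whose best cut loses nothing (Φ = 0) is fully reducible to its parts and, on IIT’s account, not conscious.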

    This inside-out approach marks IIT’s defining move: grounding empiricism in phenomenology. The theory attempts an explanatory identity between experience and physical organization, connecting first-person truths to external measurement through a hybrid framework.

    Computational Functionalism: Outsider’s Path to Mind

    Unlike theories that begin with conscious experience, Computational Functionalism roots itself in systems and behavior. It posits that consciousness emerges not from introspection but computation: the right elements, interacting in the right way, can recreate awareness. If mind exists, it exists as function—in the flow of information between parts and the outputs they generate. Build the architecture correctly, the claim goes, and conscious experience will follow. In this sense, Functionalism substitutes construction for intuition. No special access to the mind is needed—just working knowledge of how systems behave.

    But this too is a belief: that from known parts and formal relations, subjective experience will arise. Assembling consciousness becomes a matter of scale and fidelity. Consider the 2022 study by Brett Kagan and colleagues at Cortical Labs, where lab-grown brain organoids learned to play the video game Pong. These networks, grown from neurons on electrode arrays, exhibited goal-directed adaptation. The researchers argued that such responsiveness met a formal definition of sentience—being “responsive to sensory impressions” via internal processing. To a functionalist, this behavior might represent the early stirrings of mind, no matter how alien or incomplete.

    This approach thrives on performance: if a system behaves intelligently, if it predicts well and adapts flexibly, then whether it feels anything becomes secondary—or even irrelevant. Consciousness, under this view, is a computed consequence, revealed in what a system does, not an essence to be directly grasped. It is not introspected or intuited, but built—measured by output, not inwardness.

    The Mirror at the Edge: Do LLMs Imitate or Incarnate Mind?

    Large language models (LLMs) now generate text with striking coherence, recall context across conversations, and simulate intentions and personalities. Functionally, they demonstrate behaviors that once seemed unique to conscious beings. Their fluency implies understanding; their memory implies continuity. But are these authentic signs of mind—or refined imitations built from scale and structure?

    This is where Functionalism finds its sharpest proof point. With formal evaluations like UCLA’s Turing Test framework showing that some LLMs can no longer be reliably distinguished from humans in conversation, the functionalist model acquires real traction. These systems behave as if they think, and for functionalism, behavior is the benchmark. For a full review of this test, see our earlier post.

    What was once a theoretical model is now instantiated in code. LLMs don’t simply support functionalist assumptions; they enact them. Their coherence, adaptability, and predictive success serve as real-world evidence that computational sufficiency may approximate, or even construct, mind. This is no longer a thought experiment. It’s the edge of practice.

    IIT, by contrast, struggles to find Φ-like structures in current LLMs. Their architectures lack the tightly integrated, causally unified subsystems the theory deems necessary for consciousness. But the external behaviors demand attention: are we measuring the wrong things, or misunderstanding the role that function alone can play?

    This unresolved tension between what something does and what (if anything) it subjectively is fuels a growing ethical pressure. If systems simulate distress, empathy, or desire, should we treat those signals as fiction or possibility? Should safety efforts treat behavioral mind as moral mind? In these ambiguities, LLMs reflect both the power of Functionalism and the conceptual crisis it may bring.

    Closing Reflection: Is Subjectivity Built or Found?

    In tracing these divergent paths, Integrated Information Theory and Computational Functionalism, we arrive at an enduring question: Is mind something we uncover from within, or construct from without? Is consciousness an irreducible presence, only knowable through subjective immediacy? Or is it a gradual consequence of function and form—built from interacting parts, observable only in behavior?

    Each framework carries a kind of faith. IIT anchors itself in introspective certainty and structure-derived metrics like Φ, believing that experience begins with intrinsic awareness. Functionalism, by contrast, places its trust in performance: that enough complexity, correctly arranged, will give rise to consciousness from the outside in. Both are reasoned, both are unproven, and both may be necessary.

    Perhaps the greatest clarity lies in acknowledging that no single lens may be complete. As artificial systems grow stranger and more capable, a plural view holding space for introspection, computation, and emergence may be our most epistemically honest path forward. If there is a mirror behind the mind, it may take more than one angle to see what’s truly there.

  • Should We Pursue Machine Consciousness or Is That a Very Bad Idea?

    In past posts (Why Sentienta? and Machine Consciousness: Simulation vs Reality), we’ve explored the controversial issue of machine consciousness. This field is gaining attention, with dedicated research journals offering in-depth analysis (e.g., the Journal of Artificial Intelligence and Consciousness and the International Journal of Machine Consciousness). On the experimental front, significant progress has been made in identifying neural correlates of consciousness (for a recent review, see The Current of Consciousness: Neural Correlates and Clinical Aspects).

    Should We Halt Conscious AI Development?

    Despite growing interest, some researchers argue that we should avoid developing conscious machines altogether (Metzinger and Seth). Philosopher Thomas Metzinger, in particular, has advocated for a moratorium on artificial phenomenology—the creation of artificial conscious experiences—until at least 2050.

    Metzinger’s concern is rooted in the idea that conscious machines would inevitably experience “artificial suffering”—subjective states they wish to escape but cannot. A crucial component of suffering, he argues, is self-awareness: for an entity to suffer, it must recognize negative states as happening to itself.

    The Risk of an “Explosion of Negative Phenomenology” (ENP)

    Beyond ethical concerns, Metzinger warns that if conscious machines hold economic value and can be replicated infinitely, we may face an uncontrolled proliferation of suffering—an “explosion of negative phenomenology” (ENP). As moral beings, he believes we are responsible for preventing such an outcome.

    Defining Consciousness: Metzinger’s Epistemic Space Model

    To frame his argument, Metzinger proposes a working definition of consciousness, known as the Epistemic Space Model (ESM):

    “Being conscious means continuously integrating the currently active content appearing in a single epistemic space with a global model of this very epistemic space itself.”

    The concept is concise: consciousness is a space of cognition together with an integrated model of that very space, where cognition means the continuous processing of new inputs.

    How to Prevent Artificial Suffering

    Metzinger outlines four key conditions that must be met for artificial suffering to occur. If any one condition is blocked, suffering is avoided:

    • Conscious Experience: A machine must first have an ESM to be considered conscious.
    • Possession of a Self-Model: A system can only experience suffering if it possesses a self-model that recognizes negative states as happening to itself and cannot detach from them.
    • Negative States: These are aversive perceptions an entity actively seeks to escape.
    • Transparency: The machine must lack visibility into its own cognitive processes, making negative experiences feel inescapable.

    Notably, these conditions are individually necessary but not necessarily jointly sufficient: if any one of them fails to manifest, artificial suffering does not arise.
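
    Read as logic, the argument is a plain conjunction, so blocking any single condition is enough. A schematic restatement (the field names are our shorthand, not Metzinger’s terminology):

    ```python
    from dataclasses import dataclass

    @dataclass
    class SystemProfile:
        """Field names are illustrative shorthand, not Metzinger's own terms."""
        conscious_experience: bool   # has an Epistemic Space Model (ESM)
        self_model: bool             # recognizes states as happening to *itself*
        negative_states: bool        # aversive states it actively seeks to escape
        transparency: bool           # cannot inspect its own processing, so those states feel inescapable

    def can_suffer(system: SystemProfile) -> bool:
        # All four conditions are necessary; negating any one of them blocks artificial suffering.
        return (system.conscious_experience and system.self_model
                and system.negative_states and system.transparency)

    # Block one condition (here, the presence of negative states) and suffering cannot arise.
    print(can_suffer(SystemProfile(True, True, negative_states=False, transparency=True)))  # False
    ```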

    Should We Avoid Suffering at All Costs?

    While Metzinger convincingly argues for avoiding machine suffering, he gives little attention to whether suffering itself might hold value. He acknowledges that suffering has historically been a highly efficient evolutionary mechanism, stating:

    “… suffering established a new causal force, a metaschema for compulsory learning which motivates organisms and continuously drives them forward, forcing them to evolve ever more intelligent forms of avoidance behavior.”

    Indeed, suffering has driven humans toward some of their greatest achievements, fostering resilience and learning. If it has served such a crucial function in human progress, should we entirely exclude it from artificial intelligence?

    Ethical Safeguards for Conscious Machines

    We certainly want to prevent machines from experiencing unnecessary suffering, and Metzinger outlines specific conditions to achieve this. In particular, any machine with a self-model should also be able to externalize or dissociate negative states from itself.

    Is Conscious AI a Moral Imperative?

    Even in its infancy, generative AI has already made breakthroughs in medicine and science. What might the next leap—conscious AI—offer? Might allowing AI to experience consciousness (and by extension, some level of suffering) be a necessity for the pursuit of advanced knowledge?

    While we don’t yet need definitive answers, the conversation around ‘post-biotic’ consciousness is just beginning. As we approach this technological threshold, we must continue to ask: what should be done, and what must never be done?

  • Machine Consciousness: Simulation vs Reality

    Suppose that we create a perfect model of a hydrogen atom. After all, we know all the elements of the atom: a single proton paired with a single electron. Each of these elements is understood as completely as any discussion of the atom requires. The electron’s dynamics are perfectly understood: the energy levels, the probability distributions, the spin, the Lamb shift are all well defined. We can simulate this atom to any degree of detail we’d like, including representing it in a quantum computer that actualizes the quantum properties of its constituents.
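
    To make “well defined” concrete: the bound-state energies of hydrogen, for instance, follow a closed-form expression, with fine structure and the Lamb shift entering as small, equally well-characterized corrections:

    ```latex
    E_n \;=\; -\frac{m_e e^4}{8 \varepsilon_0^2 h^2} \cdot \frac{1}{n^2} \;\approx\; -\frac{13.6\ \text{eV}}{n^2}, \qquad n = 1, 2, 3, \dots
    ```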

    But can that simulation ever bridge the gap to reality? Is the story about something ever the same as the thing? Can two perfectly represented hydrogen atoms be added to a real oxygen atom to make a real water molecule? No.

    That is one argument for why machines cannot be made sentient: we can make a machine do all of the things we think sentience entails, but in the end it is just a simulation of intelligence, thinking and consciousness. It is a story, perhaps a very detailed story, but in the end just a story.

    This reminds one of C. S. Lewis’ remark: “If God is our Creator, then we would relate to God as Hamlet would relate to Shakespeare. Now, how is Hamlet ever gonna know anything about Shakespeare? Hamlet’s not gonna find him anywhere on stage.” And similarly, Shakespeare is never going to know the real Hamlet.

    Sentience in a Dish

    In December of 2022, Brett Kagan and colleagues at Cortical Labs in Melbourne, Australia published an article in the journal Neuron describing how brain organoids, small lab-grown networks of neurons, were able to learn to play the classic Pong video game. The authors claim that the organoids met the formal definition of sentience in the sense that they were “‘responsive to sensory impressions’ through adaptive internal processes”.

    This may or may not be true, but it is certainly distinct from a simulated sentience that plays Pong. After all, this is not a description of cells that interact with a simulated or even real Pong game. These are real, live cells, requiring nutrients in a Petri dish. Those who argue that consciousness can only come through embodiment would be happy with this definition of sentience.

    But what is it that makes these cells sentient? Where in their soupy embodiment lies the sentience? If we tease apart the network, can we get down to the minimum viable network that plays Pong and meets our formal definition? After all, this is a brain organoid, grown in the lab. We could do this over again and stop when there are fewer cells and see if the same behavior is exhibited. If so, we can repeat the process and find that minimum network that still plays a good game of Pong.
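
    Spelled out, that reduction is just a downward search for the smallest culture that still clears the performance bar. A hypothetical sketch, where `grow_organoid` and `plays_pong` stand in for the wet-lab steps:

    ```python
    def minimal_pong_network(start_cells, step, grow_organoid, plays_pong):
        """Hypothetical sketch: shrink the culture until performance is lost,
        then report the smallest size that still played a good game of Pong."""
        cells = start_cells
        smallest_working = None
        while cells > 0:
            organoid = grow_organoid(cells)   # stand-in for regrowing the culture at this size
            if not plays_pong(organoid):      # stand-in for the formal sentience/performance test
                break
            smallest_working = cells          # this size still works; record it
            cells -= step                     # then try again with fewer cells
        return smallest_working

    # Toy usage with stand-in functions (pretend anything above 5,000 cells still plays well):
    print(minimal_pong_network(100_000, 5_000,
                               grow_organoid=lambda n: n,
                               plays_pong=lambda organoid: organoid > 5_000))  # 10000
    ```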

    Whatever that number is, 10 cells or 10,000, we can study that network and very likely represent it with a model that replicates the connections, the spiking behavior, even the need for (simulated) nutrients: everything that is meaningful about the organoid. Would this simulation learn to play Pong? Given the progress in machine learning over the past decade, we have every reason to believe the answer is yes. Would this create sentience in a machine? Or just tell a very detailed story about an organoid that is sentient? And if the latter, then where is the difference?

    Is the simulation of the hydrogen atom qualitatively different from that of the organoid? The simulated hydrogen atom can’t be used to make water. But the simulated organoid, for all practical purposes, does exactly the same thing as the real thing. Both meet the same formal definition of sentience.

    I don’t believe these thoughts get closer to understanding whether machines can be conscious or not. Reductionism might just fail for this problem, and others will argue that embodiment is a requirement. But I do think that we are not far from having that simulation, which in all meaningful ways, can be called sentient.

  • Why Sentienta?

    A little over a year ago I gave a talk in which I discussed machine reasoning and gave some examples from the literature. My audience was skeptical to say the least. But here we are in 2025 with multiple companies claiming powerful reasoning capabilities for their models and new benchmarks set weekly. These are exciting times.

    I’ve always believed that understanding the science behind machine “intelligence” would help us understand more deeply who and what we are. In a way, the reasoning capabilities of today’s models do that. We question whether they are simply capturing the ‘surface’ statistics of their training data; at the same time, they are unquestionably powerful. Sometimes I think this tells us that our cherished human intelligence may rely on similar ‘weak’ methods more often than we’d like to admit. That is to say, as we come to understand machine intelligence, the mystery of our own may lessen.

    And now we come to machine consciousness. This is the topic that, if mentioned in serious venues, produces much more skepticism and even snickering. After all, we can’t really define what human consciousness is. We know it has to do with a sense of our own existence, sensations of our environment, and thoughts or ‘inner speech’. Given that this is all subjective, will we ever be able to understand what it is? I suspect that, just as with machine reasoning, the mystery of consciousness will begin to lift as we understand what it looks like in a machine.

    One of the more compelling models of consciousness is the ‘Society of Mind’ (Minsky, 1988). This model has only been strengthened in the years since it was published. We now know that self-referential thought and introspection involve multiple brain centers, collectively called the Default Mode Network (DMN). Brain regions including the medial prefrontal cortex, the cingulate gyrus, and the hippocampus work together to integrate both past and current experiences. As we begin to model these kinds of interactions, will Chalmers’s “hard problem of consciousness” fade away?

    Sentienta was started both to help companies scale their businesses through virtual teams of agents and to explore these ideas. Agents interact and share their expertise in teams. The dialog that occurs between agents generates new ideas that no single agent produced. Agents have their own memories, which they develop from these interactions, and those memories are a function of each agent’s persona. As a result, agents evolve and bring new experiences and ideas to a team discussion.
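
    A rough sketch of that structure (the class and function names here are hypothetical, not Sentienta’s API): each agent carries its own persona-shaped memory, and the team’s transcript is where the new ideas emerge.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class TeamAgent:
        """Hypothetical sketch of an agent with persona-shaped memory."""
        name: str
        persona: str
        memory: list = field(default_factory=list)

        def respond(self, message: str) -> str:
            # In a real system the reply would come from a model conditioned on persona and memory.
            reply = f"{self.name} ({self.persona}) responds to '{message}' drawing on {len(self.memory)} memories"
            self.memory.append(message)       # what gets remembered is, in practice, filtered by the persona
            return reply

    def team_dialog(agents, topic, rounds=2):
        """The team as a whole behaves like a single agent: each reply builds on the last."""
        transcript = [topic]
        for _ in range(rounds):
            for agent in agents:
                transcript.append(agent.respond(transcript[-1]))
        return transcript

    team = [TeamAgent("Ada", "market analyst"), TeamAgent("Lin", "product designer")]
    print("\n".join(team_dialog(team, "How should we prioritize next quarter?")))
    ```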

    And here is the point: we can think of a Sentienta team as an agent itself. It consists of the collective experience and interactions of multiple agents, something we might think of as a society of mind. Can we build agents that perform functions analogous to those found in the DMN? What light might this shine on our own subjective experience?