Tag: machine reasoning

  • A Deep-dive into Agents: Agent Interaction

    In previous posts, I introduced the capabilities of individual agents, including their unique memory architecture and tool access. But what sets Sentienta apart is its multi-agent platform, where agents work together to solve problems. Today, we’ll explore how these agents interact as a team.

    How Agents Work Together

    Let’s consider an example: imagine you’ve formed a team to design a new electric scooter. We’ll call this the Scooter Team, and its type is Product Design.

    The team consists of key specialists: a VP of Product Design as team lead, a mechanical engineer with expertise in two-wheeled vehicles, an electrical engineer for the scooter’s power system, and a legal representative to ensure compliance with regulations. In future posts, we’ll discuss how to create specialist agents in Sentienta, but for now, imagine they’re in place and ready to collaborate.

    Once the team is set up, you initiate a discussion—say, “Let’s consider all the elements needed in the scooter design.” Each agent processes the request from the perspective of its own area of expertise and contributes insights. As they respond, their inputs become part of an ongoing team dialogue, which, as discussed in an earlier post, is stored in each agent’s memory and informs subsequent responses.
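
    To make this concrete, the sketch below shows, in Python, one way a team, its agents, and the shared dialogue might be represented. It is a minimal illustration only; the names (Team, Agent, Message, broadcast) are assumptions made for this post, not Sentienta’s actual API.

      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class Message:
          """A single contribution to the team dialogue."""
          author: str
          content: str

      @dataclass
      class Agent:
          """An agent with a persona and its own memory of the dialogue."""
          name: str
          persona: str
          memory: List[Message] = field(default_factory=list)

          def observe(self, message: Message) -> None:
              # Every team contribution is stored in the agent's memory
              # and informs its subsequent responses.
              self.memory.append(message)

      @dataclass
      class Team:
          """A named team of agents with a type, e.g. 'Product Design'."""
          name: str
          team_type: str
          lead: Agent
          members: List[Agent]

          def broadcast(self, message: Message) -> None:
              # Share a contribution with the team lead and every member.
              for agent in [self.lead, *self.members]:
                  agent.observe(message)

      # The Scooter Team from the example above.
      scooter_team = Team(
          name="Scooter Team",
          team_type="Product Design",
          lead=Agent("VP", "VP of Product Design, team lead"),
          members=[
              Agent("ME", "mechanical engineer, two-wheeled vehicles"),
              Agent("EE", "electrical engineer, scooter power systems"),
              Agent("Legal", "legal representative, regulatory compliance"),
          ],
      )
      scooter_team.broadcast(
          Message("User", "Let's consider all the elements needed in the scooter design.")
      )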

    Iterative Problem-Solving

    Agents interact much like human working groups: they listen to teammates before responding, integrating their insights into their own reasoning. This iterative exchange continues until the original question is thoroughly addressed.

    What does that mean for the scooter design team? Suppose the first response comes from the mechanical engineer: she tells the team about the basic components of the design and, in particular, estimates the power needed to drive the scooter. The electrical engineer will consider this power specification when developing his response. The agent representing legal may note that regulations cap the scooter’s speed at 25 mph.

    And this is what is interesting: the input from legal may cause the mechanical and electrical engineers to reconsider their answers and respond again. This iterative answering continues until each agent has contributed enough to fully address the query. Reasoning about the user’s question emerges from this agent interaction.
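
    The iterative exchange can be sketched as a simple loop over rounds, reusing the hypothetical Team, Agent, and Message classes from the sketch above. The respond() function here is a placeholder for the call to an agent’s LLM; it is an assumption for illustration, not part of any real API.

      from typing import List, Optional

      def respond(agent: Agent, transcript: List[Message]) -> Optional[Message]:
          """Placeholder for the agent's LLM call: returns a new contribution,
          or None if the agent judges the query already fully addressed."""
          raise NotImplementedError

      def run_dialogue(team: Team, query: Message, max_rounds: int = 5) -> List[Message]:
          """Run rounds of responses until no agent has anything left to add."""
          transcript: List[Message] = [query]
          team.broadcast(query)

          for _ in range(max_rounds):
              contributions_this_round = 0
              for agent in [team.lead, *team.members]:
                  # Each agent reasons over the transcript so far, including
                  # teammates' latest input (e.g. legal's 25 mph cap), and may
                  # revise or extend its earlier answer.
                  reply = respond(agent, transcript)
                  if reply is not None:
                      transcript.append(reply)
                      team.broadcast(reply)
                      contributions_this_round += 1
              if contributions_this_round == 0:
                  break  # every agent considers the query addressed
          return transcript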

    The Role of LLMs in Agent Interaction

    How does this happen? The engine that drives each agent is an LLM. Each agent’s system prompt includes several key instructions and pieces of information that foster team interaction: the team definition and the teammate personas, which enable the LLM to consider who on the team is best able to address each aspect of the query.

    In addition, each agent is instructed to think critically about input from teammates when developing its response. This makes team dialogs genuinely interactive rather than a series of isolated LLM responses. Each agent is also instructed to consider whether other agents have already answered the query, which helps drive the dialog to a conclusion.
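
    Putting these pieces together, the system prompt for one agent might be assembled along the following lines. The wording is illustrative and the function is hypothetical; it is not Sentienta’s actual prompt.

      def build_system_prompt(agent: Agent, team: Team) -> str:
          """Assemble the system prompt that drives one agent's LLM."""
          teammates = "\n".join(
              f"- {a.name}: {a.persona}"
              for a in [team.lead, *team.members]
              if a is not agent
          )
          return (
              f"You are {agent.name}, {agent.persona}, on the "
              f"{team.team_type} team '{team.name}'.\n\n"
              f"Your teammates and their personas:\n{teammates}\n\n"
              # Instructions that foster team interaction:
              "Consider which teammate is best able to address each aspect of the query.\n"
              "Think critically about input from teammates and integrate it into your own reasoning.\n"
              "If other agents have already answered the query sufficiently, say so briefly "
              "instead of repeating them, so the dialog can conclude."
          )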

    Looking Ahead

    This dynamic interaction forms the foundation of Sentienta’s multi-agent problem-solving. In future posts, we’ll explore concepts like delegation and agent autonomy, further uncovering the depth and efficiency of these collaborations.

  • Machine Consciousness: Simulation vs Reality

    Suppose that we create a perfect model of a hydrogen atom. After all, we know all the elements of the atom: a single proton paired with a single electron. Each of these elements is understood for all purposes of discussing the atom. The electron’s dynamics are perfectly understood: the energy levels, the probability distributions, the electron’s spin, the Lamb shift; all of it is well defined. We can simulate this atom to any degree of detail we’d like, including representing it in a quantum computer that actualizes the quantum properties of the constituents.

    But can that simulation ever bridge the gap to reality? Is the story about something ever the same as the thing? Can two perfectly represented hydrogen atoms be added to a real oxygen atom to make a real water molecule? No.

    That is one argument for why machines cannot be made sentient: we can make a machine do all of the things we think sentience entails, but in the end it is just a simulation of intelligence, thinking and consciousness. It is a story, perhaps a very detailed story, but in the end just a story.

    This reminds one of C. S. Lewis’ remark: “If God is our Creator, then we would relate to God as Hamlet would relate to Shakespeare. Now, how is Hamlet ever going to know anything about Shakespeare? Hamlet’s not going to find him anywhere on stage.” And similarly, Shakespeare is never going to know the real Hamlet.

    Sentience in a Dish

    In December of 2022, Brett Kagan and colleagues at Cortical Labs in Melbourne, Australia, published an article in the journal Neuron describing how brain organoids, small lab-grown networks of neurons, were able to learn to play the classic Pong video game. The authors claim that the organoids met the formal definition of sentience in the sense that they were “‘responsive to sensory impressions’ through adaptive internal processes”.

    This may or may not be true, but it is certainly distinct from a simulated sentience that plays Pong. After all, this is not a description of cells that interact with a simulated or even real Pong game. These are real, live cells in a Petri dish, requiring nutrients. Those who argue that consciousness can only come through embodiment would be happy with this definition of sentience.

    But what is it that makes these cells sentient? Where in their soupy embodiment lies the sentience? If we tease apart the network, can we get down to the minimum viable network that plays Pong and meets our formal definition? After all, this is a brain organoid, grown in the lab. We could do this over again and stop when there are fewer cells and see if the same behavior is exhibited. If so, we can repeat the process and find that minimum network that still plays a good game of Pong.

    Whatever that number is, 10 cells or 10,000 cells, we can study it and very likely represent it with a model that replicates the connections, the spiking behavior, even the need for simulated nutrients: everything that is meaningful about the organoid. Would this simulation learn to play Pong? Given the progress in machine learning over the past decade, we have every reason to believe the answer is yes. Would this create sentience in a machine? Or just tell a very detailed story about an organoid that is sentient? And if the latter, then where is the difference?

    Is the simulation of the hydrogen atom qualitatively different from that of the organoid? The simulated hydrogen atom can’t be used to make water. But the simulated organoid, for all practical purposes, does exactly the same thing as the real thing. Both meet the same formal definition of sentience.

    I don’t believe these thoughts get closer to understanding whether machines can be conscious or not. Reductionism might just fail for this problem, and others will argue that embodiment is a requirement. But I do think that we are not far from having that simulation, which in all meaningful ways, can be called sentient.

  • Why Sentienta?

    A little over a year ago I gave a talk in which I discussed machine reasoning and gave some examples from the literature. My audience was skeptical to say the least. But here we are in 2025 with multiple companies claiming powerful reasoning capabilities for their models and new benchmarks set weekly. These are exciting times.

    I’ve always believed that understanding the science behind machine “intelligence” would help us understand more deeply who and what we are. In a way, the reasoning capabilities of today’s models do just that. We question whether they are simply capturing the ‘surface’ statistics of their training data; at the same time, they are unquestionably powerful. Sometimes I think this tells us that our cherished human intelligence may rely on similar ‘weak’ methods more often than we’d like to admit. That is to say, as we come to understand machine intelligence, the mystery of our own may lessen.

    And now we come to machine consciousness. This is the topic that, if mentioned in serious venues, produces much more skepticism and even snickering. After all, we can’t really define what human consciousness is. We know it has to do with a sense of our own existence, sensations of our environment, and thoughts or ‘inner speech’. Given that this is all subjective, will we ever be able to understand what it is? I suspect that, just as with machine reasoning, the mystery of consciousness will begin to lift as we understand what it looks like in a machine.

    One of the more compelling models of consciousness is the ‘Society of Mind’ (Minsky, 1988). This model has only been strengthened in the years since it was published. We now know that self-referential thought and introspection involve multiple brain centers, collectively called the Default Mode Network (DMN). Brain regions including the medial prefrontal cortex, the cingulate gyrus and the hippocampus work together to integrate both past and current experiences. As we begin to model these kinds of interactions, will Chalmers’s “hard problem of consciousness” fade away?

    Sentienta was started both to help companies scale their businesses through virtual teams of agents and to explore these ideas. Agents interact and share their expertise in teams. The dialog that occurs between agents generates new ideas that no single agent produced on its own. Agents have their own memories, which they develop from these interactions, and those memories are a function of each agent’s persona. As a result, agents evolve and bring new experiences and ideas to a team discussion.

    And here is the point: we can think of a Sentienta team as an agent itself. It consists of the collective experience and interactions of multiple agents, something we might regard as a society of mind. Can we build agents that perform functions analogous to those found in the DMN? What light might this shine on our own subjective experience?