Tag: ai

  • Machine Consciousness: Simulation vs Reality

    Suppose that we create a perfect model of a hydrogen atom. After all, we know all the elements of the atom: a single proton paired with a single electron. Each of these constituents is understood well enough for any discussion of the atom. The electron’s dynamics are perfectly understood: the energy levels, the probability distributions, the electron’s spin, the Lamb shift are all well defined. We can simulate this atom to any degree of detail we’d like, including representing it in a quantum computer that actualizes the quantum properties of the constituents.

    But can that simulation ever bridge the gap to reality? Is the story about something ever the same as the thing? Can two perfectly represented hydrogen atoms be added to a real oxygen atom to make a real water molecule? No.

    That is one argument for why machines cannot be made sentient: we can make a machine do all of the things we think sentience entails, but in the end it is just a simulation of intelligence, thinking and consciousness. It is a story, perhaps a very detailed story, but in the end just a story.

    This recalls C. S. Lewis’ observation that if God is our Creator, then we relate to God as Hamlet relates to Shakespeare. How would Hamlet ever come to know anything about Shakespeare? Hamlet is not going to find him anywhere on stage. And similarly, Shakespeare is never going to know the real Hamlet.

    Sentience in a Dish

    In December of 2022, Brett Kagan and colleagues at Cortical Labs in Melbourne, Australia published an article in the journal Neuron describing how brain organoids, small lab-grown networks of neurons, were able to learn to play the classic Pong video game. The authors claim that the organoids met the formal definition of sentience in the sense that they were “‘responsive to sensory impressions’ through adaptive internal processes”.

    This may or may not be true, but it is certainly distinct from a simulated sentience that plays Pong. After all, this is not a description of simulated cells interacting with a Pong game. These are real, live cells, requiring nutrients in a Petri dish. Those who argue that consciousness can only come through embodiment would be happy with this definition of sentience.

    But what is it that makes these cells sentient? Where in their soupy embodiment lies the sentience? If we tease apart the network, can we get down to the minimum viable network that plays Pong and meets our formal definition? After all, this is a brain organoid, grown in the lab. We could do this over again and stop when there are fewer cells and see if the same behavior is exhibited. If so, we can repeat the process and find that minimum network that still plays a good game of Pong.

    Whatever that number is, 10 cells or 10,000, we can study that network and very likely represent it with a model that replicates the connections, the spiking behavior, even the need for simulated nutrients: everything that is meaningful about the organoid. Would this simulation learn to play Pong? Given progress in machine learning in the past decade, we have every reason to believe the answer is yes. Would this create sentience in a machine? Or just tell a very detailed story about an organoid that is sentient? And if the latter, then where is the difference?

    Is the simulation of the hydrogen atom qualitatively different from that of the organoid? The simulated hydrogen atom can’t be used to make water. But the simulated organoid, for all practical purposes, does exactly the same thing as the real thing. Both meet the same formal definition of sentience.

    I don’t believe these thoughts get us closer to understanding whether machines can be conscious. Reductionism might simply fail for this problem, and others will argue that embodiment is a requirement. But I do think that we are not far from having that simulation which, in all meaningful ways, can be called sentient.

  • A Deep-dive into Agents: Tool Access

    An important feature of agents is their ability to utilize tools. Of course, many software components use tools as part of their function, but what distinguishes agents is their ability to reason about when to use a tool, which tool to use, and how to utilize the results.

    In this context, a ‘tool’ refers to a software component designed to execute specific functions upon an agent’s request. This broad definition includes utilities such as file content readers, web search engines, and text-to-image generators, each offering capabilities that agents can utilize in responding to queries from users or other agents.
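    To make this concrete, here is a minimal sketch, in Python, of what a tool and an agent’s tool-selection step might look like. The `Tool` class, the toy tools, and the word-overlap heuristic are all illustrative assumptions, not Sentienta’s actual implementation; a real agent would use an LLM to reason over the tool descriptions.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Tool:
    name: str
    description: str           # the agent reasons over this text when picking a tool
    run: Callable[[str], str]  # executes the tool on the agent's behalf

# Toy stand-ins for a file content reader and a web search engine.
TOOLS = [
    Tool("read_file", "read the contents of a stored file",
         lambda arg: f"<contents of {arg}>"),
    Tool("web_search", "search the web for current information",
         lambda arg: f"<search results for {arg}>"),
]

def pick_tool(request: str) -> Optional[Tool]:
    """Crude stand-in for the agent's reasoning: score each tool by word
    overlap between the request and the tool's description, and return
    None when nothing matches (the agent may decline to use a tool)."""
    words = set(request.lower().split())
    best = max(TOOLS, key=lambda t: len(words & set(t.description.split())))
    return best if words & set(best.description.split()) else None
```

    The key point is not the matching heuristic but the shape of the decision: the agent chooses among described capabilities, invokes one, and is free to choose none at all.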

    Sentienta agents can access tools through several mechanisms. The first is when an agent has been pre-configured with a specific set of tools. Several agents in the Agent Marketplace utilize special tools in their roles. For example, the Document Specialist agent (‘Ed’), which you can find in the Document and Content Access section, utilizes Amazon’s S3 to store and read files, tailoring its knowledge to the content you provide.

    Angie, another agent in the Document and Content Access category, enhances team discussions by using a search engine to fetch the latest web results. This is valuable for incorporating the most current data into a team dialog, addressing the typical limitation of LLMs, which lack up-to-the-minute information in their training sets.

    You are not limited to pre-built tools: you can also create custom tools or integrate third-party ones. If the tool you want to use exposes a REST API that processes structured queries, you can create an agent to call the API (see the FAQ page for more information). Agent ‘Ed’, mentioned earlier, employs such an API for managing files.
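    As a sketch of that pattern, the snippet below wraps a hypothetical REST endpoint as a tool: the agent’s request is packed into a structured JSON query and POSTed to the API. The URL and the field names (`action`, `key`) are invented for illustration; Sentienta’s actual file API will differ, so consult the FAQ page for the real interface.

```python
import json
from urllib import request

# Hypothetical endpoint; the real API's URL and schema will differ.
API_URL = "https://api.example.com/files"

def build_query(action: str, key: str) -> dict:
    """Pack the agent's request into the structured JSON body the API
    expects (field names are illustrative)."""
    return {"action": action, "key": key}

def call_file_api(action: str, key: str) -> str:
    """POST the structured query; the agent would fold the response text
    back into the team dialog."""
    body = json.dumps(build_query(action, key)).encode("utf-8")
    req = request.Request(API_URL, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:  # network call; not exercised here
        return resp.read().decode("utf-8")
```

    What matters for tool use is the structured query: because the body is well defined, the agent can decide which `action` to request and interpret the response on its own.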

    Finally, Sentienta supports completely custom agents that embody their own tool use. You might utilize a popular agent framework such as LangChain to orchestrate more complex functions and workflows. Exposing an API in the form we just discussed will let you integrate this more complex tool use into your team. Check out the Developers page to see how you can build a basic agent in AWS Lambda. This agent doesn’t do much, but you can see how you might add specialized functions to augment your team’s capabilities.
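    A bare-bones version of such a Lambda-hosted agent might look like the handler below. The event shape assumes API Gateway’s proxy integration (the request JSON arrives in `event["body"]`), and the echo reply is a placeholder for real agent logic; the Developers page walks through the full example.

```python
import json

def lambda_handler(event, context):
    """Entry point for a minimal agent behind API Gateway (proxy
    integration assumed). A real agent would call a model, a framework
    such as LangChain, or other tools here instead of echoing."""
    body = json.loads(event.get("body") or "{}")
    query = body.get("query", "")
    reply = f"Agent received: {query}"  # placeholder for real reasoning
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"reply": reply}),
    }
```

    Because the handler speaks plain JSON over HTTPS, it plugs into a team exactly like the REST-API tools described above.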

    In each case, the power of agent tool-use comes from the agent deciding how to use the tool and how to integrate the tool’s results into the team’s dialog. Agents may be instructed by their team to use these tools, or they may decide alone when or if to use a tool.

    This too is a large subject, and much has been written by others on this topic (see for example here and here). We’ve touched on three mechanisms you can use in Sentienta to augment the power of your agents and teams.

    In a future post we’ll discuss how agents interact in teams and how you can control their interactions through tailored personas.

  • Why Sentienta?

    A little over a year ago I gave a talk in which I discussed machine reasoning and gave some examples from the literature. My audience was skeptical to say the least. But here we are in 2025 with multiple companies claiming powerful reasoning capabilities for their models and new benchmarks set weekly. These are exciting times.

    I’ve always believed that understanding the science behind machine “intelligence” would help us understand more deeply who and what we are. In a way, the reasoning capabilities of today’s models do that. We question whether they are simply capturing the ‘surface’ statistics of training data. At the same time, they are unquestionably powerful. I think this sometimes tells us that our cherished human intelligence may rely on similar ‘weak’ methods more often than we’d like to admit. That is to say, as we come to understand machine intelligence, the mystery of our own may lessen.

    And now we come to machine consciousness. This is the topic that, if mentioned in serious venues, produces much more skepticism and even snickering. After all, we can’t really define what human consciousness is. We know it has to do with a sense of our own existence, sensations of our environment, and thoughts or ‘inner speech’. Given that this is all subjective, will we ever be able to understand what it is? I suspect that, just as with machine reasoning, the mystery of consciousness will begin to lift as we understand what it looks like in a machine.

    One of the more compelling models for consciousness is the ‘Society of Mind’ (Minsky, 1988). This model has only been strengthened in the years since it was published. We now know that self-referential thought and introspection involve multiple brain centers, collectively called the Default Mode Network (DMN). Regions including the medial prefrontal cortex, the cingulate gyrus, and the hippocampus work together to integrate both past and current experiences. As we begin to model these kinds of interactions, will Chalmers’s “hard problem of consciousness” fade away?

    Sentienta was started to both help companies scale their businesses through virtual teams of agents, and to explore these ideas. Agents interact and share their expertise in teams. The dialog that occurs between agents generates new ideas that no single agent produced. Agents have their own memories that they develop from these interactions. These memories are a function of an agent’s persona. As a result, agents evolve and bring new experiences and ideas to a team discussion.

    And here is the point: we can think of a Sentienta team as an agent itself. It consists of the collective experience and interactions of multiple agents, a society of mind. Can we build agents that perform functions analogous to those found in the DMN? What light might this shine on our own subjective experience?