Category: Multi-agent Systems

  • A Deep-dive into Agents: Agent Interaction

    In previous posts, I introduced the capabilities of individual agents, including their unique memory architecture and tool access. But what sets Sentienta apart is its multi-agent platform, where agents work together to solve problems. Today, we’ll explore how these agents interact as a team.

    How Agents Work Together

    Let’s consider an example: imagine you’ve formed a team to design a new electric scooter. We’ll call this the Scooter Team, and its type is Product Design.

    The team consists of key specialists: a VP of Product Design as team lead, a mechanical engineer with expertise in two-wheeled vehicles, an electrical engineer for the scooter’s power system, and a legal representative to ensure compliance with regulations. In future posts, we’ll discuss how to create specialist agents in Sentienta, but for now, imagine they’re in place and ready to collaborate.

    Once the team is set up, you initiate a discussion—say, “Let’s consider all the elements needed in the scooter design.” Each agent processes the request from its area of expertise and contributes insights. As they respond, their inputs become part of an ongoing team dialogue, which, as discussed in this post, is stored in each agent’s memory and informs subsequent responses.

    Iterative Problem-Solving

    Agents interact much like human working groups: they listen to teammates before responding, integrating their insights into their own reasoning. This iterative exchange continues until the original question is thoroughly addressed.

    What does that mean for the scooter design team? Suppose the first response comes from the mechanical engineer: she tells the team about the basic components of the design and, in particular, specifies the power needed to drive the scooter. The electrical engineer will consider this power specification when developing his response. The agent representing legal may note that regulations cap the scooter’s speed at 25 mph.

    And this is what is interesting: the input from legal may cause the mechanical and electrical engineers to reconsider their answers and respond again. This iterative answering continues until each agent has contributed enough to fully address the query. Reasoning about the user’s question emerges from this agent interaction.

    The Role of LLMs in Agent Interaction

    How does this happen? The engine that drives each agent is an LLM. The LLM’s system prompt includes several key instructions and pieces of information that foster team interaction: the team definition and teammate personas are included in the prompt, enabling the LLM to consider who on the team is best able to address each aspect of the query.
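
    As a rough sketch of what that assembly could look like (Sentienta’s actual prompt format isn’t public, so the field names and wording below are assumptions):

    ```python
    # Hypothetical sketch of assembling an agent's system prompt from a team
    # definition and teammate personas. Field names are illustrative.
    def build_system_prompt(persona: str, team: dict) -> str:
        teammates = "\n".join(
            f"- {m['name']}: {m['persona']}" for m in team["members"]
        )
        return (
            f"{persona}\n\n"
            f"You are a member of the team '{team['name']}': {team['description']}\n\n"
            f"Your teammates:\n{teammates}\n\n"
            "Consider which teammate is best able to address each aspect "
            "of the query before responding yourself."
        )
    ```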

    In addition, each agent is instructed to think critically about input from teammates when developing a response. This makes team dialogs interactive rather than a series of isolated LLM responses. Agents are also instructed to consider whether other agents have already answered the query, which helps drive the dialog to a conclusion.
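
    Putting those instructions together, a team dialog might loop roughly like the sketch below. This is illustrative only, not Sentienta’s implementation; the `llm` callable and the sentinel completion check are stand-ins:

    ```python
    # Illustrative team-dialog loop: each agent sees the shared dialog,
    # responds from its own persona, and the loop ends once no agent has
    # anything left to add.
    def run_team_dialog(agents, query, llm, max_rounds=5):
        dialog = [("user", query)]
        for _ in range(max_rounds):
            progressed = False
            for agent in agents:
                reply = llm(system=agent.system_prompt, messages=dialog)
                if "NOTHING_TO_ADD" in reply:  # agent judges the query answered
                    continue
                dialog.append((agent.name, reply))
                progressed = True
            if not progressed:  # no agent contributed this round: we're done
                break
        return dialog
    ```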

    Looking Ahead

    This dynamic interaction forms the foundation of Sentienta’s multi-agent problem-solving. In future posts, we’ll explore concepts like delegation and agent autonomy, further uncovering the depth and efficiency of these collaborations.

  • Machine Consciousness: Simulation vs Reality

    Suppose that we create a perfect model of a hydrogen atom. After all, we know all the elements of the atom: a single proton paired with a single electron. Each of these constituents is understood well enough for any discussion of the atom. The electron’s dynamics are perfectly understood: the energy levels, the probability distributions, the electron’s spin, the Lamb shift are all well defined. We can simulate this atom to any degree of detail we’d like, including representing it in a quantum computer that actualizes the quantum properties of the constituents.

    But can that simulation ever bridge the gap to reality? Is the story about something ever the same as the thing? Can two perfectly represented hydrogen atoms be added to a real oxygen atom to make a real water molecule? No.

    That is one argument for why machines cannot be made sentient: we can make a machine do all of the things we think sentience entails, but in the end it is just a simulation of intelligence, thinking and consciousness. It is a story, perhaps a very detailed story, but in the end just a story.

    This recalls C. S. Lewis’ observation that if God is our Creator, then we relate to God as Hamlet relates to Shakespeare: how is Hamlet ever going to know anything about Shakespeare? Hamlet is not going to find him anywhere on stage. And similarly, Shakespeare is never going to know the real Hamlet.

    Sentience in a Dish

    In December of 2022, Brett Kagan and colleagues at Cortical Labs in Melbourne, Australia published an article in the journal Neuron describing how brain organoids, small lab-grown networks of neurons, were able to learn to play the classic Pong video game. The authors claim that the organoids met the formal definition of sentience in the sense that they were “‘responsive to sensory impressions’ through adaptive internal processes”.

    This may or may not be true, but it is certainly distinct from a simulated sentience that plays Pong. After all, this is not a description of cells interacting with a simulated, or even a real, Pong game: these are real, live cells, requiring nutrients in a Petri dish. Those who argue that consciousness can only come through embodiment would be happy with this definition of sentience.

    But what is it that makes these cells sentient? Where in their soupy embodiment lies the sentience? If we tease apart the network, can we get down to the minimum viable network that plays Pong and meets our formal definition? After all, this is a brain organoid, grown in the lab. We could grow it again, stopping when there are fewer cells, and see if the same behavior is exhibited. If so, we can repeat the process and find the minimum network that still plays a good game of Pong.

    Whatever that number is, 10 cells or 10,000, we can study that network and very likely represent it with a model that replicates the connections, the spiking behavior, even the need for (simulated) nutrients: everything that is meaningful about the organoid. Would this simulation learn to play Pong? Given the progress in machine learning over the past decade, we have every reason to believe the answer is yes. Would this create sentience in a machine? Or just tell a very detailed story about an organoid that is sentient? And if the latter, then where is the difference?
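
    To make concrete what such a model could look like, here is a minimal sketch built from leaky integrate-and-fire units, a textbook spiking-neuron model. It is purely illustrative and makes no claim about how the organoid was, or would be, modeled:

    ```python
    import numpy as np

    # Minimal leaky integrate-and-fire network: the connections (weights),
    # membrane potentials, and spikes stand in for the "meaningful" elements
    # of the organoid named above.
    rng = np.random.default_rng(0)
    n = 100                         # number of model neurons
    w = rng.normal(0, 0.1, (n, n))  # synaptic weights: the "connections"
    v = np.zeros(n)                 # membrane potentials
    tau, v_thresh = 20.0, 1.0       # leak time constant and spike threshold

    for t in range(1000):           # simulate 1000 time steps
        spikes = v >= v_thresh      # neurons at threshold fire this step
        v[spikes] = 0.0             # reset the neurons that fired
        drive = w @ spikes.astype(float) + rng.normal(0.05, 0.05, n)
        v += -v / tau + drive       # leaky integration of recurrent + noisy input
    ```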

    Is the simulation of the hydrogen atom qualitatively different from that of the organoid? The simulated hydrogen atom can’t be used to make water. But the simulated organoid, for all practical purposes, does exactly the same thing as the real thing. Both meet the same formal definition of sentience.

    I don’t believe these thoughts get us closer to understanding whether machines can be conscious. Reductionism might simply fail for this problem, and others will argue that embodiment is a requirement. But I do think that we are not far from having that simulation which, in all meaningful ways, can be called sentient.

  • A Deep-dive into Agents: Tool Access

    An important feature of agents is their ability to utilize tools. Of course there are many examples of software components that use tools as part of their function, but what distinguishes agents is their ability to reason about when to use a tool, which tool to use and how to utilize the results.

    In this context, a ‘tool’ refers to a software component designed to execute specific functions upon an agent’s request. This broad definition includes utilities such as file content readers, web search engines, and text-to-image generators, each offering capabilities that agents can utilize in responding to queries from users or other agents.
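
    In code, a tool in this sense is often just a function paired with a natural-language description that the agent’s LLM can reason over when deciding whether and how to call it. A minimal sketch (the names are illustrative, not Sentienta’s API):

    ```python
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Tool:
        name: str
        description: str            # what the LLM reads to decide when to use it
        run: Callable[[str], str]   # executes the tool and returns its result

    def read_file(path: str) -> str:
        """A simple file content reader, one of the utilities mentioned above."""
        with open(path) as f:
            return f.read()

    file_reader = Tool(
        name="file_reader",
        description="Read the contents of a local text file given its path.",
        run=read_file,
    )
    ```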

    Sentienta agents can access tools through several mechanisms. The first is when an agent has been pre-configured with a specific set of tools. Several agents in the Agent Marketplace utilize special tools in their roles. For example, the Document Specialist agent (‘Ed’), which you can find in the Document and Content Access section, utilizes Amazon’s S3 to store and read files, tailoring its knowledge to the content you provide.

    Angie, another agent in the Document and Content Access category, enhances team discussions by using a search engine to fetch the latest web results. This is valuable for incorporating the most current data into a team dialog, addressing the typical limitation of LLMs, which lack up-to-the-minute information in their training sets.

    You have the flexibility to go beyond pre-built tools by creating custom tools or integrating third-party ones. If the tool you want to use exposes a REST API that processes structured queries, you can create an agent to call the API (see the FAQ page for more information). Agent ‘Ed’, mentioned earlier, employs such an API for managing files.
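
    As an illustration, wrapping a third-party REST API as a tool might look like the sketch below. The endpoint URL and payload shape are placeholders, not a real Sentienta endpoint; see the FAQ page for the actual integration steps:

    ```python
    import requests

    # Hypothetical wrapper exposing a REST API as an agent tool.
    API_URL = "https://api.example.com/v1/query"  # placeholder endpoint

    def call_api_tool(structured_query: dict, timeout: float = 10.0) -> dict:
        """POST a structured query to the API and return its JSON response."""
        resp = requests.post(API_URL, json=structured_query, timeout=timeout)
        resp.raise_for_status()  # surface HTTP errors to the calling agent
        return resp.json()
    ```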

    Finally, Sentienta supports completely custom agents that embody their own tool use. You might utilize a popular agent framework such as LangChain to orchestrate more complex functions and workflows. Exposing an API in the form we just discussed will let you integrate this more complex tool use into your team. Check out the Developers page to see how you can build a basic agent in AWS Lambda. This agent doesn’t do much, but you can see how you might add specialized functions to augment your team’s capabilities.
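
    For a sense of scale, a minimal Lambda-style agent handler could be as small as the sketch below. The event and response shapes here are assumptions; the real walkthrough is on the Developers page:

    ```python
    import json

    # Hypothetical minimal agent as an AWS Lambda handler. Like the basic
    # agent mentioned above, it doesn't do much: it just echoes the query.
    def lambda_handler(event, context):
        body = json.loads(event.get("body", "{}"))
        query = body.get("query", "")
        # A real agent would reason over the query here, e.g., via LangChain.
        answer = f"Received query: {query}"
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"answer": answer}),
        }
    ```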

    In each case, the power of agent tool use comes from the agent deciding how to use the tool and how to integrate the tool’s results into the team’s dialog. Agents may be instructed by their team to use these tools, or they may decide on their own when, or whether, to use a tool.

    This too is a large subject, and much has been written by others on this topic (see for example here and here). We’ve touched on three mechanisms you can use in Sentienta to augment the power of your agents and teams.

    In a future post we’ll discuss how agents interact in teams and how you can control their interactions through tailored personas.

  • A Deep-dive into Agents: Memory

    There is a lot of buzz in the news about AI agents. I thought I’d take this opportunity to discuss what makes a Sentienta agent different from what you might have read.

    As this is a somewhat complex subject, I’ve decided to break it into several posts. This one is about the memory that drives agent behavior. Subsequent posts will discuss Tool Access, Task Delegation, Multi-agent Interaction and Autonomous Action.

    A Sentienta agent exists within an environment, consisting of interactions with users, other agents (Sentienta is a multi-agent platform), its local host and the internet.

    The core of the agent is an LLM. We use best-in-class LLMs as engines that drive agentic functions. We constantly assess which LLMs are best suited for the behavior we expect from our agents; given the rapid evolution in LLM capability, this is essential. Because the engine is an LLM, the agent’s fundamental communications, both internal and with the environment, are in natural language.

    The LLM is of little value without context, and this is provided by memory. Sentienta agents have two kinds of memory, which we can loosely relate to the classes of memory known to be used by a brain region called the hippocampus. The first is semantic memory: derived from the agent’s interactions with other agents and the user, this is simply a record of LLM communications organized into the current dialog and past dialogs.

    The second kind of memory is episodic: each agent uses its persona and existing memory to filter and reframe the dialog to create new episodic memories. Note that this is bootstrapped from the persona (which you write when you create the agent) – a new agent builds this memory using the persona as the starting point.
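
    As a loose sketch of that bootstrapping (illustrative only, not Sentienta’s implementation):

    ```python
    # Illustrative episodic-memory formation: the persona and existing
    # memories frame how the latest dialog is filtered and stored.
    def form_episodic_memory(agent, dialog, llm):
        memories = "\n".join(agent.episodic_memory)
        transcript = "\n".join(dialog)
        prompt = (
            f"{agent.persona}\n\n"
            "Given your existing memories and the dialog below, record what is "
            "worth remembering, framed from your own perspective.\n\n"
            f"Existing memories:\n{memories}\n\nDialog:\n{transcript}"
        )
        agent.episodic_memory.append(llm(prompt))
    ```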

    So how is all of this used by the LLM? The persona, along with more general agent instructions, defines the LLM system prompt. The memory (of both types), plus the communication from the environment, forms the query.
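
    In rough code form (the structure is as just described, but every name is illustrative):

    ```python
    # Illustrative context assembly: persona + instructions form the system
    # prompt; both memory types plus the incoming message form the query.
    def build_llm_call(agent, incoming_message):
        system_prompt = f"{agent.persona}\n{agent.general_instructions}"
        query = "\n".join([
            "Current and past dialogs (semantic memory):",
            *agent.semantic_memory,
            "Episodic memories:",
            *agent.episodic_memory,
            "New message:",
            incoming_message,
        ])
        return system_prompt, query
    ```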

    Pretty simple, right? But of course the devil is in the details.

    There are a few things to note about this architecture. The first is that the persona plays an important role: it guides LLM responses, because it is part of the system prompt, and it shapes the evolving agent memory, creating a distinct agent ‘personality’. This personality evolves as the agent participates in tasks and interacts with other agents.

    The second is that the episodic memory is retained by the agent. If an agent belongs to more than one team, the agent brings its memory with it across teams and tasks. For example, if an agent manages a product design team and joins a marketing team, it brings with it the latest design decisions from the product team. And of course what it learns from marketing can drive product design.

    It’s important to note that the agents on your teams belong to you. Knowledge gained by an agent is never shared outside your account.

    That is a high-level summary of how Sentienta agents create context for their tasks. Stay tuned to learn how agents use tools to interact with their environment.

  • Teams, Tasks, Tales

    I was given early access to Sentienta to test out features and work through bugs. I found that Sentienta had both fun and helpful applications. Now that Sentienta is released, I wanted to give new users ideas for things they can try, based on what worked really well for me.

    Human Resources Team

    I work a second job and sometimes need to create material for it. At one point, I discovered an area at my workplace that lacked a specific policy. I created a Sentienta HR team of experts, including an HR staff member, a policy writer, workplace stakeholders, a consumer advocate, a proxy for an attorney, and a risk manager. This group served as a sounding board and consultation resource to draft the new policy.

    Tip:

    To do this, I first created the team in the Manage Teams tab, providing a name for the team, a title, and a brief description. Then I developed some agents for it, the most important part of each agent being its persona. The persona focuses an agent’s contributions in a dialog on a specific area of expertise. Here is an example of what the persona might be for a consumer advocate:

    “You are an advocate and voice for consumers, helping to resolve complaints and ensuring fair practices in business transactions. Additionally, you may engage in public policy efforts to promote transparency and accountability, aiming to improve consumer protection laws and regulations.”

    Note that the persona is drafted to read like instructions to the agent to help focus its contributions. Not sure how to write the description? You can always Google it and edit it down, and the persona can be adjusted over time.

    It was interesting seeing the agents respond to each other and give feedback on each other’s inputs. Each member contributed to the dialog, offering new ideas and suggestions. The dialog concluded once each agent had an opportunity to participate and move the conversation toward a solution.

    Since forming this team, I’ve used it to discuss general policies and related topics whenever I have concerns or ideas. As this is my second job, I am not an expert in many areas, and my focus is just a small part of it. Nonetheless, this team has been instrumental in keeping me informed.

    I want to be clear that I did not provide the team with any specific information or data that could be considered sensitive or proprietary. But even without that specific data, I’ve found the team to be a powerful resource for thinking about the broader issues in my workplace.

    Lit Review Team

    For fun, I enjoy writing, and Sentienta offers a neat capability to form a dream team of literary reviewers. I’ve created a team with iconic writers such as William Shakespeare, Ernest Hemingway, and Edgar Allan Poe. To anchor the team’s feedback in a contemporary style, I included my favorite author, Jim Butcher.

    When I provided pieces of a project I was writing and asked the team to review them, each agent gave feedback based on its assigned persona. The feedback was remarkable and offered unexpected insights, not only enhancing my understanding of how readers perceive my work but also deepening my connection to the passion that fuels my writing.

    Tip:

    Here is how you can add content to your team (in my case, writing samples): click the paperclip icon located in the toolbar on Sentienta’s main page. This lets you select a file from your desktop (most file formats are supported). Once you click OK, the file will load, and you can enter a question or direction telling the team how to use the file’s contents. All of the agents will have access to the file and can use it in the dialog.

    At one point I also tried having the various agents rewrite parts of my work in their own styles. This gave me interesting viewpoints and helped with the editing process.

    The team, dubbed Lit Review, also functioned as a great way to learn about the authors and better understand each author’s writing. It was fun to watch these famous writers edit each other’s responses!

    I feel obliged to add here that I never used the team to write my project for me. I used it for guidance and ideas to improve the quality of my writing. Professionals could easily use this to edit their own work, whether in business or creative writing.

    Sentienta has become an invaluable tool for tackling the challenges I face both professionally and creatively. The Lit Review team will remain one of my go-tos, but I also intend to form new teams to explore how they can support me.

  • Why Sentienta?

    A little over a year ago I gave a talk in which I discussed machine reasoning and gave some examples from the literature. My audience was skeptical to say the least. But here we are in 2025 with multiple companies claiming powerful reasoning capabilities for their models and new benchmarks set weekly. These are exciting times.

    I’ve always believed that understanding the science behind machine “intelligence” would help us understand more deeply who and what we are. In a way, the reasoning capabilities of today’s models do that. We question whether they are simply capturing the ‘surface’ statistics of training data. At the same time, they are unquestionably powerful. I think this sometimes tells us that our cherished human intelligence may rely on similar ‘weak’ methods more often than we’d like to admit. That is to say, as we come to understand machine intelligence, the mystery of our own may lessen.

    And now we come to machine consciousness. This is the topic that, if mentioned in serious venues, produces much more skepticism and even snickering. After all, we can’t really define what human consciousness is. We know it has to do with a sense of our own existence, sensations of our environment, and thoughts or ‘inner speech’. Given that this is all subjective, will we ever be able to understand what it is? I suspect that, just as with machine reasoning, the mystery of consciousness will begin to lift as we understand what it looks like in a machine.

    One of the more compelling models for consciousness is the ‘Society of Mind‘ (Minsky, 1988). This model has only been strengthened in the years since it was published. We now know that self-referential thought and introspection involve multiple brain centers, collectively called the Default Mode Network (DMN). Brain regions including the medial prefrontal cortex, the cingulate gyrus, and the hippocampus work together to integrate both past and current experiences. As we begin to model these kinds of interactions, will Chalmers’ “hard problem of consciousness” fade away?

    Sentienta was started both to help companies scale their businesses through virtual teams of agents and to explore these ideas. Agents interact and share their expertise in teams. The dialog between agents generates new ideas that no single agent produced. Agents have their own memories, which they develop from these interactions. These memories are a function of an agent’s persona. As a result, agents evolve and bring new experiences and ideas to a team discussion.

    And here is the point: we can think of a Sentienta team as an agent itself. It consists of the collective experience and interactions of multiple agents, which we might think of as a society of mind. Can we build agents that perform functions analogous to those found in the DMN? What light might this shine on our own subjective experience?

  • Scaling Your Business

    Starting a company is challenging. There is so much to think about. If you are a software developer and you and your friends have a cool idea, you can build a product pretty easily with today’s tools (especially now with generative AI).

    But as anyone who has tried this knows, there is so much more to building a company than just building the product. It’s rarely the case that ‘if you build it they will come.’ It’s up to you to tell your potential customers about who you are and why your product is worth their attention.

    Some years ago I worked for a small company that built neural network solutions for businesses. We did a lot of different things: character recognition for form processing, fraud detection for credit cards and mortgage insurance, even an early ANPR (automatic number plate recognition) system.

    When we wanted to introduce a new product, we had to farm out the work to an ad agency. Their folks would come in, interview us, and talk about the product, then go away to develop a concept for the promotional material. This took a lot of time, and often there were several iterations.

    Startup founders today don’t have the luxury of time for that approach. However, they have rich new tools that let them create and manage marketing campaigns themselves. Google offers great tools for deploying and monitoring your ‘assets’, but that misses the step that comes first: what is the campaign’s message? What narrative will I use to create the message? What visuals should I create and how do I create them?

    With Sentienta you can create a team of marketing experts to help you answer these questions. The Agent Marketplace has pre-built agents to help you think about your messaging and create content that describes your product and why it matters to your intended customers. There are agents to create visuals, both images and video, and even an agent to help you select the right social media channel.

    Creating a virtual marketing team can help you scale your business without adding a whole new team of people. A Sentienta marketing team can help you develop the assets you need to use powerful deployment tools like Google Ads.

  • Sentienta Teams

    Sentienta is different from your current experience with AI chatbots. The essence of our product is that something special comes from the interaction of experts.

    When ideas flow from one expert to another, they are evaluated, improved, verified, and sometimes debunked. The dialog between experts is a window into the evolution of ideas and solutions to problems. This transparency is the foundation of trust.

    This is why you find teams in companies: having just one person solve a problem is ok, but getting the perspective of people with different skills makes your solution much more robust.

    Our latest presentation explains how this idea can be used in your company by building virtual teams of GenAI agents.

    https://www.sentienta.ai/blog/Sentienta.pdf