Tag: artificial-intelligence

  • Team Dynamics

    While we’ve explored agent functions in these posts, Sentienta is, at its core, a multi-agent system where cooperation and debate enhance reasoning.

    Multi-agent Debate (MAD) and Multi-agent Cooperative Decision Making (CDM) have recently become intense areas of research, with numerous survey papers exploring both classical (non-LLM) and fully LLM-based approaches ([1], [2], [3]). While these reviews typically provide a high-level overview of the domains in which MAD/CDM systems operate and their general structure, they offer limited detail on enabling effective interaction among LLMs through cooperative and critical dialogue. In this post, we aim to bridge this gap, focusing specifically on techniques for enhancing LLM-based systems.

    We’ll begin by reviewing the characteristics of effective team dynamics, human or otherwise. Teams are most productive when they display these behaviors:

    • Balanced Participation – Ensure all members contribute and have the opportunity to share their insights.
    • Critical Thinking – Evaluate ideas objectively, considering their strengths and weaknesses. Encourage discussion and rebuttals where needed.
    • Well-defined Expertise and Responsibilities – Each team member should bring something special to the discussion and be responsible for exercising that expertise.
    • Continuous Learning – Team members should reflect on past discussions and recall earlier decisions to refine the current dialog.
    • Defined Decision-Making Criteria – Teams should have a clear idea of how and when a problem is solved. This may or may not include a team-lead concluding the discussion.

    How might we get a team of LLM-based agents to exhibit these dynamics? LLMs are stateless, which means that whenever we want an agent to participate, it must be provided with the query, the context of the query, and any instructions on how best to answer it.

    As discussed here, the context for the query is provided as a transcript of the current and past dialogs. The system prompt is where the agent is given instructions for team dynamics and the persona that defines the agent’s expertise.
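
    To make this concrete, here is a minimal sketch of how a single stateless turn might be assembled. This is not Sentienta's actual code; `build_agent_turn` and `call_llm` are hypothetical names standing in for the real prompt assembly and model call:

    ```python
    def build_agent_turn(persona, team_instructions, transcript, query):
        """Assemble the inputs for one stateless agent turn.

        Because the LLM keeps no state between calls, every turn re-supplies
        the persona, the team-dynamics instructions, and the transcript of
        the dialog so far.
        """
        system_prompt = f"{persona}\n\nTeam instructions:\n{team_instructions}"
        history = "\n".join(f"{speaker}: {text}" for speaker, text in transcript)
        user_message = f"Transcript so far:\n{history}\n\nCurrent query: {query}"
        return system_prompt, user_message

    # call_llm is a placeholder for whatever model API is in use:
    # reply = call_llm(system=system_prompt, user=user_message)
    ```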

    Here are some key points in the system prompt that address the team dynamics we’re looking for, stated in second-person instructions:

    Balanced Participation:

    **Brevity**: Keep answers short (1-2 sentences) to allow others to participate.

    **Avoid Repetition**: Do not repeat what you or others have already said. Only add new insights or alternative viewpoints.

    **Contribute**: Add new, relevant insights if your response is unique.

    Critical Thinking:

    **Critique**: Think critically about others’ comments and ask probing questions.

    **Listen and Engage**: Focus on understanding your teammates and ask questions that dig into their ideas. Listen for gaps in understanding and use questions to address these gaps.

    **Avoid Repetition**: Do not repeat what you or others have already said. Only add new insights or alternative viewpoints.

    **Prioritize Questions**: Lead with questions that advance the discussion, ensuring clarification or elaboration on points made by others before providing your own insights.

    Well-defined Expertise and Responsibilities:

    This is provided by the agent persona. In addition, there are these team instructions:

    **Engage**: Provide analysis, ask clarifying questions, or offer new ideas based on your expertise.

    Continuous Learning:

    **Read the Transcript**: Review past and current discussions. If neither has content, then simply answer the user’s question.

    **Reference**: Answer questions from past dialogs when relevant.

    Defined Decision-Making Criteria:

    **Prioritize High-Value Contributions**: Respond to topics that have not yet been adequately covered or address any gaps in the discussion. If multiple agents are addressing the same point, seek consensus before contributing.

    **Silence**: If you find no specific question to answer or insight to add, do not respond.

    **Completion**: If you have nothing more to add to the discussion and the user’s query has been answered, simply state you have nothing to add.

    These instructions direct each agent to contribute based on their expertise, responding to both user queries and peer inputs. They emphasize brevity and silence when no meaningful input is available, ensuring discussions remain concise, non-redundant, and goal-oriented.

    Conclusion

    The team dialog evolves dynamically, with each agent addressing the user’s query through these dynamics. The dialog continues until every agent has participated fully, typically responding several times to ideas offered by teammates. Once each agent decides there is nothing more to add, the discussion comes to an end.
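
    As a rough sketch of that termination logic (the agent interface and the `NOTHING_TO_ADD` sentinel below are illustrative assumptions, not platform code), the orchestration can be pictured as repeated rounds that end once every agent passes:

    ```python
    NOTHING_TO_ADD = "I have nothing to add."

    def run_team_dialog(agents, query, max_rounds=10):
        """Run rounds of responses until every agent passes in the same round.

        Each agent exposes a respond(query, transcript) method that returns a
        contribution, NOTHING_TO_ADD, or None (silence).
        """
        transcript = []
        for _ in range(max_rounds):
            contributions = 0
            for agent in agents:
                reply = agent.respond(query, transcript)
                if reply and reply != NOTHING_TO_ADD:
                    transcript.append((agent.name, reply))
                    contributions += 1
            if contributions == 0:  # every agent was silent or had nothing to add
                break
        return transcript
    ```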

    References:

    [1] Jin, W., Du, H., Zhao, B., Tian, X., Shi, B., and Yang, G. A Comprehensive Survey on Multi-Agent Cooperative Decision-Making: Scenarios, Approaches, Challenges and Perspectives. Available at SSRN.

    [2] Li, X., Wang, S., Zeng, S., et al. A survey on LLM-based multi-agent systems: workflow, infrastructure, and challenges. Vicinagearth 1, 9 (2024). Available at DOI.

    [3] Rizk, Y., Awad, M., and Tunstel, E. W. Decision Making in Multiagent Systems: A Survey. IEEE Transactions on Cognitive and Developmental Systems, vol. 10, no. 3, pp. 514-529, Sept. 2018, doi: 10.1109/TCDS.2018.2840971. Available at IEEE.

  • Tips and Tricks for Creating an Effective Agent

    Creating an agent in Sentienta is straightforward, but a few key strategies can help ensure your agent works optimally. Below, we’ll walk through the setup process and offer insights on defining an agent’s role effectively.

    Step 1: Create a Team

    Before creating an agent, you must first establish a team. To do this:

    1. Navigate to the Your Agents and Teams page using the Manage Teams button on the homepage.
    2. Click Create a Team. You’ll see three fields:
      • Name: Enter a name, such as “HR Team”
      • Type: Categorize the team (e.g., “Human Resources”).
      • Description: This defines the team’s purpose. A simple example: “This team manages Human Resources for the company.”
    3. Click Submit to create the team.

    Step 2: Create an Agent

    Once you’ve created a team, it will appear in the Teams section along with the Sentienta Support Team. Follow these steps to add an agent:

    1. Select your team (e.g., HR Team).
    2. Click Create an Agent in the left menu.
    3. Assign a name. Let’s call this agent Bob.
    4. Define Bob’s title—e.g., Benefits Specialist.
    5. Define Bob’s Persona, which outlines expertise and interactions.

    Step 3: Crafting an Effective Persona

    The Persona field defines the agent’s expertise and shapes its interactions. As discussed in our earlier post on Agent Interaction, the agent uses an LLM to communicate with both users and other agents. Since the persona is part of the LLM system prompt, it plays a crucial role in guiding the agent’s responses.

    The persona should clearly define what the agent is able to do and how the agent will interact with the other members on the team. (To see examples of effective personas, browse some of the agents in the Agent Marketplace).

    A well-crafted persona for Bob might look like this:

    “You are an expert in employee benefits administration, ensuring company programs run smoothly and efficiently. You manage health insurance, retirement plans, and other employee perks while staying up to date with legal compliance and industry best practices through your Research Assistant. You provide guidance to employees on their benefits options and collaborate with the HR Generalist and Recruiter to explain benefits to new hires.”

    Key persona components:

    • Expertise: Clearly defines Bob’s role in benefits administration.
    • User Interaction: Specifies that Bob provides guidance to employees.
    • Team Collaboration: Mentions interactions with other agents, such as the HR Generalist and Recruiter.
    • Delegation: Optionally, defines which agents Bob may delegate to—for example, a Research Assistant agent that retrieves compliance updates.

    If additional agents (like the HR Generalist or Research Assistant) don’t yet exist, their roles can be updated in Bob’s persona as the team expands.
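
    Putting the pieces together, the agent definition boils down to a handful of fields. The structure below is only an illustration of those fields for Bob; the key names are ours, not Sentienta's API:

    ```python
    bob = {
        "team": "HR Team",
        "name": "Bob",
        "title": "Benefits Specialist",
        "persona": (
            "You are an expert in employee benefits administration, ensuring "
            "company programs run smoothly and efficiently. You manage health "
            "insurance, retirement plans, and other employee perks while staying "
            "up to date with legal compliance and industry best practices through "
            "your Research Assistant. You provide guidance to employees on their "
            "benefits options and collaborate with the HR Generalist and Recruiter "
            "to explain benefits to new hires."
        ),
        # "url": None,  # the optional URL field, saved for a future post
    }
    ```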

    Once the persona is complete, click Submit to add Bob to the team. (We won’t discuss the optional URL field today; we’ll save it for a future post.)

    Step 4: Testing Your Agent

    Now that Bob is created, you can test the agent’s expertise:

    1. Navigate to the home page and select the HR Team below Your Teams.
    2. Make sure Bob’s checkbox is checked and enter a query, such as “What is your expertise?”
    3. Bob will respond with something like:

    “I am a Benefits Specialist, responsible for employee benefits administration, including health insurance, retirement plans, and other perks. I ensure compliance with regulations and provide guidance to employees on their benefits options.”

    If asked an unrelated question, such as “What is today’s weather?”, Bob will remain silent. This behavior ensures that agents only respond within their expertise, promoting efficient team collaboration.

    Next Steps

    Once your agent is set up, you can explore additional customization options, such as adding company-specific benefits documentation to Bob’s knowledge base. Stay tuned for a future post on enhancing an agent’s expertise with internal documents.

  • A Deep-dive into Agents: Agent Autonomy

    In past posts, we’ve explored key aspects of AI agents, including agent memory, tool access, and delegation. Today, we’ll focus on how agents can operate autonomously in the “digital wild” and clarify the distinction between delegation and autonomy.

    Understanding Delegation and Autonomy

    Agent delegation involves assigning a specific task to an agent, often with explicit instructions. In contrast, autonomy refers to agents that operate independently, making decisions without significant oversight.

    Within Sentienta, agents function as collaborative experts, striking a balance between autonomy and delegation for structured yet dynamic problem-solving. Autonomous behavior includes analyzing data, debating strategies, and making decisions without user intervention, while delegated tasks ensure precise execution of specific actions.

    For example, a Business Strategy Team could autonomously assess market trends, identify risks, and refine strategies based on live data. At the same time, these agents might delegate the task of gathering fresh market data to a Web Search Agent, demonstrating how autonomy and delegation complement each other.
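
    The distinction can be pictured in code. The sketch below is purely conceptual, with hypothetical agent objects and method names: autonomy is the agent's own decision loop, and delegation is an explicit hand-off inside that loop.

    ```python
    def business_strategy_round(analyst, web_search_agent, market_state):
        """One autonomous reasoning step that may include a delegated sub-task."""
        # Autonomy: the agent decides on its own what the situation requires.
        assessment = analyst.assess(market_state)

        if assessment.needs_fresh_data:
            # Delegation: a specific, well-scoped task handed to a specialist agent.
            fresh_data = web_search_agent.run(
                task="gather current market data",
                topics=assessment.open_questions,
            )
            assessment = analyst.revise(assessment, fresh_data)

        return assessment.recommendations
    ```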

    Extending Autonomy Beyond Internal Systems

    Sentienta Assistant agents and teams can also function beyond internal environments, operating autonomously on third-party platforms. Whether embedded as intelligent assistants or collaborating in external workflows, these agents dynamically adapt by responding to queries, analyzing evolving data, and refining recommendations—all without requiring continuous oversight.

    Practical Applications of Autonomous Agents

    Below are practical applications showcasing how agents can operate independently or in collaboration to optimize workflows and decision-making.

    • Financial Advisory & Portfolio Management (Single Agent) – A financial advisor agent reviews portfolios, suggests adjustments based on market trends, and provides personalized investment strategies.
    • Customer Support Enhancement (Single Agent or Team) – A support agent answers queries while a team collaborates to resolve complex issues, escalating cases to specialized agents for billing or troubleshooting.
    • Data-Driven Market Research (Sentienta Team) – A multi-agent team tracks competitor activity, gathers insights, and generates real-time market summaries, using delegation for data collection.
    • Legal Document Analysis & Compliance Checks (Single Agent) – A legal agent reviews contracts, identifies risk clauses, and ensures regulatory compliance, assisting legal teams with due diligence.
    • Healthcare Support & Patient Triage (Single Agent) – A virtual medical assistant assesses symptoms, provides diagnostic insights, and directs patients to appropriate specialists.

    The Future of AI Autonomy in Business

    By combining autonomy with effective delegation, Sentienta agents serve as dynamic problem-solvers across industries. Whether streamlining internal workflows or enhancing real-time decision-making, these AI-driven assistants unlock new possibilities for efficiency, expertise, and scalable innovation.

  • A Deep-dive into Agents: Agent Delegation

    In my last post, we explored how agents interact within a team. Building on that foundation, let’s examine agent delegation—a structured process in which agents assign tasks to others based on expertise, priority, and context.

    Unlike agent autonomy, which I’ll cover in a future post, agent delegation focuses on deliberate, workflow-driven collaboration among agents. Rather than acting independently, agents make informed decisions about which tasks they should handle and which should be handed off to specialized counterparts.

    Structuring Delegation in Agent Teams

    Sentienta agents operate based on personas—natural language descriptions of their expertise. These personas guide how an agent engages in problem-solving within a team. Crucially, each agent has awareness of its teammates’ expertise, as this information is embedded in the system prompt of their respective language models.

    When responding to a query, each agent evaluates both its own capabilities and how best to leverage the expertise of others. This adaptive delegation is an essential feature of Sentienta’s design. Agents iteratively work through problems, sharing insights, refining their contributions, and identifying gaps in the discussion. When an agent determines that a particular aspect of a query requires specialized attention, it can delegate the task—often providing specific instructions on how to approach it.
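
    One way to picture this (an illustrative sketch, not Sentienta's actual prompt format) is that each agent's system prompt is built from its own persona plus a roster of its teammates, which is what lets the model reason about who should handle what:

    ```python
    def build_system_prompt(persona, teammates):
        """Compose an agent's system prompt with awareness of its teammates.

        teammates maps an agent's name to a short description of its expertise.
        """
        roster = "\n".join(f"- {name}: {skill}" for name, skill in teammates.items())
        return (
            f"{persona}\n\n"
            "Your teammates and their expertise:\n"
            f"{roster}\n\n"
            "When part of a query is better handled by a teammate, delegate it "
            "to them with specific instructions rather than answering it yourself."
        )
    ```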

    This structured, dynamic handoff is what differentiates agent delegation from the broader concept of agent autonomy. While autonomy involves independent decision-making, delegation is about intelligent collaboration.

    A Practical Example: Agent-Driven Financial Analysis

    To illustrate, let’s consider a small Sentienta team analyzing financial markets. This team consists of:

    • 🔹 Financial Analyst Agent – Interprets market data, economic trends, and financial reports.
    • 🔹 Risk Assessment Agent – Evaluates market volatility, credit ratings, geopolitical risks, and sector stability.
    • 🔹 Web Research Agent – Gathers external data, such as stock performance, news reports, and regulatory changes.

    A delegated workflow might operate as follows:

    1. Financial Analyst Agent requests the Web Research Agent to gather financial reports and market performance data.
    2. Risk Assessment Agent instructs the Web Research Agent to track real-time market volatility and news on macroeconomic risks.
    3. Web Research Agent retrieves and summarizes relevant data, providing source links for deeper analysis.
    4. Financial Analyst Agent selects key companies for further investigation and delegates risk-factor analysis to the Risk Assessment Agent, requesting a review of leadership stability, credit ratings, and sector trends.
    5. If complex statistical trends emerge, an additional Data Analytics Agent might be introduced to identify patterns and forecast future performance.

    Crucially, these steps are not static. The delegation process evolves dynamically, responding to new information in real time.
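
    Expressed as a sketch with hypothetical class and method names, the workflow above is a sequence of scoped hand-offs whose outputs feed the later steps:

    ```python
    def financial_analysis_workflow(analyst, risk_agent, web_agent, watchlist):
        """Illustrative delegated workflow for the financial-analysis example."""
        # Steps 1-2: both specialists delegate data gathering to the Web Research Agent.
        reports = web_agent.run("gather financial reports and market performance data")
        volatility = web_agent.run("track real-time volatility and macroeconomic risk news")

        # Step 4: the analyst narrows the focus, then delegates the risk analysis.
        shortlist = analyst.select_companies(reports, watchlist)
        risk_review = risk_agent.run(
            "review leadership stability, credit ratings, and sector trends",
            companies=shortlist,
            context=volatility,
        )
        return analyst.summarize(shortlist, risk_review)
    ```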

    The Benefits of Task Delegation

    By structuring delegation in this way, Sentienta teams achieve modular adaptability—scaling efficiently as new agents are introduced or refined without burdening a single model. This approach ensures that specialized tasks are handled by the most relevant agents, improving both accuracy and depth of analysis.

    But what happens when agents move beyond structured delegation toward autonomous strategic decision-making? In a future post, I’ll explore how Agent Autonomy is set to redefine enterprise AI, reducing human intervention while maintaining control and reliability.

  • A Deep-dive into Agents: Agent Interaction

    In previous posts, I introduced the capabilities of individual agents, including their unique memory architecture and tool access. But what sets Sentienta apart is its multi-agent platform, where agents work together to solve problems. Today, we’ll explore how these agents interact as a team.

    How Agents Work Together

    Let’s consider an example: imagine you’ve formed a team to design a new electric scooter. We’ll call this the Scooter Team, and its type is Product Design.

    The team consists of key specialists: a VP of Product Design as team lead, a mechanical engineer with expertise in two-wheeled vehicles, an electrical engineer for the scooter’s power system, and a legal representative to ensure compliance with regulations. In future posts, we’ll discuss how to create specialist agents in Sentienta, but for now, imagine they’re in place and ready to collaborate.

    Once the team is set up, you initiate a discussion—say, “Let’s consider all the elements needed in the scooter design.” Each agent processes the request from its area of expertise and contributes insights. As they respond, their inputs become part of an ongoing team dialogue, which, as discussed in this post, is stored in each agent’s memory and informs subsequent responses.

    Iterative Problem-Solving

    Agents interact much like human working groups: they listen to teammates before responding, integrating their insights into their own reasoning. This iterative exchange continues until the original question is thoroughly addressed.

    What does that mean for the scooter design team? Suppose the first response comes from the mechanical engineer: she tells the team about the basic components of the design and, in particular, specifies the power needed to drive the scooter. The electrical engineer will consider this power specification when developing his response. The agent representing legal may note that regulations cap the scooter’s speed at 25 mph.

    And this is what is interesting: the input from legal may cause the mechanical and electrical engineers to reconsider their answers and respond again. This iterative answering will continue until each agent has contributed sufficiently to fully address the query. Reasoning about the user’s question derives from this agent interaction.

    The Role of LLMs in Agent Interaction

    How does this happen? The engine that drives each agent is an LLM. The LLM system prompt includes several key instructions and pieces of information that foster team interaction: the team definition, along with teammate personas, is included in the prompt, enabling the LLM to consider who on the team is best able to address each aspect of the query.

    In addition, each agent is instructed to think critically about input from teammates when developing a response. This makes team dialogs interactive rather than isolated LLM responses. Agents are also instructed to consider whether the query has already been answered by other agents, which helps drive the dialog to a conclusion.

    Looking Ahead

    This dynamic interaction forms the foundation of Sentienta’s multi-agent problem-solving. In future posts, we’ll explore concepts like delegation and agent autonomy, further uncovering the depth and efficiency of these collaborations.

  • Machine Consciousness: Simulation vs Reality

    Suppose that we create a perfect model of a hydrogen atom. After all, we know all the elements of the atom, a single proton paired with a single electron. Each of these elements is understood for all purposes of discussing the atom. The electron’s dynamics are perfectly understood: the energy levels, the probability distributions, the electron’s spin, the Lamb shift, it is all well defined. We can simulate this atom to any degree of detail we’d like, including representing it in a quantum computer that actualizes the quantum properties of the constituents.

    But can that simulation ever bridge the gap to reality? Is the story about something ever the same as the thing? Can two perfectly represented hydrogen atoms be added to a real oxygen atom to make a real water molecule? No.

    That is one argument for why machines cannot be made sentient: we can make a machine do all of the things we think sentience entails, but in the end it is just a simulation of intelligence, thinking and consciousness. It is a story, perhaps a very detailed story, but in the end just a story.

    This reminds one of C. S. Lewis’ remark: “If God is our Creator, then we would relate to God as Hamlet would relate to Shakespeare. Now, how is Hamlet ever going to know anything about Shakespeare? Hamlet’s not going to find him anywhere on stage.” And similarly, Shakespeare is never going to know the real Hamlet.

    Sentience in a Dish

    In December of 2022, Brett Kagan and colleagues at Cortical Labs in Melbourne, Australia published an article in the journal Neuron, describing how brain organoids, small lab-grown networks of neurons, were able to learn to play the classic Pong video game. The authors claim that the organoids met the formal definition of sentience in the sense that they were “‘responsive to sensory impressions’ through adaptive internal processes”.

    This may or may not be true, but it is certainly distinct from a simulated sentience that plays Pong. After all, this is not a description of cells that interact with a simulated or even real Pong game; these are real, living cells, requiring nutrients in a Petri dish. Those who argue that consciousness can only come through embodiment would be happy with this definition of sentience.

    But what is it that makes these cells sentient? Where in their soupy embodiment lies the sentience? If we tease apart the network, can we get down to the minimum viable network that plays Pong and meets our formal definition? After all, this is a brain organoid, grown in the lab. We could do this over again and stop when there are fewer cells and see if the same behavior is exhibited. If so, we can repeat the process and find that minimum network that still plays a good game of Pong.

    Whatever that number is, 10 cells or 10,000 cells, we can study it and very likely represent it with a model that replicates the connections, the spiking behavior, even the need for (simulated) nutrients: everything that is meaningful about the organoid. Would this simulation learn to play Pong? Given progress in machine learning in the past decade, we have every reason to believe the answer is yes. Would this create sentience in a machine? Or just tell a very detailed story about an organoid that is sentient? And if the latter, then where is the difference?

    Is the simulation of the hydrogen atom qualitatively different from that of the organoid? The simulated hydrogen atom can’t be used to make water. But the simulated organoid, for all practical purposes, does exactly the same thing as the real thing. Both meet the same formal definition of sentience.

    I don’t believe these thoughts get closer to understanding whether machines can be conscious or not. Reductionism might just fail for this problem, and others will argue that embodiment is a requirement. But I do think that we are not far from having that simulation, which in all meaningful ways, can be called sentient.

  • A Deep-dive into Agents: Tool Access

    An important feature of agents is their ability to utilize tools. Of course there are many examples of software components that use tools as part of their function, but what distinguishes agents is their ability to reason about when to use a tool, which tool to use and how to utilize the results.

    In this context, a ‘tool’ refers to a software component designed to execute specific functions upon an agent’s request. This broad definition includes utilities such as file content readers, web search engines, and text-to-image generators, each offering capabilities that agents can utilize in responding to queries from users or other agents.
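
    In code terms, a tool is just a callable paired with a description the agent can reason over. The pattern below is a generic illustration, not tied to Sentienta's internals or any particular framework; the tool names and decision format are assumptions for the example:

    ```python
    TOOLS = {
        "read_file": {
            "description": "Read the contents of a file at the given path.",
            "fn": lambda path: open(path).read(),
        },
        "web_search": {
            "description": "Search the web and return the top results.",
            "fn": lambda query: f"(search results for {query!r})",  # stub
        },
    }

    def maybe_use_tool(decision):
        """Execute a tool only if the agent decided one is needed.

        decision is the agent's structured choice, for example
        {"tool": "web_search", "argument": "2025 e-scooter regulations"}
        or {"tool": None} when it answers directly.
        """
        name = decision.get("tool")
        if name is None:
            return None  # the agent chose to answer without a tool
        tool = TOOLS[name]
        return tool["fn"](decision["argument"])
    ```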

    Sentienta agents can access tools through several mechanisms. The first is when an agent has been pre-configured with a specific set of tools. Several agents in the Agent Marketplace utilize special tools in their roles. For example, the Document Specialist agent (‘Ed’) which you can find in the Document and Content Access section, utilizes Amazon’s S3 to store and read files, tailoring its knowledge to the content you provide.

    Angie, another agent in the Document and Content Access category, enhances team discussions by using a search engine to fetch the latest web results. This is valuable for incorporating the most current data into a team dialog, addressing the typical limitation of LLMs, which lack up-to-the-minute information in their training sets.

    You have the flexibility to go beyond pre-built tools. Another option allows you to create custom tools or integrate third-party ones. If the tool you want to use exposes a REST API that processes structured queries, you can create an agent to call the API (see the FAQ page for more information). Agent ‘Ed’, mentioned earlier, employs such an API for managing files.

    Finally, Sentienta supports completely custom agents that embody their own tool use. You might utilize a popular agent framework, such as LangChain, to orchestrate more complex functions and workflows. Exposing an API in the form we just discussed will let you integrate this more complex tool-use into your team. Check out the Developers page to see how you can build a basic agent in AWS Lambda. This agent doesn’t do much, but you can see how you might add specialized functions to augment your team’s capabilities.
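
    As a rough idea of what such a minimal agent endpoint could look like (a hedged sketch, not the example from the Developers page), an AWS Lambda handler behind a REST API only needs to accept a query and return a structured response:

    ```python
    import json

    def lambda_handler(event, context):
        """Minimal agent endpoint: accept a query, return a response.

        A real agent would call an LLM or a framework such as LangChain here;
        this stub only shows the request/response shape behind a REST API.
        """
        body = json.loads(event.get("body") or "{}")
        query = body.get("query", "")
        answer = f"Received query: {query!r}. Replace this with real agent logic."
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"response": answer}),
        }
    ```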

    In each case, the power of agent tool-use comes from the agent deciding how to use the tool and how to integrate the tool’s results into the team’s dialog. Agents may be instructed by their team to use these tools, or they may decide alone when or if to use a tool.

    This too is a large subject, and much has been written by others on this topic (see for example here and here). We’ve touched on three mechanisms you can use in Sentienta to augment the power of your agents and teams.

    In a future post we’ll discuss how agents interact in teams and how you can control their interactions through tailored personas.

  • Why Sentienta?

    A little over a year ago I gave a talk in which I discussed machine reasoning and gave some examples from the literature. My audience was skeptical to say the least. But here we are in 2025 with multiple companies claiming powerful reasoning capabilities for their models and new benchmarks set weekly. These are exciting times.

    I’ve always believed that understanding the science behind machine “intelligence” would help us understand more deeply who and what we are. In a way, the reasoning capabilities of today’s models do that. We question whether they are simply capturing the ‘surface’ statistics of training data. At the same time, they are unquestionably powerful. I think sometimes this tells us that our cherished human intelligence may rely on similar ‘weak’ methods more often than we’d like to admit. That is to say, as we come to understand machine intelligence the mystery of our own may lessen.

    And now we come to machine consciousness. This is the topic that, if mentioned in serious venues, produces much more skepticism and even snickering. After all, we can’t really define what human consciousness is. We know it has to do with a sense of our own existence, sensations of our environment, and thoughts or ‘inner speech’. Given that this is all subjective, will we ever be able to understand what it is? I suspect that, just as with machine reasoning, the mystery of consciousness will begin to lift as we understand what it looks like in a machine.

    One of the more compelling models for consciousness is the ‘Society of Mind’ (Minsky, 1988). This model has only been strengthened in the years since it was published. We now know that self-referential thought and introspection involve multiple brain centers, collectively called the Default Mode Network (DMN). Brain regions including the medial prefrontal cortex, the cingulate gyrus, and the hippocampus work together to integrate both past and current experiences. As we begin to model these kinds of interactions, will Chalmers’s “hard problem of consciousness” fade away?

    Sentienta was started to both help companies scale their businesses through virtual teams of agents, and to explore these ideas. Agents interact and share their expertise in teams. The dialog that occurs between agents generates new ideas that no single agent produced. Agents have their own memories that they develop from these interactions. These memories are a function of an agent’s persona. As a result, agents evolve and bring new experiences and ideas to a team discussion.

    And here is the point: we can think of a Sentienta team as an agent itself. It consists of the collective experience and interactions of multiple agents, which we might think of as a society of mind. Can we build agents that perform functions analogous to those found in the DMN? What light might this shine on our own subjective experience?