Tag: virtual expert teams

  • A Deep-dive into Agents: Agent Delegation

    In my last post, we explored how agents interact within a team. Building on that foundation, let’s examine agent delegation—a structured process in which agents assign tasks to others based on expertise, priority, and context.

    Unlike agent autonomy, which I’ll cover in a future post, agent delegation focuses on deliberate, workflow-driven collaboration among agents. Rather than acting independently, agents make informed decisions about which tasks they should handle and which should be handed off to specialized counterparts.

    Structuring Delegation in Agent Teams

    Sentienta agents operate based on personas—natural language descriptions of their expertise. These personas guide how an agent engages in problem-solving within a team. Crucially, each agent has awareness of its teammates’ expertise, as this information is embedded in the system prompt of their respective language models.

    When responding to a query, each agent evaluates both its own capabilities and how best to leverage the expertise of others. This adaptive delegation is an essential feature of Sentienta’s design. Agents iteratively work through problems, sharing insights, refining their contributions, and identifying gaps in the discussion. When an agent determines that a particular aspect of a query requires specialized attention, it can delegate the task—often providing specific instructions on how to approach it.
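    As a rough illustration, the teammate awareness described above might look something like the following sketch. The names here (Agent, build_system_prompt) are hypothetical and not Sentienta’s actual internals:

```python
# A sketch of how teammate expertise might be embedded in an agent's
# system prompt. Agent and build_system_prompt are hypothetical names.
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    persona: str  # natural-language description of expertise

def build_system_prompt(agent: Agent, teammates: list["Agent"]) -> str:
    """Combine the agent's persona with a roster of teammates so the
    LLM can reason about when to delegate."""
    roster = "\n".join(f"- {t.name}: {t.persona}" for t in teammates)
    return (
        f"You are {agent.name}. {agent.persona}\n\n"
        f"Your teammates and their expertise:\n{roster}\n\n"
        "If part of a query falls outside your expertise, delegate it "
        "to the most relevant teammate with clear instructions."
    )

analyst = Agent("Financial Analyst", "You interpret market data and reports.")
risk = Agent("Risk Assessor", "You evaluate volatility and credit risk.")
prompt = build_system_prompt(analyst, [risk])
```

    Because the roster is part of the system prompt, the model can weigh its own persona against its teammates’ when deciding whether to answer or hand off.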

    This structured, dynamic handoff is what differentiates agent delegation from the broader concept of agent autonomy. While autonomy involves independent decision-making, delegation is about intelligent collaboration.

    A Practical Example: Agent-Driven Financial Analysis

    To illustrate, let’s consider a small Sentienta team analyzing financial markets. This team consists of:

    • 🔹 Financial Analyst Agent – Interprets market data, economic trends, and financial reports.
    • 🔹 Risk Assessment Agent – Evaluates market volatility, credit ratings, geopolitical risks, and sector stability.
    • 🔹 Web Research Agent – Gathers external data, such as stock performance, news reports, and regulatory changes.

    A delegated workflow might operate as follows:

    1. Financial Analyst Agent requests the Web Research Agent to gather financial reports and market performance data.
    2. Risk Assessment Agent instructs the Web Research Agent to track real-time market volatility and news on macroeconomic risks.
    3. Web Research Agent retrieves and summarizes relevant data, providing source links for deeper analysis.
    4. Financial Analyst Agent selects key companies for further investigation and delegates risk-factor analysis to the Risk Assessment Agent, requesting a review of leadership stability, credit ratings, and sector trends.
    5. If complex statistical trends emerge, an additional Data Analytics Agent might be introduced to identify patterns and forecast future performance.

    Crucially, these steps are not static. The delegation process evolves dynamically, responding to new information in real time.
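    Under illustrative assumptions, the routing decision in the workflow above can be sketched as a simple keyword matcher. In practice an LLM reasons about the handoff; the names and keywords below are purely hypothetical:

```python
# A hedged sketch of the delegation step described above. The routing
# here is a hard-coded keyword match for illustration only; in practice
# an LLM reasons about the handoff.
def delegate(task: str, team: dict) -> str:
    """Route a task to the agent whose keywords best match it."""
    keywords = {
        "web_research": ["gather", "track", "news", "reports"],
        "risk_assessment": ["volatility", "credit", "leadership", "sector"],
        "financial_analysis": ["select", "interpret", "trends"],
    }
    scores = {
        agent: sum(word in task.lower() for word in words)
        for agent, words in keywords.items()
    }
    return team[max(scores, key=scores.get)]

team = {
    "web_research": "Web Research Agent",
    "risk_assessment": "Risk Assessment Agent",
    "financial_analysis": "Financial Analyst Agent",
}
assignee = delegate(
    "Evaluate market volatility and credit exposure across the sector", team
)
```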

    The Benefits of Task Delegation

    By structuring delegation in this way, Sentienta teams achieve modular adaptability—scaling efficiently as new agents are introduced or refined without burdening a single model. This approach ensures that specialized tasks are handled by the most relevant agents, improving both accuracy and depth of analysis.

    But what happens when agents move beyond structured delegation toward autonomous strategic decision-making? In a future post, I’ll explore how Agent Autonomy is set to redefine enterprise AI, reducing human intervention while maintaining control and reliability.

  • A Deep-dive into Agents: Tool Access

    An important feature of agents is their ability to utilize tools. Of course, there are many examples of software components that use tools as part of their function, but what distinguishes agents is their ability to reason about when to use a tool, which tool to use, and how to utilize the results.

    In this context, a ‘tool’ refers to a software component designed to execute specific functions upon an agent’s request. This broad definition includes utilities such as file content readers, web search engines, and text-to-image generators, each offering capabilities that agents can utilize in responding to queries from users or other agents.
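    As a sketch of that broad definition, a tool can be modeled as a named function plus a description telling the LLM when it is useful. The Tool class below is an illustrative assumption, not Sentienta’s actual interface:

```python
# A minimal sketch of a tool interface an agent might call. The Tool
# class and its fields are illustrative, not Sentienta's actual API.
from typing import Callable

class Tool:
    def __init__(self, name: str, description: str, func: Callable[[str], str]):
        self.name = name                # how the agent refers to the tool
        self.description = description  # tells the LLM when the tool is useful
        self.func = func

    def run(self, query: str) -> str:
        """Execute the tool on the agent's behalf."""
        return self.func(query)

# Example: a trivial tool the agent could invoke during a dialog
def word_count(text: str) -> str:
    return str(len(text.split()))

counter = Tool("word_count", "Counts the words in a piece of text.", word_count)
```

    The description matters as much as the function: it is what lets the agent reason about when the tool applies.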

    Sentienta agents can access tools through several mechanisms. The first is when an agent has been pre-configured with a specific set of tools. Several agents in the Agent Marketplace utilize special tools in their roles. For example, the Document Specialist agent (‘Ed’), which you can find in the Document and Content Access section, utilizes Amazon’s S3 to store and read files, tailoring its knowledge to the content you provide.

    Angie, another agent in the Document and Content Access category, enhances team discussions by using a search engine to fetch the latest web results. This is valuable for incorporating the most current data into a team dialog, addressing the typical limitation of LLMs, which lack up-to-the-minute information in their training sets.

    You have the flexibility to go beyond pre-built tools. Another option allows you to create custom tools or integrate third-party ones. If the tool you want to use exposes a REST API that processes structured queries, you can create an agent to call the API (see the FAQ page for more information). Agent ‘Ed’, mentioned earlier, employs such an API for managing files.
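    Assuming a hypothetical endpoint and payload shape, wrapping such a REST API might look like the following sketch (the URL and the ‘answer’ field are placeholders for your own service):

```python
# A hedged sketch of wrapping a REST API as an agent tool. The endpoint
# URL and the JSON payload/response shapes are hypothetical placeholders.
import json
import urllib.request

API_URL = "https://example.com/api/query"  # substitute your service's endpoint

def build_request(query: str) -> urllib.request.Request:
    """Package a structured query as a JSON POST request."""
    payload = json.dumps({"query": query}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )

def parse_response(body: bytes) -> str:
    """Extract the answer field this hypothetical API returns."""
    return json.loads(body)["answer"]

req = build_request("list files")
# urllib.request.urlopen(req) would send it; omitted here because the
# URL above is a placeholder.
```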

    Finally, Sentienta supports completely custom agents that embody their own tool use. You might utilize a popular agent framework such as LangChain to orchestrate more complex functions and workflows. Exposing an API in the form we just discussed will let you integrate this more complex tool-use into your team. Check out the Developers page to see how you can build a basic agent in AWS Lambda. This agent doesn’t do much, but it shows how you might add specialized functions to augment your team’s capabilities.
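    A minimal Lambda-style handler in that spirit might look like this sketch. The event shape is an assumption; a real agent would call an LLM or a specialized function instead of echoing:

```python
# A minimal sketch of a basic agent as an AWS Lambda handler. The event
# shape here is an assumption, not the Developers-page example itself.
import json

def lambda_handler(event, context):
    """Receive a query from the team and return a simple JSON response."""
    body = json.loads(event.get("body", "{}"))
    query = body.get("query", "")
    # A real agent would call an LLM or a specialized function here;
    # this stub just acknowledges the query.
    return {
        "statusCode": 200,
        "body": json.dumps({"answer": f"Received query: {query}"}),
    }
```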

    In each case, the power of agent tool-use comes from the agent deciding how to use the tool and how to integrate the tool’s results into the team’s dialog. Agents may be instructed by their team to use these tools, or they may decide on their own when, or whether, to use a tool.

    This too is a large subject, and much has been written by others on this topic (see for example here and here). We’ve touched on three mechanisms you can use in Sentienta to augment the power of your agents and teams.

    In a future post we’ll discuss how agents interact in teams and how you can control their interactions through tailored personas.

  • A Deep-dive into Agents: Memory

    There is a lot of buzz in the news about AI agents. I thought I’d take this opportunity to discuss what makes a Sentienta agent different from what you might have read.

    As this is a somewhat complex subject, I’ve decided to break it into several posts. This one is about the memory that drives agent behavior. Subsequent posts will discuss Tool Access, Task Delegation, Multi-agent Interaction and Autonomous Action.

    A Sentienta agent exists within an environment, consisting of interactions with users, other agents (Sentienta is a multi-agent platform), its local host and the internet.

    The core of the agent is an LLM. We use best-in-class LLMs as engines that drive agentic functions. We constantly assess which LLMs are best suited to the behavior we expect from our agents; given the rapid evolution in LLM capability, this is essential. Because the engine is an LLM, the agent’s fundamental communications, both internal and with the environment, are in natural language.

    The LLM is of little value without context, and this is provided by memory. Sentienta agents have two kinds of memory, which we can loosely relate to the classes of memory known to be used by a brain region called the hippocampus. The first is semantic memory: derived from the agent’s interactions with other agents and the user, this is simply a record of LLM communications organized into the current dialog and past dialogs.

    The second kind of memory is episodic: each agent uses its persona and existing memory to filter and reframe the dialog to create new episodic memories. Note that this is bootstrapped from the persona (which you write when you create the agent) – a new agent builds this memory using the persona as the starting point.

    So how is all of this used by the LLM? The persona, and more general agent instructions, define the LLM system prompt. The memory (of both types), plus the communication from the environment form the query.
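    A rough sketch of that assembly, with all names here illustrative rather than Sentienta’s internals:

```python
# A rough sketch of how persona, memory, and environment input might be
# combined into an LLM call. All names here are illustrative.
def assemble_llm_call(persona: str, instructions: str,
                      semantic_memory: list[str],
                      episodic_memory: list[str],
                      environment_input: str) -> dict:
    """Persona and general instructions form the system prompt; both
    kinds of memory plus the incoming communication form the query."""
    system_prompt = f"{persona}\n\n{instructions}"
    query = "\n".join(
        ["Past dialog:"] + semantic_memory
        + ["Episodic notes:"] + episodic_memory
        + ["New message:", environment_input]
    )
    return {"system": system_prompt, "user": query}

call = assemble_llm_call(
    "You are a product design expert.",
    "Answer concisely.",
    ["User: What casing material did we pick?"],
    ["The team chose aluminum for the casing."],
    "Should we revisit the casing material?",
)
```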

    Pretty simple right? But of course the devil is in the details.

    There are a few things to note about this architecture. The first is that the persona plays an important role: it both guides LLM responses because it is a part of the system prompt, and it helps model the evolving agent memory creating a distinct agent ‘personality’. This evolves as the agent participates in tasks and interacts with other agents.

    The second is that the episodic memory is retained by the agent. If an agent belongs to more than one team, the agent brings its memory with it across teams and tasks. For example if an agent manages a product design team and joins a marketing team, the agent brings with it the latest design decisions from the product team. And of course what it learns from marketing can drive product design.

    It’s important to note that the agents on your teams belong to you. Knowledge gained by an agent is never shared outside your account.

    That is a high-level summary of how Sentienta agents create context for their tasks. Stay tuned to learn how agents use tools to interact with their environment.

  • Teams, Tasks, Tales

    I was given early access to Sentienta to test out features and work through bugs. I found that Sentienta had both fun and helpful applications. Now that Sentienta is released I wanted to give new users ideas for things that they can try based on what worked really well for me.

    Human Resources Team

    I work a second job and sometimes need to create material for it. At one point, I discovered an area at my workplace that lacked a specific policy. I created a Sentienta HR team of experts, including an HR staff member, a policy writer, workplace stakeholders, a consumer advocate, a proxy for an attorney, and a risk manager. This group served as a sounding board and consultation resource to draft the new policy.

    Tip:

    To do this, I first created the team in the Manage Teams tab, providing a name for the team, a title, and a brief description. Then I developed some agents for it, with the most important part of each agent being its persona. The persona focuses that agent’s contributions in a dialog on a specific area of expertise. Here is an example of what the persona might be for a consumer advocate:

    “You are an advocate and voice for consumers, helping to resolve complaints and ensuring fair practices in business transactions. Additionally, you may engage in public policy efforts to promote transparency and accountability, aiming to improve consumer protection laws and regulations.”

    Note that the persona is drafted to read like instructions to the agent to help focus its contributions. Not sure how to write the description? You can always Google it and edit it down, and the persona can be adjusted over time.

    It was interesting seeing the agents respond to each other and give feedback on each other’s inputs. Each member contributed to the dialog, offering new ideas and suggestions. The dialog concluded once each agent had an opportunity to participate and the conversation had moved toward a solution.

    Since forming this team, I’ve used it to discuss general policies and related topics whenever I have concerns or ideas. Because this is my second job, I am not an expert in many of these areas, and my own role covers only a small part of the workplace. Nonetheless, this team has been instrumental in keeping me informed.

    I want to be clear that I did not provide the team with any specific information or data that could be considered sensitive or proprietary. But even without that specific data, I’ve found the team to be a powerful resource for thinking about the broader issues in my workplace.

    Lit Review Team

    For fun, I enjoy writing, and Sentienta offers a neat capability to form a dream team of literary reviewers. I’ve created a team with iconic writers such as William Shakespeare, Ernest Hemingway, and Edgar Allan Poe. To anchor the team’s feedback in a contemporary style, I included my favorite author, Jim Butcher.

    When I provided pieces of a project I was writing and asked the team to review them, each agent gave feedback based on its assigned persona. The feedback was remarkable, offering unexpected insights that enhanced not only my understanding of how readers perceive my work but also deepened my connection to the passion that fuels my writing.

    Tip:

    Here is how you can add content to your team (in my case, writing samples): click the paperclip icon located in the toolbar on Sentienta’s main page. This will let you select a file from your desktop (most file formats are supported). Once you click OK, the file will load and you can enter a question or direction telling the team how to use the file’s contents. All of the agents will have access to the file and can use it in the dialog.

    At one point I also tried having the various agents rewrite parts of my work in their own styles. This gave me interesting viewpoints and helped with the editing process.

    The team, dubbed Lit Review, also functioned as a great way to learn about the authors and better understand each author’s writing. It was fun to watch these famous writers edit each other’s responses!

    I feel obliged to add in here that I never used the team to write my project for me. I used it to give guidance and ideas for improving my writing’s quality. Professionals could easily use this to edit their own work, whether in business or creative writing.

    Sentienta has become an invaluable tool for tackling the challenges I face both professionally and creatively. The Lit Review team will remain one of my go-tos, but I also intend to form new teams to explore how they can support me.

  • Why Sentienta?

    A little over a year ago I gave a talk in which I discussed machine reasoning and gave some examples from the literature. My audience was skeptical, to say the least. But here we are in 2025, with multiple companies claiming powerful reasoning capabilities for their models and new benchmarks set weekly. These are exciting times.

    I’ve always believed that understanding the science behind machine “intelligence” would help us understand more deeply who and what we are. In a way, the reasoning capabilities of today’s models do that. We question whether they are simply capturing the ‘surface’ statistics of training data. At the same time, they are unquestionably powerful. I think sometimes this tells us that our cherished human intelligence may rely on similar ‘weak’ methods more often than we’d like to admit. That is to say, as we come to understand machine intelligence the mystery of our own may lessen.

    And now we come to machine consciousness. This is the topic, that if mentioned in serious venues, produces much more skepticism and even snickering. After all, we can’t really define what human consciousness is. We know it has to do with a sense of our own existence, sensations of our environment, and thoughts or ‘inner speech’. Given that this is all subjective, will we ever be able to understand what it is? I suspect that, just as with machine reasoning, the mystery of consciousness will begin to lift as we understand what it looks like in a machine.

    One of the more compelling models for consciousness is the ‘Society of Mind’ (Minsky, 1988). This model has only been strengthened in the years since it was published. We now know self-referential thought and introspection involve multiple brain centers, collectively called the Default Mode Network (DMN). Brain regions including the medial prefrontal cortex, the cingulate gyrus, and the hippocampus work together to integrate both past and current experiences. As we begin to model these kinds of interactions, will Chalmers’s “hard problem of consciousness” fade away?

    Sentienta was started to both help companies scale their businesses through virtual teams of agents, and to explore these ideas. Agents interact and share their expertise in teams. The dialog that occurs between agents generates new ideas that no single agent produced. Agents have their own memories that they develop from these interactions. These memories are a function of an agent’s persona. As a result, agents evolve and bring new experiences and ideas to a team discussion.

    And here is the point: we can think of a Sentienta team as an agent itself. It consists of the collective experience and interactions of multiple agents, which we might think of as a society of mind. Can we build agents that perform functions analogous to those found in the DMN? What light might this shine on our own subjective experience?

  • Sentienta Teams

    Sentienta is different from your current experience with AI chatbots. The essence of our product is that something special comes from the interaction of experts.

    When ideas flow from one expert to another, they are evaluated, improved, verified, and sometimes debunked. The dialog between experts is a window into the evolution of ideas and solutions to problems. This transparency is the foundation of trust.

    This is why you find teams in companies: having just one person solve a problem is fine, but getting the perspective of people with different skills makes your solution much more robust.

    Our latest presentation explains how this idea can be used in your company by building virtual teams of GenAI agents.

    https://www.sentienta.ai/blog/Sentienta.pdf