Tag: Agent Memory

  • Integrating Your Own Data into Team Dialogs

    Sentienta agents are based on best-in-class LLMs, which means they have been trained on vast stores of online content. However, this training does not include current data, nor does it include your proprietary content. In a future post, we’ll discuss how your agent teams can access and utilize current online data, but today I want to talk about loading your own content into your team dialogs.

    An Easy Way: Copy-and-Paste

    Sentienta provides several mechanisms for entering your content into team discussions. Perhaps the easiest method is to simply copy text that you want your team to know about onto the clipboard and paste it into the query box.

    You can add a question about the content to the end of what you’ve pasted so that the team has some context for what you added. This method works well for short passages of perhaps a few paragraphs, but it is impractical for larger documents.

    Loading Files for the Team

    For larger documents, a better method is to load the file into the dialog. This is done by clicking the paperclip button (located in the toolbar below the query box), and browsing for the file you’d like to load. You can also simply drag-and-drop a file onto the query box.

    The query box will tell you that the file content has been loaded, and you can append questions and comments to the content to aid the agents in determining how to use the content for discussion.

    The advantage of this approach is that it ensures that all the agents on the team see the same content and have the same context for discussing and using it in subsequent dialogs.

    A disadvantage of both this method and the first is that the content doesn’t persist indefinitely. Team dialogs become part of each agent’s semantic memory (as discussed here), but this memory is limited in both size and time.

    Persisting Your Content

    There are many cases where you want your agents to retain document knowledge indefinitely. Consider an HR agent that maintains company policies and procedures, which change only rarely. Manually reloading these documents into each dialog is impractical, so Sentienta offers an agent that can store and retrieve files from its own dedicated folder.

    To see this in action, add the ‘Ed’ agent from the Agent Marketplace under the Document and Content Access section. Simply select the Ed agent and assign it to a team. This agent provides tools for adding individual files or entire folders. You can manage stored files by listing them and removing any that are no longer needed.

    The Ed agent retains these files and can answer questions about them anytime. This lets you load the files once and then bring that stored knowledge to any team simply by adding the agent. However, unlike the file-loading method discussed above, the other agents on the team won’t automatically share Ed’s knowledge. Nevertheless, Ed can communicate its information to other agents through the dialog.

    Final Thoughts

    With the methods we’ve discussed here, you can integrate company-specific documents into team dialogs, ensuring that relevant information is always accessible when solving problems. This approach enhances collaboration and keeps your teams aligned with the most current data.

  • A Deep-dive into Agents: Memory

    There is a lot of buzz in the news about AI agents. I thought I’d take this opportunity to discuss what makes a Sentienta agent different from what you might have read.

    As this is a somewhat complex subject, I’ve decided to break it into several posts. This one is about the memory that drives agent behavior. Subsequent posts will discuss Tool Access, Task Delegation, Multi-agent Interaction and Autonomous Action.

    A Sentienta agent exists within an environment, consisting of interactions with users, other agents (Sentienta is a multi-agent platform), its local host and the internet.

    The core of the agent is an LLM. We use best-in-class LLMs as engines that drive agentic functions. Given the rapid evolution in LLM capability, we constantly assess which LLMs are best suited for the behavior we expect from our agents. Because the engine is an LLM, the agent’s fundamental communications, both internal and with the environment, are in natural language.

    The LLM is of little value without context, and this is provided by memory. Sentienta agents have two kinds of memory, which we can loosely relate to the classes of memory known to be used by a brain region called the hippocampus. The first is semantic memory: derived from the agent’s interactions with other agents and the user, this is simply a record of LLM communications organized into the current dialog and past dialogs.

    The second kind of memory is episodic: each agent uses its persona and existing memory to filter and reframe the dialog into new episodic memories. Note that this process is bootstrapped from the persona (which you write when you create the agent): a new agent builds this memory using the persona as its starting point.

    So how is all of this used by the LLM? The persona, and more general agent instructions, define the LLM system prompt. The memory (of both types), plus the communication from the environment form the query.
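    The composition described above can be sketched in a few lines of Python. This is a minimal illustration, not Sentienta’s actual API: every name here (`Agent`, `system_prompt`, `build_query`, and so on) is hypothetical, and it simply shows how the persona, general instructions, and the two memory types might combine into the system prompt and query sent to the LLM engine.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Agent:
        # Illustrative structure only; not Sentienta's real implementation.
        persona: str                 # written by the user at creation time
        instructions: str            # more general agent instructions
        semantic_memory: list[str] = field(default_factory=list)   # raw dialog records
        episodic_memory: list[str] = field(default_factory=list)   # persona-filtered reframings

        def system_prompt(self) -> str:
            # The persona and general instructions define the system prompt.
            return f"{self.persona}\n\n{self.instructions}"

        def build_query(self, incoming: str) -> str:
            # Both memory types, plus the communication arriving from the
            # environment, form the query for the LLM engine.
            context = "\n".join(self.episodic_memory + self.semantic_memory)
            return f"{context}\n\nNew message: {incoming}"

    agent = Agent(persona="You are a careful HR specialist.",
                  instructions="Answer concisely.")
    agent.semantic_memory.append("User asked about vacation policy.")
    query = agent.build_query("What is the parental-leave policy?")
    ```

    In a real system the memory lists would be summarized or truncated to fit the LLM’s context window, but the division of labor stays the same: persona and instructions shape behavior, memory and the incoming message supply context.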

    Pretty simple, right? But of course the devil is in the details.

    There are a few things to note about this architecture. The first is that the persona plays an important role: it guides LLM responses as part of the system prompt, and it shapes the evolving agent memory, creating a distinct agent ‘personality’. This personality evolves as the agent participates in tasks and interacts with other agents.

    The second is that episodic memory is retained by the agent itself. If an agent belongs to more than one team, it brings its memory with it across teams and tasks. For example, if an agent manages a product design team and then joins a marketing team, it brings along the latest design decisions from the product team. And of course, what it learns from marketing can in turn inform product design.

    It’s important to note that the agents on your teams belong to you. Knowledge gained by an agent is never shared outside your account.

    That is a high-level summary of how Sentienta agents create context for their tasks. Stay tuned to learn how agents use tools to interact with their environment.