Tag: Semantic Memory

  • A Deep-dive into Agents: Memory

    There is a lot of buzz in the news about AI agents. I thought I’d take this opportunity to discuss what makes a Sentienta agent different from what you might have read.

    As this is a somewhat complex subject, I’ve decided to break it into several posts. This one is about the memory that drives agent behavior. Subsequent posts will discuss Tool Access, Task Delegation, Multi-agent Interaction and Autonomous Action.

    A Sentienta agent exists within an environment consisting of interactions with users, other agents (Sentienta is a multi-agent platform), its local host, and the internet.

    The core of the agent is an LLM. We use best-in-class LLMs as the engines that drive agentic functions. Given the rapid evolution in LLM capability, we constantly assess which LLMs are best suited to the behavior we expect from our agents. Because the engine is an LLM, the agent's fundamental communications, both internal and with the environment, are in natural language.

    The LLM is of little value without context, and that context is provided by memory. Sentienta agents have two kinds of memory, which we can loosely relate to the classes of memory associated with a brain region called the hippocampus. The first is semantic memory: derived from the agent's interactions with other agents and the user, this is simply a record of LLM communications, organized into the current dialog and past dialogs.
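
    As a rough sketch (the names here are illustrative, not Sentienta's actual API), you can picture semantic memory as little more than a structured transcript:

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Message:
        sender: str  # "user", another agent's name, etc.
        text: str    # the natural-language content

    @dataclass
    class SemanticMemory:
        current_dialog: list[Message] = field(default_factory=list)
        past_dialogs: list[list[Message]] = field(default_factory=list)

        def record(self, sender: str, text: str) -> None:
            # Every communication the agent sends or receives is kept verbatim.
            self.current_dialog.append(Message(sender, text))

        def close_dialog(self) -> None:
            # A finished conversation moves into the archive of past dialogs.
            self.past_dialogs.append(self.current_dialog)
            self.current_dialog = []
    ```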

    The second kind of memory is episodic: each agent uses its persona and existing memory to filter and reframe the dialog, creating new episodic memories. Note that this process is bootstrapped from the persona (which you write when you create the agent): a new agent builds its episodic memory using the persona as the starting point.
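
    Here is a minimal sketch of that filtering step, assuming a generic `llm(prompt)` callable that returns the model's text (Sentienta's internals will of course differ):

    ```python
    def distill_episode(llm, persona: str, existing: list[str],
                        dialog: list[str]) -> str:
        # The persona, and what the agent already remembers, frame what is
        # worth keeping; a brand-new agent has only its persona to go on.
        remembered = "; ".join(existing) if existing else "(nothing yet)"
        prompt = (
            f"Your persona: {persona}\n"
            f"What you already remember: {remembered}\n"
            "From the dialog below, record only what is worth remembering, "
            "reframed from your point of view:\n\n" + "\n".join(dialog)
        )
        return llm(prompt)  # the summary becomes a new episodic memory
    ```

    A new memory is then simply appended: `episodic.append(distill_episode(llm, persona, episodic, transcript))`.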

    So how does the LLM use all of this? The persona, together with more general agent instructions, defines the LLM system prompt. The memory (of both types), plus the communication from the environment, forms the query.
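
    Put together, one turn of an agent looks roughly like this (again a sketch under assumed names, with `llm` taking a system prompt and a query):

    ```python
    def agent_turn(llm, persona: str, instructions: str,
                   episodic: list[str], dialog: list[str], incoming: str) -> str:
        dialog.append(incoming)  # the message joins the semantic record

        # Persona plus the more general agent instructions define the system prompt.
        system_prompt = f"{persona}\n\n{instructions}"

        # Memory of both types, plus the communication from the environment,
        # forms the query.
        query = (
            "Episodic memories:\n" + "\n".join(episodic) + "\n\n"
            "Dialog so far:\n" + "\n".join(dialog)
        )
        reply = llm(system_prompt, query)
        dialog.append("self: " + reply)
        return reply
    ```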

    Pretty simple, right? But of course the devil is in the details.

    There are a few things to note about this architecture. The first is that the persona plays an important role: it guides LLM responses, because it is part of the system prompt, and it shapes the evolving agent memory, creating a distinct agent ‘personality’. This personality evolves as the agent participates in tasks and interacts with other agents.

    The second is that episodic memory is retained by the agent. If an agent belongs to more than one team, it brings its memory with it across teams and tasks. For example, if an agent manages a product design team and joins a marketing team, it brings the latest design decisions from the product team with it. And of course what it learns from marketing can drive product design.
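
    Concretely, this works because memory belongs to the agent rather than to any team, so moving between teams moves the memory too. An illustrative sketch:

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Agent:
        name: str
        persona: str
        episodic: list[str] = field(default_factory=list)  # travels with the agent

    @dataclass
    class Team:
        name: str
        members: list[Agent] = field(default_factory=list)

    ada = Agent("Ada", "A pragmatic product designer.")
    design = Team("product design", [ada])
    ada.episodic.append("We chose the slimmer enclosure for v2.")

    # Joining another team neither copies nor resets memory: same agent, same memory.
    marketing = Team("marketing", [ada])
    assert marketing.members[0].episodic == ada.episodic
    ```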

    It’s important to note that the agents on your teams belong to you. Knowledge gained by an agent is never shared outside your account.

    That is a high-level summary of how Sentienta agents create context for their tasks. Stay tuned to learn how agents use tools to interact with their environment.