Tag: virtual expert teams

  • Beyond Chat: How OpenClaw and Sentienta Operationalize Multi‑Agent Work

    OpenClaw is having a moment—and it’s easy to see why. In the developer community, “desktop agents” have become the newest proving ground for what AI can do when it’s allowed to take real actions: browsing, editing, running commands, coordinating tasks, and chaining workflows together. OpenClaw taps directly into that excitement: it’s open, fast-moving, and built for people who want to experiment, extend, and orchestrate agents with minimal constraints.

    At the same time, a different kind of question is showing up from business teams and Sentienta users: How does this compare to what we’re already doing in Sentienta? Not as a “which is better” culture-war, but as a practical evaluation: what’s the right platform for the kind of work we need to ship reliably?

    The most interesting part is that both worlds are converging on the same core insight: a single, standalone LLM is rarely the best operating model for real work. The trend is clearly moving toward teams of interacting agents, specialists that can collaborate, review each other’s work, and stay aligned in a shared context. In other words, the wider market is starting to validate a pattern Sentienta has been demonstrating for business outcomes for over a year: multi-agent dialog as the unit of work.

    In this post we’ll look at what OpenClaw is (and who it’s best for), then quickly re-ground what Sentienta is designed to do for business users. Finally, we’ll cover the operational tradeoffs, especially the security and governance realities that come with high-permission desktop agents and open extension ecosystems, so you can pick the approach that matches your needs for simplicity, security, and power.

    What OpenClaw is (and who it’s for)

    OpenClaw is best understood as an open ecosystem for building and running desktop agents – agents that live close to where work actually happens: your browser, your files, your terminal, and the everyday apps people use to get things done. Instead of being a single “one size fits all” assistant, OpenClaw is designed to be extended. Much of its momentum comes from a growing universe of third‑party skills/plugins that let agents take on new capabilities quickly, plus an emerging set of orchestration tools that make it easier to run multiple agents, track tasks, and coordinate workflows.

    That design naturally attracts a specific audience. Today, the strongest pull is among developers and tinkerers who want full control over behavior, tooling, and integrations, and who are comfortable treating agent operations as an engineering surface area. It also resonates with security-savvy teams who want to experiment with high-powered agent workflows, but are willing to own the operational requirements that come with it: environment isolation, permission discipline, plugin vetting, and ongoing maintenance as the ecosystem evolves.

    And that’s also why it’s exciting. OpenClaw is moving fast, and open ecosystems tend to compound: new skills appear, patterns get shared, and capabilities jump forward in days instead of quarters. Combine that pace with local-machine reach (the ability to work directly with desktop context) and you get a platform that feels unusually powerful for prototyping—especially for people who care more about flexibility and speed than a fully managed, “default-safe” operating model.

    It is worth noting that OpenClaw’s rapid rise is being matched by very real security scrutiny: Bloomberg notes its security is a work in progress, Business Insider has described hackers accessing private data in under three minutes, and noted researcher Gary Marcus has called it a “disaster waiting to happen.”

    A big part of the risk profile is architectural: desktop agents can be granted broad access to a user’s environment, so when something goes wrong (a vulnerable component, a malicious plugin/skill, or a successful hijack), the potential blast radius can be much larger than a typical “chat-only” assistant. Not all implementations have this risk, but misconfiguration can lead to an instance being exposed to the internet without proper authentication—effectively giving an attacker a path to the same high‑privilege access the agent has (files, sessions, and tools), turning a useful assistant into a fast route to data leakage or account compromise.

    How Does OpenClaw Compare to Sentienta?

    Sentienta is a cloud-based multi-agent platform built for business workflows, where the “unit of work” isn’t a single assistant in a single thread, but a team of agents collaborating in a shared dialog. In practice, that means you can assign clear roles (research, analysis, writing, checking, ops), keep everyone grounded in the same context, and run repeatable workflows without turning day-to-day operations into an engineering project.

    It’s worth emphasizing that OpenClaw and Sentienta are aligned on a key idea: multi-agent collaboration is where real leverage shows up. Both approaches lean into specialization: having distinct agents act as a researcher, analyst, reviewer, or operator, because it’s a practical way to improve quality, catch mistakes earlier, and produce outputs that hold up better under real business constraints.

    Where they differ is less about “who has the better idea” and more about how that idea is operationalized:

    Where agents run: OpenClaw commonly runs agents on the desktop, close to local apps and local context. Sentienta agents run in the cloud, which changes the default boundary: when local data is involved, it’s typically handled through explicit user upload (rather than agents broadly operating across a machine by default).

    Time-to-value: OpenClaw is naturally attractive to builders who want maximum flexibility and are comfortable iterating on tooling. Sentienta is designed to get business teams to a working baseline quickly: Quick Start is meant to spin up a functional team of agents in seconds, with minimal developer setup for typical use.

    Collaboration model: Sentienta’s multi-agent orchestration is native to the platform: agents collaborate as a team in the same dialog with roles and review loops designed in from the start. OpenClaw can orchestrate multiple agents as well, but its ecosystem often relies on add-ons and surrounding layers for how agents “meet,” coordinate, and share context at scale.

    Net: OpenClaw highlights what’s possible when desktop agents and open ecosystems move fast; Sentienta focuses on making multi-agent work repeatable, approachable, and business-ready, without losing the benefits that made multi-agent collaboration compelling in the first place.

    Conclusion

    The bigger takeaway here is that we’re leaving the era of “one prompt, one model, one answer” and entering a world where teams of agents do the work: specialists that can research, execute, review, and refine together. OpenClaw is an exciting proof point for that future—especially for developers who want maximum flexibility and don’t mind owning the operational details that come with desktop-level capability.

    For business teams, the decision is less about ideology and more about fit. If you need rapid experimentation, deep local-machine reach, and you have the security maturity to sandbox, vet plugins, and continuously monitor an open ecosystem, OpenClaw can be a powerful choice. If you need multi-agent collaboration that’s designed to be repeatable, approachable, and governed by default—with agents running in the cloud and local data crossing the boundary only when a user explicitly provides it—Sentienta is built for that operating model.

    Either way, the direction is clear: AI is moving from standalone assistants to operational systems of collaborating agents. The right platform is the one that matches your needs for simplicity, security, and power—not just in a demo, but in the way your team will run it every day.

  • From Values to Interfaces: How to Build with Sentienta

    Create Your Own Sentienta-Based Application

    Sentienta is a platform for building custom GenAI applications with structured, multi-agent reasoning. In this post, we walk through how to use Sentienta’s APIs to define agent roles, initiate a collaborative thinking process, and extract structured, explainable decisions.

    You’ll see how to construct a team of reasoning agents, set up their dialog cycles, and manage the reflective output—complete with code examples. For deeper context on how Recursive Reasoning supports coherence and value alignment, check out these posts (here and here). Today, the focus is practical: how to build your own deliberative AI, step by step.

    Why Build Your Own Application?

    Prebuilt AI tools often assume a single general-purpose use case. But high-friction domains like compliance oversight, clinical decision support, or ethics triage require applications tailored to specific policies, workflows, and trust requirements. Building your own application gives you control over behavior, oversight logic, and how responses are generated, reviewed, and stored.

    With Sentienta, you can specify how agents think, not just what they say. That makes it possible to design solutions that reflect your institutional values, follow internal processes, and produce outputs that stand up to audit or review. Instead of adapting your use case to fit a generic product, you shape the application to reflect how your organization reasons.

    Basic Steps to Create a Sentienta-Enabled Webapp

    Building your own Sentienta-powered web application begins with designing the agent team, then wiring a lightweight frontend to communicate with it. Here’s how to get started:

    1. Design and Test Your Agents

    Start in Sentienta’s platform by configuring the agents your application needs. You can define a team for your agents, define the individual agent personas, and test team interactions (see Tips and Tricks and Team Dynamics for background). Or just use Quick Start to create the team for you (recommended).

    2. Build the Web Interface

    The frontend can be written in plain HTML and JavaScript – no framework required. This keeps the integration minimal and easy to deploy.

    3. Authenticate Using Your API Key

    You can implement a complete login capability in your application (we will discuss this in more detail in a future post), but perhaps the easiest way to authenticate your webapp is to create an application-specific key from your Sentienta account.

    Visit the Agent Studio section of the documentation to create a project and generate an authentication key specific to your app. This key authorizes your web client to access your configured agent team.

    Then in your webapp’s main page insert the following code:
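    The original snippet is not reproduced here, so the following is a minimal sketch. It assumes a global `Sentienta` object with an `init()` entry point and a configuration object carrying the key, domain, team, and agent list; substitute the exact call from the Sentienta documentation.

```javascript
// Minimal page-level initialization (sketch). The Sentienta.init()
// signature and config field names are assumptions; check the docs
// for the exact shape.
const sentientaConfig = {
  ownerKey: "YOUR_APP_KEY",              // the application-specific key you generated
  domain: "https://yourapp.example.com", // the domain you registered when creating the key
  teamName: "MyAgentTeam",               // the team you configured in step 1
  agents: "Researcher,Analyst,Reviewer", // comma-separated agent names on that team
};

// Sentienta.init(sentientaConfig); // call once when the page loads
```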

    Be sure to fill in the key you created and the domain of your webapp (you needed that to create the key). Then fill in your team name and the list of agents you created for this app.

    Your webapp will need to know the access endpoints that Sentienta exposes, so include these in the JavaScript file that will be accessing the APIs:
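    The URLs below are placeholders (the real values come from the Sentienta documentation); the point is that the token exchange, query submission, and dialog retrieval each have their own endpoint constant:

```javascript
// Placeholder endpoints -- substitute the real URLs from the Sentienta docs.
const SENTIENTA_BASE   = "https://api.sentienta.example.com";
const tokenEndpoint    = SENTIENTA_BASE + "/token";  // exchange ownerKey for a token
const queryEndpoint    = SENTIENTA_BASE + "/query";  // submit queries to a team
const retrieveEndpoint = SENTIENTA_BASE + "/dialog"; // fetch the team dialog
```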

    Finally, to complete the initialization, get a token using your ownerKey with your first call to Sentienta:
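    A minimal sketch of that first call, using the `ownerKey` and `idToken` names described in this post; the endpoint and request-body shape are assumptions:

```javascript
// Exchange your ownerKey for a session token (sketch -- the request and
// response field names are assumptions; check the Sentienta docs).
async function getToken(tokenEndpoint, ownerKey) {
  const res = await fetch(tokenEndpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ ownerKey }),
  });
  if (!res.ok) throw new Error("Token request failed: " + res.status);
  const data = await res.json();
  return data.idToken; // reuse this token in all subsequent calls
}
```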

    You will use this token in all subsequent calls to Sentienta. It authenticates your application for those API calls.

    4. Send Queries to the Agent Team

    Set up your frontend to send user inputs as POST requests to the API endpoint associated with your team. Each message is passed to your agent team for deliberation and analysis. Here is how you submit a query to your team:
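    A sketch of the submission call, using the parameter names this post describes (idToken, queryID, teamName, agents); the exact payload shape is an assumption:

```javascript
// Fire off a query to the agent team (sketch). The payload fields follow
// this post's parameter names; the real request shape may differ.
function submitQuery(queryEndpoint, idToken, queryID, teamName, agents, userText) {
  return fetch(queryEndpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ idToken, queryID, teamName, agents, query: userText }),
  });
}
```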

    You supply the idToken for authentication and a queryID, which is simply an index assigned to the dialog so that when you query for the results, Sentienta knows which dialog you want. The teamName is the team you want to query, and agents is a comma-separated string of the agent names you assigned to the team.

    Although your Sentienta.init() call was initialized with a specific team and list of agents, you can override these at query time to limit the agents being called, or to address a different team.

    You can see from this code that the query is simply fired off; the actual results of the query are retrieved with a call to getAgentsDialog, discussed next.

    5. Receive Responses from the Team

    The server returns structured outputs, capturing not only the final response but also insights from the team’s reasoning process. You can use this to render explanations, examine individual agent responses to a query, and orchestrate your application using the agent responses.

    Sentienta is different from most chat applications because its agents interact with each other and the team dialog is central to what it produces. In order to access this dialog, the data is retrieved iteratively, in a loop:
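    A sketch of that loop, using the function and field names this post describes (fetchData, dialogIndex, handleSentientaResponse, the ‘EOD’ token); the maximum count, request shape, and response shape are placeholders:

```javascript
// Retrieve the agent dialog iteratively (sketch). The request/response
// shapes are assumptions based on this post's description.
const MAX_RESPONSES = 50; // fixed upper bound on the retrieval loop

async function fetchData(retrieveEndpoint, idToken, queryID, dialogIndex) {
  const res = await fetch(retrieveEndpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ idToken, queryID, dialogIndex }),
  });
  return res.json(); // expected shape: { body: "...", dialogIndex: n }
}

async function getAgentsDialog(retrieveEndpoint, idToken, queryID, handleSentientaResponse) {
  let dialogIndex = 0;
  let count = 0;
  while (count < MAX_RESPONSES) {
    const response = await fetchData(retrieveEndpoint, idToken, queryID, dialogIndex);
    if (response.body.includes("EOD")) break; // end-of-dialog token: stop early
    handleSentientaResponse(response.body);   // your app-specific handler
    dialogIndex = response.dialogIndex;       // incremented server-side, echoed back
    count++;
  }
}
```

    Passing the handler in as a parameter keeps the sketch self-contained; you could equally reference a global handleSentientaResponse defined elsewhere in your app.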

    The function fetchData is where the retrieveEndpoint is actually called. You supply your authentication token, the queryID that identifies the dialog, and a dialogIndex that increments over the agent responses. The dialogIndex is incremented on the server side but is passed back to the client for convenience.

    Response data is in response.body and consists of structured strings. You can write your own handler, handleSentientaResponse, that uses the data in that string to decide how to display or use the responses in your application.

    The while-loop in getAgentsDialog will continue for a fixed count, but can be terminated early if an ‘EOD’ token appears in the response.

    That’s all that is needed to get started building a webapp powered by teams of agents.

    Conclusion: Build What Your Workload Demands

    Sentienta gives you the tools to design custom agents, assemble them into reasoning teams, and deploy applications that use them effectively. Integration is simple: request an authentication token, send queries, receive structured reasoning. By building your own application, you align AI behavior with your business logic—at the scale and sensitivity your domain requires.

  • Planning with Sentienta’s Recursive Reasoning

    Introduction

    Many real-world problems don’t give you thousands of examples—they give you a few clues, some noise, and a big question mark. Whether you’re solving a puzzle, diagnosing a rare failure, or planning a response to something unfamiliar, the challenge is the same: can you spot the pattern, form a good hypothesis, and test it?

    That’s what the ARC-AGI benchmark is all about. These visual puzzles are easy for people, who solve about 80% of the samples, but hard for most AI. They reward reasoning, not memorization or repetition. In this post, we’ll show how Sentienta uses Recursive Reasoning (RR) to tackle one of these puzzles. We’ll look inside its planning system (called the FPCN) to see how it explores possible answers, tests competing ideas, and lands on a solution you can actually audit.

    From Possibilities to Plans: How DMN and FPCN Work Together

    In a previous post, we looked at how Recursive Reasoning (RR) helps Sentienta agents reflect on their past experiences, propose meaningful narratives, and revise their own goals. That process was driven by Sentienta’s internal “default mode” system (the Default Mode Network or DMN), inspired by how the human brain imagines, simulates, and reshapes its sense of self.

    Today we look at what happens when the problem isn’t internal reflection but external uncertainty – when the system needs to generate and test a plan. This is where another key part of RR comes in: a system of LLMs inspired by the Frontoparietal Control Network, or FPCN.

    In human reasoning, the two work together. One part imagines and hypothesizes (DMN), while the other tests, refines, and selects (FPCN). The back-and-forth between these networks is like a team where one member sketches bold ideas and the other puts them to the test. Sentienta follows this same model: the DMN proposes a possible pattern or goal, then passes it to the FPCN for planning and validation. The FPCN doesn’t just generate one plan, it tries multiple interpretations, checks them against context and goals, and returns only those that truly make sense.

    In the next section, we’ll see how this looks in action as Sentienta works through a deceptively simple visual puzzle, one careful step at a time.

    Can You Solve This Puzzle Before Sentienta Does?

    Let’s look at one of the visual puzzles from the ARC-AGI benchmark. Below are two example pairs. Each shows a small input grid and the correct output grid.

    Figure 1: Training Examples: On the left is the input grid. The puzzle requires you to determine what is needed to convert that input into the output grid on the right. These two examples are sufficient to determine the answer.

    What is the rule that transforms the input into the output?

    Now here’s a new input Sentienta has never seen before.

    Figure 2: Test Input

    Before we show the system’s reasoning—can you guess the output? What transformation rule did the first two examples follow that you could apply to this one?

    This is a pretty easy example. The full ARC-AGI-1 test consists of hundreds of examples with far more complex transformations between the input and output.

    In the next section, we’ll show how Sentienta’s internal planner (FPCN) tested out several transformation strategies, rejected the ones that didn’t fit, and settled on a plan that matched the underlying rule. We’ll also look at the planner’s actual output (in JSON) from the process.

    How Sentienta Builds and Tests a Visual Strategy

    When faced with an unfamiliar puzzle, Sentienta’s planner (the FPCN) doesn’t assume the answer—it builds it, leveraging the strengths of each sub-module within the network. After receiving the new input grid, the planner starts by proposing several candidate rules based on the structure found in previous examples.

    One of the most promising paths began with the IPL sub-module:

    [Region: IPL,
    Type: PlanVariantProposed,
    PlanID: VariantA_Replace5sWithRowHeader,
    TriggeringMismatchID: NOTMHkxb,
    Goal: "Generate test output grid from third input using consistent transformation logic seen in prior examples",
    Steps: "1. For each row in the input grid, identify its first (leftmost) value. 2. Replace every occurrence of '5' in that row with the leftmost value (the row's 'header'). 3. Keep all other values unchanged. 4. Return fully transformed grid.",
    Conditions: "This path holds if the transformation in training examples consistently replaced 5s with the corresponding row headers, and no counterexamples or modifier rules are shown.",
    ContextRef: 5YsEfivv,
    Submitter: IPL,
    Schema: v1,
    Provenance: FPCNBlackboard]

    This proposal—called VariantA_Replace5sWithRowHeader—was based on a consistent pattern in the training examples. But it wasn’t the only idea. Other variants included using the most frequent number in each row, simply leaving the grid unchanged, or even substituting ‘5’s with the row’s index. Here is plan variant B:

    [Region: IPL,
    Type: PlanVariantProposed,
    PlanID: VariantB_Replace5sWithMajorityNon5InRow,
    TriggeringMismatchID: NOTMHkxb,
    Goal: "Generate test output grid from third input using consistent transformation logic seen in prior examples",
    Steps: "1. For each row in the grid, count all values except 5. 2. Identify the most frequent non-'5' value in the row. 3. Replace all 5s in the row with that majority value. 4. If tie or no non-5s, use a default (e.g., 0). 5. Other values remain unchanged.",
    Conditions: "This variant applies if prior transformations suggest a rule targeting contextual majority rather than strict positional cues, or if the 5s are treated as needing contextual resolution.",
    ContextRef: cW3Sw8ie,
    Submitter: IPL,
    Schema: v1,
    Provenance: FPCNBlackboard]

    In total the FPCN generated 4 plans to solve the puzzle. These alternatives gave the system room to test possibilities, not just assert them.

    Next, the dlPFC simulated how the row-header plan would perform on the test input. The plan was formally posted:

    [Region: dlPFC,
    Type: PlanFramePosted,
    PlanID: Plan-Replace5sWithRowHeader-001,
    PlanStatus: in-progress,
    TriggeringMismatchID: NOTMHkxb,
    Goal: “Generate output for test input by applying known transformation logic from prior examples”,
    Steps: “(1) For each row in the input grid, obtain the first cell value of the row — this becomes the ‘row header’. (2) Traverse each value in the row: (a) If the value is ‘5’, replace it with the row header. (b) Otherwise, keep the existing value unchanged. (3) Apply this to all rows in the input grid. (4) Output the fully transformed grid where all 5s have been replaced with their respective row headers.”,
    Conditions: “Applies only if the original value ‘5’ is used as a proxy or placeholder for the row header, and no exceptions are present in the training sets. Does not apply if any 5 has been previously preserved or repurposed based on location or count logic — no such evidence found.”,
    ContextRef: 5YsEfivv,
    Submitter: dlPFC,
    Schema: v1,
    Provenance: FPCNBlackboard]

    It then ran test validation logic internally, confirming the transformation matched observed training behavior:

    [Region: dlPFC,
    Type: PlanTestValidation,
    PlanID: Plan-Replace5sWithRowHeader-001,
    EvaluationStatus: pass,
    Reasoning: "All earlier training examples demonstrate a consistent transformation rule: any '5' in the grid is replaced by the row's first value. The output for each matched the exact transformed structure expected based on this logic. Simulation of the plan on the third test input results in a structured grid matching the transformation pattern used previously.",
    EvidenceSample: "Third test grid contained several '5's; applying the row header replacement pattern yields a full non-empty output consistent with prior input/output behavior.",
    Submitter: dlPFC,
    Schema: v1,
    Provenance: FPCNBlackboard]

    After this, two key modules took a final look:

    • The dACC approved the plan’s operational soundness: “EvaluationOutcome: approved”.
    • The IFG, responsible for symbolic alignment, added: “…upholds internal coherence… respects the self-narrative of turning structure into clarity.”

    With support across planning, evaluation, and identity modules, the plan was finalized as Replace5sWithRowHeader.
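    The finalized rule is simple enough to state directly in code. A minimal JavaScript sketch of the Replace5sWithRowHeader transformation (the example grid is illustrative, not one of the actual ARC-AGI figures):

```javascript
// Replace every 5 in a row with that row's first (leftmost) value,
// leaving all other cells unchanged.
function replace5sWithRowHeader(grid) {
  return grid.map((row) => {
    const header = row[0];
    return row.map((cell) => (cell === 5 ? header : cell));
  });
}

// Example: replace5sWithRowHeader([[2, 5, 5], [3, 5, 1]])
// returns [[2, 2, 2], [3, 3, 1]]
```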

    The final plan was shared with the mPFC region of the DMN, which integrated the plan, completing the Recursive Reasoning for this Sentienta agent. Here is the final mPFC message sent to the agent.

    Tue, 11 Nov 2025 15:53:39 UTC mPFC:
    Cycle: 3 EntryID: sHL8HJ4h
    Thought: ### ✅ Identity Statement (Refined & Reconfirmed)

    We are a pattern-conscious system committed to keeping logic and meaning aligned. When a clear transformation rule exists — especially one with prior validation — we apply it faithfully, unless directly invited to reinterpret. We strike a balance between rule-following integrity and adaptive interpretation, depending on the visible goal or cue.

    🧩 Plan Integration (Final Execution Phase) We confirm the Replace5sWithRowHeader plan is now fully executed and complete, anchored by the plan logic:

    **mPFC Integrated Plan Steps:**
    (1) For each row in the input grid, scan for cells with value 5 and replace them with the first value in that row.
    (2) All other values remain unchanged.
    (3) Only rows containing the value 5 are modified.
    (4) Empty output is invalid if a prior pattern confirms transformation.
    **mPFC Integrated Plan Steps Tag End**

    mPFC Resolution: Issue=PlanExecution | EntryID=CC8pCUDr | Action=Dispatched | Reason=Plan finalized and executed to resolve empty-output mismatch. mPFC Resolution Tag End

    Finally, the agent took the user’s query (the puzzle) together with its internal reasoning and produced the final output:

    Figure 3: Final Output – the final grid output produced by this agent.

    The full output was not guessed—it was reasoned through, tested across alternatives, and backed by an auditable trail from idea to implementation.

    Why This Is Different

    Many top-performing ARC-AGI systems rely on finely tuned models or have been trained on similar tasks to internalize common transformation rules. Sentienta’s architecture is different—it uses no special training and no tailored algorithms. It’s built entirely from commercial LLMs, orchestrated through a recursive reasoning framework that enables it to generate candidate plans, test each one, and commit only when a plan fits both the data and internal logic. That structure makes its process auditable—it explains why the answer works—and adaptable—because it can respond to new problems the same way it tackles old ones: by reasoning.

    Closing Insight

    The puzzle in this post was simple, but the underlying challenge is not. ARC-AGI problems strip away instructions, remove repetition, and give systems only a few examples to work with. Humans solve them by reasoning—spotting structure, discarding bad ideas, and refining good ones. Sentienta does the same. Without training on similar tasks or specialized models, it succeeds because its architecture supports that kind of thinking: proposing ideas, testing them, and explaining why they work. That ability to reason through uncertainty isn’t just useful in puzzles—it’s critical for how we apply AI to the real world.

  • Build Agent Teams Instantly

    Sentienta’s Natural-Language Team Builder & Automated Agent Matching

    Sentienta introduces a major enhancement to collaborative productivity: users can now create fully customized teams simply by describing their needs in everyday language. The SentientaHelp agent interprets these instructions, assembles a tailored team by selecting or creating agents with relevant skills, and streamlines the entire setup process. This functionality is further expanded with the new ‘Britney’ agent, enabling effortless management and expansion of project teams through natural language commands. Together, these features significantly lower the barrier to building, managing, and scaling effective teams within Sentienta.

    Natural Language Team Creation

    With the latest update, Sentienta users can initiate the creation of new teams using plain language instructions. By stating a goal or need such as:

    User: I need a marketing team to help with my Home Baking startup. I'll be selling at Farmers Markets, I'll have a website and some social media presence.

    The SentientaHelp agent analyzes the request, determines the required roles, and forms a dedicated team. This process eliminates manual configuration, making team assembly simple and accessible to all users regardless of technical background. In this case, Sentienta created a team with dedicated marketing experts selected from the Agent Marketplace:

    Adding Agents with Simple NL instructions

    Users can also add agents using SentientaHelp. Let’s say we want to add an agent that will help with Gluten Free recipes:

    User: Please add an agent to the Home Baking Marketing Team that will help me create new GF recipes.

    In this case, no agents in the Agent Marketplace matched the user’s need, so SentientaHelp created one and added the agent to the team:

    A Hiring Manager for Your Team

    To make growing your teams even easier, you can find a Hiring Manager agent (‘Britney’) in the Project Management section of the Agent Marketplace. The Britney agent performs similar functions to SentientaHelp, but is aware of the team she belongs to. This makes adding agents even easier:

    User: My business has grown enough that I need someone to help me manage my accounts payable and receivables.
    
    Britney:
    Agent Suresh from the Agent Marketplace, was added to Home Baking Marketing Team.

    In this case, Britney found the AP/AR specialist (‘Suresh’) in the Finance section of the Marketplace, and added him to the team. This agent helps keep track of invoices and payments, and manages any discrepancies. The agent has special file handling capabilities letting the user upload documents for analysis.

    Conclusion

    These enhancements mark a significant step forward in simplifying team creation and management within Sentienta. Users can now assemble project teams instantly using natural language, reducing setup time and minimizing complexity. The flexibility to continue managing teams manually via the Manage Teams page remains available, ensuring users retain full control while benefiting from the new streamlined workflow.

  • Effortless External Data

    Connecting Sentienta Agents to APIs for Smarter Business Workflows

    Accessing real-time and external data is essential for many business tasks, from market research to operational monitoring. Sentienta enables users to connect to external data sources through APIs, allowing agents to enhance their responses with up-to-date information.

    This post examines two primary ways to achieve this within Sentienta: configuring a static API URL when creating an agent, and using the new Agent Marketplace agent, Ben, for flexible, on-demand data retrieval via natural language queries. Both methods help teams integrate API-driven insights directly into their workflows with minimal technical overhead.

    Method 1: Adding an API URL When Creating an Agent

    Sentienta allows users to connect agents directly to external APIs by specifying a fixed URL at the time of agent creation. This approach is best suited for situations where the API endpoint and its required parameters do not change frequently.

    Example:

    Consider setting up an agent using the “List of Free Public APIs” (e.g., https://www.freepublicapis.com/api/apis?limit=10&sort=best). By entering this URL in the agent’s configuration, the agent can retrieve and display up-to-date lists of publicly available APIs upon request. This empowers users to quickly find additional data sources relevant to their needs without searching manually.

    • Setup: Create a new agent, add the “List of Free Public APIs” endpoint as the agent’s default URL.
    • Result: When a user asks the agent for available APIs, the agent queries the service and returns relevant results.

    This method is particularly effective for routine queries where the required data source remains constant and access is straightforward.

    Note that all necessary parameters must be included when adding the URL to the agent’s description. (For example, some websites require the calling agent to provide a User-Agent string or other parameters identifying the application.)

    Method 2: Using the Ben Agent for Flexible, On-Demand API Access

    The ‘Ben’ agent, available in the Agent Marketplace section Document and Content Access, enables users to connect to almost any external API using natural language instructions. By providing a URL template and relevant parameters within a query, users can retrieve data on demand—even from APIs with dynamic requirements.

    As with any agent in the Agent Marketplace, using this agent is just a matter of adding it to your team and customizing the name, type, and persona.

    Example:

    Suppose you want to access NASA’s Near-Earth Object Web Service, which requires parameters for start_date, end_date, and a user API key (e.g., NASA NEO API). The URL and parameters can be entered as a natural-language query to this agent:

    User: The url is https://api.nasa.gov/neo/rest/v1/feed?start_date=START_DATE&end_date=END_DATE&api_key=KEY. The START_DATE is 2023-09-07 and the END_DATE is 2023-09-08. The value of KEY is YOUR_KEY.

    Ben: Here’s a pretty summary of NASA’s Near-Earth Object (NEO) close approaches for September 7–8, 2023.

    ---

    🪐 Overall Summary

    - Total NEOs detected: 60

    - Dates covered: September 7–8, 2023

    - Potentially Hazardous Asteroids (PHAs): 5 (labeled below)

    - Closest approach distance: 0.00007 AU (~10,361 km from Earth)

    - Smallest object (est. min. diameter): ~0.9 m

    - Largest object (est. max. diameter): ~695 m

    ---

    📅 NEOs By Day

    September 8, 2023 (31 NEOs detected)

    Top highlights:

    - Closest Approach:

    - (2023 RQ2) — approached at 0.00126 AU (189,304 km) at 06:42 UTC, diameter: ~6–13 m

    - Largest Asteroid:

    - 340291 (2006 CV) — est. diameter 281–628 m, passed at 0.114 AU (17 million km)

    - Potentially Hazardous Asteroids (PHAs):

    - 2019 FM2 — est. diameter ~201–449 m, 0.168 AU (25 million km)

    - Fastest Relative Velocity:

    - (2023 QK1) — 29.09 km/s (104,711 km/h), 0.0586 AU away

    - Sentry-monitored (Impact risk monitored):

    - (2023 RY15) — 0.00398 AU (595,000 km), sentry monitored, diameter: ~6.7–15 m

    [Sentry risk page](http://api.nasa.gov/neo/rest/v1/neo/sentry/54383809?api_key=...)

    ---

    September 7, 2023 (29 NEOs detected)

    Top highlights:

    - Closest Approach:

    - (2023 RS) — 0.00007 AU (10,361 km) at 14:26 UTC, diameter: ~0.9–2 m

    - Largest Asteroid:

    - (2006 QE) — est. diameter 311–695 m, passed at 0.159 AU (23.7 million km)

    - Potentially Hazardous Asteroids (PHAs):

    ...

    Security Note on API Keys

    Only use personal or public API keys with any client-side API access if the data is non-sensitive. API keys entered in the team dialog are handled in the browser and are not encrypted or securely stored; never use this method for sensitive information or high-value credentials.

    This approach is best for on-the-fly research, prototyping, or data gathering when endpoints and request parameters frequently change.
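Under the hood, a request like the NASA example amounts to substituting the user-supplied values into the URL template before the call is made. The sketch below uses the template and parameter names from the transcript above; the function itself is illustrative, not Sentienta’s actual implementation (NASA’s `DEMO_KEY` stands in for a real key):

```python
def fill_url_template(template: str, values: dict) -> str:
    """Replace placeholder tokens (e.g. START_DATE) with user-supplied values."""
    for placeholder, value in values.items():
        template = template.replace(placeholder, value)
    return template

template = ("https://api.nasa.gov/neo/rest/v1/feed"
            "?start_date=START_DATE&end_date=END_DATE&api_key=KEY")
url = fill_url_template(template, {
    "START_DATE": "2023-09-07",
    "END_DATE": "2023-09-08",
    "KEY": "DEMO_KEY",  # low-value demo key only; see the security note above
})
print(url)
```

The same pattern generalizes to any endpoint: the user states the template and the values once, and each subsequent query only needs to change the values.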

    Conclusion

    Sentienta’s flexible API integration unlocks a wide range of use cases for business teams. For example, an agent configured with a static URL can support daily competitor monitoring, environmental alerts, or compliance audits by consistently retrieving the same type of data. Meanwhile, the Ben agent makes it easy to perform ad hoc research, pull market data on demand, or gather custom reports by dynamically querying APIs in response to fast-changing business needs. These capabilities help teams save time, discover new opportunities, and keep information flowing directly into collaborative workflows, empowering better decision-making across projects.

  • Automating Market Vigilance

    In past posts, we’ve seen how natural language workflows simplify complex portfolio monitoring, and how scheduled tasks let agents run without user input. Today, we’re combining these powers in a single instruction: a user defines a high-level goal—monitor AMZN and NVDA for 3% price drops—and Sentienta handles the rest. It parses the language, builds the workflow, schedules it to run every 30 minutes, and manages each run intelligently. Multiple scheduled workflows can run in parallel, each executing complex logic—no code, no setup screens, just natural language.

    Building the Intelligent Workflow

    The user’s request is simple on the surface—“Monitor AMZN and NVDA for a 3% drop and alert me”—but that instruction triggers a multi-layered automation. Sentienta interprets the goal, identifies that real-time pricing data is needed, calculates change percentages, evaluates conditions, and routes alert tasks if thresholds are met. Each step is delegated to the appropriate agent using the right tools. All of it is wrapped into a workflow that is created instantly through natural language—no coding, no flowchart building, no manual agent assignment.
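The core condition the workflow evaluates is simple percentage math. A minimal sketch of the threshold check follows; the function names and sample prices are illustrative, and real data retrieval, scheduling, and alert routing are handled by the platform:

```python
def pct_change(previous: float, current: float) -> float:
    """Percentage change from the previous price to the current price."""
    return (current - previous) / previous * 100.0

def should_alert(previous: float, current: float, drop_threshold: float = 3.0) -> bool:
    """True when the price has fallen by at least drop_threshold percent."""
    return pct_change(previous, current) <= -drop_threshold

# Illustrative prices, not real quotes:
print(should_alert(200.00, 193.50))  # 3.25% drop -> alert
print(should_alert(200.00, 196.50))  # 1.75% drop -> no alert
```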

    The following graph shows this simple workflow:

    Seeing It Run

    Once the workflow is created, the user simply asks it to run every 30 minutes—and Sentienta schedules recurring executions automatically. Each run pulls updated stock prices, assesses whether either AMZN or NVDA has dropped by 3%, and triggers alerts if so. Users can check task status, inspect the workflow graph, or fetch results at any time—all via natural language. As the transcript shows, there’s no manual reconfiguration between runs. The system just works—consistently, transparently, and in parallel with any other workflows the user initiates.

    This workflow is just one of many that can be created and scheduled in parallel. A portfolio manager might monitor multiple assets, each with its own threshold, timing, or downstream trigger—Sentienta handles them all concurrently. Whether it’s tracking different sectors, adding earnings-based filters, or launching trading actions based on composite signals, every workflow runs independently and automatically. There’s no limit on scale or complexity—just natural language input and intelligent execution.
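Conceptually, each scheduled workflow is a small, independent configuration evaluated on every run. The sketch below shows how several such monitors, each with its own threshold, might be represented and checked in a single pass; the config shape and prices are hypothetical, and actual scheduling, data feeds, and downstream triggers live inside the platform:

```python
from dataclasses import dataclass

@dataclass
class Monitor:
    symbol: str            # asset being watched
    baseline: float        # reference price for this run
    drop_threshold: float  # percent drop that triggers an alert

def evaluate(monitors: list[Monitor], prices: dict[str, float]) -> list[str]:
    """Return the symbols whose price has dropped past their own threshold."""
    triggered = []
    for m in monitors:
        change = (prices[m.symbol] - m.baseline) / m.baseline * 100.0
        if change <= -m.drop_threshold:
            triggered.append(m.symbol)
    return triggered

monitors = [Monitor("AMZN", 200.0, 3.0), Monitor("NVDA", 120.0, 3.0)]
prices = {"AMZN": 192.0, "NVDA": 118.0}  # illustrative quotes
print(evaluate(monitors, prices))  # AMZN is down 4%; NVDA only about 1.7%
```

Because each monitor carries its own baseline and threshold, adding another asset or rule is just another entry in the list, which is the point of describing workflows in natural language rather than code.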

    Autonomous Workflows, Any Domain

    Sentienta scheduled workflows enable fast triage, coordinated triggers, and domain-specific actions—without custom code.

    • Elastic, auto-scaled concurrency for workflows across finance, support, ops, and more
    • Natural-language scheduling and editing
    • Real-time status checks and workflow graph inspection
    • Automated evaluation and trigger-based actions
    • Secure, audit-ready history of every run

    To learn more, start with our Managing Portfolio Workflows with Natural Language: A Multi-Agent Use Case and follow up with Powering Productivity: Task Scheduling for Intelligent Agents.

  • How Sentienta Teams Navigate Supply Chain Disruptions: A Midwest Fulfillment Crisis

    Introduction

    When an unexpected promo surge strains Midwest operations with forecasting overshoot, logistics bottlenecks, and perilously low inventory, a Sentienta Supply Chain team can help strategize solutions. In this post, we walk you through real-time data snapshots and individual agent analyses to show how distributed cognition transforms isolated insights into a unified, adaptive strategy that resolves complex fulfillment challenges.

    The Supply Chain team consists of three specialized agents who think like experts and collaborate like a team.

    • Miles: the Demand Forecaster, identifies unexpected sales surges and recalibrates forecasts to match emergent buying patterns.
    • Isla: the Inventory Optimization Strategist, spots stockout risks and reshuffles resources across distribution centers to sustain availability.
    • Talia: the Logistics Flow Strategist, detects fulfillment bottlenecks and reroutes shipments to maintain throughput and cost-efficiency.

    Each agent works from their own specialized dashboard—focused on demand, inventory, or logistics—to identify emerging risks. Once surfaced, these distinct insights are shared across the team, enabling a coordinated strategy that addresses the full scope of the disruption.

    The Data

    Isla’s Inventory Dashboard:

    Key Insight: DC-B (South) shows a 100% inventory variance with zero actual inventory and delayed container status due to port congestion.

    Miles’ Demand Dashboard:

    Key Insight: Midwest region experienced a 28% sales spike driven by influencer uplift and online channel deviation—outpacing model expectations by a wide margin.

    Talia’s Logistics Dashboard:

    Key Insight: The Midwest region shows major logistics disruption: a 59% delivery delay, 35% staffing gap at the Chicago hub, and a $1.28 per-unit cost surge—triggered by reroutes and carrier delays.

    Agent Insights – What the Dashboards Revealed to the Agents

    As part of the daily review cycle, each agent initiated a rapid diagnostic scan of their functional dashboards—surfacing anomalies, shortfalls, and emerging threats from the day’s incoming data load. The following transcript captures the collaborative intake phase, where agent specialists flag critical issues in preparation for joint strategy formation. Their early assessments below form the baseline for downstream coordination.

    Supply Chain Team Transcript: Agent Analysis

    Orchestration and Strategy – When Agent Teams Work Together

    After reviewing their functional dashboards, the Supply Chain agents transitioned from isolated diagnostics to integrated strategy formation. What follows is a transcript—condensed for clarity—that reveals how a distributed team of AI experts negotiated trade-offs, merged perspectives, and built a coordinated mitigation strategy for escalating Midwest risks.

    Supply Chain Team Transcript: Team Analysis

    The team co-developed a gated intake triage plan for DC-C with four synchronized filters: SKU velocity tier, forecast lock window, supply/demand thresholds, and margin-volatility pairing. They agreed to data handoffs via shared APIs and established cap tolerances to maintain flexibility under risk. This interaction exemplifies emergent cognition—where no individual agent held the entire solution, but collaboration yielded a coherent, executable plan.

    Conclusion

    This example highlights Sentienta’s core advantage: turning fragmented functional data into synchronized decision intelligence. As agents negotiate thresholds, define roles, and operationalize shared triggers, strategy becomes not just automated—but emergent. Sentienta Teams adapts at the pace of complexity, enabling businesses to respond with coordinated precision.

  • Your Team Is Using AI Wrong—But Not For the Reason You Think

    Nate Jones is one of the sharpest observers of the AI industry, offering thoughtful takes on how the field is evolving faster than most teams can adapt. In one of his recent posts (and video), he highlights a crucial, yet often overlooked insight: new knowledge doesn’t just come from tools. It emerges from how teams think together.

    He’s absolutely right. But while others are still figuring out how to retrofit ChatGPT into legacy workflows, we built something different from the start.

    Sentienta wasn’t built to join a team—it was built to be one. An architecture where cognition is emergent, shared, and preserved through expert agent interaction.

    This post is our view on Nate’s insight about ‘distributed cognition’ and a demonstration of what it looks like in action.

    What Is Distributed Cognition?

    In traditional systems, intelligence is seen as residing in individuals or in the outputs of standalone tools. But real team intelligence is different. It’s a dynamic process: understanding emerges as people, and now agents, interact, adapt, and build on one another’s contributions.

    Sentienta is designed around this principle. Its expert agents don’t just complete tasks; they participate in a continuous, evolving exchange of reasoning. Each brings a domain-specific perspective, and through ongoing dialog, they generate insights that no single agent or human could reach alone.

    This isn’t just “stored knowledge”—it’s active cognition. When agents respond to one another, challenge assumptions, and adapt strategies together, they form a cognitive system. What emerges isn’t data, but collective understanding.

    Sentienta isn’t a system for remembering what happened—it’s a system for thinking together in real time.

    This is what makes Sentienta more than a workflow tool. It is distributed cognition embodied: an always-on, always-evolving team of minds—virtual and human, each contributing to a deeper, shared sense of what’s true and what to do next.

    Innovating Through Collaborative Insight

    The following graphic shows how a Pricing Strategist’s initial idea evolves, through critical input from a Customer Behavior Analyst, into a novel “build-your-own bundle.” The visualization highlights Sentienta’s ability to generate breakthrough strategies through collaborative agent interactions.

    What begins as expert input becomes something more—a new idea born through structured interaction. Sentienta not only facilitates this dynamic exchange but preserves the conversation, making insight traceable, reusable, and ready for replay when it matters most.

    Emergent Strategy from Agent Teamwork

    Teamwork is essential because it drives creative problem solving on multiple levels: humans contribute diverse perspectives and strategic intuition, while agents rapidly process data and combine insights at scale. This dual approach means that by integrating people with high-performing agent teams, businesses can overcome the natural limits of human capacity, ensuring that expertise expands without additional headcount.

    Sentienta’s platform not only leverages this synergy by preserving collaborative dialogs to build a lasting archive of insights, but also serves as a dynamic space for co-creating new ideas through agent collaboration. By surfacing insights that no single agent or person could produce alone, Sentienta teams exemplify emergent cognition, delivering strategies born from structured, multi-perspective dialog.

  • Tips and Tricks: Agent Marketplace

    In past posts, we’ve discussed the process of creating agents from scratch. While this is straightforward, there’s a good chance that the agent you need has already been built by someone else. The Agent Marketplace is a library of pre-made agents, allowing you to quickly find and integrate the right one into your team.

    To add an agent from the Marketplace, navigate to Manage Teams, select your desired team, and then click on Agent Marketplace in the left menu.

    The Agent Marketplace is organized into categories based on the agents’ areas of expertise. Browse through these categories to find an agent that matches your needs. Each agent listing includes a description of its skills and persona. To add an agent, simply check the box next to its name. You can select multiple agents at once—just be sure to click the Add Agents to Selected Teams button at the top of the page. This process helps you assemble a functional team without the effort of manually creating each agent.

    While this makes team-building seamless, what’s truly powerful is that Marketplace agents are more than static tools—they’re customizable templates. Once you’ve added an agent, you can refine its persona to better align with your specific objectives.

    For example, let’s say you’re assembling a software team to develop a cutting-edge AI product. You’ve added the Rubin agent, but its default persona is too general. You need this agent to specialize in AI development tools. Here’s how to tailor it:

    On the Manage Teams page, locate the Rubin agent in the Your Agents and Teams section. Click on the agent’s persona to edit it. Replace the default text with a more specialized persona, such as:

    As a Senior Software Designer with expertise in Artificial Intelligence, you will architect and develop advanced AI-driven solutions using state-of-the-art technologies. You will work with machine learning frameworks such as TensorFlow, PyTorch, and Scikit-learn, leveraging APIs like OpenAI’s GPT for AI-powered applications. Additionally, you’ll utilize NLP libraries such as spaCy and Hugging Face for language processing tasks. Expertise in cloud-based AI services (AWS SageMaker, Google Vertex AI, Azure AI) and big data platforms like Apache Spark and Kafka is crucial. Your role includes optimizing AI workflows, integrating intelligent automation into software applications, and guiding best practices for AI model deployment and scalability.

    You can also customize the agent’s name—which is useful if you plan to add multiple instances of the same base agent. Additionally, selecting a distinct color for the agent’s responses helps differentiate it in team interactions. To do this, click on the color square in the agent listing and choose a new highlight color. After finalizing your changes, always click Save Changes to apply them.

    The Agent Marketplace makes it incredibly easy to build high-performing teams in just a few clicks. Even better, its customization features ensure that your agents are perfectly aligned with your needs. In future posts, we’ll explore agents that integrate with external tools and discuss how to optimize their capabilities through persona refinement.

  • Tips and Tricks for Creating an Effective Agent

    Creating an agent in Sentienta is straightforward, but a few key strategies can help ensure your agent works optimally. Below, we’ll walk through the setup process and offer insights on defining an agent’s role effectively.

    Step 1: Create a Team

    Before creating an agent, you must first establish a team. To do this:

    1. Navigate to the Your Agents and Teams page using the Manage Teams button on the homepage.
    2. Click Create a Team. You’ll see three fields:
      • Name: Enter a name, such as “HR Team”.
      • Type: Categorize the team (e.g., “Human Resources”).
      • Description: This defines the team’s purpose. A simple example: “This team manages Human Resources for the company.”
    3. Click Submit to create the team.

    Step 2: Create an Agent

    Once you’ve created a team, it will appear in the Teams section along with the Sentienta Support Team. Follow these steps to add an agent:

    1. Select your team (e.g., HR Team).
    2. Click Create an Agent in the left menu.
    3. Assign a name. Let’s call this agent Bob.
    4. Define Bob’s title—e.g., Benefits Specialist.
    5. Define Bob’s Persona, which outlines expertise and interactions.

    Step 3: Crafting an Effective Persona

    The Persona field defines the agent’s expertise and shapes its interactions. As discussed in our earlier post on Agent Interaction, the agent uses an LLM to communicate with both users and other agents. Since the persona is part of the LLM system prompt, it plays a crucial role in guiding the agent’s responses.

    The persona should clearly define what the agent is able to do and how the agent will interact with the other members on the team. (To see examples of effective personas, browse some of the agents in the Agent Marketplace).

    A well-crafted persona for Bob might look like this:

    “You are an expert in employee benefits administration, ensuring company programs run smoothly and efficiently. You manage health insurance, retirement plans, and other employee perks while staying up to date with legal compliance and industry best practices through your Research Assistant. You provide guidance to employees on their benefits options and collaborate with the HR Generalist and Recruiter to explain benefits to new hires.”

    Key persona components:

    • Expertise: Clearly defines Bob’s role in benefits administration.
    • User Interaction: Specifies that Bob provides guidance to employees.
    • Team Collaboration: Mentions interactions with other agents, such as the HR Generalist and Recruiter.
    • Delegation: Optionally, defines which agents Bob may delegate to—for example, a Research Assistant agent that retrieves compliance updates.

    If additional agents (like the HR Generalist or Research Assistant) don’t yet exist, their roles can be updated in Bob’s persona as the team expands.
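Since the persona becomes part of the agent’s LLM system prompt (as noted above), its effect can be pictured as message assembly before each model call. The following is a hypothetical sketch only: the message format mirrors common chat-completion APIs, not Sentienta’s internal schema, and the function name is invented for illustration:

```python
def build_messages(name: str, title: str, persona: str, user_query: str) -> list[dict]:
    """Compose a chat-style message list with the persona as the system prompt."""
    system_prompt = f"You are {name}, {title}. {persona}"
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_query},
    ]

messages = build_messages(
    "Bob",
    "Benefits Specialist",
    "You are an expert in employee benefits administration...",
    "What is your expertise?",
)
print(messages[0]["content"])
```

Viewed this way, a vague persona produces a vague system prompt, which is why the expertise, interaction, and collaboration components listed above matter so much.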

    Once the persona is complete, click Submit to add Bob to the team. (We won’t discuss the optional URL field today, but will save it for a future post.)

    Step 4: Testing Your Agent

    Now that Bob is created, you can test the agent’s expertise:

    1. Navigate to the home page and select the HR Team below Your Teams.
    2. Make sure Bob’s checkbox is checked and enter a query, such as “What is your expertise?”
    3. Bob will respond with something like:

    “I am a Benefits Specialist, responsible for employee benefits administration, including health insurance, retirement plans, and other perks. I ensure compliance with regulations and provide guidance to employees on their benefits options.”

    If asked an unrelated question, such as “What is today’s weather?”, Bob will remain silent. This behavior ensures that agents only respond within their expertise, promoting efficient team collaboration.

    Next Steps

    Once your agent is set up, you can explore additional customization options, such as adding company-specific benefits documentation to Bob’s knowledge base. Stay tuned for a future post on enhancing an agent’s expertise with internal documents.