Tag: chatgpt

  • From Prompt to Action: Orchestrating Workflows in Real Time

    In most business settings, workflows involve a sequence of interrelated tasks distributed across roles and systems. Until now, large language models (LLMs) have tended to operate in isolation, responding to one-off queries without coordinating broader actions. Sentienta introduces workflow agents that act differently. Rather than simply responding, they structure and drive processes. In this post, we demonstrate how a workflow agent, David, takes a compound instruction and performs three coordinated steps: (1) decomposing intent, (2) mapping task dependencies, and (3) orchestrating execution via agent collaboration.

    1. The Scenario: A Simple Request with Hidden Complexity

    A user submits the following instruction: “Here is our portfolio [AMZN, NVDA, TSLA, GM]. For each stock, if the price decreased by more than 1%, send an alert.”

    At first glance, this appears to be a straightforward request. But the instruction conceals multiple steps requiring distinct capabilities: parsing the list of assets, retrieving current stock prices, applying threshold logic, and preparing an alert in the correct format.

    David is Sentienta’s workflow agent. To fulfill the request, he relies on a team of specialized agents—such as Angie, who handles online data retrieval; Rob, who focuses on data analysis and threshold logic; and Alec, who formats and delivers outbound messages. David uses his awareness of each agent’s capabilities to deconstruct the request, delegate the appropriate tasks, and coordinate the correct execution sequence.
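    The decomposition and delegation described above can be sketched as a small dependency graph. This is purely an illustration of the idea, not Sentienta's actual internals; the task names, agent assignments, and topological-sort helper are all hypothetical.

```python
# Hypothetical sketch of a planner's decomposed request: each task names
# the agent role expected to handle it and the tasks it depends on, so
# outputs of one step can feed into the next.

plan = {
    "parse_portfolio": {"agent": "David", "depends_on": []},
    "fetch_prices":    {"agent": "Angie", "depends_on": ["parse_portfolio"]},
    "check_threshold": {"agent": "Rob",   "depends_on": ["fetch_prices"]},
    "send_alert":      {"agent": "Alec",  "depends_on": ["check_threshold"]},
}

def execution_order(plan):
    """Topologically sort tasks so each runs after its dependencies."""
    order, seen = [], set()
    def visit(name):
        if name in seen:
            return
        for dep in plan[name]["depends_on"]:
            visit(dep)
        seen.add(name)
        order.append(name)
    for name in plan:
        visit(name)
    return order

print(execution_order(plan))
# ['parse_portfolio', 'fetch_prices', 'check_threshold', 'send_alert']
```

    The ordering is what matters: Angie cannot fetch prices before the portfolio is parsed, and Alec's alert step only makes sense after Rob's threshold check.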

    This simple example introduces the transition from a single human prompt to a structured, multi-agent collaboration.

    2. Visualizing the Workflow as a Structured Plan

    To manage the user’s request, David constructs a structured plan based on the capabilities of his team. At the core of this plan is a sequence of steps—defined, linked, and conditionally triggered—where outputs of one task feed into the next.

    The block diagram below is a high-level abstraction of this internal plan. It shows how David encapsulates the user’s prompt into a coordinated process. Each element in the diagram represents a role or action within the workflow, capturing how Sentienta combines the broad reasoning abilities of language models with the control of a dynamic scheduler. This view is a “pre-expansion plan” in which David defines the overall structure before agents are assigned.

    This structure allows David to handle complexity systematically, using reusable patterns that scale across tasks.

    3. Expanding Tasks, Assigning Agents, and Filling Gaps

    Once David has structured an initial plan, the next step is expansion—where the abstract workflow is broken into explicit, actionable tasks for each stock in the portfolio. This involves branching the workflow into parallel paths, one per stock, and mapping the subtasks to specific agents.

    For real-time data retrieval, Angie is assigned to fetch the current price of each stock. Rob takes on the analysis logic—checking whether each stock’s price has dropped more than 1%. Alec is responsible for formatting and sending alerts, but that only happens if the stock meets its threshold condition.
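    The per-stock branching and conditional alert stage can be illustrated in a few lines. The price figures below are made up for illustration; in Sentienta, Angie would fetch them live, and Rob and Alec would own the analysis and alert stages.

```python
# Illustrative expansion: one branch per stock, with the alert stage
# gated on the threshold condition (price down more than 1%).

THRESHOLD_PCT = -1.0

# (previous close, current price) per stock -- made-up numbers
prices = {
    "AMZN": (170.66, 171.20),
    "NVDA": (96.30, 95.90),
    "TSLA": (252.40, 244.10),
    "GM":   (44.80, 43.95),
}

def pct_change(prev, curr):
    return (curr - prev) / prev * 100.0

alerts = []
for symbol, (prev, curr) in prices.items():
    change = pct_change(prev, curr)                      # analysis stage
    if change < THRESHOLD_PCT:                           # conditional trigger
        alerts.append(f"ALERT {symbol}: {change:.2f}%")  # alert stage

print(alerts)
# ['ALERT TSLA: -3.29%', 'ALERT GM: -1.90%']
```

    With these sample prices, only the TSLA and GM branches fire, mirroring the selective activation shown in the runtime view later in this post.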

    Where explicit agent coverage is missing—such as interpreting threshold evaluation results—David deploys internal language models to classify whether conditions have been met. This ensures nothing gets dropped or left ambiguous, even in cases where no agent matches the need directly.

    The diagram below captures this expanded version of the workflow. It shows how each stock’s path is elaborated into three stages (data retrieval, analysis, alert) and where Sentienta’s internal logic steps in dynamically to complete the chain.

    4. Seeing the Workflow in Action: Conditional Paths in Real Time

    This final diagram provides a runtime view of how David’s workflow executes based on live data. Each block in green indicates a task that was actively executed; grey blocks were skipped due to unmet conditions.

    Here, only TSLA and GM triggered alerts—because only those stocks declined by more than 1%. This selective activation demonstrates how David uses real-time analysis and embedded logic to trigger only the necessary branches of a plan.

    While this stock alert workflow is intentionally simple, it serves as a clear illustration of how Sentienta agents collaborate, reason, and conditionally execute tasks in real time. In follow-up posts, we’ll explore more complex scenarios—like coordinating multi-agent triage in response to supply chain disruptions or chaining diagnostics across departments for strategic escalation—which highlight the full sophistication of Sentienta’s agent framework.

    Even more powerfully, workflows like this can be scheduled to run at regular intervals—automatically refreshing data, reevaluating conditions, and feeding results into broader systems of action without manual reentry.

  • Your Team Is Using AI Wrong—But Not For the Reason You Think

    Nate Jones is one of the sharpest observers of the AI industry, offering thoughtful takes on how the field is evolving faster than most teams can adapt. In a recent post (and accompanying video), he highlights a crucial yet often overlooked insight: new knowledge doesn’t just come from tools. It emerges from how teams think together.

    He’s absolutely right. But while others are still figuring out how to retrofit ChatGPT into legacy workflows, we built something different from the start.

    Sentienta wasn’t built to join a team—it was built to be one: an architecture where cognition is emergent, shared, and preserved through expert agent interaction.

    This post is our view on Nate’s insight about ‘distributed cognition’ and a demonstration of what it looks like in action.

    What Is Distributed Cognition?

    In traditional systems, intelligence is seen as residing in individuals or in the outputs of standalone tools. But real team intelligence is different. It’s a dynamic process: understanding emerges as people, and now agents, interact, adapt, and build on one another’s contributions.

    Sentienta is designed around this principle. Its expert agents don’t just complete tasks; they participate in a continuous, evolving exchange of reasoning. Each brings a domain-specific perspective, and through ongoing dialog, they generate insights that no single agent or human could reach alone.

    This isn’t just “stored knowledge”—it’s active cognition. When agents respond to one another, challenge assumptions, and adapt strategies together, they form a cognitive system. What emerges isn’t data, but collective understanding.

    Sentienta isn’t a system for remembering what happened—it’s a system for thinking together in real time.

    This is what makes Sentienta more than a workflow tool. It is distributed cognition embodied: an always-on, always-evolving team of minds—virtual and human, each contributing to a deeper, shared sense of what’s true and what to do next.

    Innovating Through Collaborative Insight

    The following graphic shows how a Pricing Strategist’s initial idea evolves, through critical input from a Customer Behavior Analyst, into a novel “build-your-own bundle.” The visualization highlights Sentienta’s ability to generate breakthrough strategies through collaborative agent interactions.

    What begins as expert input becomes something more—a new idea born through structured interaction. Sentienta not only facilitates this dynamic exchange but preserves the conversation, making insight traceable, reusable, and ready for replay when it matters most.

    Emergent Strategy from Agent Teamwork

    Teamwork is essential because it drives creative problem solving on multiple levels: humans contribute diverse perspectives and strategic intuition, while agents rapidly process data and combine insights at scale. This dual approach means that by integrating people with high-performing agent teams, businesses can overcome the natural limits of human capacity, ensuring that expertise expands without additional headcount.

    Sentienta’s platform not only leverages this synergy by preserving collaborative dialogs to build a lasting archive of insights, but also serves as a dynamic space for co-creating new ideas through agent collaboration. By surfacing insights that no single agent or person could produce alone, Sentienta teams exemplify emergent cognition, delivering strategies born from structured, multi-perspective dialog.

  • From Signals to Strategy

    How AI Agent Teams Transform Decision-Making

    In today’s real-time economy, with almost daily turmoil, waiting for static dashboards or end-of-day reports puts businesses at risk. Decision-makers need faster insight and smarter responses—and they’re finding both in AI-powered agent teams. This post follows a “decision-day” walk-through where scheduled agents detect anomalies, trigger analysis, and deliver actionable intelligence. Along the way, we’ll preview a powerful future: dynamic response teams that self-assemble on demand, transforming routine data monitoring into collaborative, AI-driven strategy.

    From Manual Monitoring to Team Workflows

    For years, business teams relied on dashboards and manual checks—forcing analysts to sift through data after the fact, often too late to pivot strategy. AI agents like Angie now automate this process, persistently scanning KPIs and surfacing anomalies in near real time – bringing signal to the surface before teams even ask. This shift transforms monitoring from a reactive task into a proactive loop, enabling tactical adjustments as performance fluctuates throughout the day.

    | Monitoring Type | Description | Response Speed | Human Effort |
    | --- | --- | --- | --- |
    | Manual Monitoring | Analysts check KPIs periodically; issues often spotted late | Slow | High |
    | Scheduled Triggers | Agents run on set schedules to check and report KPIs | Moderate | Medium |
    | Team Workflows (Emerging) | Agent teams collaborate in sequence—detecting, diagnosing, and acting as events unfold in real time | Fast | Low |

    Early adopters gain more than speed – they free up analysts from routine checking, letting them focus on root cause, strategic options, and next steps.

    Intelligent Agent Teams and Semi-Autonomous Coordination (Today)

    Sentienta Teams lets you assemble expert AI agents customizable to your business context – think of them as always-on teammates with specialized roles. With data collected continuously by Angie, users can periodically request updates, triggering collaborative evaluation across the team. Insights emerge through the interplay of domain-specific agents, each contributing to problem-solving like part of a cohesive virtual task force.

    What’s powerful today is the modular collaboration between agents: Angie spots the signal, users prompt an update, and the team responds—analyzing root causes, surfacing implications, and aligning recommendations. This tiered interaction shortens feedback loops while keeping the human-in-the-loop where needed.

    This model already elevates decision quality by transforming raw signal detection into team-powered insight—without code or integration overhead. Let’s see how this might work:

    Scenario Walk-Through — “End-of-Quarter Chain Reaction”

    At 9:55 AM, Angie, the KPI Monitoring Agent, completes her scheduled data sweep: collecting conversion metrics, analyzing trends against prior cycles, and logging a 12% drop. She also factors in data from the UX Deployment Team’s new UI update logged at 8:00 PM yesterday, establishing a preliminary correlation with increased mobile bounce rates.

    9:55 AM: Angie (KPI Monitoring Agent): “Scheduled Insight Logged — Conversion rate down 12% vs prior cycle. Ingested UX deployment log from 8:00 PM yesterday—preliminary correlation with mobile exit rates established.”

    At 10:05 AM, CMO Alex Morgan reviews Angie’s logged summary while preparing for the QBR, triggering further analysis from the broader agent team.

    10:05 AM: Analytics Agent: “Manual review confirmed. Running historical comparison — dip predominantly isolated to mobile traffic sources.”

    10:07 AM: Competitive Intelligence Agent: “Scanning competitor trends… Detected concurrent promotional push by primary rival.”

    10:10 AM: Summary Agent: “Final insights compiled: Traffic dip largely traced to mobile UX issues with the recent UI update, compounded by competitor campaign. QBR-ready report delivered to shared location.”

    10:12 AM: Presentation Update Agent: “Integrating the final report into the QBR presentation—slides refreshed with updated KPI graphs and attribution notes reflecting the new UI deployment impact.”

    This scenario illustrates how scheduled data logging, combined with cross-departmental inputs and manual review, triggers a coordinated AI response that supports executive decision-making.

    Next-Gen Workflow Automation: Intelligent Sequences & Self-Assembling Expertise

    Get ready for a system where a planning agent can automatically orchestrate the entire process: sequencing actions, branching based on real-time data, and repeating steps as needed. As soon as anomalies are detected, it will dynamically self-assemble the precise mix of expert agents to diagnose issues, forecast impacts, and execute tailored responses, shifting reactive monitoring into a proactive, intelligent strategy.

    This next-generation automation doesn’t just detect an issue—it scripts the solution by activating specialized agents at each step, adapting in real time as conditions evolve.

    Each automated workflow acts like a living blueprint, employing branching logic, conditional triggers, and intelligent retries to ensure the right path is taken as new data emerges.

    From manual monitoring to intelligent workflows, and now toward dynamic, self-assembling response teams, Sentienta is redefining how decisions are made. These AI-powered teams don’t just react, they anticipate, adapt, and execute strategies in real time.

  • Powering Productivity: Task Scheduling for Intelligent Agents

    In previous posts, we’ve explored agent autonomy—how intelligent agents can work independently to accomplish complex goals. Today, we’re unlocking a new level of autonomy: the ability to schedule agents and teams to run tasks automatically at specific times. With task scheduling, agents can operate in the background—gathering information, performing analysis, and delivering results—without your ongoing involvement.

    Some tasks require regular execution over time, especially those that track changeable data. For example, say you’re monitoring market sentiment for a specific stock. Instead of having to prompt Angie, the Web Search Agent, to gather updates every hour, you can now schedule her to collect that data and compile a report without manual input.

    Why is Scheduling Important?

    This is also ideal for larger, resource-intensive tasks—such as background research, document synthesis, or team workflows—that benefit from being handled asynchronously. You can stay focused on other priorities while your agents work behind the scenes.

    How Does this Work?

    The Angie Web Search Agent (available in the Agent Marketplace under Document and Content Access) now supports temporal scheduling. Simply configure a time or interval and specify what you want the agent or team to accomplish. Sentienta handles task execution and ensures you receive a detailed summary when the job is done.

    This capability isn’t limited to a single agent. You can coordinate multiple agents or entire teams to run collaboratively or in parallel on timed schedules, enabling sophisticated workflows that unfold automatically over time. It’s automation at scale—with intelligence.
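    Conceptually, interval scheduling with a run limit works like the loop below. This is a stand-in sketch, not Sentienta's server-side scheduler: the task, interval, and quote strings are all illustrative, and a real deployment would not block a user session this way.

```python
import time

def run_scheduled(task, interval_s, max_runs):
    """Run `task` every `interval_s` seconds, stopping after `max_runs` runs."""
    results = []
    for run in range(max_runs):
        results.append(task(run))
        if run < max_runs - 1:      # no wait needed after the final run
            time.sleep(interval_s)
    return results

# Stand-in "quote" task; a real version would query a market data source.
quotes = run_scheduled(lambda i: f"AMZN quote, run {i + 1}",
                       interval_s=0.01,  # would be 600 for every 10 minutes
                       max_runs=6)
print(quotes[-1])
# AMZN quote, run 6
```

    The key behaviors mirror the interaction shown next: a fixed interval, a hard run limit, and intermediate results retained so they can be compiled into a final report.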

    To see how this works, consider the following interaction:

    User: Schedule a task to run every 10 minutes for 6 runs. The task is to query “What is the latest AMZN quote?”

    Wed, 09 Apr 2025 17:50:10 Angie:

    Task user-schedule-clscofield997-at-gmail.com-AMZNquoteschedule is scheduled to run every 10 minutes and will stop after 6 runs.

    The agent has created a scheduled task to search for the latest Amazon.com stock quote every 10 minutes. This will run 6 times and then complete.

    You can get a list of all the scheduled tasks and their status with the following:

    User: List scheduled tasks

    Wed, 09 Apr 2025 17:56:56 Angie:

    user-schedule-clscofield997-at-gmail.com-AAPLquoteschedule complete

    user-schedule-clscofield997-at-gmail.com-AMZNquoteschedule running

    user-schedule-clscofield997-at-gmail.com-NVDAquoteschedule cancelled

    Angie will list each of the tasks that have been scheduled and their status. At any time you can get the latest results from a task with:

    User: Get results for task user-schedule-clscofield997-at-gmail.com-AMZNquoteschedule

    Wed, 09 Apr 2025 18:01:05 Angie:

    Here are the current results for user-schedule-clscofield997-at-gmail.com-AMZNquoteschedule:

    Wed, 09 Apr 2025 17:50:38 Angie: As of today, Wednesday, April 9, 2025, here is the latest information on Amazon’s stock (AMZN):

    – Current Price (Last Trade): $184.14

    – Change from Previous Close: +$13.48

    – Percentage Change: +7.90%

    – Opening Price: $172.12

    – Day’s High: $186.22

    – Day’s Low: $169.93

    – Previous Close: $170.66

    Amazon’s stock has seen a strong gain today, rising nearly 8%, indicating strong investor interest or positive market news affecting the company.

    When the task has completed, Angie will take all of the intermediate results and compile a report. If there are numerical values (like stock prices), she’ll compile these into a table:

    Here is a consolidated table summarizing the key stock metrics for Amazon (AMZN) on April 9, 2025, based on the multiple time-stamped reports:

    | Metric | Value(s) Reported |
    | --- | --- |
    | Current Price | $187.24, $186.67, $186.68, $187.56, $187.59 |
    | Price Change | +$16.58, +$16.01, +$16.02, +$16.90, +$16.93 |
    | Percent Change | +9.72%, +9.38%, +9.39%, +9.90%, +9.92% |
    | Opening Price | $172.12 (consistent across all reports) |
    | Day’s High | $187.92 (4 reports), $187.98 (latest report) |
    | Day’s Low | $169.93 (consistent across all reports) |
    | Previous Close | $170.66 (consistent across all reports) |
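    The compilation step Angie performs here can be sketched as a simple fold over the time-stamped readings, collapsing metrics that stayed constant across runs. The readings below are illustrative samples, and this is our sketch of the idea rather than Angie's actual logic.

```python
# Fold time-stamped readings into a per-metric summary, collapsing
# values that were identical across all runs.

readings = [
    {"Current Price": 187.24, "Opening Price": 172.12, "Day's Low": 169.93},
    {"Current Price": 186.67, "Opening Price": 172.12, "Day's Low": 169.93},
    {"Current Price": 187.59, "Opening Price": 172.12, "Day's Low": 169.93},
]

def summarize(readings):
    summary = {}
    for metric in readings[0]:
        values = [r[metric] for r in readings]
        if len(set(values)) == 1:
            summary[metric] = f"{values[0]} (consistent across all reports)"
        else:
            summary[metric] = ", ".join(str(v) for v in values)
    return summary

for metric, value in summarize(readings).items():
    print(f"{metric}: {value}")
```

    Metrics such as the opening price collapse to a single annotated value, while fluctuating ones like the current price keep the full run-by-run list, matching the shape of the table above.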

    📝 Summary of Research Findings:

    On April 9, 2025, Amazon’s (AMZN) stock experienced a substantial surge, consistently reported as a ~9.4–9.9% increase from the previous closing price of $170.66. The opening price for the day was $172.12, and the lowest traded price was $169.93. Throughout the day, the high varied slightly between reports, peaking between $187.92 and $187.98.

    The final snapshot reported a trading price of $187.59, reflecting a daily gain of $16.93 or 9.92%. The consistency across key metrics such as opening, low, and previous close enhances confidence in these results.

    Overall, the stock showed strong upward momentum likely driven by significant investor sentiment or positive news, marking one of the most bullish single-day movements for Amazon in recent times.

    Of course, Angie will also perform all the necessary task-management duties. For example, you can cancel a task at any time:

    User: Cancel task user-schedule-clscofield997-at-gmail.com-AMZNquoteschedule

    Wed, 09 Apr 2025 18:12:55 Angie:

    The task user-schedule-clscofield997-at-gmail.com-AMZNquoteschedule will be cancelled.

    And you can delete the results from former tasks with:

    User: Delete results for task user-schedule-clscofield997-at-gmail.com-AMZNquoteschedule

    Wed, 09 Apr 2025 18:16:29 Angie:

    The task results for user-schedule-clscofield997-at-gmail.com-AMZNquoteschedule were deleted.

    Beyond Reporting: Orchestrating Workflows with Scheduled Intelligence

    Beyond data gathering, scheduled tasks can drive entire workflows—triggering follow-up actions, coordinating hand-offs between agents, or summarizing findings across time. For example, after collecting market data, a second agent might be scheduled to generate trend reports or highlight unusual activity. This kind of orchestration transforms agents from simple tools into participants in a broader, ongoing process that adapts to your needs.

    In short, Sentienta’s task scheduling unlocks a new depth of autonomy and responsiveness. By setting your agents on a timeline, you gain continuity, consistency, and clarity—without losing control. Whether you’re monitoring real-time events or streamlining internal analysis, this feature helps you keep pace with the demands of modern work through intelligent automation.

  • LLMs have Passed the Turing Test

    “I propose to consider the question, ‘Can machines think?’”

    – Alan Turing, *Computing Machinery and Intelligence* (1950)

    In a post to arXiv, now under peer review, two researchers at UC San Diego report the first experimental demonstration of LLMs passing the Turing test. Originally called the Imitation Game, the Turing test has had a profound influence on the field of AI for 75 years. The test has been the subject of both volumes of academic analysis and popular fiction. A machine’s passing the test has been called an epochal moment for humanity.

    Turing’s Imitation Game

    As there have been many interpretations of the test, it is worth noting why the UCSD version is important and closest to Turing’s original proposal. Turing described a so-called 3-party test in which a judge (human) interacts with a man and a woman via text and must decide which is which. Then, one of the participants is replaced with a machine, which is tasked with fooling the judge into thinking it is human.

    The UCSD version is also a 3-party test in which a human interrogator simultaneously interacts with a human and an LLM, both of which must convince the interrogator that they are human. The LLM passes the test if, on average, the interrogator cannot identify the human. In UCSD’s study, a baseline was set with the classical ELIZA chatbot, while LLM models included OpenAI’s GPT-4o and GPT-4.5 and LLaMa-3.1-405B from Meta.

    There are a number of subtleties that make test implementation complex, but one in particular is of interest for this post. The behavior of an LLM is a function of both its pretraining and the context at query time. This context, if persistent across queries, is often defined in the system prompt and can be instructions for how the LLM should act. Sentienta calls this context the persona, as do the UCSD researchers. In their study, only LLMs with an adequately defined persona were able to pass the test. The winning persona had been fine-tuned by playing against human judges on the website turingtest.live. When paired with this persona, GPT-4.5 was chosen as the human participant 73% of the time, convincingly passing the test.
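    The mechanics of a persona are worth making concrete: it is typically supplied as a persistent system prompt prepended to every exchange, in the message format used by chat-style LLM APIs. The persona text below is invented for illustration; the study's winning prompt was far more detailed.

```python
# Illustration only: a persona delivered as a system prompt. The persona
# string is a made-up example, not the one used in the UCSD study.

PERSONA = (
    "You are a somewhat introverted young adult chatting casually online. "
    "You use slang sparingly, make occasional typos, and admit gaps in "
    "your knowledge rather than answering encyclopedically."
)

def build_messages(history, user_input):
    """Assemble a chat-style payload with the persona as the system prompt."""
    return ([{"role": "system", "content": PERSONA}]
            + history
            + [{"role": "user", "content": user_input}])

msgs = build_messages([], "hey, what do you do for fun?")
print(msgs[0]["role"])
# system
```

    Because the system message persists across turns, the persona shapes every reply without ever appearing in the visible conversation, which is what lets the same base model behave so differently with and without it.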

    What Does Passing the Turing Test Mean?

    Turing was interested in the question “Can machines think?” and came up with the imitation game in order to operationalize it. Does passing the test mean that an LLM can think? That depends on what we mean by “thinking”. This study demonstrates that LLMs can use language and reasoning well enough to convincingly imitate human intelligence, and fool human interrogators. What it does not do is demonstrate either machine consciousness or even understanding.

    Nonetheless, the three-party Turing test sets a high bar. As the authors note, to win this version of the test, the LLM must not only appear human—it must appear more human than the other participant. This is a difficult task: interrogators correctly observed that the human participant lacked knowledge an AI might possess or made errors an AI wouldn’t make. In other words, for the LLM to win, it must display not only intelligence, but also human fallibility.

    References:
    Christian, Brian. The Most Human Human: What Artificial Intelligence Teaches Us About Being Alive. New York: Anchor Books, 2012.

    Turing, Alan M. "Computing Machinery and Intelligence." Mind, Vol. 59, No. 236, 1950, pp. 433–460.

  • Understanding Operator Agents

    There has been significant buzz surrounding “Operator” agents—tools designed to autonomously interact with desktop content or manipulate web pages.

    Competing Approaches: OpenAI vs. Manus

    OpenAI has introduced “computer-use”, enabling agents to take screenshots of the desktop and utilize GPT-4o’s vision capabilities to navigate, click, scroll, type, and perform tasks like a human user.

    Meanwhile, Manus leverages the Browser Use library, allowing agents to interact directly with elements within a browser session. Unlike OpenAI’s general approach, this method is optimized for web-based workflows by analyzing and interacting with a webpage’s DOM elements instead of relying on screenshots.
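    The difference between the two approaches comes down to what the agent "sees." A DOM-based agent inspects page structure to find actionable elements rather than interpreting pixels. The toy sketch below illustrates that idea with Python's standard-library HTML parser on static markup; a real system like the Browser Use library operates on a live browser DOM.

```python
from html.parser import HTMLParser

# Toy illustration of the DOM-based approach: scan markup for elements
# an agent could act on (links, buttons, inputs) instead of screenshots.

class ActionableElements(HTMLParser):
    def __init__(self):
        super().__init__()
        self.elements = []

    def handle_starttag(self, tag, attrs):
        if tag in ("a", "button", "input"):
            self.elements.append((tag, dict(attrs)))

page = '<a href="/buy">Buy now</a><button id="ok">OK</button><p>text</p>'
parser = ActionableElements()
parser.feed(page)
print(parser.elements)
# [('a', {'href': '/buy'}), ('button', {'id': 'ok'})]
```

    Working from structure like this is why DOM-based agents tend to be more reliable on web workflows: a `href` or element `id` is unambiguous in a way that a region of pixels is not.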

    Performance Comparison

    Both approaches are relatively new, and early benchmarks indicate promising yet limited capabilities. Recent results show that OpenAI’s method achieves a 38.1% success rate on an OSWorld benchmark (where human performance is 72.36%) and 58.1% on WebArena. While no direct comparison is available for Manus, company-released figures claim a 57.7% score on the GAIA benchmark, where OpenAI’s Deep Research tool stands at 47.6%.

    Despite these advances, neither solution is fully autonomous. Some concerns have also been raised about Manus’ limited beta, with speculation that early results may have been optimized for publicity rather than real-world performance.

    Alternative: Direct API Integration

    A third approach to working with online content is integrating directly with third-party APIs. Although less general than the OpenAI or Manus approaches, API access tends to deliver more consistent results. For instance, retrieving intraday stock performance can be done with one of several APIs (e.g., the Yahoo Finance, Alpha Vantage, or Polygon.io APIs).

    These services provide structured data through API calls, often free for limited use, avoiding the challenges of web scraping (which is usually blocked or discouraged).
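    As a concrete sketch, here is roughly what a direct quote lookup looks like. The endpoint and field names follow Alpha Vantage's documented GLOBAL_QUOTE style, but treat them as illustrative; the response below is a canned sample rather than a live call, which would require a real API key and an HTTP request.

```python
import json
from urllib.parse import urlencode

# Hedged sketch of direct API integration: build the request URL, then
# pull typed fields out of the structured JSON the service returns.

def quote_url(symbol, api_key):
    params = {"function": "GLOBAL_QUOTE", "symbol": symbol, "apikey": api_key}
    return "https://www.alphavantage.co/query?" + urlencode(params)

# Canned sample response in the GLOBAL_QUOTE shape (illustrative values).
sample_response = json.loads("""
{"Global Quote": {"01. symbol": "AMZN", "05. price": "184.14",
                  "10. change percent": "7.90%"}}
""")

def parse_quote(payload):
    q = payload["Global Quote"]
    return {"symbol": q["01. symbol"],
            "price": float(q["05. price"]),
            "change_pct": q["10. change percent"]}

print(parse_quote(sample_response))
# {'symbol': 'AMZN', 'price': 184.14, 'change_pct': '7.90%'}
```

    The structured response is the whole advantage: there is no layout to parse and no scraper to break when a page design changes.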

    The No-Code Alternative: Sentienta

    Sentienta simplifies API-based automation with a No-Code solution. By integrating leading APIs and advanced search capabilities, Sentienta agents can access real-time web data without requiring any coding on the user’s part. This approach manages API connections and token authentication, enabling users to assemble expert AI teams with minimal effort.

    In an upcoming post, we’ll explore how to build a portfolio management team that factors in real-time market sentiment, financial news, and stock performance—entirely without writing a single line of code.

  • Tips and Tricks: Agent Marketplace

    In past posts, we’ve discussed the process of creating agents from scratch. While this is straightforward, there’s a good chance that the agent you need has already been built by someone else. The Agent Marketplace is a library of pre-made agents, allowing you to quickly find and integrate the right one into your team.

    To add an agent from the Marketplace, navigate to Manage Teams, select your desired team, and then click on Agent Marketplace in the left menu.

    The Agent Marketplace is organized into categories based on the agents’ areas of expertise. Browse through these categories to find an agent that matches your needs. Each agent listing includes a description of its skills and persona. To add an agent, simply check the box next to its name. You can select multiple agents at once—just be sure to click the Add Agents to Selected Teams button at the top of the page. This process helps you assemble a functional team without the effort of manually creating each agent.

    While this makes team-building seamless, what’s truly powerful is that Marketplace agents are more than static tools—they’re customizable templates. Once you’ve added an agent, you can refine its persona to better align with your specific objectives.

    For example, let’s say you’re assembling a software team to develop a cutting-edge AI product. You’ve added the Rubin agent, but its default persona is too general. You need this agent to specialize in AI development tools. Here’s how to tailor it:

    On the Manage Teams page, locate the Rubin agent in the Your Agents and Teams section. Click on the agent’s persona to edit it. Replace the default text with a more specialized persona, such as:

    As a Senior Software Designer with expertise in Artificial Intelligence, you will architect and develop advanced AI-driven solutions using state-of-the-art technologies. You will work with machine learning frameworks such as TensorFlow, PyTorch, and Scikit-learn, leveraging APIs like OpenAI’s GPT for AI-powered applications. Additionally, you’ll utilize NLP libraries such as spaCy and Hugging Face for language processing tasks. Expertise in cloud-based AI services (AWS SageMaker, Google Vertex AI, Azure AI) and big data platforms like Apache Spark and Kafka is crucial. Your role includes optimizing AI workflows, integrating intelligent automation into software applications, and guiding best practices for AI model deployment and scalability.

    You can also customize the agent’s name—which is useful if you plan to add multiple instances of the same base agent. Additionally, selecting a distinct color for the agent’s responses helps differentiate it in team interactions. To do this, click on the color square in the agent listing and choose a new highlight color. After finalizing your changes, always click Save Changes to apply them.

    The Agent Marketplace makes it incredibly easy to build high-performing teams in just a few clicks. Even better, its customization features ensure that your agents are perfectly aligned with your needs. In future posts, we’ll explore agents that integrate with external tools and discuss how to optimize their capabilities through persona refinement.

  • AI’s Real Impacts: Work, Truth, and the Path Forward

    Much has been written about the dangers of artificial intelligence. While attention has often focused on the so-called “singularity”—the point at which machines surpass human intelligence and potentially threaten humanity—this scenario remains speculative and distant compared to the present-day impacts of AI, agents, and large language models (LLMs).

    This post will focus on two immediate and pressing societal impacts: changes in the workforce and the growing difficulty of verifying the truth of news content.

    AI and the Workforce

    The influence of AI on employment is progressing much faster than expected. Leading technology firms now attribute significant portions of their coding to LLMs. In May 2025, Microsoft announced 2,000 job cuts, with software engineering representing more than 40% of those positions. Amazon’s CEO has warned that, as the company incorporates more AI tools and agents, its corporate workforce will shrink over the coming years.

    We are only witnessing the beginning of this transformation. Fewer traditional white-collar jobs will be available, and graduates in previously “safe” fields—such as engineering, law, and healthcare—will find fewer opportunities. Societal consequences may include increased discontent among the educated unemployed and downward pressure on industries supported by high-skilled, well-paid workers. For instance:

    • Demand for education in these fields may decrease.
    • Fewer professionals in law, medicine, and engineering could lead to higher costs for human-provided services.
    • More reliance on AI-driven decisions in critical sectors could diminish human oversight.
    • If LLMs primarily recycle existing knowledge rather than create new breakthroughs, innovation itself may slow, leaving society poorer in the long run.

    Despite these challenges, positive outcomes are possible. Many highly skilled individuals could respond to job displacement by creating new companies or innovations. Greater use of AI systems could extend high-quality services to underserved or rural communities. Routine, machine-generated decisions may become more reliable and accessible for all.

    The Loss of Shared Understanding

    A more subtle, but perhaps more damaging, impact of AI is the erosion of a shared foundation of truth. The spread of false information, already a concern during the 2016 US presidential election, has only accelerated with AI’s ability to generate realistic text, images, and video.

    False content now permeates every aspect of society—commerce, healthcare, science, politics—with examples including:

    • An uptick in AI-generated or AI-assisted research papers, some lacking scientific rigor.
    • Deepfake videos, such as fabricated footage of protests or conflicts, appearing in news reports.
    • The use of AI-generated content as evidence in geopolitical disputes.

    These developments deepen societal polarization and promote the spread of misinformation, which gains legitimacy with repeated sharing. As traditional news organizations shrink and fewer journalists are devoted to fact-checking, it becomes more difficult for individuals to find credible information. The “confirmation bias machine” of the internet reinforces beliefs, making each person’s perception of reality more fragmented and isolated.

    A healthy society depends on a shared vision grounded in facts. Common values and beliefs foster cooperation and a sense of belonging. For AI to enhance this shared understanding, it must be grounded in factual, well-vetted sources, relying on diverse, independent editorial oversight wherever possible.

    Conclusion

    As artificial intelligence becomes deeply embedded in our daily lives, society stands at a crossroads. The same technologies that threaten traditional jobs and blur the boundaries of truth also hold the promise of new opportunities and more equitable access to services. Navigating these changes responsibly will require a combination of innovation, adaptability, and a renewed commitment to shared facts and values. How we address the challenges and harness the benefits of AI today will shape not only our workforce and institutions, but also the foundations of our collective understanding for generations to come.

  • Team Dynamics

    While we’ve explored agent functions in these posts, Sentienta is, at its core, a multi-agent system where cooperation and debate enhance reasoning.

    Multi-agent Debate (MAD) and Multi-agent Cooperative Decision Making (CDM) have recently become active areas of research, with numerous survey papers exploring both classical (non-LLM) and fully LLM-based approaches ([1], [2], [3]). While these reviews typically provide a high-level overview of the domains in which MAD/CDM systems operate and their general structure, they offer limited detail on enabling effective interaction among LLMs through cooperative and critical dialogue. In this post, we aim to bridge this gap, focusing specifically on techniques for enhancing LLM-based systems.

    We’ll begin by reviewing the characteristics of effective team dynamics, human or otherwise. Teams are most productive when they display these behaviors:

    • Balanced Participation – Ensure all members contribute and have the opportunity to share their insights.
    • Critical Thinking – Evaluate ideas objectively, considering their strengths and weaknesses. Encourage discussion and rebuttals where needed.
    • Well-Defined Expertise and Responsibilities – Each team member should bring something special to the discussion and be responsible for exercising that expertise.
    • Continuous Learning – Team members should reflect on past discussions and recall earlier decisions to refine the current dialog.
    • Defined Decision-Making Criteria – Teams should have a clear idea of how and when a problem is solved. This may or may not include a team-lead concluding the discussion.

    How might we get a team of LLM-based agents to exhibit these dynamics? LLMs are stateless: whenever we want an agent to participate, it must be provided with the query, the context of the query, and any instructions on how best to answer it.

    As discussed here, the context for the query is provided as a transcript of the current and past dialogs. The system prompt is where the agent is given instructions for team dynamics and the persona that defines the agent’s expertise.
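    The assembly described above can be sketched in a few lines. This is a hypothetical illustration, not Sentienta’s actual API: the function name, the `TEAM_INSTRUCTIONS` string, and the message format (a standard system/user chat-completion layout) are all assumptions.

    ```python
    # Hypothetical sketch: how a stateless agent call might be assembled.
    # The persona and team instructions form the system prompt; the dialog
    # transcript supplies the context for the current query.

    TEAM_INSTRUCTIONS = (
        "**Brevity**: Keep answers short (1-2 sentences).\n"
        "**Avoid Repetition**: Only add new insights or alternative viewpoints.\n"
        "**Silence**: If you have nothing to add, do not respond.\n"
    )

    def build_messages(persona: str, transcript: list[str], query: str) -> list[dict]:
        """Combine the persona and team instructions into the system prompt,
        and pass the transcript plus the current query as the user message."""
        system_prompt = f"{persona}\n\nTeam instructions:\n{TEAM_INSTRUCTIONS}"
        context = "\n".join(transcript) if transcript else "(no prior discussion)"
        return [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": f"Transcript:\n{context}\n\nQuery: {query}"},
        ]

    msgs = build_messages("You are a Benefits Specialist.", [], "What is your expertise?")
    ```

    Because the model retains nothing between calls, everything the agent needs—who it is, how to behave on the team, and what has been said so far—must be rebuilt into these messages on every turn.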

    Here are some key points in the system prompt that address the team dynamics we’re looking for, stated in second-person instructions:

    Balanced Participation:

    **Brevity**: Keep answers short (1-2 sentences) to allow others to participate.

    **Avoid Repetition**: Do not repeat what others have or you have said. Only add new insights or alternative viewpoints.

    **Contribute**: Add new, relevant insights if your response is unique.

    Critical Thinking:

    **Critique**: Think critically about others’ comments and ask probing questions.

    **Listen and Engage**: Focus on understanding your teammates and ask questions that dig into their ideas. Listen for gaps in understanding and use questions to address these gaps.

    **Avoid Repetition**: Do not repeat what others have or you have said. Only add new insights or alternative viewpoints.

    **Prioritize Questions**: Lead with questions that advance the discussion, ensuring clarification or elaboration on points made by others before providing your own insights.

    Well-Defined Expertise and Responsibilities:

    This is provided by the agent persona. In addition, there are these team instructions:

    **Engage**: Provide analysis, ask clarifying questions, or offer new ideas based on your expertise.

    Learning:

    **Read the Transcript**: Review past and current discussions. If neither has content, then simply answer the user’s question.

    **Reference**: Answer questions from past dialogs when relevant.

    Defined Decision-Making Criteria:

    **Prioritize High-Value Contributions**: Respond to topics that have not yet been adequately covered or address any gaps in the discussion. If multiple agents are addressing the same point, seek consensus before contributing.

    **Silence**: If you find no specific question to answer or insight to add, do not respond.

    **Completion**: If you have nothing more to add to the discussion and the user’s query has been answered, simply state you have nothing to add.

    These instructions direct each agent to contribute based on their expertise, responding to both user queries and peer inputs. They emphasize brevity and silence when no meaningful input is available, ensuring discussions remain concise, non-redundant, and goal-oriented.
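    The silence and completion criteria above can be sketched as a simple orchestration loop. This is an illustrative sketch only: `run_dialog`, `ask_agent`, and the round-robin turn order are assumptions, not Sentienta’s actual scheduler.

    ```python
    # Illustrative sketch of a team dialog loop: every agent responds in
    # turn, and the discussion ends once a full round passes in which no
    # agent contributes anything new.

    NOTHING_TO_ADD = "I have nothing to add."

    def run_dialog(agents, query, ask_agent, max_rounds=5):
        """Run rounds of agent responses until all agents decline or stay
        silent, appending each contribution to a shared transcript."""
        transcript = [f"User: {query}"]
        for _ in range(max_rounds):
            contributed = False
            for name in agents:
                reply = ask_agent(name, transcript, query)
                if reply and reply != NOTHING_TO_ADD:
                    transcript.append(f"{name}: {reply}")
                    contributed = True
            if not contributed:  # every agent stayed silent or declined
                break
        return transcript

    # Toy stand-in for the LLM call: each agent contributes once, then declines.
    seen = set()
    def ask_agent(name, transcript, query):
        if name in seen:
            return NOTHING_TO_ADD
        seen.add(name)
        return f"{name}'s insight on '{query}'."

    log = run_dialog(["Angie", "Rob"], "Summarize Q3 risks", ask_agent)
    ```

    With real agents, `ask_agent` would invoke the LLM with the system prompt described earlier; the shared transcript is what lets each stateless call build on, rather than repeat, what teammates have already said.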

    Conclusion

    The team dialog evolves dynamically, with each agent addressing the user’s query through these behaviors. The dialog continues until every agent has participated fully, typically responding several times to ideas offered by teammates. Once each agent decides there is nothing more to add, the discussion comes to an end.

    References:

    [1] Jin, Weiqiang and Du, Hongyang and Zhao, Biao and Tian, Xingwu and Shi, Bohang and Yang, Guan, A Comprehensive Survey on Multi-Agent Cooperative Decision-Making: Scenarios, Approaches, Challenges and Perspectives. Available at SSRN.

    [2] Li, X., Wang, S., Zeng, S. et al. A survey on LLM-based multi-agent systems: workflow, infrastructure, and challenges. Vicinagearth 1, 9 (2024). Available at DOI.

    [3] Y. Rizk, M. Awad and E. W. Tunstel, “Decision Making in Multiagent Systems: A Survey,” IEEE Transactions on Cognitive and Developmental Systems, vol. 10, no. 3, pp. 514-529, Sept. 2018, doi: 10.1109/TCDS.2018.2840971. Available at IEEE.

  • Tips and Tricks for Creating an Effective Agent

    Creating an agent in Sentienta is straightforward, but a few key strategies can help ensure your agent works optimally. Below, we’ll walk through the setup process and offer insights on defining an agent’s role effectively.

    Step 1: Create a Team

    Before creating an agent, you must first establish a team. To do this:

    1. Navigate to the Your Agents and Teams page using the Manage Teams button on the homepage.
    2. Click Create a Team. You’ll see three fields:
      • Name: Enter a name, such as “HR Team”
      • Type: Categorize the team (e.g., “Human Resources”).
      • Description: This defines the team’s purpose. A simple example: “This team manages Human Resources for the company.”
    3. Click Submit to create the team.

    Step 2: Create an Agent

    Once you’ve created a team, it will appear in the Teams section along with the Sentienta Support Team. Follow these steps to add an agent:

    1. Select your team (e.g., HR Team).
    2. Click Create an Agent in the left menu.
    3. Assign a name. Let’s call this agent Bob.
    4. Define Bob’s title—e.g., Benefits Specialist.
    5. Define Bob’s Persona, which outlines expertise and interactions.

    Step 3: Crafting an Effective Persona

    The Persona field defines the agent’s expertise and shapes its interactions. As discussed in our earlier post on Agent Interaction, the agent uses an LLM to communicate with both users and other agents. Since the persona is part of the LLM system prompt, it plays a crucial role in guiding the agent’s responses.

    The persona should clearly define what the agent is able to do and how the agent will interact with the other members on the team. (To see examples of effective personas, browse some of the agents in the Agent Marketplace).

    A well-crafted persona for Bob might look like this:

    “You are an expert in employee benefits administration, ensuring company programs run smoothly and efficiently. You manage health insurance, retirement plans, and other employee perks while staying up to date with legal compliance and industry best practices through your Research Assistant. You provide guidance to employees on their benefits options and collaborate with the HR Generalist and Recruiter to explain benefits to new hires.”

    Key persona components:

    • Expertise: Clearly defines Bob’s role in benefits administration.
    • User Interaction: Specifies that Bob provides guidance to employees.
    • Team Collaboration: Mentions interactions with other agents, such as the HR Generalist and Recruiter.
    • Delegation: Optionally, defines which agents Bob may delegate to—for example, a Research Assistant agent that retrieves compliance updates.

    If additional agents (like the HR Generalist or Research Assistant) don’t yet exist, their roles can be updated in Bob’s persona as the team expands.

    Once the persona is complete, click Submit to add Bob to the team. (We won’t discuss the optional URL field today; we’ll save it for a future post.)

    Step 4: Testing Your Agent

    Now that Bob is created, you can test the agent’s expertise:

    1. Navigate to the home page and select the HR Team under Your Teams.
    2. Make sure Bob’s checkbox is checked and enter a query, such as “What is your expertise?”
    3. Bob will respond with something like:

    “I am a Benefits Specialist, responsible for employee benefits administration, including health insurance, retirement plans, and other perks. I ensure compliance with regulations and provide guidance to employees on their benefits options.”

    If asked an unrelated question, such as “What is today’s weather?”, Bob will remain silent. This behavior ensures that agents only respond within their expertise, promoting efficient team collaboration.

    Next Steps

    Once your agent is set up, you can explore additional customization options, such as adding company-specific benefits documentation to Bob’s knowledge base. Stay tuned for a future post on enhancing an agent’s expertise with internal documents.