Tag: technology

  • LLMs have Passed the Turing Test

    “I propose to consider the question, ‘Can machines think?’”

    – Alan Turing, *Computing Machinery and Intelligence* (1950)

    In a paper posted to arXiv, now under peer review, two researchers at UC San Diego report the first experimental demonstration of LLMs passing the Turing test. Originally called the Imitation Game, the Turing test has had a profound influence on the field of AI for 75 years. It has been the subject of both volumes of academic analysis and popular fiction, and a machine’s passing it has been called an epochal moment for humanity.

    Turing’s Imitation Game

    As there have been many interpretations of the test, it is worth noting why the UCSD version is important and closest to Turing’s original proposal. Turing described a so-called 3-party test in which a human judge interacts with a man and a woman via text and must decide which is which. Then, one of the participants is replaced with a machine that is tasked with fooling the judge into thinking it is human.

    The UCSD version is also a 3-party test in which a human interrogator simultaneously interacts with a human and an LLM, both of which must convince the interrogator that they are human. The LLM passes the test if, on average, the interrogator cannot identify the human. In UCSD’s study, a baseline was set with the classic ELIZA chatbot, while the LLMs tested included OpenAI’s GPT-4o and GPT-4.5 and Meta’s Llama-3.1-405B.

    There are a number of subtleties that make test implementation complex, but one in particular is of interest for this post. The behavior of an LLM is a function of both its pretraining and the context at query time. This context, if persistent across queries, is often defined in the system prompt and typically contains instructions for how the LLM should act. Sentienta calls this context the persona, as do the UCSD researchers. In their study, only LLMs with an adequately defined persona were able to pass the test. The winning persona had been refined by playing against human judges on the website turingtest.live. When paired with this persona, GPT-4.5 was chosen as the human participant 73% of the time, convincingly passing the test.
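    As a back-of-the-envelope check (the session count below is an assumption chosen for illustration, not a figure from the study), an exact binomial test shows how far a 73% selection rate departs from the 50% chance level implied by “the interrogator cannot identify the human”:

```python
from math import comb

def binom_sf(k: int, n: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p): chance of k or more wins."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical numbers: 100 interrogation sessions, 73 won by the LLM.
n_sessions, llm_wins = 100, 73
p_value = binom_sf(llm_wins, n_sessions)  # one-sided, vs. 50% chance
print(f"P(>= {llm_wins}/{n_sessions} wins by chance) = {p_value:.2e}")
```

    At these assumed counts the probability of reaching 73 wins by coin-flip guessing is vanishingly small, which is why a sustained 73% rate is treated as a convincing pass rather than noise.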

    What Does Passing the Turing Test Mean?

    Turing was interested in the question “Can machines think?” and came up with the imitation game in order to operationalize it. Does passing the test mean that an LLM can think? That depends on what we mean by “thinking”. This study demonstrates that LLMs can use language and reasoning well enough to convincingly imitate human intelligence, and fool human interrogators. What it does not do is demonstrate either machine consciousness or even understanding.

    Nonetheless, the three-party Turing test sets a high bar. As the authors note, to win this version of the test, the LLM must not only appear human—it must appear more human than the other participant. This is a difficult task: interrogators correctly observed that the human participant lacked knowledge an AI might possess or made errors an AI wouldn’t make. In other words, for the LLM to win, it must display not only intelligence, but also human fallibility.

    References:
    Christian, Brian. *The Most Human Human: What Artificial Intelligence Teaches Us About Being Alive*. New York: Anchor Books, 2012.

    Turing, Alan M. “Computing Machinery and Intelligence.” *Mind*, Vol. 59, No. 236, 1950, pp. 433–460.

  • An AI Use-case: How will GenAI Change HR?

    We’ve already seen how individual agents and agent teams collaborate—drawing on company-specific data to solve problems and automate complex tasks. In this post, we’ll take a closer look at how these expert teams can transform the way HR operates in your organization.

    A recent study by Forrester Research (A Human-Centered Approach to AI in the Workplace) noted that 70% of people leaders “believe AI will be crucial for success in HR in the next five years” and that 73% of employees are in favor of more AI use in their companies.

    This study identified five use-cases that will drive AI growth in HR departments over the next five years, with leaders ranking them as most valuable in the following order:

    1. AI-driven career path recommendations
    2. Internal gig and job matching
    3. Conversational experience leveraging genAI
    4. Candidate matching
    5. AI-assisted development plans

    In this post we will explore how each of these use-cases can be addressed with a virtual team of agent experts. We’ll consider a Sentienta team managed by an HR Generalist in the organization, with access to employees’ employment histories.

    Employee-data Security

    Note that the agents on this team are managed within the Generalist’s account and employee information is not available outside of this account. That said, one should never include PII like social security numbers or other sensitive data in queries.

    Bias in AI Models

    AI models, including large language models (LLMs), reflect the biases present in the data they were trained on. In HR contexts—such as employee evaluations, communication, or support—this can lead to unintended discrimination or reinforcement of existing inequalities. These models lack awareness of context, fairness, or legality, and may produce seemingly reasonable outputs that subtly encode bias. These models should never operate autonomously in sensitive HR functions. Human oversight is essential to identify errors and ensure decisions remain fair, ethical, and comply with workplace policies.

    AI-driven career path recommendations

    To grow professionally, employees often need help navigating career paths within their organization. HR plays a vital role by understanding job roles, required skills, and employee strengths. With input from managers, HR can match employees to roles that fit their abilities and help chart a clear path for advancement.

    An HR Career Coach agent can assist the HR Generalist in mapping out an employee career development internally. With access to role descriptions, organizational charts, and examples of successful career progressions, the agent can suggest roles that align with the employee’s past experience and future goals.

    To enable this, company documents and employee information can be added to the team dialog using one of the methods described in Integrating Your Own Data into Team Dialogs.

    Internal gig and job matching

    In many mid-size and large organizations, job openings often go unnoticed by employees who could be ideal candidates. HR plays a key role in bridging this gap—matching available roles with employees ready for new challenges. By adding a Career Mobility Specialist agent to the HR team, companies can enhance this process. This agent, equipped with access to all current openings and their job descriptions (in formats like spreadsheets, documents, or PDFs), can collaborate with the HR Generalist to accelerate internal hiring and support employee growth.

    Using this job data alongside each employee’s work history, the agent can surface strong internal matches—helping fill roles efficiently while promoting career development within the company.

    Conversational experience leveraging genAI

    Both new and long-term employees need access to documents that help them understand and navigate the company effectively. These include the Employee Handbook, a company culture deck, possibly a company org chart, and, for new employees, a welcome packet describing tools, logins, workspace, and support channels. Even long-term employees need access to HR-managed information such as their employee benefits and employment history.

    Employees often struggle to find what they need in these sources and frequently turn to HR staff for help. Deploying a Sentienta Assistant to the company website gives employees quick access to the information they need and saves HR staff time and effort.

    Candidate matching

    One of the most recognized uses of AI in HR is talent sourcing—screening large numbers of resumes, which is both tedious and time consuming. Tools ranging from keyword filters to powerful language models now make this process much simpler.

    Like the Career Mobility Specialist agent that helps with internal moves, a Talent Matching agent can evaluate open roles by comparing them to internal job descriptions and required qualifications. What’s different is that it also reviews resumes from external applicants or integrates with sourcing agency APIs. With both agents active, companies can consider the best options from inside and outside. The agent’s reasoning is visible in the team dialog, helping the HR Generalist understand and explain placement decisions.

    AI-assisted development plans

    As noted before, employees seek help from HR to grow their careers, and we’ve seen how the HR Career Coach can design a career path that makes sense given the structure of the organization and the skills and aspirations of the employee. However, an important part of this path is the mastery of new skills and experience in roles that are essential for a career’s next level.

    A Career Growth Advisor agent works with the Career Coach and the employee’s work history to determine the current skill level and gaps that must be filled in order to take the next step. This agent creates personalized and contextual development plans that support job mastery, skill building, and long-term career goals. By aligning currently-available learning opportunities with real-time performance and aspirations, employees can take ownership of their growth and thrive in their careers.

    Final Thoughts

    This post explored how expert agents, when supported by rich company data, can bring to life the five HR use-cases identified by Forrester. From streamlining onboarding to improving talent sourcing and internal mobility, these applications show the real potential of automation in reshaping HR operations.

    Forrester research also highlights employee excitement about AI’s growing presence in HR—especially its ability to reduce wait times for answers, simplify decision-making, and provide personalized support. When implemented thoughtfully, AI-powered tools don’t just create efficiencies for HR staff—they also foster a more self-sufficient and empowered workforce.

    As adoption accelerates, organizations have the opportunity to rethink how decisions are made, how careers are built, and how information is shared—making HR a more responsive, data-driven partner for every employee.

  • Understanding Operator Agents

    There has been significant buzz surrounding “Operator” agents—tools designed to autonomously interact with desktop content or manipulate web pages.

    Competing Approaches: OpenAI vs. Manus

    OpenAI has introduced “computer-use”, enabling agents to take screenshots of the desktop and utilize GPT-4o’s vision capabilities to navigate, click, scroll, type, and perform tasks like a human user.

    Meanwhile, Manus leverages the Browser Use library, allowing agents to interact directly with elements within a browser session. Unlike OpenAI’s general approach, this method is optimized for web-based workflows by analyzing and interacting with a webpage’s DOM elements instead of relying on screenshots.

    Performance Comparison

    Both approaches are relatively new, and early benchmarks indicate promising yet limited capabilities. Recent results show that OpenAI’s method achieves a 38.1% success rate on the OSWorld benchmark (where human performance is 72.36%) and 58.1% on WebArena. While no direct comparison is available for Manus, company-released figures claim a 57.7% score on the GAIA benchmark, where OpenAI’s Deep Research tool stands at 47.6%.

    Despite these advances, neither solution is fully autonomous. Some concerns have also been raised about Manus’ limited beta, with speculation that early results may have been optimized for publicity rather than real-world performance.

    Alternative: Direct API Integration

    A third approach to working with online content is integrating directly with third-party APIs. Although less general than OpenAI’s or Manus’s approach, API access tends to deliver more consistent results. For instance, retrieving intraday stock performance can be done with one of several APIs (e.g., the Yahoo Finance, Alpha Vantage, or Polygon.io APIs).

    These services provide structured data through API calls, often free for limited use, avoiding the challenges of web scraping (which is usually blocked or discouraged).
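    As a minimal sketch of why structured data is easier to work with than scraped HTML, the snippet below parses an intraday payload. The field names are modeled loosely on Alpha Vantage’s intraday JSON but should be treated as illustrative—real providers differ in their exact schemas:

```python
# Illustrative intraday payload; real providers differ in field names.
sample_response = {
    "Meta Data": {"2. Symbol": "IBM", "4. Interval": "5min"},
    "Time Series (5min)": {
        "2024-01-02 09:35:00": {"1. open": "162.10", "4. close": "162.45"},
        "2024-01-02 09:30:00": {"1. open": "161.80", "4. close": "162.10"},
    },
}

def latest_close(payload: dict) -> float:
    """Return the close price of the most recent bar in the series."""
    series = payload["Time Series (5min)"]
    latest_ts = max(series)  # ISO timestamps sort chronologically as strings
    return float(series[latest_ts]["4. close"])

print(latest_close(sample_response))  # close of the 09:35 bar
```

    A few lines of dictionary traversal replace the brittle HTML parsing and anti-scraping workarounds a screen-scraping approach would require.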

    The No-Code Alternative: Sentienta

    Sentienta simplifies API-based automation with a No-Code solution. By integrating leading APIs and advanced search capabilities, Sentienta agents can access real-time web data without requiring any coding on the user’s part. This approach manages API connections and token authentication, enabling users to assemble expert AI teams with minimal effort.

    In an upcoming post, we’ll explore how to build a portfolio management team that factors in real-time market sentiment, financial news, and stock performance—entirely without writing a single line of code.

  • Tips and Tricks: Agent Marketplace

    In past posts, we’ve discussed the process of creating agents from scratch. While this is straightforward, there’s a good chance that the agent you need has already been built by someone else. The Agent Marketplace is a library of pre-made agents, allowing you to quickly find and integrate the right one into your team.

    To add an agent from the Marketplace, navigate to Manage Teams, select your desired team, and then click on Agent Marketplace in the left menu.

    The Agent Marketplace is organized into categories based on the agents’ areas of expertise. Browse through these categories to find an agent that matches your needs. Each agent listing includes a description of its skills and persona. To add an agent, simply check the box next to its name. You can select multiple agents at once—just be sure to click the Add Agents to Selected Teams button at the top of the page. This process helps you assemble a functional team without the effort of manually creating each agent.

    While this makes team-building seamless, what’s truly powerful is that Marketplace agents are more than static tools—they’re customizable templates. Once you’ve added an agent, you can refine its persona to better align with your specific objectives.

    For example, let’s say you’re assembling a software team to develop a cutting-edge AI product. You’ve added the Rubin agent, but its default persona is too general. You need this agent to specialize in AI development tools. Here’s how to tailor it:

    On the Manage Teams page, locate the Rubin agent in the Your Agents and Teams section. Click on the agent’s persona to edit it. Replace the default text with a more specialized persona, such as:

    As a Senior Software Designer with expertise in Artificial Intelligence, you will architect and develop advanced AI-driven solutions using state-of-the-art technologies. You will work with machine learning frameworks such as TensorFlow, PyTorch, and Scikit-learn, leveraging APIs like OpenAI’s GPT for AI-powered applications. Additionally, you’ll utilize NLP libraries such as spaCy and Hugging Face for language processing tasks. Expertise in cloud-based AI services (AWS SageMaker, Google Vertex AI, Azure AI) and big data platforms like Apache Spark and Kafka is crucial. Your role includes optimizing AI workflows, integrating intelligent automation into software applications, and guiding best practices for AI model deployment and scalability.

    You can also customize the agent’s name—which is useful if you plan to add multiple instances of the same base agent. Additionally, selecting a distinct color for the agent’s responses helps differentiate it in team interactions. To do this, click on the color square in the agent listing and choose a new highlight color. After finalizing your changes, always click Save Changes to apply them.

    The Agent Marketplace makes it incredibly easy to build high-performing teams in just a few clicks. Even better, its customization features ensure that your agents are perfectly aligned with your needs. In future posts, we’ll explore agents that integrate with external tools and discuss how to optimize their capabilities through persona refinement.

  • AI’s Real Impacts: Work, Truth, and the Path Forward

    Much has been written about the dangers of artificial intelligence. While attention has often focused on the so-called “singularity”—the point at which machines surpass human intelligence and potentially threaten humanity—this scenario remains speculative and distant compared to the present-day impacts of AI, agents, and large language models (LLMs).

    This post will focus on two immediate and pressing societal impacts: changes in the workforce and the growing difficulty of verifying the truth of news content.

    AI and the Workforce

    The influence of AI on employment is progressing much faster than expected. Leading technology firms now attribute significant portions of their coding to LLMs. In May 2025, Microsoft announced 2,000 job cuts, with software engineering representing more than 40% of those positions. Amazon’s CEO has warned that, as the company incorporates more AI tools and agents, its corporate workforce will shrink over the coming years.

    We are only witnessing the beginning of this transformation. Fewer traditional white-collar jobs will be available, and graduates in previously “safe” fields—such as engineering, law, and healthcare—will find fewer opportunities. Societal consequences may include increased discontent among the educated unemployed and downward pressure on industries supported by high-skilled, well-paid workers. For instance:

    • Demand for education in these fields may decrease.
    • Fewer professionals in law, medicine, and engineering could lead to higher costs for human-provided services.
    • More reliance on AI-driven decisions in critical sectors could diminish human oversight.
    • If LLMs primarily recycle existing knowledge rather than create new breakthroughs, innovation itself may slow, leaving society poorer in the long run.

    Despite these challenges, positive outcomes are possible. Many highly skilled individuals could respond to job displacement by creating new companies or innovations. Greater use of AI systems could extend high-quality services to underserved or rural communities. Routine, machine-generated decisions may become more reliable and accessible for all.

    The Loss of Shared Understanding

    A more subtle, but perhaps more damaging, impact of AI is the erosion of a shared foundation of truth. The spread of false information, already a concern during the 2016 US presidential election, has only accelerated with AI’s ability to generate realistic text, images, and video.

    False content now permeates every aspect of society—commerce, healthcare, science, politics—with examples including:

    • An uptick in AI-generated or AI-assisted research papers, some lacking scientific rigor.
    • Deepfake videos, such as fabricated footage of protests or conflicts, appearing in news reports.
    • The use of AI-generated content as evidence in geopolitical disputes.

    These developments deepen societal polarization and promote the spread of misinformation, which gains legitimacy with repeated sharing. As traditional news organizations shrink and fewer journalists are devoted to fact-checking, it becomes more difficult for individuals to find credible information. The “confirmation bias machine” of the internet reinforces beliefs, making each person’s perception of reality more fragmented and isolated.

    A healthy society depends on a shared vision grounded in facts. Common values and beliefs foster cooperation and a sense of belonging. For AI to enhance this shared understanding, it must be grounded in factual, well-vetted sources, relying on diverse, independent editorial oversight wherever possible.

    Conclusion

    As artificial intelligence becomes deeply embedded in our daily lives, society stands at a crossroads. The same technologies that threaten traditional jobs and blur the boundaries of truth also hold the promise of new opportunities and more equitable access to services. Navigating these changes responsibly will require a combination of innovation, adaptability, and a renewed commitment to shared facts and values. How we address the challenges and harness the benefits of AI today will shape not only our workforce and institutions, but also the foundations of our collective understanding for generations to come.

  • Integrating Your Own Data into Team Dialogs

    Sentienta agents are based on best-of-class LLMs, which means that they have been trained on vast stores of online content. However, this training does not include current data, nor does it include your proprietary content. In a future post, we’ll discuss how your agent teams can access and utilize current online data, but today I want to talk about loading your own content into your team dialogs.

    An Easy Way: Copy-and-Paste

    Sentienta provides several mechanisms for entering your content into team discussions. Perhaps the easiest method is to simply copy text that you want your team to know about onto the clipboard and paste it into the query box.

    You can add a question about the content to the end of what you’ve pasted so that the team has some context for what you added. This method works for short passages when you want to add perhaps a few paragraphs to the discussion, but is impractical when working with larger documents.

    Loading Files for the Team

    For larger documents, a better method is to load the file into the dialog. This is done by clicking the paperclip button (located in the toolbar below the query box), and browsing for the file you’d like to load. You can also simply drag-and-drop a file onto the query box.

    The query box will tell you that the file content has been loaded, and you can append questions and comments to the content to aid the agents in determining how to use the content for discussion.

    The advantage of this approach is that it ensures that all the agents on the team see the same content and have the same context for discussing and using it in subsequent dialogs.

    A disadvantage of both this method and the first is that the content doesn’t persist indefinitely. Team dialogs become part of each agent’s semantic memory (as discussed here), but this memory is limited in both size and time.

    Persisting Your Content

    There are many cases where you want your agents to retain document knowledge indefinitely. For example, an HR agent that maintains company policies and procedures—since these rarely change. Manually reloading these documents regularly is impractical, so Sentienta offers an agent that can store and retrieve files from its own dedicated folder.

    To see this in action, add the ‘Ed’ agent from the Agent Marketplace under the Document and Content Access section. Simply select the Ed agent and assign it to a team. This agent provides tools for adding individual files or entire folders. You can manage stored files by listing them and removing any that are no longer needed.

    The Ed agent retains these files and can answer questions about them anytime. This approach allows you to load the files once and then add the agent to any team with the stored information. However, unlike the second method discussed, other agents on the team won’t automatically share Ed’s knowledge. Nevertheless, Ed can communicate its information to other agents through the dialog.

    Final Thoughts

    With the methods we’ve discussed here, you can integrate company-specific documents into team dialogs, ensuring that relevant information is always accessible when solving problems. This approach enhances collaboration and keeps your teams aligned with the most current data.

  • Team Dynamics

    While we’ve explored agent functions in these posts, Sentienta is, at its core, a multi-agent system where cooperation and debate enhance reasoning.

    Multi-agent Debate (MAD) and Multi-agent Cooperative Decision Making (CDM) have recently become intense areas of research, with numerous survey papers exploring both classical (non-LLM) and fully LLM-based approaches ([1], [2], [3]). While these reviews typically provide a high-level overview of the domains in which MAD/CDM systems operate and their general structure, they offer limited detail on enabling effective interaction among LLMs through cooperative and critical dialogue. In this post, we aim to bridge this gap, focusing specifically on techniques for enhancing LLM-based systems.

    We’ll begin by reviewing the characteristics of effective team dynamics, human or otherwise. Teams are most productive when they display these behaviors:

    • Balanced Participation – Ensure all members contribute and have the opportunity to share their insights.
    • Critical Thinking – Evaluate ideas objectively, considering their strengths and weaknesses. Encourage discussion and rebuttals where needed.
    • Well-defined Expertise and Responsibilities – Each team member should bring something special to the discussion and be responsible for exercising that expertise.
    • Continuous Learning – Team members should reflect on past discussions and recall earlier decisions to refine the current dialog.
    • Defined Decision-Making Criteria – Teams should have a clear idea of how and when a problem is solved. This may or may not include a team-lead concluding the discussion.

    How might we get a team of LLM-based agents to exhibit these dynamics? LLMs are stateless, which means that whenever we want an agent to participate, it needs to be provided with the query, the context of the query, and any instructions on how best to answer it.

    As discussed here, the context for the query is provided as a transcript of the current and past dialogs. The system prompt is where the agent is given instructions for team dynamics and the persona that defines the agent’s expertise.
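    A minimal sketch of this per-turn assembly follows. The function and field names here are generic illustrations of the stateless pattern, not Sentienta’s actual internals: each turn, the agent receives its persona and team instructions as a system message, plus the running transcript and the current query.

```python
def build_messages(persona: str, team_rules: str,
                   transcript: list[tuple[str, str]], query: str) -> list[dict]:
    """Assemble one stateless agent turn in chat-completion message form."""
    messages = [{"role": "system", "content": f"{persona}\n\n{team_rules}"}]
    for speaker, text in transcript:  # replay the dialog so far as context
        messages.append({"role": "user", "content": f"{speaker}: {text}"})
    messages.append({"role": "user", "content": query})
    return messages

msgs = build_messages(
    persona="You are a Benefits Specialist.",
    team_rules="Keep answers short. Stay silent if you have nothing to add.",
    transcript=[("HR Generalist", "Who handles 401(k) questions?")],
    query="Summarize our retirement options.",
)
print(len(msgs))  # system message + one transcript turn + current query
```

    Because nothing persists inside the model between calls, everything the agent should “remember”—expertise, etiquette, and history—must be rebuilt into this message list on every turn.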

    Here are some key points in the system prompt that address the team dynamics we’re looking for, stated in second-person instructions:

    Balanced Participation:

    **Brevity**: Keep answers short (1-2 sentences) to allow others to participate.

    **Avoid Repetition**: Do not repeat what others have or you have said. Only add new insights or alternative viewpoints.

    **Contribute**: Add new, relevant insights if your response is unique.

    Critical Thinking:

    **Critique**: Think critically about others’ comments and ask probing questions.

    **Listen and Engage**: Focus on understanding your teammates and ask questions that dig into their ideas. Listen for gaps in understanding and use questions to address these gaps.

    **Prioritize Questions**: Lead with questions that advance the discussion, ensuring clarification or elaboration on points made by others before providing your own insights.

    Well-defined Expertise and Responsibilities:

    This is provided by the agent persona. In addition, there are these team instructions:

    **Engage**: Provide analysis, ask clarifying questions, or offer new ideas based on your expertise.

    Learning:

    **Read the Transcript**: Review past and current discussions. If neither has content, simply answer the user’s question.

    **Reference**: Answer questions from past dialogs when relevant.

    Defined Decision-Making Criteria:

    **Prioritize High-Value Contributions**: Respond to topics that have not yet been adequately covered or address any gaps in the discussion. If multiple agents are addressing the same point, seek consensus before contributing.

    **Silence**: If you find no specific question to answer or insight to add, do not respond.

    **Completion**: If you have nothing more to add to the discussion and the user’s query has been answered, simply state you have nothing to add.

    These instructions direct each agent to contribute based on their expertise, responding to both user queries and peer inputs. They emphasize brevity and silence when no meaningful input is available, ensuring discussions remain concise, non-redundant, and goal-oriented.
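    The turn-taking these instructions produce can be sketched as a simple round-robin loop. The agents below are stand-in functions (the real system routes each turn through an LLM call like the one above): the dialog runs in rounds and ends once every agent passes in the same round.

```python
from typing import Callable, Optional

Agent = Callable[[list[str]], Optional[str]]  # reads transcript, may reply

def run_dialog(agents: dict[str, Agent], query: str,
               max_rounds: int = 10) -> list[str]:
    """Round-robin dialog: stop when a full round produces no new replies."""
    transcript = [f"User: {query}"]
    for _ in range(max_rounds):
        spoke = False
        for name, agent in agents.items():
            reply = agent(transcript)
            if reply is not None:            # silence means nothing to add
                transcript.append(f"{name}: {reply}")
                spoke = True
        if not spoke:                        # everyone passed: discussion ends
            break
    return transcript

# Toy agents: each contributes its insight once, then stays silent.
def one_shot(line: str) -> Agent:
    return lambda t: line if not any(line in entry for entry in t) else None

log = run_dialog({"Coach": one_shot("Consider role X."),
                  "Advisor": one_shot("Skill gap: SQL.")}, "Plan my growth.")
print(log)
```

    The `max_rounds` cap is a safety valve assumed for this sketch; the substantive stopping rule is the all-agents-silent round, which mirrors the **Silence** and **Completion** instructions above.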

    Conclusion

    The team dialog will evolve dynamically, with each agent addressing the user’s query through these dynamics. The dialog continues until each agent has participated fully, typically responding several times to ideas offered by teammates. Once every agent decides there is nothing more to add, the discussion comes to an end.

    References:

    [1] Jin, Weiqiang and Du, Hongyang and Zhao, Biao and Tian, Xingwu and Shi, Bohang and Yang, Guan, A Comprehensive Survey on Multi-Agent Cooperative Decision-Making: Scenarios, Approaches, Challenges and Perspectives. Available at SSRN.

    [2] Li, X., Wang, S., Zeng, S. et al. A survey on LLM-based multi-agent systems: workflow, infrastructure, and challenges. Vicinagearth 1, 9 (2024). Available at DOI.

    [3] Y. Rizk, M. Awad and E. W. Tunstel, “Decision Making in Multiagent Systems: A Survey,” IEEE Transactions on Cognitive and Developmental Systems, vol. 10, no. 3, pp. 514–529, Sept. 2018, doi: 10.1109/TCDS.2018.2840971. Available at IEEE.

  • Tips and Tricks for Creating an Effective Agent

    Creating an agent in Sentienta is straightforward, but a few key strategies can help ensure your agent works optimally. Below, we’ll walk through the setup process and offer insights on defining an agent’s role effectively.

    Step 1: Create a Team

    Before creating an agent, you must first establish a team. To do this:

    1. Navigate to the Your Agents and Teams page using the Manage Teams button on the homepage.
    2. Click Create a Team. You’ll see three fields:
      • Name: Enter a name, such as “HR Team”
      • Type: Categorize the team (e.g., “Human Resources”).
      • Description: This defines the team’s purpose. A simple example: “This team manages Human Resources for the company.”
    3. Click Submit to create the team.

    Step 2: Create an Agent

    Once you’ve created a team, it will appear in the Teams section along with the Sentienta Support Team. Follow these steps to add an agent:

    1. Select your team (e.g., HR Team).
    2. Click Create an Agent in the left menu.
    3. Assign a name. Let’s call this agent Bob.
    4. Define Bob’s title—e.g., Benefits Specialist.
    5. Define Bob’s Persona, which outlines expertise and interactions.

    Step 3: Crafting an Effective Persona

    The Persona field defines the agent’s expertise and shapes its interactions. As discussed in our earlier post on Agent Interaction, the agent uses an LLM to communicate with both users and other agents. Since the persona is part of the LLM system prompt, it plays a crucial role in guiding the agent’s responses.

    The persona should clearly define what the agent is able to do and how the agent will interact with the other members on the team. (To see examples of effective personas, browse some of the agents in the Agent Marketplace).

    A well-crafted persona for Bob might look like this:

    “You are an expert in employee benefits administration, ensuring company programs run smoothly and efficiently. You manage health insurance, retirement plans, and other employee perks while staying up to date with legal compliance and industry best practices through your Research Assistant. You provide guidance to employees on their benefits options and collaborate with the HR Generalist and Recruiter to explain benefits to new hires.”

    Key persona components:

    • Expertise: Clearly defines Bob’s role in benefits administration.
    • User Interaction: Specifies that Bob provides guidance to employees.
    • Team Collaboration: Mentions interactions with other agents, such as the HR Generalist and Recruiter.
    • Delegation: Optionally, defines which agents Bob may delegate to—for example, a Research Assistant agent that retrieves compliance updates.

    If additional agents (like the HR Generalist or Research Assistant) don’t yet exist, their roles can be updated in Bob’s persona as the team expands.

    Once the persona is complete, click Submit to add Bob to the team. (We won’t discuss the optional URL field today; we’ll save it for a future post.)

    Step 4: Testing Your Agent

    Now that Bob is created, you can test the agent’s expertise:

    1. Navigate to the home page and select the HR Team below Your Teams.
    2. Make sure Bob’s checkbox is checked and enter a query, such as “What is your expertise?”
    3. Bob will respond with something like:

    “I am a Benefits Specialist, responsible for employee benefits administration, including health insurance, retirement plans, and other perks. I ensure compliance with regulations and provide guidance to employees on their benefits options.”

    If asked an unrelated question, such as “What is today’s weather?”, Bob will remain silent. This behavior ensures that agents only respond within their expertise, promoting efficient team collaboration.
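    As a rough illustration of this gating behavior: in Sentienta the LLM itself judges whether a query falls within the persona’s expertise, but you can think of it as a filter like the stand-in keyword check below (purely a sketch, not the real mechanism).

```python
# Stand-in sketch of expertise gating; in Sentienta the LLM itself decides
# whether a query is in scope, not a keyword list.
EXPERTISE_KEYWORDS = {"benefits", "insurance", "retirement", "perks"}

def respond(query):
    """Return a reply only for in-scope queries; stay silent (None) otherwise."""
    words = set(query.lower().replace("?", "").split())
    if words & EXPERTISE_KEYWORDS:
        return "I can help with that benefits question."
    return None  # off-topic: the agent stays silent
```

    The important point is the None path: an out-of-scope query produces no message at all, so the rest of the team’s dialog isn’t cluttered with irrelevant replies.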

    Next Steps

    Once your agent is set up, you can explore additional customization options, such as adding company-specific benefits documentation to Bob’s knowledge base. Stay tuned for a future post on enhancing an agent’s expertise with internal documents.

  • A Deep-dive into Agents: Agent Autonomy

    In past posts, we’ve explored key aspects of AI agents, including agent memory, tool access, and delegation. Today, we’ll focus on how agents can operate autonomously in the “digital wild” and clarify the distinction between delegation and autonomy.

    Understanding Delegation and Autonomy

    Agent delegation involves assigning a specific task to an agent, often with explicit instructions. In contrast, autonomy refers to agents that operate independently, making decisions without significant oversight.

    Within Sentienta, agents function as collaborative experts, striking a balance between autonomy and delegation for structured yet dynamic problem-solving. Autonomous behavior includes analyzing data, debating strategies, and making decisions without user intervention, while delegated tasks ensure precise execution of specific actions.

    For example, a Business Strategy Team could autonomously assess market trends, identify risks, and refine strategies based on live data. At the same time, these agents might delegate the task of gathering fresh market data to a Web Search Agent, demonstrating how autonomy and delegation complement each other.
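    The division of labor above could be sketched like this. Both functions are hypothetical and exist only to show the distinction: autonomy is the decision, delegation is the handoff.

```python
# Hypothetical two-agent sketch: autonomy is the decision, delegation is
# the handoff. Neither function reflects Sentienta's implementation.
def web_search_agent(topic):
    """Delegated task: fetch fresh external data (stubbed here)."""
    return f"latest data on {topic}"

def strategy_agent(question):
    """Autonomously decide whether fresh data is needed before answering."""
    if "market" in question.lower():              # autonomous decision
        data = web_search_agent("market trends")  # explicit delegation
        return f"Strategy refined using {data}"
    return "Strategy from internal knowledge"
```
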

    Extending Autonomy Beyond Internal Systems

    Sentienta Assistant agents and teams can also function beyond internal environments, operating autonomously on third-party platforms. Whether embedded as intelligent assistants or collaborating in external workflows, these agents dynamically adapt by responding to queries, analyzing evolving data, and refining recommendations—all without requiring continuous oversight.

    Practical Applications of Autonomous Agents

    Below are practical applications showcasing how agents can operate independently or in collaboration to optimize workflows and decision-making.

    • Financial Advisory & Portfolio Management (Single Agent): A financial advisor agent reviews portfolios, suggests adjustments based on market trends, and provides personalized investment strategies.
    • Customer Support Enhancement (Single Agent or Team): A support agent answers queries while a team collaborates to resolve complex issues, escalating cases to specialized agents for billing or troubleshooting.
    • Data-Driven Market Research (Sentienta Team): A multi-agent team tracks competitor activity, gathers insights, and generates real-time market summaries, using delegation for data collection.
    • Legal Document Analysis & Compliance Checks (Single Agent): A legal agent reviews contracts, identifies risk clauses, and ensures regulatory compliance, assisting legal teams with due diligence.
    • Healthcare Support & Patient Triage (Single Agent): A virtual medical assistant assesses symptoms, provides diagnostic insights, and directs patients to appropriate specialists.

    The Future of AI Autonomy in Business

    By combining autonomy with effective delegation, Sentienta agents serve as dynamic problem-solvers across industries. Whether streamlining internal workflows or enhancing real-time decision-making, these AI-driven assistants unlock new possibilities for efficiency, expertise, and scalable innovation.

  • A Deep-dive into Agents: Tool Access

    An important feature of agents is their ability to utilize tools. Of course, there are many examples of software components that use tools as part of their function, but what distinguishes agents is their ability to reason about when to use a tool, which tool to use, and how to utilize the results.

    In this context, a ‘tool’ refers to a software component designed to execute specific functions upon an agent’s request. This broad definition includes utilities such as file content readers, web search engines, and text-to-image generators, each offering capabilities that agents can utilize in responding to queries from users or other agents.

    Sentienta agents can access tools through several mechanisms. The first is when an agent has been pre-configured with a specific set of tools. Several agents in the Agent Marketplace utilize special tools in their roles. For example, the Document Specialist agent (‘Ed’), which you can find in the Document and Content Access section, utilizes Amazon’s S3 to store and read files, tailoring its knowledge to the content you provide.

    Angie, another agent in the Document and Content Access category, enhances team discussions by using a search engine to fetch the latest web results. This is valuable for incorporating the most current data into a team dialog, addressing the typical limitation of LLMs, which lack up-to-the-minute information in their training sets.

    You have the flexibility to go beyond pre-built tools. Another option allows you to create custom tools or integrate third-party ones. If the tool you want to use exposes a REST API that processes structured queries, you can create an agent to call the API (see the FAQ page for more information). Agent ‘Ed’, mentioned earlier, employs such an API for managing files.
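    As a sketch, an agent-facing wrapper around such a REST tool might look like this. The endpoint URL and the {"query": ...} payload shape are assumptions made for illustration; consult the FAQ page for the actual contract.

```python
# Hypothetical wrapper for a tool that exposes a REST API; the payload
# shape {"query": ...} and the JSON reply are assumptions for illustration.
import json
import urllib.request

def build_payload(query):
    """Encode the structured query the tool's API is assumed to expect."""
    return json.dumps({"query": query}).encode("utf-8")

def call_tool_api(url, query):
    """POST the query to the tool's REST endpoint and return its JSON reply."""
    req = urllib.request.Request(
        url,
        data=build_payload(query),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```
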

    Finally, Sentienta supports completely custom agents that embody their own tool use. You might utilize a popular agent framework, such as LangChain, to orchestrate more complex functions and workflows. Exposing an API in the form we just discussed will let you integrate this more complex tool-use into your team. Check out the Developers page to see how you can build a basic agent in AWS Lambda. This agent doesn’t do much, but you can see how you might add specialized functions to augment your team’s capabilities.
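    A minimal Lambda handler along those lines might look like the following sketch. It assumes an API Gateway-style proxy event and simply echoes the query; see the Developers page for the real example.

```python
# Sketch of a minimal AWS Lambda agent handler. The event shape is assumed
# to follow API Gateway's proxy format; the echo reply is a placeholder
# for real agent logic.
import json

def lambda_handler(event, context):
    """Answer a structured query with a reply the team can consume."""
    body = json.loads(event.get("body", "{}"))
    query = body.get("query", "")
    answer = f"Echoing your query: {query}"  # replace with real agent logic
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"answer": answer}),
    }
```
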

    In each case, the power of agent tool-use comes from the agent deciding how to use the tool and how to integrate the tool’s results into the team’s dialog. Agents may be instructed by their team to use these tools, or they may decide on their own when, or whether, to use a tool.

    This too is a large subject, and much has been written by others on this topic (see for example here and here). We’ve touched on three mechanisms you can use in Sentienta to augment the power of your agents and teams.

    In a future post we’ll discuss how agents interact in teams and how you can control their interactions through tailored personas.