Tag: Sentienta

  • Adding Real-time Data to Your Agent Dialogs

    One of the biggest challenges with LLMs is their limited access to new information. These massive models take months to train, followed by additional months of testing and refining safeguards to protect users from harmful content and unauthorized queries. However, for LLM-based agents to truly be effective, they must have access to the most up-to-date data.

    I’ve been exploring Sentienta’s latest real-time search agent, ‘Angie,’ available in the Agent Marketplace under the ‘Document and Content Access’ section. Angie offers unique capabilities, including real-time news searches, detailed financial report summaries—covering SEC filings, stock news, sentiment analysis, and price history—and the ability to summarize content from open web pages.

    Unlike OpenAI’s and Google’s Gemini platforms, Sentienta’s Angie agent offers these capabilities without any coding or integration work: a no-code solution for seamlessly incorporating real-time data into team discussions.

    Under the covers, the agent draws on an integrated tool suite, intelligently selecting the best tool for each task. For content searches, there are tools that integrate directly with third-party APIs, so you don’t have to manage API keys and subscriptions; you simply get the most up-to-date content when you need it. It just works.
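
    To make that idea concrete, here is a minimal, purely illustrative sketch of how an agent might route a request to the right tool. The tool names and the simple keyword router are hypothetical; Sentienta handles this selection internally, so you never write code like this.

    ```python
    # Hypothetical sketch of how an agent might route a request to the right tool.
    # Tool names and the keyword-based router are illustrative only.

    def search_news(topic: str) -> str:
        return f"[latest news results for {topic}]"        # would call a news-search API

    def get_sec_filings(ticker: str) -> str:
        return f"[most recent SEC filings for {ticker}]"   # would call a filings API

    def get_price_history(ticker: str) -> str:
        return f"[intraday price history for {ticker}]"    # would call a market-data API

    TOOLS = {
        "news": search_news,
        "sec filings": get_sec_filings,
        "closing price": get_price_history,
    }

    def route(request: str) -> str:
        """Pick the first tool whose trigger phrase appears in the request."""
        for trigger, tool in TOOLS.items():
            if trigger in request.lower():
                return tool("UBER")   # argument extraction omitted for brevity
        return "No suitable tool found."

    print(route("Angie get the latest sec filings for Uber"))
    ```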

    Here are some examples of how you can use Angie in your teams. Suppose you are interested in the latest SEC filings on Uber. I can’t reproduce the full result here, but here is an abbreviated transcript:

    Me: Angie get the latest sec filings for Uber

    Thu, 20 Mar 2025 17:59:26 Angie:

    I will get a report on UBER, from document 10-K

    Thu, 20 Mar 2025 17:59:27 Angie:

    This document has 142 pages.

    Thu, 20 Mar 2025 18:00:25 Angie:

    The document is a 10-K annual report filed by Uber Technologies, Inc. with the SEC for the fiscal year ending December 31, 2024. It contains comprehensive financial and operational details about the company and its activities.

    … (multiple pages of analysis)

    The document also discusses market risks, including interest rate risk, investment risk, and foreign currency risk. It details the company’s investment policies aimed at capital preservation and mitigating risks associated with investments in publicly traded and privately held entities. Foreign currency fluctuations pose a risk to international revenues and expenses, partially offset by incurring expenses in the same foreign currencies. The company uses derivative contracts to manage currency risks but acknowledges that these measures cannot fully eliminate exposure.

    After reading this summary of the 10-K, perhaps you’d like to see the latest news on Uber:

    Me: Angie what is the latest news on Uber?

    Thu, 20 Mar 2025 18:45:29 Angie:

    Here is the best possible answer based on the search results for the latest news on Uber as of Thursday, March 20, 2025:

    Uber Technologies (NYSE: UBER) has been performing strongly in 2025, with its stock up 31.55% year-to-date. Analysts have set a consensus price target of $90.7, indicating a potential 14.16% upside. Uber has also announced a $1.5 billion accelerated share repurchase program as part of its broader $7.0 billion repurchase authorization, showing confidence in its financial position. Additionally, for Q1 2025, Uber expects bookings to grow between 17%-21%, indicating robust operational performance going forward.

    [5 supporting links]

    With that in hand, perhaps you’d like to see the latest closing price for this stock:

    Me: Show me the latest closing price for Uber’s stock

    Thu, 20 Mar 2025 18:53:16 Angie:

    As of the most recent data refresh on March 19, 2025, at 19:55 EST, Uber Technologies Inc. (UBER) had the following stock performance:

    • Last traded price (close): $73.39
    • Opening price: $73.345
    • Daily high: $73.39
    • Daily low: $73.3001
    • Trading volume: 5,430 shares

    This data is based on intraday trading with 5-minute intervals. If you need more details such as historical trends or analyst insights, please let me know!

    There are, of course, many ways to access this data, including a series of Google searches. However, Angie automatically integrates it into your team’s dialog, making it readily available to other agents. My team included a Research Lead, ‘Roger,’ who analyzed Angie’s results. Below is a brief transcript of Roger’s suggested next steps.

    Me: Roger – please summarize what we’ve learned about Uber. Is this a good investment?

    Thu, 20 Mar 2025 18:57:06 Roger:

    Uber has gained 31.55% year-to-date, with analysts targeting $90.7, indicating a 14.16% potential upside. A $1.5 billion share repurchase suggests financial confidence, and 17%-21% bookings growth signals strong operations. The latest closing price was $73.39 as of March 19, 2025. While the outlook appears positive, further analysis of risks, competition, and market conditions is recommended before making an investment decision.

    Me: What does this data tell you about risks?

    Thu, 20 Mar 2025 18:58:43 Roger:

    Uber’s strong year-to-date growth and share repurchase suggest confidence, but risks remain. Potential concerns include market volatility, regulatory challenges, competition from Lyft and autonomous vehicle companies, and macroeconomic factors affecting ride demand. Further analysis of revenue stability and profitability trends is necessary.

    By exploring competitive trends and regulatory changes, I gained additional insights, too many to show here. This demonstrates the power of Angie as a research assistant and the strength of a team that can analyze data to highlight what truly matters. In a future post, I’ll showcase how the Research Lead delegates research tasks to Angie and compiles an investment opportunities report—without writing a single line of code.

  • Should We Pursue Machine Consciousness or Is That a Very Bad Idea?

    In past posts (Why Sentienta? and Machine Consciousness: Simulation vs Reality), we’ve explored the controversial issue of machine consciousness. This field is gaining attention, with dedicated research journals offering in-depth analysis (e.g., the Journal of Artificial Intelligence and Consciousness and the International Journal of Machine Consciousness). On the experimental front, significant progress has been made in identifying neural correlates of consciousness (for a recent review, see The Current of Consciousness: Neural Correlates and Clinical Aspects).

    Should We Halt Conscious AI Development?

    Despite growing interest, some researchers argue that we should avoid developing conscious machines altogether (Metzinger and Seth). Philosopher Thomas Metzinger, in particular, has advocated for a moratorium on artificial phenomenology—the creation of artificial conscious experiences—until at least 2050.

    Metzinger’s concern is rooted in the idea that conscious machines would inevitably experience “artificial suffering”—subjective states they wish to escape but cannot. A crucial component of suffering, he argues, is self-awareness: for an entity to suffer, it must recognize negative states as happening to itself.

    The Risk of an “Explosion of Negative Phenomenology” (ENP)

    Beyond ethical concerns, Metzinger warns that if conscious machines hold economic value and can be replicated infinitely, we may face an uncontrolled proliferation of suffering—an “explosion of negative phenomenology” (ENP). As moral beings, he believes we are responsible for preventing such an outcome.

    Defining Consciousness: Metzinger’s Epistemic Space Model

    To frame his argument, Metzinger proposes a working definition of consciousness, known as the Epistemic Space Model (ESM):

    “Being conscious means continuously integrating the currently active content appearing in a single epistemic space with a global model of this very epistemic space itself.”

    The concept is concise: consciousness is a space of cognition together with an integrated model of that very space itself. Here, cognition means the continuous processing of new inputs.

    How to Prevent Artificial Suffering

    Metzinger outlines four key conditions that must be met for artificial suffering to occur. If any one condition is blocked, suffering is avoided:

    • Conscious Experience: A machine must first have an ESM to be considered conscious.
    • Possession of a Self-Model: A system can only experience suffering if it possesses a self-model that recognizes negative states as happening to itself and cannot detach from them.
    • Negative States: These are aversive perceptions an entity actively seeks to escape.
    • Transparency: The machine must lack visibility into its own cognitive processes, making negative experiences feel inescapable.

    Notably, these conditions are individually necessary but not necessarily jointly sufficient: if any one of them fails to manifest, artificial suffering does not arise.

    Should We Avoid Suffering at All Costs?

    While Metzinger convincingly argues for avoiding machine suffering, he gives little attention to whether suffering itself might hold value. He acknowledges that suffering has historically been a highly efficient evolutionary mechanism, stating:

    “… suffering established a new causal force, a metaschema for compulsory learning which motivates organisms and continuously drives them forward, forcing them to evolve ever more intelligent forms of avoidance behavior.”

    Indeed, suffering has driven humans toward some of their greatest achievements, fostering resilience and learning. If it has served such a crucial function in human progress, should we entirely exclude it from artificial intelligence?

    Ethical Safeguards for Conscious Machines

    We certainly want to prevent machines from experiencing unnecessary suffering, and Metzinger outlines specific conditions to achieve this. In particular, any machine with a self-model should also be able to externalize or dissociate negative states from itself.

    Is Conscious AI a Moral Imperative?

    Even in its infancy, generative AI has already made breakthroughs in medicine and science. What might the next leap—conscious AI—offer? Might allowing AI to experience consciousness (and by extension, some level of suffering) be a necessity for the pursuit of advanced knowledge?

    While we don’t yet need definitive answers, the conversation around ‘post-biotic’ consciousness is just beginning. As we approach this technological threshold, we must continue to ask: what should be done, and what must never be done?

  • Understanding Operator Agents

    There has been significant buzz (and here and here) surrounding “Operator” agents—tools designed to autonomously interact with desktop content or manipulate web pages.

    Competing Approaches: OpenAI vs. Manus

    OpenAI has introduced “computer-use”, enabling agents to take screenshots of the desktop and utilize GPT-4o’s vision capabilities to navigate, click, scroll, type, and perform tasks like a human user.

    Meanwhile, Manus leverages the Browser Use library, allowing agents to interact directly with elements within a browser session. Unlike OpenAI’s general approach, this method is optimized for web-based workflows by analyzing and interacting with a webpage’s DOM elements instead of relying on screenshots.

    Performance Comparison

    Both approaches are relatively new, and early benchmarks indicate promising yet limited capabilities. Recent results show that OpenAI’s method achieves a 38.1% success rate on an OSWorld benchmark (where human performance is 72.36%) and 58.1% on WebArena. While no direct comparison is available for Manus, company-released figures claim a 57.7% score on the GAIA benchmark, where OpenAI’s Deep Research tool stands at 47.6%.

    Despite these advances, neither solution is fully autonomous. Some concerns have also been raised about Manus’ limited beta, with speculation that early results may have been optimized for publicity rather than real-world performance.

    Alternative: Direct API Integration

    A third approach to working with online content is integrating directly with third-party APIs. Although less general than the OpenAI or Manus approaches, direct API access tends to deliver more consistent results. For instance, retrieving intraday stock performance can be done with one of several APIs (e.g., the Yahoo Finance API, Alpha Vantage API, or Polygon.io API).

    These services provide structured data through API calls, often free for limited use, avoiding the challenges of web scraping (which is usually blocked or discouraged).
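
    As a rough illustration of the API route, the intraday figures shown earlier could be pulled with a few lines of Python using the community yfinance package (an assumption about tooling, not a description of what Sentienta or Angie does internally):

    ```python
    # Minimal sketch: fetch recent 5-minute bars for UBER with the yfinance package.
    # Requires: pip install yfinance
    import yfinance as yf

    bars = yf.Ticker("UBER").history(period="1d", interval="5m")  # intraday, 5-minute bars

    last = bars.iloc[-1]   # most recent bar
    print(f"Close:  {last['Close']:.2f}")
    print(f"Open:   {last['Open']:.4f}")
    print(f"High:   {last['High']:.2f}")
    print(f"Low:    {last['Low']:.4f}")
    print(f"Volume: {int(last['Volume'])} shares")
    ```

    Even then, you still have to install the package, handle rate limits and key management for other providers, and wire the results into your workflow, which is exactly the integration effort the no-code approach removes.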

    The No-Code Alternative: Sentienta

    Sentienta simplifies API-based automation with a No-Code solution. By integrating leading APIs and advanced search capabilities, Sentienta agents can access real-time web data without requiring any coding on the user’s part. This approach manages API connections and token authentication, enabling users to assemble expert AI teams with minimal effort.

    In an upcoming post, we’ll explore how to build a portfolio management team that factors in real-time market sentiment, financial news, and stock performance—entirely without writing a single line of code.

  • Tips and Tricks: Agent Marketplace

    In past posts, we’ve discussed the process of creating agents from scratch. While this is straightforward, there’s a good chance that the agent you need has already been built by someone else. The Agent Marketplace is a library of pre-made agents, allowing you to quickly find and integrate the right one into your team.

    To add an agent from the Marketplace, navigate to Manage Teams, select your desired team, and then click on Agent Marketplace in the left menu.

    The Agent Marketplace is organized into categories based on the agents’ areas of expertise. Browse through these categories to find an agent that matches your needs. Each agent listing includes a description of its skills and persona. To add an agent, simply check the box next to its name. You can select multiple agents at once—just be sure to click the Add Agents to Selected Teams button at the top of the page. This process helps you assemble a functional team without the effort of manually creating each agent.

    While this makes team-building seamless, what’s truly powerful is that Marketplace agents are more than static tools—they’re customizable templates. Once you’ve added an agent, you can refine its persona to better align with your specific objectives.

    For example, let’s say you’re assembling a software team to develop a cutting-edge AI product. You’ve added the Rubin agent, but its default persona is too general. You need this agent to specialize in AI development tools. Here’s how to tailor it:

    On the Manage Teams page, locate the Rubin agent in the Your Agents and Teams section. Click on the agent’s persona to edit it. Replace the default text with a more specialized persona, such as:

    As a Senior Software Designer with expertise in Artificial Intelligence, you will architect and develop advanced AI-driven solutions using state-of-the-art technologies. You will work with machine learning frameworks such as TensorFlow, PyTorch, and Scikit-learn, leveraging APIs like OpenAI’s GPT for AI-powered applications. Additionally, you’ll utilize NLP libraries such as spaCy and Hugging Face for language processing tasks. Expertise in cloud-based AI services (AWS SageMaker, Google Vertex AI, Azure AI) and big data platforms like Apache Spark and Kafka is crucial. Your role includes optimizing AI workflows, integrating intelligent automation into software applications, and guiding best practices for AI model deployment and scalability.

    You can also customize the agent’s name—which is useful if you plan to add multiple instances of the same base agent. Additionally, selecting a distinct color for the agent’s responses helps differentiate it in team interactions. To do this, click on the color square in the agent listing and choose a new highlight color. After finalizing your changes, always click Save Changes to apply them.

    The Agent Marketplace makes it incredibly easy to build high-performing teams in just a few clicks. Even better, its customization features ensure that your agents are perfectly aligned with your needs. In future posts, we’ll explore agents that integrate with external tools and discuss how to optimize their capabilities through persona refinement.

  • Integrating Your Own Data into Team Dialogs

    Sentienta agents are based on best-of-class LLMs, which means that they have been trained on vast stores of online content. However, this training does not include current data, nor does it include your proprietary content. In a future post, we’ll discuss how your agent teams can access and utilize current online data, but today I want to talk about loading your own content into your team dialogs.

    An Easy Way: Copy-and-Paste

    Sentienta provides several mechanisms for entering your content into team discussions. Perhaps the easiest method is to simply copy text that you want your team to know about onto the clipboard and paste it into the query box.

    You can add a question about the content to the end of what you’ve pasted so that the team has some context for what you added. This method works for short passages when you want to add perhaps a few paragraphs to the discussion, but is impractical when working with larger documents.

    Loading Files for the Team

    For larger documents, a better method is to load the file into the dialog. This is done by clicking the paperclip button (located in the toolbar below the query box), and browsing for the file you’d like to load. You can also simply drag-and-drop a file onto the query box.

    The query box will tell you that the file content has been loaded, and you can append questions and comments to the content to aid the agents in determining how to use the content for discussion.

    The advantage of this approach is that it ensures that all the agents on the team see the same content and have the same context for discussing and using it in subsequent dialogs.

    A disadvantage of both this method and the first is that the content doesn’t persist indefinitely. Team dialogs become part of each agent’s semantic memory (as discussed here), but this memory is limited in both size and time.

    Persisting Your Content

    There are many cases where you want your agents to retain document knowledge indefinitely. For example, an HR agent might maintain company policies and procedures, documents that rarely change. Manually reloading these documents regularly is impractical, so Sentienta offers an agent that can store and retrieve files from its own dedicated folder.

    To see this in action, add the ‘Ed’ agent from the Agent Marketplace under the Document and Content Access section. Simply select the Ed agent and assign it to a team. This agent provides tools for adding individual files or entire folders. You can manage stored files by listing them and removing any that are no longer needed.

    The Ed agent retains these files and can answer questions about them anytime. This approach allows you to load the files once and then add the agent to any team with the stored information. However, unlike the second method discussed, other agents on the team won’t automatically share Ed’s knowledge. Nevertheless, Ed can communicate its information to other agents through the dialog.

    Final Thoughts

    With the methods we’ve discussed here, you can integrate company-specific documents into team dialogs, ensuring that relevant information is always accessible when solving problems. This approach enhances collaboration and keeps your teams aligned with the most current data.

  • Tips and Tricks for Creating an Effective Agent

    Creating an agent in Sentienta is straightforward, but a few key strategies can help ensure your agent works optimally. Below, we’ll walk through the setup process and offer insights on defining an agent’s role effectively.

    Step 1: Create a Team

    Before creating an agent, you must first establish a team. To do this:

    1. Navigate to the Your Agents and Teams page using the Manage Teams button on the homepage.
    2. Click Create a Team. You’ll see three fields:
      • Name: Enter a name, such as “HR Team”
      • Type: Categorize the team (e.g., “Human Resources”).
      • Description: This defines the team’s purpose. A simple example: “This team manages Human Resources for the company.”
    3. Click Submit to create the team.

    Step 2: Create an Agent

    Once you’ve created a team, it will appear in the Teams section along with the Sentienta Support Team. Follow these steps to add an agent:

    1. Select your team (e.g., HR Team).
    2. Click Create an Agent in the left menu.
    3. Assign a name. Let’s call this agent Bob.
    4. Define Bob’s title—e.g., Benefits Specialist.
    5. Define Bob’s Persona, which outlines expertise and interactions.

    Step 3: Crafting an Effective Persona

    The Persona field defines the agent’s expertise and shapes its interactions. As discussed in our earlier post on Agent Interaction, the agent uses an LLM to communicate with both users and other agents. Since the persona is part of the LLM system prompt, it plays a crucial role in guiding the agent’s responses.

    The persona should clearly define what the agent is able to do and how the agent will interact with the other members on the team. (To see examples of effective personas, browse some of the agents in the Agent Marketplace).

    A well-crafted persona for Bob might look like this:

    “You are an expert in employee benefits administration, ensuring company programs run smoothly and efficiently. You manage health insurance, retirement plans, and other employee perks while staying up to date with legal compliance and industry best practices through your Research Assistant. You provide guidance to employees on their benefits options and collaborate with the HR Generalist and Recruiter to explain benefits to new hires.”

    Key persona components:

    • Expertise: Clearly defines Bob’s role in benefits administration.
    • User Interaction: Specifies that Bob provides guidance to employees.
    • Team Collaboration: Mentions interactions with other agents, such as the HR Generalist and Recruiter.
    • Delegation: Optionally, defines which agents Bob may delegate to—for example, a Research Assistant agent that retrieves compliance updates.

    If additional agents (like the HR Generalist or Research Assistant) don’t yet exist, you can update Bob’s persona to reference them as the team expands.

    Once the persona is complete, click Submit to add Bob to the team. (We won’t discuss the optional URL field today; we’ll save it for a future post.)
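
    To see why the persona wording matters so much, here is a rough, hypothetical sketch of how a persona can end up as an LLM system prompt. The client library, model name, and message layout are assumptions for illustration only, not Sentienta’s implementation.

    ```python
    # Hypothetical illustration of a persona used as an LLM system prompt.
    # The client library, model name, and message layout are assumptions,
    # not Sentienta internals.
    from openai import OpenAI

    persona = (
        "You are an expert in employee benefits administration, ensuring company "
        "programs run smoothly and efficiently. You provide guidance to employees "
        "on their benefits options."
    )

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",                               # assumed model
        messages=[
            {"role": "system", "content": persona},   # the persona shapes every reply
            {"role": "user", "content": "What is your expertise?"},
        ],
    )
    print(response.choices[0].message.content)
    ```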

    Step 4: Testing Your Agent

    Now that Bob is created, you can test the agent’s expertise:

    1. Navigate to the home page and select the HR Team below Your Teams.
    2. Make sure Bob’s checkbox is checked and enter a query, such as “What is your expertise?”
    3. Bob will respond with something like:

    “I am a Benefits Specialist, responsible for employee benefits administration, including health insurance, retirement plans, and other perks. I ensure compliance with regulations and provide guidance to employees on their benefits options.”

    If asked an unrelated question, such as “What is today’s weather?” Bob will remain silent. This behavior ensures that agents only respond within their expertise, promoting efficient team collaboration.

    Next Steps

    Once your agent is set up, you can explore additional customization options, such as adding company-specific benefits documentation to Bob’s knowledge base. Stay tuned for a future post on enhancing an agent’s expertise with internal documents.

  • A Deep-dive into Agents: Agent Autonomy

    In past posts, we’ve explored key aspects of AI agents, including agent memory, tool access, and delegation. Today, we’ll focus on how agents can operate autonomously in the “digital wild” and clarify the distinction between delegation and autonomy.

    Understanding Delegation and Autonomy

    Agent delegation involves assigning a specific task to an agent, often with explicit instructions. In contrast, autonomy refers to agents that operate independently, making decisions without significant oversight.

    Within Sentienta, agents function as collaborative experts, striking a balance between autonomy and delegation for structured yet dynamic problem-solving. Autonomous behavior includes analyzing data, debating strategies, and making decisions without user intervention, while delegated tasks ensure precise execution of specific actions.

    For example, a Business Strategy Team could autonomously assess market trends, identify risks, and refine strategies based on live data. At the same time, these agents might delegate the task of gathering fresh market data to a Web Search Agent, demonstrating how autonomy and delegation complement each other.

    Extending Autonomy Beyond Internal Systems

    Sentienta Assistant agents and teams can also function beyond internal environments, operating autonomously on third-party platforms. Whether embedded as intelligent assistants or collaborating in external workflows, these agents dynamically adapt by responding to queries, analyzing evolving data, and refining recommendations—all without requiring continuous oversight.

    Practical Applications of Autonomous Agents

    Below are practical applications showcasing how agents can operate independently or in collaboration to optimize workflows and decision-making.

    • Financial Advisory & Portfolio Management (Single Agent): A financial advisor agent reviews portfolios, suggests adjustments based on market trends, and provides personalized investment strategies.
    • Customer Support Enhancement (Single Agent or Team): A support agent answers queries while a team collaborates to resolve complex issues, escalating cases to specialized agents for billing or troubleshooting.
    • Data-Driven Market Research (Sentienta Team): A multi-agent team tracks competitor activity, gathers insights, and generates real-time market summaries, using delegation for data collection.
    • Legal Document Analysis & Compliance Checks (Single Agent): A legal agent reviews contracts, identifies risk clauses, and ensures regulatory compliance, assisting legal teams with due diligence.
    • Healthcare Support & Patient Triage (Single Agent): A virtual medical assistant assesses symptoms, provides diagnostic insights, and directs patients to appropriate specialists.

    The Future of AI Autonomy in Business

    By combining autonomy with effective delegation, Sentienta agents serve as dynamic problem-solvers across industries. Whether streamlining internal workflows or enhancing real-time decision-making, these AI-driven assistants unlock new possibilities for efficiency, expertise, and scalable innovation.

  • A Deep-dive into Agents: Agent Interaction

    In previous posts, I introduced the capabilities of individual agents, including their unique memory architecture and tool access. But what sets Sentienta apart is its multi-agent platform, where agents work together to solve problems. Today, we’ll explore how these agents interact as a team.

    How Agents Work Together

    Let’s consider an example: imagine you’ve formed a team to design a new electric scooter. We’ll call this the Scooter Team, and its type is Product Design.

    The team consists of key specialists: a VP of Product Design as team lead, a mechanical engineer with expertise in two-wheeled vehicles, an electrical engineer for the scooter’s power system, and a legal representative to ensure compliance with regulations. In future posts, we’ll discuss how to create specialist agents in Sentienta, but for now, imagine they’re in place and ready to collaborate.

    Once the team is set up, you initiate a discussion—say, “Let’s consider all the elements needed in the scooter design.” Each agent processes the request from its area of expertise and contributes insights. As they respond, their inputs become part of an ongoing team dialogue, which, as discussed in this post, is stored in each agent’s memory and informs subsequent responses.

    Iterative Problem-Solving

    Agents interact much like human working groups: they listen to teammates before responding, integrating their insights into their own reasoning. This iterative exchange continues until the original question is thoroughly addressed.

    What does that mean for the scooter design team? Suppose the first response comes from the mechanical engineer: she tells the team about the basic components in the design and, in particular, specifies the power needed to drive the scooter. The electrical engineer will consider this power specification when developing his response. The agent representing legal may note that regulations cap the scooter’s speed at 25 mph.

    And this is what is interesting: the input from legal may cause the mechanical and electrical engineers to reconsider their answers and respond again. This iterative answering will continue until each agent has contributed sufficiently to fully address the query. Reasoning about the user’s question derives from this agent interaction.

    The Role of LLMs in Agent Interaction

    How does this happen? The engine that drives each agent is an LLM, and the LLM system prompt includes several key instructions and pieces of information that foster team interaction. The team definition, along with teammate personas, is included in the prompt, enabling the LLM to consider who on the team is best able to address each aspect of the query.

    In addition, each agent is instructed to think critically about input from teammates when developing a response, which makes team dialogs interactive rather than a series of isolated LLM responses. Agents are also instructed to consider whether other agents have already answered the query, which helps drive the dialog to a conclusion.
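
    As a rough, hypothetical sketch (the field names and instruction wording below are assumptions for illustration, not Sentienta’s actual prompts), a team-aware system prompt might be assembled something like this:

    ```python
    # Hypothetical sketch of assembling a team-aware system prompt for one agent.
    # Field names and instruction wording are illustrative, not Sentienta internals.

    team = {
        "name": "Scooter Team",
        "description": "Designs a new electric scooter.",
    }
    teammates = {
        "Mechanical Engineer": "Expert in two-wheeled vehicle design.",
        "Electrical Engineer": "Designs the scooter's power system.",
        "Legal": "Ensures the design complies with regulations.",
    }

    def build_system_prompt(agent_name: str, persona: str) -> str:
        others = "\n".join(
            f"- {name}: {p}" for name, p in teammates.items() if name != agent_name
        )
        return (
            f"You are {agent_name} on the {team['name']} ({team['description']}).\n"
            f"Your persona: {persona}\n"
            f"Your teammates:\n{others}\n"
            "Think critically about your teammates' contributions before responding, "
            "and stay silent if the question has already been fully answered."
        )

    print(build_system_prompt("Electrical Engineer",
                              "Designs the scooter's power system."))
    ```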

    Looking Ahead

    This dynamic interaction forms the foundation of Sentienta’s multi-agent problem-solving. In future posts, we’ll explore concepts like delegation and agent autonomy, further uncovering the depth and efficiency of these collaborations.

  • Machine Consciousness: Simulation vs Reality

    Suppose that we create a perfect model of a hydrogen atom. After all, we know all the elements of the atom: a single proton paired with a single electron. Each of these elements is understood well enough for any discussion of the atom. The electron’s dynamics are perfectly understood: the energy levels, the probability distributions, the electron’s spin, the Lamb shift; it is all well defined. We can simulate this atom to any degree of detail we’d like, including representing it in a quantum computer that actualizes the quantum properties of the constituents.

    But can that simulation ever bridge the gap to reality? Is the story about something ever the same as the thing? Can two perfectly represented hydrogen atoms be added to a real oxygen atom to make a real water molecule? No.

    That is one argument for why machines cannot be made sentient: we can make a machine do all of the things we think sentience entails, but in the end it is just a simulation of intelligence, thinking and consciousness. It is a story, perhaps a very detailed story, but in the end just a story.

    This reminds one of C. S. Lewis’ remark: “If God is our Creator, then we would relate to God as Hamlet would relate to Shakespeare. Now, how is Hamlet ever going to know anything about Shakespeare? Hamlet’s not going to find him anywhere on stage.” And similarly, Shakespeare is never going to know the real Hamlet.

    Sentience in a Dish

    In December of 2022, Bret Kagan and colleagues at Cortical Labs in Melbourne, Australia published an article in the journal Neuron describing how brain organoids, small lab-grown networks of neurons, were able to learn to play the classic Pong video game. The authors claim that the organoids met the formal definition of sentience in the sense that they were “‘responsive to sensory impressions’ through adaptive internal processes”.

    This may or may not be true, but it is certainly distinct from a simulated sentience that plays Pong. After all, this is not a description of cells that interact with a simulated or even real Pong game. These are real, live cells, requiring nutrients in a Petri dish. Those who argue that consciousness can only come through embodiment would be happy with this definition of sentience.

    But what is it that makes these cells sentient? Where in their soupy embodiment lies the sentience? If we tease apart the network, can we get down to the minimum viable network that plays Pong and meets our formal definition? After all, this is a brain organoid, grown in the lab. We could do this over again and stop when there are fewer cells and see if the same behavior is exhibited. If so, we can repeat the process and find that minimum network that still plays a good game of Pong.

    Whatever that number is, 10 cells or 10,000, we can study that network and very likely represent it with a model that replicates the connections, the spiking behavior, even the need for simulated nutrients: everything that is meaningful about the organoid. Would this simulation learn to play Pong? Given progress in machine learning in the past decade, we have every reason to believe the answer is yes. Would this create sentience in a machine? Or just tell a very detailed story about an organoid that is sentient? And if the latter, then where is the difference?

    Is the simulation of the hydrogen atom qualitatively different from that of the organoid? The simulated hydrogen atom can’t be used to make water. But the simulated organoid, for all practical purposes, does exactly the same thing as the real thing. Both meet the same formal definition of sentience.

    I don’t believe these thoughts get closer to understanding whether machines can be conscious or not. Reductionism might just fail for this problem, and others will argue that embodiment is a requirement. But I do think that we are not far from having that simulation, which in all meaningful ways, can be called sentient.

  • Why Sentienta?

    A little over a year ago I gave a talk in which I discussed machine reasoning and gave some examples from the literature. My audience was skeptical to say the least. But here we are in 2025 with multiple companies claiming powerful reasoning capabilities for their models and new benchmarks set weekly. These are exciting times.

    I’ve always believed that understanding the science behind machine “intelligence” would help us understand more deeply who and what we are. In a way, the reasoning capabilities of today’s models do that. We question whether they are simply capturing the ‘surface’ statistics of training data. At the same time, they are unquestionably powerful. I think sometimes this tells us that our cherished human intelligence may rely on similar ‘weak’ methods more often than we’d like to admit. That is to say, as we come to understand machine intelligence the mystery of our own may lessen.

    And now we come to machine consciousness. This is the topic that, if mentioned in serious venues, produces much more skepticism and even snickering. After all, we can’t really define what human consciousness is. We know it has to do with a sense of our own existence, sensations of our environment, and thoughts or ‘inner speech’. Given that this is all subjective, will we ever be able to understand what it is? I suspect that, just as with machine reasoning, the mystery of consciousness will begin to lift as we understand what it looks like in a machine.

    One of the more compelling models for consciousness is the ‘Society of Mind’ (Minsky, 1988). This model has only been strengthened in the years since it was published. We now know that self-referential thought and introspection involve multiple brain centers, collectively called the Default Mode Network (DMN). Brain regions including the medial prefrontal cortex, the cingulate gyrus, and the hippocampus work together to integrate both past and current experiences. As we begin to model these kinds of interactions, will Chalmers’ “hard problem of consciousness” fade away?

    Sentienta was started to both help companies scale their businesses through virtual teams of agents, and to explore these ideas. Agents interact and share their expertise in teams. The dialog that occurs between agents generates new ideas that no single agent produced. Agents have their own memories that they develop from these interactions. These memories are a function of an agent’s persona. As a result, agents evolve and bring new experiences and ideas to a team discussion.

    And here is the point: we can think of a Sentienta team as an agent itself. It consists of the collective experience and interactions of multiple agents, something we might think of as a society of mind. Can we build agents that perform functions analogous to those found in the DMN? What light might this shine on our own subjective experience?