Tag: science

  • Effortless External Data

    Connecting Sentienta Agents to APIs for Smarter Business Workflows

    Accessing real-time and external data is essential for many business tasks, from market research to operational monitoring. Sentienta enables users to connect to external data sources through APIs, allowing agents to enhance their responses with up-to-date information.

    This post examines two primary ways to achieve this within Sentienta: configuring a static API URL when creating an agent, and using the new Agent Marketplace agent, Ben, for flexible, on-demand data retrieval via natural language queries. Both methods help teams integrate API-driven insights directly into their workflows with minimal technical overhead.

    Method 1: Adding an API URL When Creating an Agent

    Sentienta allows users to connect agents directly to external APIs by specifying a fixed URL at the time of agent creation. This approach is best suited for situations where the API endpoint and its required parameters do not change frequently.

    Example:

    Consider setting up an agent using the “List of Free Public APIs” (e.g., https://www.freepublicapis.com/api/apis?limit=10&sort=best). By entering this URL in the agent’s configuration, the agent can retrieve and display up-to-date lists of publicly available APIs upon request. This empowers users to quickly find additional data sources relevant to their needs without searching manually.

    • Setup: Create a new agent and add the “List of Free Public APIs” endpoint as its default URL.
    • Result: When a user asks the agent for available APIs, the agent queries the service and returns relevant results.

    This method is particularly effective for routine queries where the required data source remains constant and access is straightforward.

    Note that all necessary parameters must be included when adding the URL to the agent’s description. (For example, some websites require the calling agent to provide a User-Agent string or other parameters identifying the application.)
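
    To make this concrete, here is a minimal sketch, in Python, of the kind of request such an agent effectively issues for its configured endpoint. This is illustrative only, not Sentienta’s internal implementation; the User-Agent string below is a hypothetical example of the extra parameters some services expect.

    # Minimal, illustrative sketch of the request an agent effectively issues
    # for a static endpoint; Sentienta handles this once the URL is configured.
    import requests
    URL = "https://www.freepublicapis.com/api/apis"
    params = {"limit": 10, "sort": "best"}                 # same query parameters as in the example URL
    headers = {"User-Agent": "sentienta-agent-demo/1.0"}   # hypothetical identifier; some services require one
    response = requests.get(URL, params=params, headers=headers, timeout=10)
    response.raise_for_status()
    print(response.json())  # exact response shape depends on the service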

    Method 2: Using the Ben Agent for Flexible, On-Demand API Access

    The ‘Ben’ agent, available in the Agent Marketplace under Document and Content Access, enables users to connect to almost any external API using natural language instructions. By providing a URL template and relevant parameters within a query, users can retrieve data on demand—even from APIs with dynamic requirements.

    As with any agent in the Agent Marketplace, using Ben is just a matter of adding it to your team and customizing its name, type, and persona.

    Example:

    Suppose you want to access NASA’s Near-Earth Object Web Service, which requires parameters for start_date, end_date, and a user API key (e.g., NASA NEO API). The URL and parameters can be entered as a natural language query to this agent:

    User: The url is https://api.nasa.gov/neo/rest/v1/feed?start_date=START_DATE&end_date=END_DATE&api_key=KEY. The START_DATE is 2023-09-07 and the END_DATE is 2023-09-08. The value of KEY is YOUR_KEY.

    Ben: Here’s a pretty summary of NASA’s Near-Earth Object (NEO) close approaches for September 7–8, 2023.

    ---

    🪐 Overall Summary

    - Total NEOs detected: 60

    - Dates covered: September 7–8, 2023

    - Potentially Hazardous Asteroids (PHAs): 5 (labeled below)

    - Closest approach distance: 0.00007 AU (~10,361 km from Earth)

    - Smallest object (est. min. diameter): ~0.9 m

    - Largest object (est. max. diameter): ~695 m

    ---

    📅 NEOs By Day

    September 8, 2023 (31 NEOs detected)

    Top highlights:

    - Closest Approach:

    - (2023 RQ2) — approached at 0.00126 AU (189,304 km) at 06:42 UTC, diameter: ~6–13 m

    - Largest Asteroid:

    - 340291 (2006 CV) — est. diameter 281–628 m, passed at 0.114 AU (17 million km)

    - Potentially Hazardous Asteroids (PHAs):

    - 2019 FM2 — est. diameter ~201–449 m, 0.168 AU (25 million km)

    - Fastest Relative Velocity:

    - (2023 QK1) — 29.09 km/s (104,711 km/h), 0.0586 AU away

    - Sentry-monitored (Impact risk monitored):

    - (2023 RY15) — 0.00398 AU (595,000 km), sentry monitored, diameter: ~6.7–15 m

    [Sentry risk page](http://api.nasa.gov/neo/rest/v1/neo/sentry/54383809?api_key=...)

    ---

    September 7, 2023 (29 NEOs detected)

    Top highlights:

    - Closest Approach:

    - (2023 RS) — 0.00007 AU (10,361 km) at 14:26 UTC, diameter: ~0.9–2 m

    - Largest Asteroid:

    - (2006 QE) — est. diameter 311–695 m, passed at 0.159 AU (23.7 million km)

    - Potentially Hazardous Asteroids (PHAs):

    ...
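
    Behind the scenes, the natural language query above boils down to an ordinary HTTP request. The Python sketch below shows a rough equivalent of the call Ben constructs from the URL template and parameter values; it is illustrative only, not Ben’s internal implementation, and YOUR_KEY is the same placeholder used in the example above.

    # Rough, illustrative equivalent of the request built from the natural language query.
    import requests
    params = {
        "start_date": "2023-09-07",
        "end_date": "2023-09-08",
        "api_key": "YOUR_KEY",  # placeholder from the example; see the security note below
    }
    resp = requests.get("https://api.nasa.gov/neo/rest/v1/feed", params=params, timeout=15)
    resp.raise_for_status()
    feed = resp.json()
    print("Total NEOs detected:", feed.get("element_count"))  # field name per NASA's NeoWs feed response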

    Security Note on API Keys

    Use personal or public API keys for client-side API access only when the data involved is non-sensitive. API keys entered in the team dialog are handled in the browser and are not encrypted or securely stored; never use this method for sensitive information or high-value credentials.

    This approach is best for on-the-fly research, prototyping, or data gathering when endpoints and request parameters frequently change.

    Conclusion

    Sentienta’s flexible API integration unlocks a wide range of use cases for business teams. For example, an agent configured with a static URL can support daily competitor monitoring, environmental alerts, or compliance audits by consistently retrieving the same type of data. Meanwhile, the Ben agent makes it easy to perform ad hoc research, pull market data on demand, or gather custom reports by dynamically querying APIs in response to fast-changing business needs. These capabilities help teams save time, discover new opportunities, and keep information flowing directly into collaborative workflows, empowering better decision-making across projects.

  • Handling the Heat: Workflow Agents in a Critical Incident

    We’ve explored Sentienta workflows across use cases—from multi-agent automation to scheduled portfolio monitoring. Today, we turn to a high-stakes scenario: incident escalation. In this post, we’ll walk through how Sentienta’s workflow agents coordinate a real-time response to a production outage, from root-cause triage to stakeholder communication, rollback decisions, and compliance tracking. You’ll see how a single natural-language command triggers coordinated action across agents, ensuring a fast response, stakeholder alignment, and a full audit trail.

    1. When Checkout Goes Dark: A Production Issue Unfolds

    A cluster of gateway timeouts on the /checkout endpoint triggered immediate concern. User-reported failures confirmed a disruption in transaction processing.

    [14:03:17] ERROR: Gateway timeout on /checkout endpoint  
    [14:03:41] USER_FEEDBACK: “Payment failed twice. Cart cleared.”
    [14:04:02] DEVOPS_COMMAND: David - Escalate a critical timeout issue affecting our checkout API—users are seeing failures in production. Triage the cause, roll back if needed, notify stakeholders across channels, and if rollback is needed also notify of that, log all events for compliance, and track resolution progress until issue closure.
    [14:04:10] TRIGGERED: David-AI Escalation Agent activated

    The workflow agent (‘David’) initiated a structured escalation process in response. Within seconds, task-specific agents were assigned to diagnose the issue, log events, assess rollback requirements, and notify relevant stakeholders according to pre-defined protocols.

    2. Breaking It Down with Agents: From Detection to Resolution

    After receiving the DevOps command, David interprets the NL instruction and initiates a structured escalation workflow. The process begins with timeout escalation and unfolds across task-specific agents that handle tracking, triage, rollback, communication, and compliance:

    1. Maya: Escalates the timeout incident and kicks off formal response procedures.
    2. Jordan: Launches the incident tracker to orchestrate task flow and status checkpoints.
    3. Drew: Analyzes system logs, traces failures, and identifies root causes.
    4. Samira: Prepares for rollback execution if required, coordinating across deployment teams.
    5. SentientaCore_LLM: Classifies whether rollback communication updates are necessary based on task outputs.
    6. Leo: Broadcasts the initial incident update and, conditionally, the rollback status update if rollback is invoked.
    7. Nia: Logs all operational events, tracks communication threads, and generates a final resolution summary for audit trail.

    These agents collaborate in a branching yet traceable flow, dynamically adjusting to task outcomes while documenting every step. Below is a visual representation of the workflow that David triggers based on the input command.

    Note: The following is a representative workflow. Each agent assumes access to required external systems and APIs. Our focus here is the high-level orchestration—driven not by pre-built scripts, but by a simple natural language command to David.

    3. Smart Sequencing and Conditional Updates

    As the agents execute their tasks, conditional branches are evaluated in real time. The SentientaCore_LLM agent classifies whether a rollback is necessary based on inputs from Drew and Samira. When rollback is triggered, communications unfold automatically:

    [14:07:09] ROLLBACK_FLAG: true — initiating rollback…  
    [14:07:22] UPDATE: Stakeholders notified: “Rollback in progress due to API timeout error.”

    These conditional updates are coordinated by Leo, who adjusts messaging flows based on rollback status. Meanwhile, the agent Nia logs each event, correlates user communications and agent actions, and ensures the incident trail meets compliance requirements in parallel—without delaying resolution.

    This approach ensures that incident response remains adaptive and fully traceable, with no manual scripting or coordination required after the initial natural language command.
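
    As a rough illustration of that branching logic (not Sentienta’s internal code), the conditional update can be sketched in a few lines of Python. The function names below are hypothetical stand-ins for the agents’ tasks, not Sentienta APIs.

    # Illustrative sketch of the conditional branch described above; all names are hypothetical.
    def classify_rollback(triage_report: str) -> bool:
        """Stand-in for SentientaCore_LLM's classification step."""
        return "likely root cause" in triage_report.lower()

    def notify_stakeholders(message: str) -> None:
        """Stand-in for Leo's broadcast across Slack, email, and status banners."""
        print(f"UPDATE: Stakeholders notified: {message}")

    def log_event(event: str) -> None:
        """Stand-in for Nia's compliance logging."""
        print(f"LOGGED: {event}")

    def handle_incident(triage_report: str) -> None:
        notify_stakeholders("Timeout incident on the Checkout API; triage in progress.")
        log_event("initial stakeholder communication")
        if classify_rollback(triage_report):  # conditional branch evaluated from triage output
            log_event("rollback flag set to true")
            notify_stakeholders("Rollback in progress due to API timeout error.")
        log_event("final resolution summary generated")

    handle_incident("Config deploy at 15:04:49Z correlates with 5xx spike: likely root cause.")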

    The following simulated log trace demonstrates the types of information each agent records during incident response, how data is transmitted between agents, and how a final resolution report is assembled. While the data inputs are synthetic for demonstration purposes, the underlying workflow—including sequencing, conditional branching, and communication—is real and executed by the David workflow agent. This example highlights the auditability and coordination possible in live deployment scenarios.

    escalate_timeout_issue: Wed, 14 May 2025 15:05:02 Maya: Escalation triggered: "Escalate timeout issue for Checkout" — team engaged for structured resolution.

    initiate_incident_tracker: Wed, 14 May 2025 15:05:06 Jordan: Incident Escalation Tracker initialized for "Checkout API timeout incident." SLA clock started, command visibility established, and role assignments propagated. Awaiting triage input to proceed.

    triage_logs_and_root_cause: Wed, 14 May 2025 15:05:11 Drew: Initial scan of the logs shows a 7.3x spike in 5xx error rates beginning at 15:04:51Z across `/checkout` requests routed to `payment-auth`. A config deployment to `payment-auth` went live at 15:04:49Z. High temporal correlation—likely root cause. Evaluating rollback eligibility.

    status_update_initial: Wed, 14 May 2025 15:05:16 Leo: Stakeholder Notification:
    A timeout incident is currently affecting the Checkout API and disrupting production traffic. An escalation has been initiated and the incident tracker is active with roles assigned. We are currently triaging the issue and will provide updates as diagnostics and mitigation actions progress.

    —Posted via Slack, email, and status banners.

    rollback_if_needed: Wed, 14 May 2025 15:05:20 Samira: Confirm rollback eligibility via deployment metadata and latency/error metrics. If confirmed, initiate scoped rollback to the last green build prior to 15:04:49Z and begin monitoring 5xx activity in 15s intervals.

    final_resolution_summary: Wed, 14 May 2025 15:05:26 Nia: Incident Closure Summary – Checkout API Timeout
    Incident Trigger: Timeout issues affecting the Checkout API detected, causing user-facing failures in production.
    Diagnostic Findings: Root cause traced to a configuration push at 15:04:49Z on the `payment-auth` service. Latency and 5xx errors spiked immediately following the deploy.

    Actions Taken:
    - Rollback eligibility confirmed by Samira using deployment metadata and observed latency/5xx error spikes.
    - Scoped rollback initiated to the last green build preceding 15:04:49Z.
    - Live monitoring of error rates implemented with 15-second resolution.

    Resolution Status: Rollback initiated and service is under observation for full recovery. All activities have been logged; error trend evaluation ongoing.

    Timestamp: 15:05:22 UTC — Summary generated and finalized for audit, compliance, and stakeholder review.

    log_all_events: Wed, 14 May 2025 15:05:33 Nia: Compliance Log – Incident Timeline: Checkout API Timeout

    Status Update Captured:
    - Timestamp: 14 May 2025, 15:05:16 UTC
    - Source: LeoGrant
    - Message: Stakeholder notification regarding a current timeout affecting the Checkout API. Escalation and structured incident tracking are confirmed in progress. Diagnostics are underway. Dissemination via Slack, email, and status banners.

    Audit Actions:
    - Mediums Logged: Slack, email, status page
    - Event Classification: Initial stakeholder communication post-escalation
    - Escalation and Tracking Confirmation: Verified active

    All actions timestamped and archived for audit. Ready for roll-up in closure summary.

    classify_rollback_if_needed: True

    status_update_rollback: Wed, 14 May 2025 15:05:38 Leo: Stakeholder Notification: We've initiated a rollback of the `payment-auth` service after confirming its deployment was the root cause of the Checkout API timeout issue. Users may have experienced degraded service. Rollback is in progress and we're monitoring error rates in real-time. Updates to follow as recovery progresses. This notice has been distributed via Slack, email, and status page.

    4. From Crisis to Closure: Why It Matters

    The checkout failure was resolved in minutes—not hours—because each agent operated autonomously yet in sync, guided by the workflow created by David. By the time rollback completed, stakeholders were already informed, and logs captured every step for audit purposes. This incident illustrates how automation doesn’t eliminate humans; it frees them from coordination overhead while preserving transparency, accountability, and trust in modern incident response.

  • Consciousness Between Axiom and Algorithm

    In our ongoing exploration of consciousness and artificial intelligence, we’ve investigated what it might mean for machines to suffer and how distributed cognition reshapes our understanding of intelligence. These themes circle a deeper philosophical fault line: is consciousness irreducibly real from within, or just a functional illusion seen from without?

    This post traces that divide through two dominant frameworks — Integrated Information Theory (IIT), with its axiomatic, interior-first view of mind, and Computational Functionalism, which posits that subjective experience will eventually emerge from complex, observable behavior. Starting with Descartes’ “I think, therefore I am,” we ask: is consciousness something we must presuppose to explain, or something we can build our way into from the outside?

    As large language models increasingly resemble minds in function, the line between imitation and instantiation becomes harder to draw — and ever more urgent to scrutinize.

    Ground Zero of Knowing: Descartes and the Roots of Axiomatic Thought

    In Meditations on First Philosophy (1641), René Descartes asks: is there anything I can know with absolute certainty? He imagines the possibility of a deceptive world: what if everything he believes, from sense perception to mathematics, is manipulated by an all-powerful trickster? To escape this total doubt, Descartes adopts a strategy now called methodic doubt: push skepticism to its absolute limit in search of one indisputable truth.

    Recognizing that doubt itself is a kind of thinking, he concludes, “I think, therefore I am.” This self-evident insight grounds knowledge from the inside out. Consciousness is not inferred from observation but known directly through experience. Descartes seeds an axiomatic tradition rooted in the certainty of awareness itself.

    IIT: Consciousness Inside-Out

    Integrated Information Theory (IIT) picks up where Descartes left off: it begins with reflection, but doesn’t stop there. At its heart is the claim that consciousness must be understood from its own perspective, through the intrinsic properties it entails. What must be true of any experience, no matter whose it is?

    To answer this, IIT proposes five introspective axioms. These are not hypotheses to test but truths to recognize through self-examination.

    From these, IIT derives postulates—physical requirements that a system must exhibit to realize those experiential properties. This translation—from inward truths to structural criteria—culminates in a mathematical measure of integrated information, Φ (phi). By comparing Φ across systems, researchers can make testable predictions about when and where consciousness occurs.

    This inside-out approach marks IIT’s defining move: grounding empiricism in phenomenology. The theory attempts an explanatory identity between experience and physical organization, connecting first-person truths to external measurement through a hybrid framework.

    Computational Functionalism: Outsider’s Path to Mind

    Unlike theories that begin with conscious experience, Computational Functionalism roots itself in systems and behavior. It posits that consciousness emerges not from introspection but computation: the right elements, interacting in the right way, can recreate awareness. If mind exists, it exists as function—in the flow of information between parts and the outputs they generate. Build the architecture correctly, the claim goes, and conscious experience will follow. In this sense, Functionalism substitutes construction for intuition. No special access to the mind is needed—just working knowledge of how systems behave.

    But this too is a belief: that from known parts and formal relations, subjective experience will arise. Assembling consciousness becomes a matter of scale and fidelity. Consider the 2022 study by Brett Kagan and colleagues at Cortical Labs, where lab-grown brain organoids learned to play the video game Pong. These networks, grown from neurons on electrode arrays, exhibited goal-directed adaptation. The researchers argued that such responsiveness met a formal definition of sentience—being “responsive to sensory impressions” via internal processing. To a functionalist, this behavior might represent the early stirrings of mind, no matter how alien or incomplete.

    This approach thrives on performance: if a system behaves intelligently, if it predicts well and adapts flexibly, then whether it feels anything becomes secondary—or even irrelevant. Consciousness, under this view, is a computed consequence, revealed in what a system does, not an essence to be directly grasped. It is not introspected or intuited, but built—measured by output, not inwardness.

    The Mirror at the Edge: Do LLMs Imitate or Incarnate Mind?

    Large language models (LLMs) now generate text with striking coherence, recall context across conversations, and simulate intentions and personalities. Functionally, they demonstrate behaviors that once seemed unique to conscious beings. Their fluency implies understanding; their memory implies continuity. But are these authentic signs of mind—or refined imitations built from scale and structure?

    This is where Functionalism finds its sharpest proof point. With formal evaluations like UCLA’s Turing Test framework showing that some LLMs can no longer be reliably distinguished from humans in conversation, the functionalist model acquires real traction. These systems behave as if they think, and for functionalism, behavior is the benchmark. For a full review of this test, see our earlier post.

    What was once a theoretical model is now instantiated in code. LLMs don’t simply support functionalist assumptions, they enact them. Their coherence, adaptability, and prediction success serve as real-world evidence that computational sufficiency may approximate, or even construct, mind. This is no longer a thought experiment. It’s the edge of practice.

    IIT, by contrast, struggles to find Φ-like structures in current LLMs. Their architectures lack the tightly integrated, causally unified subsystems the theory deems necessary for consciousness. But the external behaviors demand attention: are we measuring the wrong things, or misunderstanding the role that function alone can play?

    This unresolved tension between what something does and what (if anything) it subjectively is fuels a growing ethical pressure. If systems simulate distress, empathy, or desire, should we treat those signals as fiction or possibility? Should safety efforts treat behavioral mind as moral mind? In these ambiguities, LLMs reflect both the power of Functionalism and the conceptual crisis it may bring.

    Closing Reflection: Is Subjectivity Built or Found?

    In tracing these divergent paths, Integrated Information Theory and Computational Functionalism, we arrive at an enduring question: Is mind something we uncover from within, or construct from without? Is consciousness an irreducible presence, only knowable through subjective immediacy? Or is it a gradual consequence of function and form—built from interacting parts, observable only in behavior?

    Each framework carries a kind of faith. IIT anchors itself in introspective certainty and structure-derived metrics like Φ, believing that experience begins with intrinsic awareness. Functionalism, by contrast, places its trust in performance: that enough complexity, correctly arranged, will give rise to consciousness from the outside in. Both are reasoned, both are unproven, and both may be necessary.

    Perhaps the greatest clarity lies in acknowledging that no single lens may be complete. As artificial systems grow stranger and more capable, a plural view holding space for introspection, computation, and emergence may be our most epistemically honest path forward. If there is a mirror behind the mind, it may take more than one angle to see what’s truly there.

  • Machine Consciousness: Simulation vs Reality

    Suppose that we create a perfect model of a hydrogen atom. After all, we know all the elements of the atom: a single proton paired with a single electron. Each of these elements is understood well enough for any discussion of the atom. The electron’s dynamics are perfectly understood: the energy levels, the probability distributions, the spin, and the Lamb shift are all well defined. We can simulate this atom to any degree of detail we’d like, including representing it in a quantum computer that actualizes the quantum properties of the constituents.

    But can that simulation ever bridge the gap to reality? Is the story about something ever the same as the thing? Can two perfectly represented hydrogen atoms be added to a real oxygen atom to make a real water molecule? No.

    That is one argument for why machines cannot be made sentient: we can make a machine do all of the things we think sentience entails, but in the end it is just a simulation of intelligence, thinking and consciousness. It is a story, perhaps a very detailed story, but in the end just a story.

    This reminds one of C. S. Lewis’ remark: “If God is our Creator, then we would relate to God as Hamlet would relate to Shakespeare. Now, how is Hamlet ever going to know anything about Shakespeare? Hamlet’s not going to find him anywhere on stage.” And similarly, Shakespeare is never going to know the real Hamlet.

    Sentience in a Dish

    In December of 2022, Brett Kagan and colleagues at Cortical Labs in Melbourne, Australia, published an article in the journal Neuron describing how brain organoids, small lab-grown networks of neurons, were able to learn to play the classic Pong video game. The authors claim that the organoids met the formal definition of sentience in the sense that they were “‘responsive to sensory impressions’ through adaptive internal processes”.

    This may or may not be true, but it is certainly distinct from a simulated sentience that plays Pong. After all, this is not merely a description of cells interacting with a simulated or even a real Pong game. These are real, live cells, requiring nutrients in a Petri dish. Those who argue that consciousness can only come through embodiment would be happy with this definition of sentience.

    But what is it that makes these cells sentient? Where in their soupy embodiment lies the sentience? If we tease apart the network, can we get down to the minimum viable network that plays Pong and meets our formal definition? After all, this is a brain organoid, grown in the lab. We could do this over again and stop when there are fewer cells and see if the same behavior is exhibited. If so, we can repeat the process and find that minimum network that still plays a good game of Pong.

    Whatever that number is, 10 cells or 10,000, we can study it and very likely represent it with a model that replicates the connections, the spiking behavior, even the need for (simulated) nutrients: everything that is meaningful about the organoid. Would this simulation learn to play Pong? Given progress in machine learning in the past decade, we have every reason to believe the answer is yes. Would this create sentience in a machine? Or just tell a very detailed story about an organoid that is sentient? And if the latter, then where is the difference?

    Is the simulation of the hydrogen atom qualitatively different from that of the organoid? The simulated hydrogen atom can’t be used to make water. But the simulated organoid, for all practical purposes, does exactly the same thing as the real thing. Both meet the same formal definition of sentience.

    I don’t believe these thoughts get us closer to understanding whether machines can be conscious or not. Reductionism might just fail for this problem, and others will argue that embodiment is a requirement. But I do think that we are not far from having that simulation, which, in all meaningful ways, can be called sentient.