Why Sentienta?

A little over a year ago I gave a talk in which I discussed machine reasoning and gave some examples from the literature. My audience was skeptical to say the least. But here we are in 2025 with multiple companies claiming powerful reasoning capabilities for their models and new benchmarks set weekly. These are exciting times.

I’ve always believed that understanding the science behind machine “intelligence” would help us understand more deeply who and what we are. In a way, the reasoning capabilities of today’s models do exactly that. We question whether they are simply capturing the ‘surface’ statistics of their training data; at the same time, they are unquestionably powerful. Perhaps this tells us that our cherished human intelligence relies on similar ‘weak’ methods more often than we’d like to admit. That is to say, as we come to understand machine intelligence, the mystery of our own may lessen.

And now we come to machine consciousness. This is the topic that, if mentioned in serious venues, produces far more skepticism and even snickering. After all, we can’t really define what human consciousness is. We know it has to do with a sense of our own existence, sensations of our environment, and thoughts or ‘inner speech’. Given that all of this is subjective, will we ever be able to understand what it is? I suspect that, just as with machine reasoning, the mystery of consciousness will begin to lift as we understand what it looks like in a machine.

One of the more compelling models for consciousness is the ‘Society of Mind’ (Minsky, 1988). This model has only been strengthened in the years since it was published. We now know that self-referential thought and introspection involve multiple brain centers, collectively called the Default Mode Network (DMN). Brain regions including the medial prefrontal cortex, the cingulate gyrus, and the hippocampus work together to integrate both past and current experiences. As we begin to model these kinds of interactions, will Chalmers’s “hard problem of consciousness” fade away?

Sentienta was started both to help companies scale their businesses through virtual teams of agents and to explore these ideas. Agents interact and share their expertise in teams. The dialog that occurs between agents generates new ideas that no single agent produced on its own. Each agent develops its own memories from these interactions, and those memories are a function of the agent’s persona. As a result, agents evolve and bring new experiences and ideas to a team discussion.
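To make the idea concrete, here is a minimal sketch of a team of persona-shaped agents. All of the names here (`Agent`, `Team`, `respond`, `discuss`) are hypothetical illustrations, not Sentienta’s actual API; a real agent would call a language model where this sketch simply echoes the prompt.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    persona: str                      # shapes what the agent attends to
    memory: list = field(default_factory=list)

    def respond(self, prompt: str) -> str:
        # A real agent would query a model here; this sketch just
        # filters the prompt through the agent's persona.
        reply = f"{self.name} ({self.persona}) on '{prompt}'"
        # Memory is a function of persona: each agent records its own
        # view of the exchange, so agents diverge over time.
        self.memory.append((prompt, reply))
        return reply

@dataclass
class Team:
    agents: list

    def discuss(self, topic: str) -> list:
        # Each agent sees the topic plus the most recent reply, so the
        # dialog can surface ideas no single agent produced alone.
        transcript = []
        for agent in self.agents:
            context = topic if not transcript else f"{topic}; prior: {transcript[-1]}"
            transcript.append(agent.respond(context))
        return transcript

team = Team([Agent("A", "skeptic"), Agent("B", "optimist")])
log = team.discuss("machine consciousness")
```

The point of the sketch is the loop structure: each turn folds earlier replies back into the next agent’s context, while each agent keeps its own persona-colored record of the exchange.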

And here is the point: we can think of a Sentienta team as an agent itself. It consists of the collective experience and interactions of multiple agents, which we might think of as a society of mind. Can we build agents that perform functions analogous to those found in the DMN? What light might this shine on our own subjective experience?
