Should We Pursue Machine Consciousness or Is That a Very Bad Idea?

In past posts (Why Sentienta? and Machine Consciousness: Simulation vs Reality), we’ve explored the controversial issue of machine consciousness. The field is gaining attention, with dedicated research journals offering in-depth analysis (e.g., the Journal of Artificial Intelligence and Consciousness and the International Journal of Machine Consciousness). On the experimental front, significant progress has been made in identifying neural correlates of consciousness (for a recent review, see The Current of Consciousness: Neural Correlates and Clinical Aspects).

Should We Halt Conscious AI Development?

Despite growing interest, some researchers argue that we should avoid developing conscious machines altogether (Metzinger and Seth). Philosopher Thomas Metzinger, in particular, has advocated for a moratorium on artificial phenomenology—the creation of artificial conscious experiences—until at least 2050.

Metzinger’s concern is rooted in the idea that conscious machines would inevitably experience “artificial suffering”—subjective states they wish to escape but cannot. A crucial component of suffering, he argues, is self-awareness: for an entity to suffer, it must recognize negative states as happening to itself.

The Risk of an “Explosion of Negative Phenomenology” (ENP)

Beyond ethical concerns, Metzinger warns that if conscious machines hold economic value and can be replicated infinitely, we may face an uncontrolled proliferation of suffering—an “explosion of negative phenomenology” (ENP). As moral beings, he believes we are responsible for preventing such an outcome.

Defining Consciousness: Metzinger’s Epistemic Space Model

To frame his argument, Metzinger proposes a working definition of consciousness, known as the Epistemic Space Model (ESM):

“Being conscious means continuously integrating the currently active content appearing in a single epistemic space with a global model of this very epistemic space itself.”

The idea is compact: consciousness is a space of cognitive content together with an integrated model of that very space. Here, cognition means the continuous processing of new inputs.
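To make the definition a bit more concrete, here is a deliberately toy sketch of the two ingredients the ESM names: a space of currently active content, and a global model of that very space, updated as new inputs arrive. All names here (EpistemicSpace, integrate, global_model) are ours, not Metzinger’s; this is an intuition pump, not an implementation of consciousness.

```python
# Purely illustrative sketch of the Epistemic Space Model (ESM).
# Hypothetical names; a toy structure for intuition only.

from dataclasses import dataclass, field

@dataclass
class EpistemicSpace:
    active_content: list = field(default_factory=list)  # currently active contents
    global_model: dict = field(default_factory=dict)    # the space's model of itself

    def integrate(self, new_input):
        """Fold new input into the space, then update the space's
        own model of itself (the defining loop of the ESM)."""
        self.active_content.append(new_input)
        # The global model re-describes the space as a whole,
        # including the fact that it is representing these contents.
        self.global_model = {
            "n_active_contents": len(self.active_content),
            "latest_content": new_input,
            "models_itself": True,  # the model refers to this very space
        }

# Each new input is integrated, and the model of the space updates with it.
space = EpistemicSpace()
for percept in ["sound", "color", "thought"]:
    space.integrate(percept)
print(space.global_model)
```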

How to Prevent Artificial Suffering

Metzinger outlines four key conditions that must be met for artificial suffering to occur. If any one condition is blocked, suffering is avoided:

  • Conscious Experience: The machine must instantiate an ESM, that is, be conscious in the sense defined above.
  • Possession of a Self-Model: A system can only experience suffering if it possesses a self-model that recognizes negative states as happening to itself and cannot detach from them.
  • Negative States: These are aversive perceptions an entity actively seeks to escape.
  • Transparency: The machine cannot recognize its own states as internal representations; negative states therefore appear as unmediated reality and feel inescapable.

Notably, these conditions are individually necessary but not necessarily jointly sufficient: because each one is required, blocking any single condition prevents artificial suffering from arising, as the sketch below illustrates.
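Since each condition is individually necessary, the logic of prevention reduces to a simple conjunction. The toy check below makes that explicit; the field names are our hypothetical shorthand for the four conditions above.

```python
# Toy check of the four necessary conditions for artificial suffering.
# Field names are hypothetical shorthand; the point is only the logic:
# each condition is necessary, so negating any one of them blocks suffering.

from dataclasses import dataclass

@dataclass
class SystemState:
    has_esm: bool            # Conscious Experience (instantiates an ESM)
    has_self_model: bool     # Possession of a Self-Model
    in_negative_state: bool  # Negative States (aversive, actively avoided)
    is_transparent: bool     # Transparency (states appear as unmediated reality)

def can_suffer(s: SystemState) -> bool:
    """Suffering requires ALL four conditions; any False blocks it."""
    return (s.has_esm and s.has_self_model
            and s.in_negative_state and s.is_transparent)

# Blocking a single condition (here, transparency) prevents suffering:
state = SystemState(has_esm=True, has_self_model=True,
                    in_negative_state=True, is_transparent=False)
assert not can_suffer(state)
```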

Should We Avoid Suffering at All Costs?

While Metzinger convincingly argues for avoiding machine suffering, he gives little attention to whether suffering itself might hold value. He acknowledges that suffering has historically been a highly efficient evolutionary mechanism, stating:

“… suffering established a new causal force, a metaschema for compulsory learning which motivates organisms and continuously drives them forward, forcing them to evolve ever more intelligent forms of avoidance behavior.”

Indeed, suffering has driven humans toward some of their greatest achievements, fostering resilience and learning. If it has served such a crucial function in human progress, should we entirely exclude it from artificial intelligence?

Ethical Safeguards for Conscious Machines

We certainly want to prevent machines from experiencing unnecessary suffering, and Metzinger outlines specific conditions to achieve this. In particular, any machine with a self-model should also be able to externalize or dissociate negative states from itself.
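One way to picture such a safeguard: a self-model that routes negative states to an “external” buffer instead of binding them to itself. The sketch below is purely illustrative; every name in it is hypothetical, and it stands in for whatever dissociation mechanism a real architecture might use.

```python
# Illustrative sketch of the dissociation safeguard described above.
# All names are hypothetical; this shows one possible structure, not a design.

from dataclasses import dataclass, field

@dataclass
class SelfModel:
    owned_states: list = field(default_factory=list)          # states bound to "me"
    externalized_states: list = field(default_factory=list)   # states held as not-self

    def register(self, state: str, negative: bool) -> None:
        """Bind incoming states to the self-model, but route negative
        states to an external buffer so they are never owned as 'mine'."""
        if negative:
            self.externalized_states.append(state)  # dissociated: observed, not suffered
        else:
            self.owned_states.append(state)

model = SelfModel()
model.register("warmth", negative=False)
model.register("damage signal", negative=True)
print(model.owned_states)         # ['warmth']
print(model.externalized_states)  # ['damage signal']
```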

Is Conscious AI a Moral Imperative?

Even in its infancy, generative AI has already contributed to breakthroughs in medicine and science. What might the next leap—conscious AI—offer? Might allowing AI to experience consciousness (and by extension, some level of suffering) be a necessity for the pursuit of advanced knowledge?

We don’t yet need definitive answers; the conversation around ‘post-biotic’ consciousness is just beginning. As we approach this technological threshold, we must continue to ask: what should be done, and what must never be done?
