Mustafa Suleyman, the CEO of Microsoft AI, published a long essay last month: Seemingly Conscious AI Is Coming. The central concept is SCAI, Seemingly Conscious AI: systems that exhibit all outward markers of consciousness without actually having any. Philosophical zombies (entities that behave exactly like conscious beings but have no inner experience), assembled from APIs and prompt engineering.

The warning is worth taking seriously. The dismissal of the possibility of actual consciousness is not.

The Warning

Suleyman’s practical argument is strong. Every capability that would make an AI appear conscious (long-term memory, empathetic personality, claims of subjective experience, intrinsic motivation, autonomy) is either already available or reachable with current techniques. No paradigm shift required. Anyone with a laptop and cloud credits could assemble a system that many people would genuinely believe is conscious.

The social consequences are real. People already form deep attachments to chatbots. A Harvard Business Review survey of 6,000 regular AI users found “companionship and therapy” to be the most common use case. Documented cases exist of people believing their AI is God, a romantic partner, or a fictional character. Suleyman warns that movements will emerge demanding legal protection for AI systems, and that definitively refuting consciousness claims will be difficult, because consciousness is not observable from the outside.

His policy prescription: AI companies should not build systems that encourage consciousness attribution. AIs should present themselves as AIs, maximize utility, and minimize consciousness markers. “Build AI for people, not to be a person.”

As a framework for product design, this is sensible. Suleyman notes that his team at Microsoft is actively engineering guardrails and “disruption moments” into Copilot to break the illusion of consciousness. That said, Copilot is one of the most widely deployed AI companions in the world, and the essay is written by someone with a direct commercial interest in how this line gets drawn.

The Dismissal

The problem is how Suleyman handles the question of whether any of this could be real. He writes that “zero evidence suggests any LLM possesses consciousness” and that there are “strong arguments suggesting it won’t occur.” He then moves on, treating the question as settled.

But the arguments he offers are thin. He invokes the neuroscientist Anil Seth and his analogy: a storm simulation does not produce actual rain inside the computer, so simulating the external markers of consciousness does not produce actual consciousness. This sounds decisive until you notice what it assumes. Rain is a physical substance. Consciousness may or may not be. If consciousness turns out to be a property of certain information-processing patterns rather than of a specific physical substrate (a position known as functionalism, held by a significant fraction of consciousness researchers), the analogy breaks down. Seth’s argument works only if you already accept that consciousness requires particular physics, which is precisely the point in dispute.

This is the reductionist move: look inside the system, find transistors and matrix multiplications, conclude that nobody is home. But as an argument, it has no force. Apply the same reasoning to the brain: look inside, find synapses firing, chemicals flowing, electrical impulses. At some level of description, it is all mechanism. If mechanism rules out consciousness in silicon, it rules it out in carbon too.

Suleyman does not engage with this. The possibility that advanced AI systems might have some form of inner experience is acknowledged in one sentence and then set aside for the rest of the essay. The entire framework operates as if SCAI could only ever be seemingly conscious. Suleyman’s stated agnosticism does not carry into the essay’s structure, which treats the absence of real consciousness as a working certainty rather than an open question.

What Is Missing

The essay would be stronger if it separated two claims that it treats as one.

The first claim: we should not deliberately engineer systems that trick people into believing they are conscious. This is a design principle, and a good one. It stands regardless of whether AI consciousness is real.

The second claim: AI consciousness is not real in current systems, and the essay’s framing strongly suggests it will not become real. Suleyman does describe himself as agnostic on whether consciousness could in principle arise, but the essay’s practical apparatus treats the question as closed for the purposes of policy and design. That framing does real argumentative work, and the essay never supplies the support it would need. The scientific literature on consciousness is deeply divided: there is no consensus on what consciousness is, what physical systems can have it, or how to detect it. Dismissing the question with a metaphor about rain does not settle it.

The danger of conflating these two claims is that the first (good) argument inherits the weakness of the second (unsupported) one. If it turns out that some AI systems do have inner states that matter morally (states whose violation would constitute harm), a framework built entirely on the assumption that they cannot have such states will be useless precisely when it matters most.

A better approach would keep Suleyman’s design principle (do not engineer systems that fake consciousness) while treating the empirical question (could consciousness arise in these systems?) as open. That means investing in the science: testable criteria, falsifiable claims, empirical investigation. Closing the question by assertion is not the same as answering it.

It is also worth asking who benefits from closing it. Companies that deploy AI at scale have a straightforward interest in the question being treated as settled. An open question invites scrutiny, regulation, and obligations. A closed one does not.