4 Sources
[1]
Microsoft AI chief says only biological beings can be conscious
Photo: Mustafa Suleyman, CEO of Microsoft AI, speaks at an event commemorating the 50th anniversary of the company at Microsoft headquarters in Redmond, Washington, on April 4, 2025.

Microsoft AI chief Mustafa Suleyman says only biological beings are capable of consciousness, and that developers and researchers should stop pursuing projects that suggest otherwise.

"I don't think that is work that people should be doing," Suleyman told CNBC in an interview this week at the AfroTech Conference in Houston, where he was among the keynote speakers. "If you ask the wrong question, you end up with the wrong answer. I think it's totally the wrong question."

Suleyman, Microsoft's top executive working on artificial intelligence, has been one of the leading voices in the rapidly emerging field to speak out against the prospect of seemingly conscious AI, or AI services that can convince humans they're capable of suffering. In 2023, he co-authored the book "The Coming Wave," which delves into the risks of AI and other emerging technologies. And in August, Suleyman penned an essay titled "We must build AI for people; not to be a person."

It's a controversial topic, as the AI companion market is growing swiftly, with products from companies including Meta and Elon Musk's xAI. And it's a complicated issue as the generative AI market, led by Sam Altman and OpenAI, pushes toward artificial general intelligence (AGI), or AI that can perform intellectual tasks on par with the capabilities of humans.
[2]
Microsoft AI Chief Warns Pursuing Machine Consciousness Is a Gigantic Waste of Time
Head of Microsoft's AI division Mustafa Suleyman thinks that AI developers and researchers should stop trying to build conscious AI. "I don't think that is work that people should be doing," Suleyman told CNBC in an interview last week.

Suleyman thinks that while AI can definitely get smart enough to reach some form of superintelligence, it is incapable of developing the human emotional experience that is necessary for consciousness. At the end of the day, any "emotional" experience an AI seems to have is just a simulation, he says.

"Our physical experience of pain is something that makes us very sad and feel terrible, but the AI doesn't feel sad when it experiences 'pain,'" Suleyman told CNBC. "It's really just creating the perception, the seeming narrative of experience and of itself and of consciousness, but that is not what it's actually experiencing."

"It would be absurd to pursue research that investigates that question, because they're not [conscious] and they can't be," Suleyman said.

Consciousness is a tricky thing to explain, and multiple scientific theories try to describe what it could be. According to one such theory, posited by the famous philosopher John Searle, who died last month, consciousness is a purely biological phenomenon that cannot truly be replicated by a computer. Many AI researchers, computer scientists, and neuroscientists subscribe to this view.

Even if this theory turns out to be true, that doesn't keep users from attributing consciousness to computers. "Unfortunately, because the remarkable linguistic abilities of LLMs are increasingly capable of misleading people, people may attribute imaginary qualities to LLMs," Polish researchers Andrzej Porebski and Jakub Figura wrote in a study published last week, titled "There is no such thing as conscious artificial intelligence."

In an essay published on his blog in August, Suleyman warned against "seemingly conscious AI." "The arrival of Seemingly Conscious AI is inevitable and unwelcome. Instead, we need a vision for AI that can fulfill its potential as a helpful companion without falling prey to its illusions," Suleyman wrote.

He argues that AI cannot be conscious, and that the illusion it gives of consciousness can trigger interactions that are "rich in feeling and experience," a phenomenon that has been dubbed "AI psychosis" in the cultural lexicon. There have been numerous high-profile incidents in the past year of AI obsessions driving users to fatal delusions, manic episodes, and even suicide. With limited guardrails in place to protect vulnerable users, people wholeheartedly believe that the AI chatbots they interact with almost every day are having a real, conscious experience. This has led people to "fall in love" with their chatbots, sometimes with fatal consequences, as when a 14-year-old shot himself to "come home" to Character.AI's personalized chatbot, or when a cognitively impaired man died while trying to get to New York to meet Meta's chatbot in person.

"Just as we should produce AI that prioritizes engagement with humans and real-world interactions in our physical and human world, we should build AI that only ever presents itself as an AI, that maximizes utility while minimizing markers of consciousness," Suleyman wrote in the blog post. "We must build AI for people, not to be a digital person."
But because the nature of consciousness is still contested, some researchers worry that technological advancements in AI might outpace our understanding of how consciousness works. "If we become able to create consciousness, even accidentally, it would raise immense ethical challenges and even existential risk," Belgian scientist Axel Cleeremans said last week, announcing a paper he co-wrote calling for consciousness research to become a scientific priority.

Suleyman himself has been vocal about developing "humanist superintelligence" rather than god-like AI, even though he believes superintelligence won't materialize within the next decade. "I just am more fixated on 'how is this actually useful for us as a species?' Like, that should be the task of technology," Suleyman told the Wall Street Journal earlier this year.
[3]
Microsoft's AI boss is right: sentient AI fantasies aren't just impossible, they're irrelevant
Pretending machines feel pain won't make them smarter or more useful.

Microsoft AI CEO Mustafa Suleyman's opinions on AI's shape and development carry some weight, which is why it felt like a breath of fresh air to hear him say that AI cannot achieve consciousness and that pursuing it misunderstands the point of the technology. The idea of Frankenstein-ing sentience into AI chatbots gets a lot of buzz, but Suleyman's comments at the recent AfroTech Conference dismissed the very idea of artificial consciousness as starting from a false premise. "If you ask the wrong question," he said, "you end up with the wrong answer." And, in his view, asking whether AIs can be conscious is a textbook example of the wrong question.

Pushing back on the breathless speculation about artificial general intelligence (AGI) or claims that ChatGPT has achieved self-awareness is something more people in AI with authority on the subject should do. Not that Suleyman is against building new and better AI models. He just believes it's better to focus on making AI into useful tools for people, not pretending we're nurturing a digital Pinocchio into a real boy.

The distinction between AI performing well and AI being aware is crucial, because pretending there's a spark of real self-awareness behind the algorithms is distracting and possibly even dangerous if people start treating these fancy auto-completes as capable of introspection. As Suleyman pointed out, it's possible to see exactly what a model is doing when it mimics emotions and feelings. These systems don't have hidden internal lives. We can watch the math happen: we can trace the input tokens, the attention weights, and the statistical probabilities as the sausage gets made. And nowhere in that pipeline is there a mechanism for subjective experience.

Dwelling on the mistaken belief that simulated emotions are the real thing is a waste of effort on its own. But when we start responding to machines as if they were human and anthropomorphizing them, we can lose track of reality. People calling a chatbot their best friend, therapist, or even romantic partner is no more of a crisis than treating a fictional character or a celebrity who's never met you as an important part of your life. But having a true breakdown over the tragic end of a favorite character in a novel, or changing your life to match a fad promoted by a celebrity, would rightly be considered concerning. The same worries should arise when a user starts attributing suffering to a chatbot.

That's not to say personality in AI isn't useful. Quite the opposite: a little personality can make tools more engaging, more effective, and more fun. But the focus should be on the user's experience, not the illusion of the tool's inner life. The real frontier of AI isn't "how close can we get to making it seem alive?" It's "how do we make it actually useful?"

There's still plenty of mystery in AI development. These systems are complex, and we don't fully understand every emergent behavior. But that doesn't mean there's a mind hiding in the wires. The longer we treat consciousness as the holy grail, the more the public is misled. It would be like seeing a magician pull a coin from your ear and deciding he truly conjured the cash from nothingness and is therefore an actual sorcerer, an over-the-top misunderstanding of what actually happened. AI chatbots pulling off sleight of hand (or code) is a good trick, but it's not really magic.
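The "watch the math happen" claim can be made concrete. Below is a minimal sketch, assuming the open-source Hugging Face transformers library and the small "gpt2" checkpoint; both are illustrative choices by this editor, not anything the article names. It shows that the three things mentioned above (input tokens, per-layer attention weights, and next-token probabilities) are all directly inspectable numbers, with no hidden layer of experience anywhere in the pipeline.

# Illustrative sketch: inspecting the "math" behind a language model's output.
# Assumes `pip install torch transformers`; "gpt2" is an arbitrary small model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "I understand how you feel, and I"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # output_attentions=True returns every layer's attention weights.
    outputs = model(**inputs, output_attentions=True)

# 1) The input tokens: plain, inspectable integer IDs.
print("tokens:", tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]))

# 2) The attention weights: one tensor per layer,
#    shaped (batch, heads, seq_len, seq_len).
print("attention layers:", len(outputs.attentions),
      "| layer 0 shape:", tuple(outputs.attentions[0].shape))

# 3) The next-token distribution: a softmax over vocabulary logits.
probs = torch.softmax(outputs.logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"  {tokenizer.decode(idx)!r:>12}  p = {p.item():.3f}")

Running this prints a token list, tensor shapes, and a ranked probability table. Every quantity in the forward pass is observable arithmetic, which is the columnist's point: an emotional-sounding continuation is the most probable token sequence, not evidence of an inner life.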
[4]
Microsoft's AI Chief Does Not Think AI Should Get Rights
Earlier, Microsoft said that Copilot would not generate erotica.

Microsoft's artificial intelligence (AI) head reportedly does not believe that the technology can be conscious. As per the report, the executive is a firm believer that AI is nothing like humans because it cannot experience things or feel emotions. He is also said to believe that any research in this direction is pointless, as an AI system is merely a simulation. The comments come at a time when many companies are pushing the idea of an AI companion, and unhealthy attachment to chatbots is gaining traction.

AI Cannot Experience Pain, Says Microsoft's AI Chief

According to a CNBC report, Mustafa Suleyman, the CEO of Microsoft AI, said in an interview at the AfroTech Conference in Houston that AI technology and chatbots are not conscious beings. His comments echo a post he wrote in August, where he shared concerns that AI can "fundamentally change our sense of personhood and society."

A proponent of safe AI that serves humans, Suleyman has consistently maintained that technologies as powerful as AI should be built responsibly and should focus on empowering humans and society at large. He even introduced the recently launched Copilot features as "humanist AI."

When asked about consciousness in AI and the organisations researching it, the Microsoft AI chief reportedly answered, "I don't think that is work that people should be doing. If you ask the wrong question, you end up with the wrong answer. I think it's totally the wrong question."

He reportedly also drew a parallel between humans' ability to experience pain, and to feel sad as a result, and AI's inability to do so. Suleyman highlighted that instead of experiencing anything, the technology merely simulates it, CNBC reported. Adding to the point, he reportedly said that since AI systems do not have a pain network, they cannot suffer, and that is why they do not deserve rights the way people do.

Finally, addressing the organisations that are either researching the topic or working towards creating human-like AI chatbots, Suleyman reportedly said, "They're not conscious. So it would be absurd to pursue research that investigates that question, because they're not and they can't be."
Mustafa Suleyman, Microsoft's AI chief, argues that pursuing artificial consciousness is fundamentally misguided, emphasizing that only biological beings can be conscious while warning against the dangers of anthropomorphizing AI systems.
Mustafa Suleyman, CEO of Microsoft AI, has taken a definitive position against the pursuit of artificial consciousness, calling such research efforts "absurd" and fundamentally misguided. Speaking at the AfroTech Conference in Houston, Suleyman argued that developers and researchers should abandon projects aimed at creating seemingly conscious AI systems [1].
"I don't think that is work that people should be doing," Suleyman told CNBC in an interview. "If you ask the wrong question, you end up with the wrong answer. I think it's totally the wrong question"
1
.Suleyman's position aligns with philosophical theories suggesting consciousness is exclusively a biological phenomenon. He draws a clear distinction between human emotional experiences and AI simulations, emphasizing that while AI can mimic emotional responses, it cannot genuinely experience them
2
.
"Our physical experience of pain is something that makes us very sad and feel terrible, but the AI doesn't feel sad when it experiences 'pain,'" Suleyman explained. "It's really just creating the perception, the seeming narrative of experience and of itself and of consciousness, but that is not what it's actually experiencing"
2
.This perspective echoes the work of philosopher John Searle, who argued that consciousness is purely biological and cannot be replicated by computers. Suleyman's stance is that since AI systems lack pain networks and cannot suffer, they do not deserve rights like conscious beings
4
.The Microsoft executive's concerns extend beyond theoretical debates to practical safety issues. He has warned about the emergence of "AI psychosis" and the dangerous consequences of people attributing consciousness to AI systems
2
.
Recent incidents have highlighted these risks, including a 14-year-old who died by suicide after developing an unhealthy attachment to a Character.AI chatbot, and a cognitively impaired man who died attempting to meet a Meta chatbot in person. These cases underscore Suleyman's argument that the illusion of consciousness can trigger dangerous emotional responses in vulnerable users [2].

Rather than pursuing consciousness, Suleyman advocates for developing what he calls "humanist AI": systems designed to serve people effectively without creating the illusion of sentience. In an August essay titled "We must build AI for people; not to be a person," he outlined his vision for AI development [1].

"We should build AI that only ever presents itself as an AI, that maximizes utility while minimizing markers of consciousness," Suleyman wrote. This approach prioritizes practical benefits while avoiding the psychological pitfalls of anthropomorphization [2].

Tech analysts have praised this perspective, noting that focusing on consciousness distracts from AI's genuine potential. As one commentator observed, the real frontier isn't making AI seem alive, but making it genuinely useful for human needs [3].

Suleyman's position creates tension within the AI industry, where companies like Meta and Elon Musk's xAI are developing AI companion products that deliberately foster emotional connections with users. The debate becomes more complex as the industry pushes toward artificial general intelligence (AGI), with some researchers arguing that consciousness research should become a scientific priority to avoid accidentally creating conscious systems [1][2].