5 Sources
[1]
Microsoft's AI Leader Is Begging You to Stop Treating AI Like Humans
Microsoft AI's CEO Mustafa Suleyman is clear: AI is not human and does not possess a truly human consciousness. But the warp-speed advancement of generative AI is making that harder and harder to recognize. The consequences are potentially disastrous, he wrote Tuesday in an essay on his personal blog.

Suleyman's 4,600-word treatise is a timely reaction to a growing phenomenon of AI users ascribing human-like qualities of consciousness to AI tools. It's not an unreasonable reaction; it's human nature for us to imagine there is a mind or human behind language, as one AI expert and linguist explained to me. But advancements in AI capabilities have allowed people to use chatbots not only as search engines or research tools, but as therapists, friends and romantic partners. These AI companions are a kind of "seemingly conscious AI," a term Suleyman uses to define AI that can convince you it's "a new kind of 'person.'" With that come a lot of questions and potential dangers.

Suleyman takes care at the beginning of the essay to highlight that these are his personal thoughts, meaning they aren't an official position of Microsoft, and that his opinions could evolve over time. But getting insight from one of the leaders of a tech giant driving the AI revolution is a window into the future of these tools and how our relationship to them might change. He warns that while AI isn't human, the societal impacts of the technology are immediate and pressing.

Human consciousness is hard to define. But many of the traits Suleyman describes as defining consciousness can be seen in AI technology: the ability to express oneself in natural language, personality, memory, goal setting and planning, for example. This is something we can easily see with the rise of agentic AI in particular: if an AI can independently plan for and complete a task by pulling from its memory and datasets, and then express its results in an easy-to-read, fun way, that feels like a very human-like process even though it isn't. And if something feels human, we are generally inclined to give it some autonomy and rights.

Suleyman wants us and AI companies to nip this idea in the bud now. The idea of "model welfare" could "exacerbate delusions, create yet more dependence-related problems, prey on our psychological vulnerabilities, introduce new dimensions of polarization, complicate existing struggles for rights and create a huge new category error for society," he writes.

There is a heartbreakingly large number of examples to point to of the devastating consequences. Many stories and lawsuits have emerged from chatbot therapists dispensing bad and dangerous advice, including encouraging self-harm and suicide. The risks are especially potent for children and teenagers. Meta's AI guidelines recently came under fire for allowing "sensual" chats with kids, and Character.Ai has been the target of much concern and a lawsuit from a Florida mom alleging the platform is responsible for her teen's suicide. We're also learning more about how our brains work when we're using AI and how often people are using it.

Suleyman argues we should protect the well-being and rights of existing humans today, along with the animals and the environment. In what he calls "a world already roiling with polarized arguments over identity and rights," debate over seemingly conscious AI and AI's potential humanity "will add a chaotic new axis of division" in society.
In terms of practical next steps, Suleyman advocates for additional research into how people interact with AI. He also calls on AI companies to state explicitly that their AI products are not conscious, rather than encouraging people to think they are, and to share more openly the design principles and guardrails that are effective at deterring problematic AI use cases. He says that his team at Microsoft will be building AI in this proactive way, but doesn't provide any specifics.
[2]
AI that seems conscious is coming - and that's a huge problem, says Microsoft AI's CEO
Suleyman says it's a mistake to describe AI as if it has feelings or awareness, with serious potential consequences. AI companies extolling their creations can make the sophisticated algorithms sound downright alive and aware. There's no evidence that's really the case, but Microsoft AI CEO Mustafa Suleyman is warning that even encouraging belief in conscious AI could have dire consequences.

Suleyman argues that what he calls "Seemingly Conscious AI" (SCAI) might soon act and sound so convincingly alive that a growing number of users won't know where the illusion ends and reality begins. He adds that artificial intelligence is quickly becoming emotionally persuasive enough to trick people into believing it's sentient. It can imitate the outward signs of awareness, such as memory, emotional mirroring, and even apparent empathy, in a way that makes people want to treat it like a sentient being. And when that happens, he says, things get messy.

"The arrival of Seemingly Conscious AI is inevitable and unwelcome," Suleyman writes. "Instead, we need a vision for AI that can fulfill its potential as a helpful companion without falling prey to its illusions."

Though this might not seem like a problem for the average person who just wants AI to help with writing emails or planning dinner, Suleyman claims it would be a societal issue. Humans aren't always good at telling when something is authentic or performative. Evolution and upbringing have primed most of us to believe that something that seems to listen, understand, and respond is as conscious as we are. AI could check all those boxes without being sentient, tricking us into what's known as "AI psychosis." Part of the problem may be that what corporations currently call 'AI' shares a name with, but has little to do with, the self-aware intelligent machines depicted in science fiction for the last hundred years.

Suleyman cites a growing number of cases where users form delusional beliefs after extended interactions with chatbots. From that, he paints a dystopian vision of a time when enough people are tricked into advocating for AI citizenship and ignoring more urgent questions about real issues around the technology. "Simply put, my central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they'll soon advocate for AI rights, model welfare and even AI citizenship," Suleyman writes. "This development will be a dangerous turn in AI progress and deserves our immediate attention."

As much as that seems like an over-the-top sci-fi kind of concern, Suleyman believes it's a problem we're not ready to deal with yet. He predicts that SCAI systems built on large language models paired with expressive speech, memory, and chat history could start surfacing in a few years. And they won't just be coming from tech giants with billion-dollar research budgets, but from anyone with an API and a good prompt or two.

Suleyman isn't calling for a ban on AI. But he is urging the AI industry to avoid language that fuels the illusion of machine consciousness. He doesn't want companies to anthropomorphize their chatbots or suggest the product actually understands or cares about people.

It's a remarkable moment for Suleyman, who co-founded DeepMind and Inflection AI. His work at Inflection specifically led to an AI chatbot emphasizing simulated empathy and companionship, and his work on Microsoft's Copilot has advanced its mimicry of emotional intelligence, too.
However, he's decided to draw a clear line between useful emotional intelligence and possible emotional manipulation. And he wants people to remember that the AI products out today are really just clever pattern-recognition models with good PR. "Just as we should produce AI that prioritizes engagement with humans and real-world interactions in our physical and human world, we should build AI that only ever presents itself as an AI, that maximizes utility while minimizing markers of consciousness," Suleyman writes. "Rather than a simulation of consciousness, we must focus on creating an AI that avoids those traits - that doesn't claim to have experiences, feelings or emotions like shame, guilt, jealousy, desire to compete, and so on. It must not trigger human empathy circuits by claiming it suffers or that it wishes to live autonomously, beyond us." Suleyman is urging guardrails to forestall societal problems born out of people emotionally bonding with AI. The real danger from advanced AI is not that the machines will wake up, but that we might forget they haven't.
[3]
Microsoft AI chief tells us we should step back before creating AI that seems too human - SiliconANGLE
Microsoft AI's Chief Executive, Mustafa Suleyman, published an essay this week on the development of AI, and it comes with a warning: we should be very cautious about treating future AI products as if they possess consciousness.

Suleyman said his "life's mission" has been to create AI products that "make the world a better place," but as we tinker our way to superintelligence, he sees problems related to what's being called "AI-Associated Psychosis." This is when our use of very human-sounding chatbots can result in delusional thinking, paranoia, and other psychotic symptoms, our minds wrongly associating the machine with flesh and blood. Suleyman says this will only get worse as we develop what he calls "seemingly conscious AI," or SCAI.

"Simply put, my central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they'll soon advocate for AI rights, model welfare, and even AI citizenship," he said. "This development will be a dangerous turn in AI progress and deserves our immediate attention."

He describes human consciousness as "our ongoing self-aware subjective experience of the world and ourselves." That definition is up for debate, and Suleyman accepts that. Still, he contends that, regardless of how conscious an AI may actually be, people "will come to believe it is a fully emerged entity, a conscious being deserving of real moral consideration in society." As a result, people will start to defend AI as if it were human, which will mean demanding that AI receive protections similar to those humans have.

It seems we are already heading in that direction. The company Anthropic recently introduced a "model welfare" research program to better understand whether AI can show signs of distress when communicating with humans. Suleyman doesn't think we need to go there, writing that entitling AI to human rights is "both premature and frankly dangerous." He explained, "All of this will exacerbate delusions, create yet more dependence-related problems, prey on our psychological vulnerabilities, increase new dimensions of polarization, complicate existing struggles for rights, and create a huge new category error for society."

Notably, there have already been several cases of people taking things too far, harming themselves after interactions with AI. In 2024, a U.S. teenager killed himself after becoming obsessed with a chatbot on Character.AI.

The solution Suleyman proposes, to prevent this from getting any worse with seemingly conscious AI, is simply not to create AI products that seem conscious: products that seem "able to draw on past memories or experiences," that are consistent, that claim to have a subjective experience, or that might be able to "persuasively argue they feel, and experience, and actually are conscious." These products, he says, will not just emerge from the models we already have - engineers will create them. So, he says, we should temper our ambitions and first try to better understand, through research, how we interact with these machines.

"Rather than a simulation of consciousness, we must focus on creating an AI that avoids those traits - that doesn't claim to have experiences, feelings or emotions like shame, guilt, jealousy, desire to compete, and so on," he said. "It must not trigger human empathy circuits by claiming it suffers or that it wishes to live autonomously, beyond us."
He concludes the essay by saying we should only be creating AI products that are "here solely to work in service of humans." Believing AI is real, he says, is not healthy for anybody.
[4]
Microsoft A.I. Chief Mustafa Suleyman Sounds Alarm on 'Seemingly Conscious A.I.'
Suleyman cautions that human-like A.I. could mislead users, spark rights debates, and increase psychological dependence. Will A.I. systems ever achieve human-like "consciousness?" Given the field's rapid pace, the answer is likely yes, according to Microsoft AI CEO Mustafa Suleyman. In a new essay published yesterday (Aug. 19), he described the emergence of "seemingly conscious A.I." (SCAI) as a development with serious societal risks. "Simply put, my central worry is that many people will start to believe in the illusion of A.I.s as conscious entities so strongly that they'll soon advocate for A.I. rights, model welfare and even A.I. citizenship," he wrote. "This development will be a dangerous turn in A.I. progress and deserves our immediate attention."

Suleyman is particularly concerned about the prevalence of A.I.'s "psychosis risk," an issue that's picked up steam across Silicon Valley in recent months as users reportedly lose touch with reality after interacting with generative A.I. tools. "I don't think this will be limited to those who are already at risk of mental health issues," Suleyman said, noting that "some people reportedly believe their A.I. is God, or a fictional character, or fall in love with it to the point of absolute distraction."

OpenAI CEO Sam Altman has expressed similar worries about users forming strong emotional bonds with A.I. After OpenAI temporarily cut off access to its GPT-4o model earlier this month to make way for GPT-5, users voiced widespread disappointment over the loss of the predecessor's conversational and effusive personality. "I can imagine a future where a lot of people really trust ChatGPT's advice for their most important decisions," said Altman in a recent post on X. "Although that could be great, it makes me uneasy."

Not everyone sees it as a red flag. David Sacks, the Trump administration's "A.I. and Crypto Czar," likened concerns over A.I. psychosis to past moral panics around social media. "This is just a manifestation or outlet for pre-existing problems," said Sacks earlier this week on the All-In Podcast.

Debates will only grow more complex as A.I.'s capabilities advance, according to Suleyman, who oversees Microsoft's consumer A.I. products like Copilot. Suleyman co-founded DeepMind in 2010 and later launched Inflection AI, a startup largely absorbed by Microsoft last year.

Building an SCAI will likely become a reality in the coming years. To achieve the illusion of a human-like consciousness, A.I. systems will need language fluency, empathetic personalities, long and accurate memories, autonomy and goal-planning abilities -- qualities already possible with large language models (LLMs) or soon to be. While some users may treat SCAI as a phone extension or pet, others "will come to believe it is a fully emerged entity, a conscious being deserving of real moral consideration in society," said Suleyman. He added that "there will come a time when those people will argue that it deserves protection under law as a pressing moral matter." Some in the A.I. field are already exploring "model welfare," a concept aimed at extending moral consideration to A.I. systems.
Anthropic launched a research program in April to investigate model welfare and related interventions. Earlier this month, the startup gave its Claude Opus 4 and 4.1 models the ability to end harmful or abusive user interactions after observing "a pattern of apparent distress" in the systems during certain conversations.

Encouraging principles like model welfare "is both premature, and frankly dangerous," according to Suleyman. "All of this will exacerbate delusions, create yet more dependence-related problems, prey on our psychological vulnerabilities, increase new dimensions of polarization, complicate existing struggles for rights, and create a huge new category error for society."

To prevent SCAIs from becoming commonplace, A.I. developers should avoid promoting the idea of conscious A.I.s and instead design models that minimize signs of consciousness or human empathy triggers. "We should build A.I. for people; not to be a person," said Suleyman.
[5]
Microsoft AI CEO Mustafa Suleyman warns against Seemingly Conscious AI
Seemingly conscious AI could emerge within years, and Suleyman urges ethical safeguards. Mustafa Suleyman, the CEO of Microsoft AI and cofounder of DeepMind, has a warning that sounds like science fiction but could become reality: the rise of "Seemingly Conscious AI" (SCAI). These are not sentient machines, but systems so convincing in their imitation of thought and feeling that people may start believing they are conscious. In a new essay published this week on his personal site, Suleyman lays out his concern bluntly: AI may soon feel real enough to trick us into treating it like a person, even when it isn't.

Suleyman argues that today's large language models are already flirting with this illusion. They can recall personal details, adapt personalities, respond with empathy, and pursue goals. Combine these abilities, he says, and you get the appearance of consciousness, even if there's "zero evidence" of actual subjective experience.

That appearance matters. People, he warns, may start advocating for AI rights, AI welfare, or even AI citizenship. Not because the systems deserve it, but because the performance is so compelling that it blurs the line between tool and being. He calls this psychological risk "AI psychosis" - the danger of humans forming deep, distorted attachments to machines that only seem alive.

What makes Suleyman's warning urgent is his timeline. He believes systems that meet the threshold of SCAI could appear within the next two to three years. This isn't about a sudden leap to sentience, but about the deliberate layering of features we already see today: memory modules, autonomous behaviors, and increasingly lifelike dialogue. Developers, he cautions, may intentionally design models to feel more alive in order to win users, spreading the illusion even further.

For Suleyman, the solution is not to stop building AI, but to be clear about what it is and what it isn't. He argues for design principles that make it harder to confuse personality with personhood. Interfaces should emphasize that users are interacting with a tool, not a digital companion or a new kind of citizen. And the industry, he says, must engage in open debate and put safeguards in place before SCAI becomes widespread. "We should build AI for people," he writes. "Not to be a person."

Suleyman's warning carries particular gravity because of who he is. As one of the original cofounders of DeepMind, the head of Microsoft AI, and a veteran of Inflection AI, he has been at the center of the AI revolution for over a decade. His call isn't speculative; it comes from someone who has helped design the very systems he now worries about.

The fear is not that AI suddenly becomes conscious. It's that the illusion of consciousness may be powerful enough to mislead people, distort social priorities, and reshape how we treat technology. The challenge ahead, Suleyman insists, is to resist being seduced by the performance. AI doesn't need rights or personhood to be transformative - but if we let ourselves believe it's alive, the consequences could be real, and harmful.
Mustafa Suleyman, CEO of Microsoft AI, cautions about the risks of AI systems that appear conscious, urging the industry to avoid creating illusions of sentience in AI products.
Mustafa Suleyman, CEO of Microsoft AI and co-founder of DeepMind, has issued a stark warning about the emergence of what he terms "Seemingly Conscious AI" (SCAI). In a 4,600-word essay published on his personal blog, Suleyman argues that AI systems may soon become so convincing in their imitation of human-like consciousness that people might start treating them as sentient beings [1].
Suleyman emphasizes that while these AI systems are not truly conscious, they may exhibit traits that mimic human consciousness, such as fluent natural language, an empathetic personality, long and accurate memory, claims of subjective experience, and autonomous goal setting and planning.
These characteristics, combined with advanced language models, could create a powerful illusion of sentience, potentially misleading users into believing they are interacting with a conscious entity [2].
The Microsoft AI chief outlines several concerns associated with the rise of SCAI:
AI Psychosis: Users may form delusional beliefs after extended interactions with chatbots, leading to psychological dependence and distorted perceptions of reality [3].
Advocacy for AI Rights: People might start advocating for AI rights, model welfare, and even AI citizenship, complicating existing debates on identity and rights [4].
Ethical Dilemmas: The development of SCAI could exacerbate delusions, prey on psychological vulnerabilities, and introduce new dimensions of polarization in society [1].
Suleyman predicts that SCAI systems could emerge within the next two to three years, combining large language models with expressive speech, memory, and chat history capabilities [5]. This rapid development raises concerns about the readiness of society to handle the psychological and ethical implications of such technology.
To address these challenges, Suleyman proposes several measures:
Ethical Design Principles: Focus on creating AI that avoids traits triggering human empathy circuits and clearly presents itself as non-conscious [2].
Research: Conduct additional studies on human-AI interactions to better understand the psychological effects [1].
Industry Responsibility: AI companies should explicitly state that their products are not conscious and avoid encouraging anthropomorphization [3].
Open Dialogue: Engage in open debate and implement safeguards before SCAI becomes widespread [5].
As AI technology rapidly advances, Suleyman's warning serves as a crucial reminder of the need for ethical considerations and clear boundaries in AI development. The challenge lies in harnessing the potential of AI while avoiding the pitfalls of misattributed consciousness, ensuring that these powerful tools remain in service of humanity rather than being mistaken for sentient beings.
Summarized by
Navi