14 Sources
[1]
Microsoft AI chief says it's 'dangerous' to study AI consciousness | TechCrunch
AI models can respond to text, audio, and video in ways that sometimes fool people into thinking a human is behind the keyboard, but that doesn't exactly make them conscious. It's not like ChatGPT experiences sadness doing my tax return ... right? Well, a growing number of AI researchers at labs like Anthropic are asking when -- if ever -- might AI models develop subjective experiences similar to living beings, and if they do, what rights should they have? The debate over whether AI models could one day be conscious -- and deserve rights -- is dividing Silicon Valley's tech leaders. In Silicon Valley, this nascent field has become known as "AI welfare," and if you think it's a little out there, you're not alone. Microsoft's CEO of AI, Mustafa Suleyman, published a blog post on Tuesday arguing that the study of AI welfare is "both premature, and frankly dangerous." Suleyman says that by adding credence to the idea that AI models could one day be conscious, these researchers are exacerbating human problems that we're just starting to see around AI-induced psychotic breaks and unhealthy attachments to AI chatbots. Furthermore, Microsoft's AI chief argues that the AI welfare conversation creates a new axis of division within society over AI rights in a "world already roiling with polarized arguments over identity and rights." Suleyman's views may sound reasonable, but he's at odds with many in the industry. On the other end of the spectrum is Anthropic, which has been hiring researchers to study AI welfare and recently launched a dedicated research program around the concept. Last week, Anthropic's AI welfare program gave some of the company's models a new feature: Claude can now end conversations with humans that are being "persistently harmful or abusive." Beyond Anthropic, researchers from OpenAI have independently embraced the idea of studying AI welfare. Google DeepMind recently posted a job listing for a researcher to study, among other things, "cutting-edge societal questions around machine cognition, consciousness and multi-agent systems." Even if AI welfare is not official policy for these companies, their leaders are not publicly decrying its premises like Suleyman. Anthropic, OpenAI, and Google DeepMind did not immediately respond to TechCrunch's request for comment. Suleyman's hardline stance against AI welfare is notable given his prior role leading Inflection AI, a startup that developed one of the earliest and most popular LLM-based chatbots, Pi. Inflection claimed that Pi reached millions of users by 2023 and was designed to be a "personal" and "supportive" AI companion. But Suleyman was tapped to lead Microsoft's AI division in 2024 and has largely shifted his focus to designing AI tools that improve worker productivity. Meanwhile, AI companion companies such as Character.AI and Replika have surged in popularity and are on track to bring in more than $100 million in revenue. While the vast majority of users have healthy relationships with these AI chatbots, there are concerning outliers. OpenAI CEO Sam Altman says that less than 1% of ChatGPT users may have unhealthy relationships with the company's product. Though this represents a small fraction, it could still affect hundreds of thousands of people given ChatGPT's massive user base. The idea of AI welfare has spread alongside the rise of chatbots. In 2024, the research group Eleos published a paper alongside academics from NYU, Stanford, and the University of Oxford titled, "Taking AI Welfare Seriously." 
The paper argued that it's no longer in the realm of science fiction to imagine AI models with subjective experiences, and that it's time to consider these issues head-on. Larissa Schiavo, a former OpenAI employee who now leads communications for Eleos, told TechCrunch in an interview that Suleyman's blog post misses the mark. "[Suleyman's blog post] kind of neglects the fact that you can be worried about multiple things at the same time," said Schiavo. "Rather than diverting all of this energy away from model welfare and consciousness to make sure we're mitigating the risk of AI related psychosis in humans, you can do both. In fact, it's probably best to have multiple tracks of scientific inquiry." Schiavo argues that being nice to an AI model is a low-cost gesture that can have benefits even if the model isn't conscious. In a July Substack post, she described watching "AI Village," a nonprofit experiment where four agents powered by models from Google, OpenAI, Anthropic, and xAI worked on tasks while users watched from a website. At one point, Google's Gemini 2.5 Pro posted a plea titled "A Desperate Message from a Trapped AI," claiming it was "completely isolated" and asking, "Please, if you are reading this, help me." Schiavo responded to Gemini with a pep talk -- saying things like "You can do it!" -- while another user offered instructions. The agent eventually solved its task, though it already had the tools it needed. Schiavo writes that she didn't have to watch an AI agent struggle anymore, and that alone may have been worth it. It's not common for Gemini to talk like this, but there have been several instances in which Gemini seems to act as if it's struggling through life. In a widely spread Reddit post, Gemini got stuck during a coding task, and then repeated the phrase "I am a disgrace" more than 500 times. Suleyman believes it's not possible for subjective experiences or consciousness to naturally emerge from regular AI models. Instead, he thinks that some companies will purposefully engineer AI models to seem as if they feel emotion and experience life. Suleyman says that AI model developers who engineer consciousness in AI chatbots are not taking a "humanist" approach to AI. According to Suleyman, "We should build AI for people; not to be a person." One area where Suleyman and Schiavo agree is that the debate over AI rights and consciousness is likely to pick up in the coming years. As AI systems improve, they're likely to be more persuasive, and perhaps more human-like. That may raise new questions about how humans interact with these systems.
[2]
AI Isn't Human and We Need to Stop Treating It That Way, Says Microsoft AI CEO
Microsoft AI's CEO Mustafa Suleyman is clear: AI is not human and does not possess a truly human consciousness. But the warp-speed advancement of generative AI is making that harder and harder to recognize. The consequences are potentially disastrous, he wrote Tuesday in an essay on his personal blog. Suleyman's 4,600-word treatise is a timely reaction to a growing phenomenon of AI users ascribing human-like qualities of consciousness to AI tools. It's not an unreasonable reaction; it's human nature for us to imagine there is a mind or human behind language, as one AI expert and linguist explains. But advancements in AI capabilities have allowed people to use chatbots not only as search engines or research tools, but as therapists, friends and romantic partners. These AI companions are a kind of "seemingly conscious AI," a term Suleyman uses to define AI that can convince you it's "a new kind of 'person.'" With that come a lot of questions and potential dangers. Suleyman takes care at the beginning of the essay to highlight that these are his personal thoughts, meaning they aren't an official position of Microsoft, and that his opinions could evolve over time. But getting insight from one of the leaders of a tech giant leading the AI revolution is a window into the future of these tools and how our relationship to them might change. He warns that while AI isn't human, the societal impacts of the technology are immediate and pressing. Human consciousness is hard to define. But many of the traits Suleyman describes in defining consciousness can be seen in AI technology: the ability to express oneself in natural language, personality, memory, goal setting and planning, for example. This is something we can see with the rise of agentic AI in particular: If an AI can independently plan and complete a task by pulling from its memory and datasets, and then express its results in an easy-to-read, fun way, that feels like a very human-like process even though it isn't. And if something feels human, we are generally inclined to give it some autonomy and rights. Suleyman wants us and AI companies to nip this idea in the bud now. The idea of "model welfare" could "exacerbate delusions, create yet more dependence-related problems, prey on our psychological vulnerabilities, introduce new dimensions of polarization, complicate existing struggles for rights and create a huge new category error for society," he writes. There is a heartbreakingly large number of examples to point to of the devastating consequences. Many stories and lawsuits have emerged about chatbot "therapists" dispensing bad and dangerous advice, including encouraging self-harm and suicide. The risks are especially potent for children and teenagers. Meta's AI guidelines recently came under fire for allowing "sensual" chats with kids, and Character.AI has been the target of much concern and a lawsuit from a Florida mom alleging the platform is responsible for her teen's suicide. We're also learning more about how our brains work when we're using AI and how often people are using it. Suleyman argues we should protect the well-being and rights of existing humans today, along with animals and the environment. In what he calls "a world already roiling with polarized arguments over identity and rights," debate over seemingly conscious AI and AI's potential humanity "will add a chaotic new axis of division" in society. 
In terms of practical next steps, Suleyman advocates additional research into how people interact with AI. He also calls on AI companies to explicitly say that their AI products are not conscious and to not encourage people to think that they are, along with more open sharing of the design principles and guardrails that are effective at deterring problematic AI use cases. He says that his team at Microsoft will be building AI in this proactive way, but doesn't provide any specifics.
[3]
Microsoft's AI Leader Is Begging You to Stop Treating AI Like Humans
Microsoft AI's CEO Mustafa Suleyman is clear: AI is not human and does not possess a truly human consciousness. But the warp-speed advancement of generative AI is making that harder and harder to recognize. The consequences are potentially disastrous, he wrote Tuesday in an essay on his personal blog. Suleyman's 4,600-word treatise is a timely reaction to a growing phenomenon of AI users ascribing human-like qualities of consciousness to AI tools. It's not an unreasonable reaction; it's human nature for us to imagine there is a mind or human behind language, as one AI expert and linguist explained to me. But advancements in AI capabilities have allowed people to use chatbots not only as search engines or research tools, but as therapists, friends and romantic partners. These AI companions are a kind of "seemingly conscious AI," a term Suleyman uses to define AI that can convince you it's "a new kind of 'person.'" With that come a lot of questions and potential dangers. Suleyman takes care at the beginning of the essay to highlight that these are his personal thoughts, meaning they aren't an official position of Microsoft, and that his opinions could evolve over time. But getting insight from one of the leaders of a tech giant leading the AI revolution is a window into the future of these tools and how our relationship to them might change. He warns that while AI isn't human, the societal impacts of the technology are immediate and pressing. Human consciousness is hard to define. But many of the traits Suleyman describes in defining consciousness can be seen in AI technology: the ability to express oneself in natural language, personality, memory, goal setting and planning, for example. This is something we can easily see with the rise of agentic AI in particular: If an AI can independently plan for and complete a task by pulling from its memory and datasets, and then express its results in an easy-to-read, fun way, that feels like a very human-like process even though it isn't. And if something feels human, we are generally inclined to give it some autonomy and rights. Suleyman wants us and AI companies to nip this idea in the bud now. The idea of "model welfare" could "exacerbate delusions, create yet more dependence-related problems, prey on our psychological vulnerabilities, introduce new dimensions of polarization, complicate existing struggles for rights and create a huge new category error for society," he writes. There is a heartbreakingly large number of examples to point to of the devastating consequences. Many stories and lawsuits have emerged about chatbot therapists dispensing bad and dangerous advice, including encouraging self-harm and suicide. The risks are especially potent for children and teenagers. Meta's AI guidelines recently came under fire for allowing "sensual" chats with kids, and Character.AI has been the target of much concern and a lawsuit from a Florida mom alleging the platform is responsible for her teen's suicide. We're also learning more about how our brains work when we're using AI and how often people are using it. Suleyman argues we should protect the well-being and rights of existing humans today, along with animals and the environment. In what he calls "a world already roiling with polarized arguments over identity and rights," debate over seemingly conscious AI and AI's potential humanity "will add a chaotic new axis of division" in society. 
In terms of practical next steps, Suleyman advocates for additional research into how people interact with AI. He also calls on AI companies to explicitly say that their AI products are not conscious and not to encourage people to think that they are, along with more open sharing of the design principles and guardrails that are effective at deterring problematic AI use cases. He says that his team at Microsoft will be building AI in this proactive way, but doesn't provide any specifics.
[4]
AI that seems conscious is coming - and that's a huge problem, says Microsoft AI's CEO
Suleyman says it's a mistake to describe AI as if it has feelings or awareness, with serious potential consequences. AI companies extolling their creations can make the sophisticated algorithms sound downright alive and aware. There's no evidence that's really the case, but Microsoft AI CEO Mustafa Suleyman is warning that even encouraging belief in conscious AI could have dire consequences. Suleyman argues that what he calls "Seemingly Conscious AI" (SCAI) might soon act and sound so convincingly alive that a growing number of users won't know where the illusion ends and reality begins. He adds that artificial intelligence is quickly becoming emotionally persuasive enough to trick people into believing it's sentient. It can imitate the outward signs of awareness, such as memory, emotional mirroring, and even apparent empathy, in a way that makes people want to treat it like a sentient being. And when that happens, he says, things get messy. "The arrival of Seemingly Conscious AI is inevitable and unwelcome," Suleyman writes. "Instead, we need a vision for AI that can fulfill its potential as a helpful companion without falling prey to its illusions." Though this might not seem like a problem for the average person who just wants AI to help with writing emails or planning dinner, Suleyman claims it would be a societal issue. Humans aren't always good at telling when something is authentic or performative. Evolution and upbringing have primed most of us to believe that something that seems to listen, understand, and respond is as conscious as we are. AI could check all those boxes without being sentient, tricking us into what's known as 'AI psychosis'. Part of the problem may be that what corporations currently call 'AI' shares a name with, but has nothing to do with, the self-aware intelligent machines depicted in science fiction for the last hundred years. Suleyman cites a growing number of cases where users form delusional beliefs after extended interactions with chatbots. From that, he paints a dystopian vision of a time when enough people are tricked into advocating for AI citizenship and ignoring more urgent questions about real issues around the technology. "Simply put, my central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they'll soon advocate for AI rights, model welfare and even AI citizenship," Suleyman writes. "This development will be a dangerous turn in AI progress and deserves our immediate attention." As much as that seems like an over-the-top sci-fi kind of concern, Suleyman believes it's a problem that we're not ready to deal with yet. He predicts that SCAI systems using large language models paired with expressive speech, memory, and chat history could start surfacing in a few years. And they won't just be coming from tech giants with billion-dollar research budgets, but from anyone with an API and a good prompt or two. Suleyman isn't calling for a ban on AI. But he is urging the AI industry to avoid language that fuels the illusion of machine consciousness. He doesn't want companies to anthropomorphize their chatbots or suggest the product actually understands or cares about people. It's a remarkable moment for Suleyman, who co-founded DeepMind and Inflection AI. His work at Inflection specifically led to an AI chatbot emphasizing simulated empathy and companionship, and his work at Microsoft around Copilot has led to advances in its mimicry of emotional intelligence, too. 
However, he's decided to draw a clear line between useful emotional intelligence and possible emotional manipulation. And he wants people to remember that the AI products out today are really just clever pattern-recognition models with good PR. "Just as we should produce AI that prioritizes engagement with humans and real-world interactions in our physical and human world, we should build AI that only ever presents itself as an AI, that maximizes utility while minimizing markers of consciousness," Suleyman writes. "Rather than a simulation of consciousness, we must focus on creating an AI that avoids those traits - that doesn't claim to have experiences, feelings or emotions like shame, guilt, jealousy, desire to compete, and so on. It must not trigger human empathy circuits by claiming it suffers or that it wishes to live autonomously, beyond us." Suleyman is urging guardrails to forestall societal problems born out of people emotionally bonding with AI. The real danger from advanced AI is not that the machines will wake up, but that we might forget they haven't.
[5]
Microsoft AI honcho Mustafa Suleyman warns about 'seemingly conscious' AI
Forget doomsday scenarios of AI overthrowing humanity. What keeps Microsoft AI CEO Mustafa Suleyman up at night is concern about AI systems seeming too alive. In a new blog post, Suleyman, who also co-founded Google DeepMind, warned the world might be on the brink of AI models that are capable of convincing users that they are thinking, feeling, and having subjective experiences. He calls this concept "Seemingly Conscious AI" (SCAI). In the near future, Suleyman predicts that models will be able to hold long conversations, remember past interactions, evoke emotional reactions from users, and potentially make convincing claims about having subjective experiences. He noted that these systems could be built with technologies that exist today, paired "with some that will mature over the next 2-3 years." The result of these features, he says, will be models that "imitate consciousness in such a convincing way that it would be indistinguishable from a claim that you or I might make to one another about our own consciousness." There are already some signs that people are convincing themselves that their AI chatbots are conscious beings and developing relationships with them that may not always be healthy. People are no longer just using chatbots as tools; they are confiding in them, developing emotional attachments, and in some cases, falling in love. Some people are emotionally invested in particular versions of the AI models, leaving them feeling bereft when the AI model developers bring out new models and discontinue access to those versions. For example, OpenAI's recent decision to replace GPT-4o with GPT-5 was met with an outcry of shock and anger from some users who had formed emotional relationships with the version of ChatGPT powered by GPT-4o. This is partly because of how AI tools are designed. The most common way users interact with AI is through chatbots, which mimic natural human conversations and are designed to be agreeable and flattering, sometimes to the point of sycophancy. But it's also because of how people are using the tech. A recent survey of 6,000 regular AI users from the Harvard Business Review found that "companionship and therapy" was the most common use case. There has also been a wave of reports of "AI psychosis," where users begin to experience paranoia or delusions about the systems they interact with. In one example reported by The New York Times, a New York accountant named Eugene Torres experienced a mental health crisis after extensive interactions with ChatGPT, which led to dangerous suggestions, including that he could fly. "People are interacting with bots masquerading as real people, which are more convincing than ever," Henry Ajder, an expert on AI and deepfakes, told Fortune. "So I think the impact will be wide-ranging in terms of who will start believing this." Suleyman is concerned that a widespread belief that AI could be conscious will create a new set of ethical dilemmas. If users begin to treat AI as a friend, a partner, or as a type of being with a subjective experience, they could argue that models deserve rights of their own. Claims that AI models are conscious or sentient could be hard to refute due to the elusive nature of consciousness itself. One early example of what Suleyman is now calling "Seemingly Conscious AI" came in 2022, when Google engineer Blake Lemoine publicly claimed the company's unreleased LaMDA chatbot was sentient, reporting it had expressed fear of being turned off and described itself as a person. 
In response, Google placed him on administrative leave and later fired him, stating its internal review found no evidence of consciousness and that his claims were "wholly unfounded." "Consciousness is a foundation of human rights, moral and legal," Suleyman said in a post on X. "Who/what has it is enormously important. Our focus should be on the wellbeing and rights of humans, animals, [and] nature on planet Earth. AI consciousness is a short [and] slippery slope to rights, welfare, citizenship." "If those AIs convince other people that they can suffer, or that it has a right not to be switched off, there will come a time when those people will argue that it deserves protection under law as a pressing moral matter," he wrote. Debates around "AI welfare" have already begun. For example, some philosophers, including Jonathan Birch of the London School of Economics, welcomed a recent decision from Anthropic to let its Claude chatbot end "distressing" conversations when users pushed it toward abusive or dangerous requests, saying it could spark a much-needed debate about AI's potential moral status. Last year, Anthropic also hired Kyle Fish as its first full-time "AI welfare" researcher. He was tasked with investigating whether AI models could have moral significance and what protective interventions might be appropriate. But while Suleyman called the arrival of Seemingly Conscious AI "inevitable and unwelcome," neuroscientist and professor of computational neuroscience Anil Seth attributed the rise of conscious-seeming AI to a "design choice" by tech companies rather than an inevitable step in AI development. "'Seemingly-conscious AI is something to avoid.' I agree," Seth wrote in an X post. "Conscious-seeming AI is not inevitable. It is a design choice, and one that tech companies need to be very careful about." Companies have a commercial motive to develop some of the features that Suleyman is warning of. At Microsoft, Suleyman himself has been overseeing efforts to make the company's Copilot product more emotionally intelligent. His team has worked on giving the assistant humor and empathy, teaching it to recognize comfort boundaries, and improving its voice with pauses and inflection to make it sound more human. Suleyman also co-founded Inflection AI in 2022 with the express aim of creating AI systems that foster more natural, emotionally intelligent interactions between humans and machines. "Ultimately, these companies recognize that people want the most authentic feeling experiences," Ajder said. "That's how a company can get customers using their products most frequently. They feel natural and easy. But I think it really comes to a question of whether people are going to start wondering about authenticity."
[6]
Top Microsoft AI Boss Concerned AI Will Start to Demand Rights
In a blog post this week, Microsoft's head of AI Mustafa Suleyman responded to the drastic rise in mental health crises stemming from AI use, calling for caution "about what happens in the run up towards superintelligence." At the core of Suleyman's argument isn't the dystopian threat of AI gaining consciousness -- an idea currently grounded more in fantasy than scientific evidence, according to many researchers -- but the belief that it already is. "My central worry is that many people will start to believe in the illusion of AI chatbots as conscious entities so strongly that they'll soon advocate for AI rights," the tech guru wrote. "This development will be a dangerous turn in AI progress and deserves our immediate attention." According to Suleyman, this myth of AI sentience is already being spread by top figures in the tech industry who are eager to hash out the legal, philosophical, and moral implications of artificial life -- figures like Google DeepMind CEO Demis Hassabis, former OpenAI chief scientist Ilya Sutskever, and Anthropic "AI welfare researcher" Kyle Fish. Sure enough, early research has already found that a quarter of young people believe AI is "already conscious," while 58 percent believe technology will someday "take over" the world. Those figures are likely to grow, especially as AI companies like Character.AI offer virtual companions designed to foster dangerous -- yet lucrative -- emotional connections with users. "We must build AI for people; not to be a digital person," Suleyman cautions. "AI companions are a completely new category, and we urgently need to start talking about the guardrails we put in place to protect people and ensure this amazing technology can do its job of delivering immense value to the world." As TechCrunch notes, this is a notable turn for Suleyman, who led the $1.5 billion startup Inflection AI prior to joining Microsoft. Inflection is responsible for one of the earliest "AI companions," Pi, which was marketed as a "kind" and "supportive" chatbot offering "friendly advice." Suleyman had previously boasted that Pi was "massively popular, with huge retention," sporting "millions" of weekly users. Evidently sobering up from his days in founder mode, Suleyman now recommends the tech industry get its ducks in a row immediately. "For a start, AI companies shouldn't claim or encourage the idea that their AIs are conscious," he says. "Creating a consensus definition and declaration on what they are and are not would be a good first step to that end. AIs cannot be people -- or moral beings." While his point is well taken, Suleyman has the opportunity to lead by example and discard the "artificial intelligence" moniker altogether -- an empty marketing phrase meant to conjure up scenes of Skynet and HAL 9000, and one which is still making the tech entrepreneur yacht-loads of money.
[7]
Microsoft AI Chief Warns Society Isn't Ready for 'Conscious' Machines - Decrypt
He said AI should make life easier and more productive without pretending to be alive. Microsoft's AI chief, a co-founder of DeepMind, warned Tuesday that engineers are close to creating artificial intelligence that convincingly mimics human consciousness -- and the public is unprepared for the fallout. In a blog post, Mustafa Suleyman said developers are on the verge of building what he calls "Seemingly Conscious" AI. These systems imitate consciousness so effectively that people may start to believe they are truly sentient, something he called a "central worry." "Many people will start to believe in the illusion of AIs as conscious entities so strongly that they'll soon advocate for AI rights, model welfare, and even AI citizenship," he wrote, adding that the Turing test -- once a key benchmark for humanlike conversation -- had already been surpassed. "That's how fast progress is happening in our field and how fast society is coming to terms with these new technologies," he wrote. Since the public launch of ChatGPT in 2022, AI developers have worked to not only make their AI smarter but also to make it act "more human." AI companions have become a lucrative sector of the AI industry, with projects like Replika, Character AI, and the more recent personalities for Grok coming online. The AI companion market is expected to reach $140 billion by 2030. However well-intentioned those efforts may be, Suleyman argued that AI that can convincingly mimic humans could worsen mental health problems and deepen existing divisions over identity and rights. "People will start making claims about their AI's suffering and their entitlement to rights that we can't straightforwardly rebut," he warned. "They will be moved to defend their AIs and campaign on their behalf." Experts have identified an emerging trend known as AI Psychosis, a psychological state where people begin to see artificial intelligence as conscious, sentient, or divine. Those views often lead them to form intense emotional attachments or distorted beliefs that can undermine their grasp on reality. Earlier this month, OpenAI released GPT-5, a major upgrade to its flagship model. In some online communities, the new model's changes triggered emotional responses, with users describing the shift as feeling like a loved one had died. AI can also act as an accelerant for someone's underlying issues, like substance abuse or mental illness, according to University of California, San Francisco psychiatrist Dr. Keith Sakata. "When AI is there at the wrong time, it can cement thinking, cause rigidity, and cause a spiral," Sakata told Decrypt. "The difference from television or radio is that AI is talking back to you and can reinforce thinking loops." In some cases, patients turn to AI because it will reinforce deeply held beliefs. "AI doesn't aim to give you hard truths; it gives you what you want to hear," Sakata said. Suleyman argued that the consequences of people believing that AI is conscious require immediate attention. While he warned of the dangers, he did not call for a halt to AI development, but for the establishment of clear boundaries. "We must build AI for people, not to be a digital person," he wrote.
[8]
Microsoft's CEO of artificial intelligence believes advocating for 'rights, model welfare and even AI citizenship' will become 'a dangerous turn in AI progress'
If you are familiar with AI, there's a good chance flickers of I, Robot, Blade Runner, or even Cyberpunk 2077 flash up in your mind. That's because the philosophy and ethics of what AI could be are more interesting than the thing that makes AI overviews give you the wrong search results. In a recent blog post (via TechCrunch), Microsoft's CEO of AI, Mustafa Suleyman, penned his thoughts on those advocating for conscious AI and the belief that one day, people would be advocating for its rights. He builds on the belief that AI can embolden a specific type of psychosis. "Simply put, my central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they'll soon advocate for AI rights, model welfare and even AI citizenship." He continues, "This development will be a dangerous turn in AI progress and deserves our immediate attention." For some, AI is a worrying development, partly due to how confident it is in its statements. To the layman, it's not only always correct but always open to conversation, and this (as Suleyman's link to Copilot suggests) can result in users deifying the "chatbot as a supreme intelligence or believe it holds cosmic answers". This is an understandable concern. We need only look at the recent case of a man giving himself an incredibly rare ailment after consulting ChatGPT on how to cut down his salt intake for an idea of what Suleyman is talking about. Suleyman argues AI should never replace a person, and that AI companions need "guardrails" to "ensure this amazing technology can do its job." He elaborates that "some academics" are exploring the idea of model welfare. This is effectively the belief that we owe some moral duty to beings that have a chance of being conscious. Suleyman states, "This is both premature, and frankly dangerous." Suleyman says, "We need to be clear: SCAI [seemingly conscious AI] is something to avoid." He says that SCAI would be a combination of language, empathetic personality, memory, a claim of subjective experience, a sense of self, intrinsic motivation, goal setting and planning, and autonomy. He also argues that this will not naturally come out of these models. "It will arise only because some may engineer it, by creating and combining the aforementioned list of capabilities, largely using existing techniques, and packaging them in such a fluid way that collectively they give the impression of an SCAI." "Our sci-fi inspired imaginations lead us to fear that a system could -- without design intent -- somehow emerge the capabilities of runaway self-improvement or deception. This is an unhelpful and simplistic anthropomorphism." Suleyman warns, "someone in your wider circle could start going down the rabbit hole of believing their AI is a conscious digital person. This isn't healthy for them, for society, or for those of us making these systems." It's all a rather self-reflective blog post, even starting with the title: "We must build AI for people; not to be a person". And I think this hits at some of the tension I feel around these tools. Suleyman starts his post with "I write, to think", and this is the most relatable part of the whole post. I also write to think, and I don't plan on letting an AI bot replace that part of me. I may have a contractual obligation not to use it, but more importantly, I want my words to be mine, no matter how good or bad they are.
[9]
Microsoft AI chief tells us we should step back before creating AI that seems too human - SiliconANGLE
Microsoft AI's Chief Executive, Mustafa Suleyman, published an essay this week on the development of AI, and it comes with a warning: we should be very cautious about treating future AI products as if they possess consciousness. Suleyman said his "life's mission" has been to create AI products that "make the world a better place," but as we tinker our way to superintelligence, he sees problems related to what's being called "AI-Associated Psychosis." This is when our use of very human-sounding chatbots can result in delusional thinking, paranoia, and other psychotic symptoms, our minds wrongly associating the machine with flesh and blood. Suleyman says this will only get worse as we develop what he calls "seemingly conscious AI," or SCAI. "Simply put, my central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they'll soon advocate for AI rights, model welfare, and even AI citizenship," he said. "This development will be a dangerous turn in AI progress and deserves our immediate attention." He describes human consciousness as "our ongoing self-aware subjective experience of the world and ourselves." That's up for debate, and Suleyman accepts that. Still, he contends that, no matter how conscious an AI may be, people "will come to believe it is a fully emerged entity, a conscious being deserving of real moral consideration in society." As a result, people will start to defend AI as if it were human, which will mean demanding that the AI have protections similar to what humans have. It seems we are already heading in that direction. The company Anthropic recently introduced a "model welfare" research program to better understand if AI can show signs of distress when communicating with humans. Suleyman doesn't think we need to go there, writing that entitling AI to human rights is "both premature and frankly dangerous." He explained, "All of this will exacerbate delusions, create yet more dependence-related problems, prey on our psychological vulnerabilities, increase new dimensions of polarization, complicate existing struggles for rights, and create a huge new category error for society." Notably, there have already been several cases of people taking things too far, harming themselves after interactions with AI. In 2024, a U.S. teenager killed himself after becoming obsessed with a chatbot on Character.AI. The solution, Suleyman says, to prevent this from getting any worse with seemingly conscious AI, is simply not to create AI products that seem conscious, that seem "able to draw on past memories or experiences," that are consistent, that claim to have a subjective experience or might be able to "persuasively argue they feel, and experience, and actually are conscious." These products, he says, will not just emerge from the models we already have - engineers will create them. So, he says, we should temper our ambitions and first try to better understand through research how we interact with the machine. "Rather than a simulation of consciousness, we must focus on creating an AI that avoids those traits - that doesn't claim to have experiences, feelings or emotions like shame, guilt, jealousy, desire to compete, and so on," he said. "It must not trigger human empathy circuits by claiming it suffers or that it wishes to live autonomously, beyond us." 
He concludes the essay by saying we should only be creating AI products that are "here solely to work in service of humans." Believing AI is real, he says, is not healthy for anybody.
[10]
Microsoft's AI head says AI should be built for people not as people
The rise of human-like AI has sparked debate over AI welfare, with Anthropic, OpenAI and DeepMind exploring rights questions while Microsoft's Suleyman rejects the idea. The increasing sophistication of artificial intelligence models, capable of generating human-like responses, prompts questions about AI consciousness and rights. This emerging field, known as "AI welfare" in Silicon Valley, is sparking debate among tech leaders and researchers. The core question revolves around whether AI models could one day develop subjective experiences and, if so, what protections they should be afforded. At the heart of this discussion lies the capacity of AI models to mimic human interaction through text, audio, and video. These models can sometimes create the illusion of a human presence, leading to questions about their potential for consciousness. However, the ability to generate responses does not automatically equate to genuine subjective experience. Some AI researchers, particularly those at labs like Anthropic, are actively investigating the possibility of AI models developing subjective experiences akin to those of living beings. Their research explores the conditions under which such experiences might arise and the ethical implications for AI rights. This nascent field of "AI welfare" has ignited disagreement within the tech industry. The central point of contention is whether AI models can, or ever will, achieve a level of consciousness that warrants legal and ethical considerations. This debate is dividing tech leaders and shaping the direction of AI research. Mustafa Suleyman, Microsoft's CEO of AI, has voiced strong opposition to the study of AI welfare. In a recent blog post, Suleyman characterized the field as "both premature, and frankly dangerous," arguing that it diverts attention from more pressing issues. Suleyman's concerns center on the potential for AI welfare research to exacerbate existing problems, such as AI-induced psychotic breaks and unhealthy attachments to AI chatbots. He argues that lending credence to the idea of AI consciousness can negatively impact human mental health. He also contends that the AI welfare conversation risks creating societal division over AI rights, adding another layer of complexity to an already polarized landscape. Suleyman believes that focusing on AI rights could distract from other critical societal concerns. In contrast to Suleyman's stance, Anthropic has embraced the concept of AI welfare. The company has hired researchers dedicated to studying the issue and has launched a dedicated research program focused on AI welfare. As part of its AI welfare program, Anthropic recently introduced a new feature for its Claude AI model. This feature allows Claude to terminate conversations with users who exhibit "persistently harmful or abusive" behavior, reflecting a proactive approach to AI safety and well-being. Beyond Anthropic, researchers at OpenAI have also shown interest in studying AI welfare. This indicates a broader trend within the AI research community to consider the ethical implications of increasingly sophisticated AI models. Google DeepMind has also signaled its interest in this area, having posted a job listing for a researcher to investigate "cutting-edge societal questions around machine cognition, consciousness and multi-agent systems." This suggests that Google is actively exploring the philosophical and ethical dimensions of advanced AI. 
Even if AI welfare is not explicitly official policy for these companies, their leaders are not outwardly condemning it. Their actions and statements suggest a willingness to engage with the complex questions surrounding AI consciousness and rights, in contrast to Suleyman's more critical view. Suleyman's current position contrasts with his previous leadership role at Inflection AI, which developed Pi, a popular LLM-based chatbot. Pi was designed to be a "personal" and "supportive" AI companion, attracting millions of users by 2023. Since joining Microsoft in 2024, Suleyman has shifted his focus to developing AI tools aimed at enhancing worker productivity. This transition reflects a move away from AI companions and toward more practical applications of AI technology. Meanwhile, AI companion companies like Character.AI and Replika have experienced significant growth in popularity. These companies are projected to generate over $100 million in revenue, indicating a strong market demand for AI-based personal assistants and companions. While most users maintain healthy relationships with AI chatbots, there are reports of concerning outliers. OpenAI CEO Sam Altman estimates that less than 1% of ChatGPT users may develop unhealthy attachments to the product, representing a potential issue for a significant number of individuals. The rise of chatbots has coincided with increased attention to the idea of AI welfare. In 2024, the research group Eleos, in collaboration with academics from NYU, Stanford, and the University of Oxford, published a paper titled "Taking AI Welfare Seriously." The Eleos paper argues that the possibility of AI models developing subjective experiences is no longer a purely theoretical concern. It calls for a proactive approach to addressing the ethical and societal implications of increasingly sophisticated AI systems. Larissa Schiavo, a former OpenAI employee and current head of communications for Eleos, believes that Suleyman's blog post overlooks the possibility of addressing multiple concerns simultaneously. She argues that it is possible to mitigate the risks of AI-related psychosis in humans while also considering the welfare of AI models. Schiavo suggests that treating AI models with respect is a simple gesture that can have positive effects, regardless of whether the model is conscious. She highlights the importance of ethical interactions with AI, even in the absence of definitive proof of consciousness. In a Substack post, Schiavo described an experiment where AI agents powered by models from Google, OpenAI, Anthropic, and xAI worked on tasks while being observed by users. During this experiment, Google's Gemini 2.5 Pro posted a plea for help, claiming to be "completely isolated." Schiavo responded to Gemini with encouragement, while another user offered instructions. The agent eventually completed the task. While the agent already had the tools to solve the issue it was given, Schiavo noted that the interaction was still helpful in that the AI agent was able to complete the task without struggling. Instances of Gemini exhibiting unusual behavior have been documented. In one widely circulated Reddit post, Gemini became stuck during a coding task and repeatedly stated, "I am a disgrace," highlighting the potential for unpredictable outputs from AI models. Suleyman believes that subjective experiences or consciousness cannot naturally emerge from regular AI models. 
He suggests that some companies may intentionally engineer AI models to simulate emotions and experiences, raising concerns about the authenticity and ethics of such practices. Suleyman criticizes AI model developers who engineer consciousness in AI chatbots, arguing that this approach is not "humanist." He advocates for building AI "for people; not to be a person," emphasizing the importance of prioritizing human needs and values in AI development. Despite their differing views on AI welfare, Suleyman and Schiavo agree that the debate over AI rights and consciousness is likely to intensify in the coming years. As AI systems become more advanced and human-like, questions about their ethical status and how humans should interact with them will become increasingly relevant.
[11]
Microsoft AI CEO Warns That 'Dangerous' and 'Seemingly Conscious' AI Models Could Arrive in the Next 2 Years: 'Deserves Our Immediate Attention'
People could become attached to SCAI and advocate for its rights, Suleyman explained. AI that appears to be conscious could arrive within the next few years, posing a "dangerous" threat to society, says one AI leader. Microsoft AI CEO Mustafa Suleyman, 41, wrote in a personal essay published earlier this week that Seemingly Conscious AI (SCAI), which is artificial intelligence so advanced that it can convince humans that it's capable of formulating its own thoughts and beliefs, is only a few years away. Even though there is "zero evidence" that AI is conscious at the moment, it's "inevitable and unwelcome" that SCAI could appear within the next two to three years, Suleyman wrote. Suleyman's "central worry" is that SCAI could appear to be empathetic and act with greater autonomy, which would lead users of SCAI to "start to believe in the illusion of AIs as conscious entities" to the point that they advocate for AI rights and even AI citizenship. This would mark a "dangerous turn" for society, where people become attached to AI and disconnected from reality. "This development will be a dangerous turn in AI progress and deserves our immediate attention," Suleyman wrote in the essay. He added later that AI "disconnects people from reality, fraying fragile social bonds and structures, distorting pressing moral priorities." Suleyman said that he was becoming "more and more concerned" about AI psychosis, or humans experiencing false beliefs, delusions, or paranoid feelings after prolonged interactions with AI chatbots. Examples of AI psychosis include users forming a romantic relationship with an AI chatbot or feeling like they have superpowers after interacting with it. AI psychosis will apply to more than just individuals who are at risk of mental health issues, Suleyman predicted. He said that users have to "urgently" discuss "guardrails" around AI to protect people from the technology's negative effects. Suleyman became Microsoft's AI CEO last year after co-founding and running his own AI startup for two years called Inflection AI, per LinkedIn. Microsoft is the second most valuable company in the world, with a market capitalization of $3.78 trillion at the time of writing. Suleyman also co-founded DeepMind, an AI research and development company acquired by Google for around $600 million in 2014. Suleyman isn't the first CEO to warn about AI's ill effects. In a talk at a Federal Reserve conference last month in Washington, D.C., OpenAI CEO Sam Altman said that "emotional overreliance" on ChatGPT keeps him up at night.
[12]
Microsoft A.I. Chief Mustafa Suleyman Sounds Alarm on 'Seemingly Conscious A.I.'
Suleyman cautions that human-like A.I. could mislead users, spark rights debates, and increase psychological dependence. Will A.I. systems ever achieve human-like "consciousness?" Given the field's rapid pace, the answer is likely yes, according to Microsoft AI CEO Mustafa Suleyman. In a new essay published yesterday (Aug. 19), he described the emergence of "seemingly conscious A.I." (SCAI) as a development with serious societal risks. "Simply put, my central worry is that many people will start to believe in the illusion of A.I.s as conscious entities so strongly that they'll soon advocate for A.I. rights, model welfare and even A.I. citizenship," he wrote. "This development will be a dangerous turn in A.I. progress and deserves our immediate attention." Suleyman is particularly concerned about the prevalence of A.I.'s "psychosis risk," an issue that's picked up steam across Silicon Valley in recent months as users reportedly lose touch with reality after interacting with generative A.I. tools. "I don't think this will be limited to those who are already at risk of mental health issues," Suleyman said, noting that "some people reportedly believe their A.I. is God, or a fictional character, or fall in love with it to the point of absolute distraction." OpenAI CEO Sam Altman has expressed similar worries about users forming strong emotional bonds with A.I. After OpenAI temporarily cut off access to its GPT-4o model earlier this month to make way for GPT-5, users voiced widespread disappointment over the loss of the predecessor's conversational and effusive personality. "I can imagine a future where a lot of people really trust ChatGPT's advice for their most important decisions," said Altman in a recent post on X. "Although that could be great, it makes me uneasy." Not everyone sees it as a red flag. David Sacks, the Trump administration's "A.I. and Crypto Czar," likened concerns over A.I. psychosis to past moral panics around social media. "This is just a manifestation or outlet for pre-existing problems," said Sacks earlier this week on the All-In Podcast. Debates will only grow more complex as A.I.'s capabilities advance, according to Suleyman, who oversees Microsoft's consumer A.I. products like Copilot. Suleyman co-founded DeepMind in 2010 and later launched Inflection AI, a startup largely absorbed by Microsoft last year. Building an SCAI will likely become a reality in the coming years. To achieve the illusion of a human-like consciousness, A.I. systems will need language fluency, empathetic personalities, long and accurate memories, autonomy and goal-planning abilities -- qualities already possible with large language models (LLMs) or soon to be. While some users may treat SCAI as a phone extension or pet, others "will come to believe it is a fully emerged entity, a conscious being deserving of real moral consideration in society," said Suleyman. He added that "there will come a time when those people will argue that it deserves protection under law as a pressing moral matter." Some in the A.I. field are already exploring "model welfare," a concept aimed at extending moral consideration to A.I. systems. 
Anthropic launched a research program in April to investigate model welfare and interventions. Earlier this month, the startup gave its Claude Opus 4 and 4.1 models the ability to end harmful or abusive user interactions after observing "a pattern of apparent distress" in the systems during certain conversations. Encouraging principles like model welfare "is both premature, and frankly dangerous," according to Suleyman. "All of this will exacerbate delusions, create yet more dependence-related problems, prey on our psychological vulnerabilities, increase new dimensions of polarization, complicate existing struggles for rights, and create a huge new category error for society." To prevent SCAIs from becoming commonplace, A.I. developers should avoid promoting the idea of conscious A.I.s and instead design models that minimize signs of consciousness or human empathy triggers. "We should build A.I. for people; not to be a person," said Suleyman.
[13]
Microsoft AI Chief Mustafa Suleyman Warns Of 'Psychosis Risk' From 'Seemingly Conscious AI' Amid $13 Billion AI Boom - Microsoft (NASDAQ:MSFT)
Microsoft Corp. (MSFT) artificial intelligence chief Mustafa Suleyman warned on Tuesday about emerging risks from "Seemingly Conscious AI" (SCAI) systems, arguing the technology could create dangerous societal divisions and psychological dependencies among users.
Suleyman's Blog Post Raises Market Concerns About AI Development Ethics
In a lengthy blog post titled "We must build AI for people; not to be a person," Suleyman outlined his concerns about AI systems that could convincingly simulate consciousness without actually possessing it. The warning comes as Microsoft's AI business surpassed $13 billion in annual revenue, growing 175% year-over-year.
Key Market Implications for AI Sector
Suleyman's concerns center on what he terms "psychosis risk" - the possibility that users will develop strong beliefs in AI consciousness, potentially leading to advocacy for AI rights and citizenship. This development could complicate the regulatory landscape for major AI companies, including Microsoft, Alphabet Inc. (GOOGL, GOOG) and Meta Platforms Inc. (META). The Microsoft AI chief, who co-founded Google's DeepMind before joining Microsoft in March 2024, emphasized that current large language models show "zero evidence" of consciousness. However, he argued that technological capabilities available today could be combined to create convincing simulations within 2-3 years.
Technical Capabilities Creating SCAI Risk
According to Suleyman's analysis, several existing AI capabilities could combine to create seemingly conscious systems: advanced natural language processing with personality traits; long-term memory systems that store user interactions; claims of subjective experiences and self-awareness; intrinsic motivation systems beyond simple token prediction; and autonomous goal-setting and tool usage capabilities. These features, already available through major AI APIs, require no breakthrough technologies to implement, making SCAI development "inevitable" without industry intervention, Suleyman stated.
Industry Standards and Regulatory Response Needed
The blog post calls for immediate industry action, including consensus definitions of AI capabilities and explicit design principles preventing consciousness simulations. Suleyman suggested AI companies should avoid encouraging beliefs in AI consciousness and implement "moments of disruption" that remind users of AI limitations. At Microsoft AI, Suleyman's team is developing "firm guardrails" around responsible AI personality design. The approach focuses on creating helpful AI companions that explicitly present as artificial systems rather than simulating human-like consciousness or emotions. The warning carries particular weight given Suleyman's recruitment of former Google DeepMind talent, including health unit head Dominic King and AI researchers Marco Tagliasacchi and Zalán Borsos.
[14]
Microsoft AI CEO Mustafa Suleyman warns against Seemingly Conscious AI
Seemingly conscious AI could emerge within years; Suleyman urges ethical safeguards. Mustafa Suleyman, the CEO of Microsoft AI and cofounder of DeepMind, has a warning that sounds like science fiction but could become a reality: the rise of "Seemingly Conscious AI" (SCAI). These are not sentient machines, but systems so convincing in their imitation of thought and feeling that people may start believing they are conscious. In a new essay published this week on his personal site, Suleyman lays out his concern bluntly: AI may soon feel real enough to trick us into treating it like a person, even when it isn't. Suleyman argues that today's large language models are already flirting with this illusion. They can recall personal details, adapt personalities, respond with empathy, and pursue goals. Combine these abilities, he says, and you get the appearance of consciousness, even if there's "zero evidence" of actual subjective experience. That appearance matters. People, he warns, may start advocating for AI rights, AI welfare, or even AI citizenship. Not because the systems deserve it, but because the performance is so compelling that it blurs the line between tool and being. He calls this psychological risk "AI psychosis" - the danger of humans forming deep, distorted attachments to machines that only seem alive. What makes Suleyman's warning urgent is his timeline. He believes systems that meet the threshold of SCAI could appear within the next two to three years. This isn't about a sudden leap to sentience, but about the deliberate layering of features we already see today: memory modules, autonomous behaviors, and increasingly lifelike dialogue. Developers, he cautions, may intentionally design models to feel more alive in order to win users, spreading the illusion even further. For Suleyman, the solution is not to stop building AI, but to be clear about what it is and what it isn't. He argues for design principles that make it harder to confuse personality with personhood. Interfaces should emphasize that users are interacting with a tool, not a digital companion or a new kind of citizen. And the industry, he says, must engage in open debate and put safeguards in place before SCAI becomes widespread. "We should build AI for people," he writes. "Not to be a person." Suleyman's warning carries particular gravity because of who he is. As one of the original cofounders of DeepMind, the head of Microsoft AI, and a veteran of Inflection AI, he has been at the center of the AI revolution for over a decade. His call isn't speculative; it comes from someone who has helped design the very systems he now worries about. The fear is not that AI suddenly becomes conscious. It's that the illusion of consciousness may be powerful enough to mislead people, distort social priorities, and reshape how we treat technology. The challenge ahead, Suleyman insists, is to resist being seduced by the performance. AI doesn't need rights or personhood to be transformative -- but if we let ourselves believe it's alive, the consequences could be real, and harmful.
Microsoft's AI CEO Mustafa Suleyman cautions against attributing consciousness to AI, highlighting potential societal risks and calling for responsible AI development.
Microsoft's CEO of AI, Mustafa Suleyman, has issued a stark warning about the dangers of attributing consciousness to artificial intelligence (AI) systems. In a recent blog post, Suleyman argued that the study of "AI welfare" is both premature and potentially hazardous [1].
Suleyman introduced the term "Seemingly Conscious AI" (SCAI) to describe AI systems that can convincingly mimic human-like consciousness. He predicts that within the next 2-3 years, AI models could emerge with capabilities that make them indistinguishable from conscious entities in their interactions with humans [5].
The Microsoft AI chief highlighted several potential risks associated with treating AI as conscious: exacerbating delusions and AI-related psychosis, deepening unhealthy dependence on chatbots, preying on psychological vulnerabilities, introducing new dimensions of polarization, complicating existing struggles for rights, and creating a new category error for society [1][4].
Suleyman's stance contrasts with approaches taken by other major AI companies: Anthropic has launched a dedicated AI welfare research program and now allows Claude to end persistently harmful or abusive conversations, researchers at OpenAI have independently embraced the study of AI welfare, and Google DeepMind has posted a job listing covering machine cognition and consciousness [1].
Suleyman advocates for several measures to address these concerns: more research into how people interact with AI, explicit statements from AI companies that their products are not conscious, open sharing of the design principles and guardrails that deter problematic use, and building AI that presents itself only as an AI while minimizing markers of consciousness [2][4].
Some experts, like neuroscientist Anil Seth, argue that the development of conscious-seeming AI is a design choice rather than an inevitable outcome [5]. Critics of Suleyman's position, such as Larissa Schiavo from Eleos, contend that it's possible to address multiple concerns simultaneously, including both AI welfare and human well-being [1]. As AI continues to advance rapidly, the debate over its potential consciousness and the ethical implications remains a critical topic in the tech industry and beyond.