16 Sources
[1]
Microsoft AI says it'll make superintelligent AI that won't be terrible for humanity
This kind of superintelligence, according to Suleyman, won't be "an unbounded and unlimited entity with high degrees of autonomy" and will instead be "carefully calibrated, contextualized, within limits." Suleyman joined Microsoft last year as the CEO of Microsoft AI, which has only recently launched its first in-house models for text, voice, and image generation. Though Suleyman's blog post says Microsoft AI will "reject narratives about a race to AGI," the competition between Microsoft and OpenAI is about to get much more intense. Under a new deal with OpenAI, Microsoft can now "independently pursue AGI alone or in partnership with third parties." And, as pointed out by my colleague Hayden Field, "Microsoft is perfectly within its legal rights to use OpenAI's IP to develop its own AGI and attempt to win the race."
[2]
Microsoft's superintelligence plan puts people first
Redmond's new AI boss is willing to sacrifice performance for the future of our species. Microsoft has joined the ranks of tech giants chasing superintelligent artificial intelligence, but the vision of the company's AI chief, Mustafa Suleyman, is markedly different from that articulated by other industry leaders. To be clear, "superintelligent AI" hasn't been invented yet. Pundits use the term to describe systems that appear to "think" independently of what humans program them to do, an ability some previously described as "artificial general intelligence," or AGI. Suleyman announced in a blog post Thursday that he's heading up a new AI Superintelligence Team at Microsoft. The key difference in Microsoft's approach, Suleyman said, is its vision of a humanist superintelligence, or HSI, which trades blind AI ambition for carefully planned limitations to make sure AI helps rather than harms us. "This isn't about some directionless technological goal, an empty challenge, a mountain for its own sake," Suleyman said. "We are doing this to solve real concrete problems and do it in such a way that it remains grounded and controllable." The announcement that Microsoft is pursuing its own AI ambitions comes as Redmond's relationship with AI pioneer OpenAI has soured. The pair were inextricably linked for years, with Redmond pouring billions into the cash-burning enterprise. Since OpenAI decided to transform itself into a for-profit enterprise and lessen Microsoft's control over its affairs, however, the companies have been drifting apart, with OpenAI diversifying away from Azure and adding other cloud providers. The Microsoft AI CEO expounded on his views in an interview with Semafor, laying out what amounts to his own set of three rules for AI, comparable to science fiction author Isaac Asimov's three laws of robotics.
To Suleyman's mind, humanist superintelligence can't have total autonomy, the capacity to self-improve, or the ability to set its own goals. If granted any of those, the Microsoft man worries, superintelligent AI could quickly become a threat. "The project of superintelligence should not be about replacing or threatening our species," Suleyman told Semafor. "It's crazy to have to actually declare that - that should be self-evident, but I'm seeing lots of indications that people don't always agree." Suleyman didn't name names in his blog post or the interview, but he did say it's incredibly dangerous when chatbot users anthropomorphize AI by ascribing human feelings to it, which naturally leads to discussion of granting rights to algorithms. "That mentality, if it really takes hold, will lead to a huge amount of conflict and threat to our species," Suleyman warned. "I'm embarking on the project of building [Microsoft's] superintelligence explicitly designed to avoid those things." Suleyman said Microsoft's human-first approach will differ from the rest of the industry by requiring AIs to interact with humans in ways we can understand, instead of talking among themselves in "vector space." "If [AIs] just communicate in vector space, we're always going to be at the mercy of compressing their vector-to-vector space into a set of words that we feel we can hold accountable," Suleyman explained. "We will be limiting ourselves from performance maximization because it's probably true that vector-to-vector communication is going to be more efficient," he added. "We're going to forego efficiency or performance or some improved capability because we're going to prioritize safety, and safety means human-understandable, human-interpretable, rigid guidelines."
[3]
Microsoft forms superintelligence team under AI chief Suleyman 'to serve humanity'
Microsoft on Thursday said it's forming a team that will be tasked with performing advanced artificial intelligence research. Mustafa Suleyman, CEO of the Microsoft AI group that includes Bing and the Copilot assistant, announced the formation of the MAI Superintelligence Team, and said in a blog post that he'll be leading it. "We are doing this to solve real concrete problems and do it in such a way that it remains grounded and controllable," Suleyman wrote. "We are not building an ill-defined and ethereal superintelligence; we are building a practical technology explicitly designed only to serve humanity." The decision comes months after Facebook parent Meta spent billions to hire talent for its new Meta Superintelligence Labs unit that's working on research and products. The term superintelligence typically refers to machines deemed more intelligent than the smartest people. Suleyman was a co-founder of AI lab DeepMind, which Google bought in 2014. After leaving Google in 2022, he co-founded and led AI startup Inflection. Microsoft hired Suleyman and several other Inflection employees last year. Top technology companies have rushed to hire leading AI engineers and researchers, augmenting their products with generative AI capabilities. The boom started with OpenAI's launch of ChatGPT in 2022. Microsoft uses OpenAI models in Bing and Copilot, while OpenAI runs workloads in Microsoft's Azure cloud. Microsoft also owns a $135 billion equity stake in OpenAI following a restructuring. Microsoft has taken steps to reduce its dependence on OpenAI. After the Inflection deal, the software company also began drawing on models from Google and from Anthropic, which was founded by former OpenAI executives. The new Microsoft AI research group will focus on providing useful companions for people that can help in education and other domains, Suleyman wrote in his blog post. It will also pursue narrow areas in medicine and in renewable energy production. 
"We'll have expert level performance at the full range of diagnostics, alongside highly capable planning and prediction in operational clinical settings," Suleyman wrote. As investors and analysts are increasingly voicing their concerns about overspending on AI without a clear path to profits, Suleyman said he wants "to make clear that we are not building a superintelligence at any cost, with no limits."
[4]
Microsoft forms Superintelligence team to pursue 'humanist' AI under Mustafa Suleyman
Microsoft has formed a new Superintelligence team within its AI division, aiming to develop what it calls "humanist superintelligence" -- advanced AI that remains under human control. The team, announced Thursday morning in a post by Mustafa Suleyman, CEO of Microsoft AI, reflects the company's ambitions to shape the next era of artificial intelligence while addressing concerns about safety and control in the development of advanced AI systems. "We are doing this to solve real concrete problems and do it in such a way that it remains grounded and controllable," Suleyman wrote. "We are not building an ill-defined and ethereal superintelligence; we are building a practical technology explicitly designed only to serve humanity." The approach contrasts with the broader pursuit of artificial general intelligence, or AGI -- the goal of creating AI systems that can match or surpass human capabilities across virtually any task -- which is core to the mission of Microsoft's longtime partner, OpenAI, and its CEO Sam Altman. In his message, Suleyman cited early directions in areas such as healthcare, where Microsoft researchers are developing expert-level diagnostic models, and clean energy, where AI could accelerate breakthroughs in materials, batteries, and fusion research. The goal, he wrote, is to advance technology "within limits" -- keeping humanity in control while harnessing AI's potential to improve lives on a global scale. Suleyman will lead the new MAI Superintelligence Team, joined by Microsoft AI Chief Scientist Karén Simonyan and other core Microsoft AI leaders and researchers. Key leaders who've been involved in Microsoft's model development work are also expected to be part of the effort. The company hasn't disclosed how large the group is expected to become.
[5]
Microsoft wants AI that serves humanity, not replaces or dominates it
The promise of safer superintelligence depends on untested control mechanisms. Microsoft is turning its attention from the race to build general-purpose AI to something it calls Humanist Superintelligence (HSI). In a new blog post, the company outlined how the concept aims to create systems that serve human interests rather than pursue open-ended autonomy. Unlike "artificial general intelligence," which some see as potentially uncontrollable, Microsoft's model seeks a balance between innovation and human oversight. Microsoft says HSI is a controllable and purpose-driven form of advanced intelligence that focuses on solving defined societal problems. One of the first areas where the company hopes to prove the value of HSI is medical diagnosis: its diagnostic system, MAI-DxO, reportedly achieves an 85% success rate on complex medical challenges, surpassing human performance. Microsoft argues that such systems could expand access to expert-level healthcare knowledge worldwide. The company also sees potential in education, envisioning AI companions that adjust to each student's learning style, working alongside teachers to build customized lessons and exercises. It sounds promising, but it raises familiar questions about privacy, dependence, and the long-term effects of replacing parts of human interaction with algorithmic systems, and it remains unclear how these AI tools will be validated, regulated, and integrated into real-world clinical environments without creating new risks. Behind the scenes, superintelligence relies on heavy computational power. Microsoft's HSI ambitions will depend on large-scale data centers packed with compute-intensive hardware to process massive amounts of information. The company acknowledges that electricity consumption could rise by more than 30% by 2050, driven in part by expanding AI infrastructure. Ironically, the same technology expected to optimize renewable energy production is also increasing demand for it.
Microsoft insists AI will help design more efficient batteries, reduce carbon emissions, and manage energy grids, but the net environmental impact remains uncertain. Mustafa Suleyman, Microsoft's AI chief, says "superintelligent AI" must never be allowed full autonomy, self-improvement, or self-direction. He calls the project a "humanist" one, explicitly designed to avoid the risks of systems that evolve beyond human control. His statements suggest a growing unease within the tech world about how to manage increasingly powerful models; the idea of containment sounds reassuring, but there's no consensus on how such limits could be enforced once a system becomes capable of modifying itself. Microsoft's vision for Humanist Superintelligence is intriguing but still untested, and whether it can deliver on its promises remains uncertain.
[6]
Mustafa Suleyman leads Microsoft's new superintelligence moonshot
Why it matters: The move follows Microsoft's renegotiated deal with OpenAI and signals the company's intent to catch up in an expensive and crowded race to build artificial general intelligence.

Zoom in: Suleyman will lead the Microsoft AI (MAI) Superintelligence team.
* Karén Simonyan, Microsoft AI chief scientist, and other core MAI leaders and researchers will shift their focus to what Suleyman called "Humanist Superintelligence."
* "We're definitely expanding the team and looking at folks with more fundamental research capabilities," Suleyman told Axios.

Zoom out: AGI and superintelligence both refer broadly to AI systems that can equal or surpass human intelligence across a broad set of disciplines.

Driving the news: Suleyman detailed the effort in an interview with Axios and a blog post Thursday, calling for Microsoft to build a highly powerful AI that's focused on serving humanity, as opposed to maximizing performance or other goals.
* The software giant had been contractually prevented from pursuing artificial general intelligence under its prior deal with OpenAI, though it has been developing smaller language models and algorithms for image and voice.

What they're saying: "The project of superintelligence has to be about designing an AI which is subservient to humans, and one that keeps humans at the top of the food chain," Suleyman told Axios, noting that it's remarkable that such a statement even needs to be made.
* "Humans matter more than AI," he writes in his blog.

Yes, but: Suleyman rejects the narrative of the AI "race" to AGI.
* He says results from the new Superintelligence Lab will take time and should be seen as "a wider and deeply human endeavor to improve our lives and future prospects."
* "I think it's still going to be a good year or two before the superintelligence team is producing frontier models," he said in the interview.

The big picture: OpenAI, Anthropic, Google, Meta and Ilya Sutskever's Safe Superintelligence are all pursuing similar ambitions, as are a variety of institutions in China.
* On Wednesday, Nvidia CEO Jensen Huang said China is on track to win the AI race.

Between the lines: Microsoft's focus on safety and human-centricity comes as the regulatory environment moves away from those areas. A key risk is that Microsoft's approach could prove costlier or less efficient than those developed with fewer safeguards.
[7]
Microsoft to pursue superintelligence after OpenAI deal
Microsoft Corp. is pursuing a more powerful form of AI called "superintelligence" it hopes will be capable of making advances in areas like medicine and materials science. Mustafa Suleyman, chief of the Microsoft AI group, will lead what the company is calling the MAI Superintelligence Team that will target hypothetical milestones that are even more ambitious than artificial general intelligence. That's the often ambiguous term that outfits like OpenAI use to describe systems capable of demonstrating human-level performance. Previously, Microsoft had agreed not to pursue AGI as part of its partnership with OpenAI. "If AGI is often seen as the point at which an AI can match human performance at all tasks, then superintelligence is when it can go far beyond that performance," Suleyman said in a blog post announcing the push, which he says will work toward personal AI companions and breakthroughs in health care and clean energy. The team will aim to build what he calls Humanist Superintelligence, seeking to avoid potential risks associated with development of powerful automated tools and work for the benefit of people instead of technological milestones. OpenAI, Meta Platforms Inc. and other companies are also increasingly focusing on superintelligence as the new goalpost for AI development. The term "superintelligence," like AGI, is imprecise: It's unclear how capable, exactly, AI needs to be at certain tasks before it crosses the threshold from "general" to "super" intelligence. The announcement comes in the wake of a renegotiated agreement between Microsoft and OpenAI that determined the software maker's stake in the startup and altered portions of their relationship. That included removing a prior prohibition on Microsoft's development of advanced AI tools, which had limited much of the Redmond, Washington-based company's work to smaller, less broadly capable models than those that power ChatGPT. 
Thursday's announcement formalizes a project that Microsoft had been laying the groundwork for since last March, when the company hired Suleyman and licensed the intellectual property of his startup, Inflection AI. With a combination of reorganized Microsoft teams and new hires, Suleyman set about building a new family of Microsoft AI models, which to date remain much smaller in scale than the most capable products from OpenAI or Alphabet Inc.'s Google. Suleyman told employees in September that Microsoft would make "significant investments" to expand the capability of those models. © 2025 Bloomberg L.P. Distributed by Tribune Content Agency, LLC.
[8]
Microsoft, freed from its reliance on OpenAI, is now chasing 'superintelligence' -- and AI chief Mustafa Suleyman wants to ensure it serves humanity | Fortune
When Mustafa Suleyman joined Microsoft in March 2024 to lead the company's new consumer AI unit -- home to products like Copilot -- there were clear limits to what he could do. Because of Microsoft's landmark deal with OpenAI, the company was barred from pursuing its own AGI research. The agreement even capped how large a model Microsoft could train, restricting the company from building systems beyond a certain computing threshold. (This limit was measured in FLOPs, the total number of floating-point operations used to train a model -- a rough approximation of the cumulative computing power behind it.) "For a company of our scale, that's a big limitation," Suleyman told Fortune. That's all changing now: Suleyman announced the formation of the new MAI Superintelligence Team on Thursday. Led by Suleyman and part of the broader Microsoft AI business, the team will work towards "Humanist Superintelligence (HSI)," which Suleyman defined in a blog post as "incredibly advanced AI capabilities that always work for, in service of, people and humanity more generally." Microsoft is just the latest company to rebrand its advanced AI efforts as a drive towards "superintelligence" -- the idea of artificial intelligence systems that would potentially be smarter than all of humanity combined. But for now, it's better marketing than science. No such systems currently exist, and scientists debate whether superintelligence is even achievable with current AI methods. That has not stopped companies, however, from announcing superintelligence as a goal and setting up teams branded as "superintelligence." Most notably, Meta rebranded its AI efforts as Meta Superintelligence Labs in June 2025.
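Compute caps of this kind are usually reasoned about with a rule of thumb rather than exact instruction counts; a minimal sketch, assuming the widely used approximation of roughly 6 floating-point operations per parameter per training token (the model sizes below are illustrative, not figures from any of these articles):

```python
def training_flops(n_params: float, n_tokens: float) -> float:
    """Rough total training compute: ~6 FLOPs per parameter per token
    (about 2 for the forward pass, 4 for the backward pass)."""
    return 6.0 * n_params * n_tokens

# Illustrative only: a 70B-parameter model trained on 2T tokens
total = training_flops(70e9, 2e12)
print(f"{total:.1e} FLOPs")  # on the order of 8.4e+23
```

One consequence of a FLOPs-based cap is that a lab can trade model size against training data: halving the parameter count while doubling the token count leaves the estimate unchanged.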
OpenAI CEO Sam Altman has written that his company has already figured out how to build artificial general intelligence, or AGI -- the idea of an AI system that is as capable as an individual human at most cognitive tasks -- and that, even though it has yet to release an AI model that meets that initial goal, it has begun to look beyond AGI to superintelligence. Meanwhile, Ilya Sutskever, OpenAI's former chief scientist, cofounded an AI startup called Safe Superintelligence that is also dedicated to creating this hypothetical superpowerful AI and making sure it remains controllable. He had previously led a similar effort within OpenAI. AI company Anthropic also has a team dedicated to researching how to control a hypothetical future superintelligence. Microsoft's framing of its own new superintelligence drive as "humanist superintelligence" is a deliberate effort to contrast it with the more technological goals of rivals like OpenAI and Meta. "We reject narratives about a race to AGI, and instead see it as part of a wider and deeply human endeavor to improve our lives and future prospects," Suleyman wrote in the blog post. "We also reject binaries of boom and doom; we're in this for the long haul to deliver tangible, specific, safe benefits for billions of people. We feel a deep responsibility to get this right." For the last year or so, Microsoft AI has been on a journey to establish an AI "self-sufficiency effort," Suleyman told Fortune, while also seeking to extend its OpenAI partnership through 2030 so that it continues to get early access to OpenAI's best models and IP. Now, he explained, "we have a best-of-both environment, where we're free to pursue our own superintelligence and also work closely with them."
That new self-sufficiency has required significant investments in AI chips for the team to train its models, though Suleyman declined to comment on the size of the team's GPU stash. But most of all, he said, the effort is about "making sure we have a culture in the team that is focused on developing the absolute frontier [of AI research]." It will take several years before the company is fully on that path, he acknowledged, but he said it's a "key priority" for Microsoft. Karén Simonyan will serve as the chief scientist of the new Humanist Superintelligence team. Simonyan joined Microsoft in the same March 2024 deal that brought Suleyman and a number of other key researchers from Inflection, the AI startup he founded, to the company. The team also includes several researchers that Microsoft had already poached from Google DeepMind, Meta, OpenAI and Anthropic. The new superintelligence effort, with its focus on keeping humanity at the forefront, does not mean the company won't be innovating quickly, Suleyman insisted, even though he admitted that developing a "humanist" superintelligence would always involve being cautious about capabilities that are "not ready for prime time." When asked how his viewpoints align with AI leaders in the Trump Administration, such as AI and crypto 'czar' David Sacks, who are pushing for no-holds-barred AI acceleration and less regulation, Suleyman said that, in many ways, Sacks is correct. "David's totally right, we should accelerate, it's critical for America, it's critical for the West in general," he said. However, he added, AI developers can push the envelope while also understanding potential risks like misinformation, social manipulation and autonomous systems that act outside of human intent.
"We should be going as fast as possible within the constraints of making sure it doesn't harm us," he said.
[9]
Microsoft provides update on its AI efforts following OpenAI partnership change - SiliconANGLE
A few days after revising its research collaboration with OpenAI, Microsoft Corp. has shared an update about its artificial intelligence development efforts. Mustafa Suleyman, the Chief Executive Officer of the company's Microsoft AI group, detailed the engineering push in a blog post published today. The tech giant has formed a unit called the MAI Superintelligence Team to spearhead its AI research efforts. The unit will seek to develop superintelligence, a hypothetical future AI that would be capable of outperforming humans across a broad range of tasks. Suleyman wrote that Microsoft intends to develop superintelligence equipped with safety guardrails. Additionally, the company plans to provide a way for humans to oversee the AI's work. "We want to both explore and prioritize how the most advanced forms of AI can keep humanity in control while at the same time accelerating our path towards tackling our most pressing global challenges," Suleyman explained. The executive went on to list three of the new MAI Superintelligence Team's research priorities. Microsoft's first goal is to develop better AI assistants for consumers. The company will focus on making AI assistants more affordable and personalized. Microsoft believes the personalization features available to consumers will include, among others, the ability to generate highly customized learning materials. The second research priority Suleyman listed is "medical superintelligence." According to the executive, that's a form of AI capable of delivering "expert level performance at the full range of diagnostics, alongside highly capable planning and prediction in operational clinical settings." He expects the technology to arrive within a few years. Microsoft has already started investing in AI-powered medical software.
In June, the company debuted an AI system called MAI-DxO that is optimized to tackle the New England Journal of Medicine's Case Challenges. Those are lists of symptoms that readers are asked to diagnose. Human doctors correctly identify the root cause of the symptoms about 20% of the time, while MAI-DxO achieved an 85% score in a benchmark test. The third research focus Suleyman listed is renewable energy. In today's blog post, he predicted that AI will enhance the way scientists research energy generation and storage technologies. The executive believes AI could expedite breakthroughs such as new carbon-negative materials. Suleyman added that Microsoft's AI research efforts encompass "many more" areas. He said the company plans to share more information about the use cases it's prioritizing in the near future. According to Suleyman, Microsoft's AI research work is supported by a newly launched cluster of GB200 chips from Nvidia Corp. Each GB200 combines two Blackwell graphics processing units with one Grace central processing unit. Microsoft is also building AI clusters based on Nvidia's newer GB300 accelerator, which features two top-end Blackwell Ultra GPUs.
[11]
Microsoft Wants to Be Self-Sufficient in AI in the Post-OpenAI World
Suleyman's comments about AI come after Microsoft's new OpenAI deal. Microsoft's artificial intelligence (AI) chief, Mustafa Suleyman, has been speaking publicly about the technology and the company's long-term vision in recent days. The executive has spoken about how the Redmond-based tech giant wants to build a "humanist AI," create ethical guardrails around superintelligence, and keep erotica far away from the technology. As per a report, Suleyman has now spoken about the need for Microsoft to be self-sufficient in the AI space, a strategy the company has shied away from so far.
Mustafa Suleyman Believes Microsoft Should Handle End-to-End AI Systems
The CEO of Microsoft AI recently spoke with Business Insider, sharing the company's vision around AI. "Microsoft needs to be self-sufficient in AI. And to do that, we have to train frontier models of all scales with our own data and compute at the state-of-the-art level," Suleyman told the publication. Interestingly, this is the first time a Microsoft representative has used "self-sufficiency" to describe the company's AI strategy, and the new deal with OpenAI might be the catalyst that triggered it. As per the report, the older deal with the AI giant barred the Windows maker from independently developing or researching artificial general intelligence (AGI). The new deal now has a specific clause that allows Microsoft not only to develop the technology, but also to partner with other AI players to do so. The company has taken advantage of this as well: Microsoft has already started forging AI partnerships outside of OpenAI, with Anthropic now powering Copilot and Claude gaining access to Excel. But this will be a long road for the tech giant. While Microsoft has developed large language models (LLMs), they were small in size and largely aimed at research purposes. The OpenAI deal also meant that the company relied on OpenAI to provide most of its AI resources.
To address this, Microsoft created a new Superintelligence team. Led by Suleyman, the team will work towards the end goal of developing a superintelligent AI, and hopes to get there before established players such as OpenAI, Google, or Anthropic. "We've got a huge mission ahead of us. We have $300 billion of revenues, a huge responsibility to make sure that all of our products are AI-first, that we deploy agents everywhere, and we really make all the workflows that customers use today much more intelligent," Suleyman told the publication, adding, "We have the data, we also have the distribution, and we have the user interface. So I think it's just a matter of time before these things become really, really magical."
[12]
Microsoft AI head's stern warning as tech giants race to build superintelligent system, says 'not going to be a better world if...'
Microsoft's AI chief Mustafa Suleyman issues a strong warning about advanced artificial intelligence. He stresses that humanity must maintain control over AI's development. Suleyman advocates for a 'Humanist Superintelligence' approach, prioritizing AI that serves human needs. Microsoft is focusing on practical applications like medical diagnostics and personalized education. This cautious strategy aims for a future where AI benefits everyone. Mustafa Suleyman, the head of AI at Microsoft, has issued a stark warning about the risks of advanced artificial intelligence, saying the world will not be better if humanity loses control of it, adding that raw capability must take a backseat to human control. His remarks came days after the tech giant, who recently laid off its employees, unveiled Microsoft's new MAI Superintelligence Team. He emphasised that while tech firms such as Mark Zuckerberg's operations are spending billions on AI, raw capability alone is not enough. "It's not going to be a better world if we lose control of it," he said. "It's got to be for humanity's sake, for a future we actually want to live in. It's not going to be a better world if we lose control of it," he was quoted as saying by TOI. Microsoft says it has achieved what Suleyman describes as "AI self-sufficiency", meaning it is no longer bound by limits on how large a model it could train under its former partnership with OpenAI. Under the old agreement, Microsoft was restricted in model size by computing thresholds measured in FLOPS (floating-point operations per second). ALSO READ: 'Go work at McDonald's': World's third richest man has an unconventional career tip for entrepreneurs, says 'you don't need to start...' As companies race to build next-generation systems, Suleyman argued that uncontrolled advancement could be dangerous. He emphasised that Microsoft is taking a different approach -- one he calls "Humanist Superintelligence". 
He said: "We are not building an ill-defined and ethereal superintelligence; we are building a practical technology explicitly designed only to serve humanity." Suleyman openly stated that there is no "reassuring answer" yet for how to "contain, let alone align, a system that is -- by design -- intended to keep getting smarter than us." He also noted that Microsoft's cautious path might turn out to be more expensive or less efficient than approaches with fewer safeguards, and indicated it will still be "a good year or two before the superintelligence team is producing frontier models," emphasising that Microsoft is playing a longer game. The human-centred approach will focus on AI that is "carefully calibrated, contextualised, within limits" rather than an unrestricted intelligence with high autonomy. Examples include AI for medical diagnostics (where one project reached 85% accuracy on hard cases versus about 20% for human doctors), personalised education tools, and clean-energy breakthroughs.
[13]
Microsoft Vows Its Superintelligent AI Will Stay Under Human Control
Microsoft's version of superintelligence will not be an unlimited entity. Microsoft wants to develop a humanist superintelligence: a superintelligent entity with advanced artificial intelligence (AI) capabilities that "always work for, in service of, people and humanity more generally." The Redmond-based tech giant's latest statement on the technology comes at a time when the race for artificial general intelligence (AGI) is heating up. All major players, including Google, OpenAI, Microsoft, and Meta, are independently trying to reach the milestone before others. The Windows maker, however, believes that a conversation about what kind of AI the world really wants is not getting enough attention.
Microsoft's Humanist Superintelligence Vision
In a blog post, Mustafa Suleyman, CEO of Microsoft AI, warned that it is important to think about the purpose of superintelligence before developing such an entity. He highlighted that, with the arrival of reasoning models, the world has crossed the inflection point on the journey towards superintelligence, and believes this is a crucial time to decide the capabilities and limitations of such a technology before it arrives. Suleyman said that Microsoft does not want to develop an "unbounded and unlimited entity with high degrees of autonomy - but AI that is carefully calibrated, contextualised, within limits." The priority will be to keep humanity in the driver's seat while the technology enables tackling major global challenges, the post added. For this purpose, the company has created the MAI Superintelligence Team, which will be part of the Microsoft AI division led by Suleyman. This newly created team will research and build an AI system that is grounded and controllable and is designed "only to serve humanity." The AI chief also rejected the notion that Microsoft is in a race towards AGI, highlighting that the company views it as part of a wider endeavour.
"We also reject binaries of boom and doom; we're in this for the long haul to deliver tangible, specific, safe benefits for billions of people. We feel a deep responsibility to get this right." Interestingly, the comments from the AI chief come just weeks after OpenAI and Microsoft signed a new deal that allows Microsoft to independently research and develop AGI. Suleyman has been speaking about the technology with increasing frequency ever since, first highlighting that the company's AI services will not include erotica, and later calling the idea of a conscious AI "absurd."
[14]
Microsoft AI Chief Outlines OpenAI-Less Roadmap | PYMNTS.com
Speaking to the Wall Street Journal (WSJ) Thursday (Nov. 6), Mustafa Suleyman discussed the company's plans to become self-sufficient from OpenAI, whose technology is enmeshed in many of the products Microsoft offers to its customers. As the report noted, a recent deal between the two companies allows Microsoft to establish its new MAI Superintelligence Team, which will emphasize human interests and guardrails first, Suleyman said. The WSJ said he reiterated past warnings about AI's possible risks. While praising OpenAI and the companies' collaborations, Suleyman criticized the idea of treating AI systems as though they have human-like feelings or rights. AI chatbots shouldn't fool users into thinking they are talking with sentient beings, he said. AI is "going to become more human-like, but it won't have the property of experiencing suffering or pain itself, and therefore we shouldn't over-empathize with it," Suleyman told the WSJ. "We want to create types of systems that are aligned to human values by default. That means they are not designed to exceed and escape human control." Following the new deal with OpenAI, Suleyman said Microsoft is focused on workplace software tools, healthcare diagnostics, and the science of developing clean, renewable energy sources. These efforts are happening at a time when -- as PYMNTS wrote last week -- Microsoft is transitioning from its software/cloud roots, becoming "an AI infrastructure provider shaping how work, creativity and productivity will function in the next decade." For proof, the report added, look to the company's restructured relationship with OpenAI, which gave Microsoft a 27% stake and raised its valuation above $4 trillion.
One day after that deal, the company reported nearly $78 billion in revenue for its fiscal first quarter of 2026, driven by 40% cloud growth and an expanding AI moat "built atop a foundation of trust and technical scale," the report said. Microsoft also reported a $3.1 billion hit from its investment in OpenAI. "Our planet-scale cloud and AI factory, together with Copilots across high value domains, is driving broad diffusion and real-world impact," said Satya Nadella, chairman and chief executive officer of Microsoft. "It's why we continue to increase our investments in AI across both capital and talent to meet the massive opportunity ahead." As PYMNTS noted, even that "planet-scale" cloud can still stumble. The same day Microsoft announced its earnings, it found itself dealing with an Azure outage that left cloud customers, among them airlines such as Alaska and Hawaiian, unable to use Microsoft services.
[15]
Microsoft is creating team dedicated to superintelligence led by Mustafa Suleyman
On Thursday, Microsoft announced the creation of a new advanced artificial intelligence research team, called the MAI Superintelligence Team, led by Mustafa Suleyman, who is currently in charge of the Microsoft AI division. The stated goal is to develop "practical and controllable" superintelligence geared toward concrete applications in the fields of education, healthcare, and renewable energy. The project stands out for its desire to produce AI rooted in reality, with targeted uses such as digital learning companions, medical diagnostic tools, and energy optimization solutions. This announcement comes amid increased competition between tech giants, just a few months after the creation of Meta Superintelligence Labs. The concept of superintelligence, generally associated with artificial cognitive abilities that exceed those of humans, is approached with caution by Suleyman, who insists on a measured and responsible approach. A former co-founder of DeepMind who worked at Inflection AI before joining Microsoft, he embodies the group's desire to gradually break free from its dependence on OpenAI, while capitalizing on recently acquired talent. Microsoft, which powers Bing and Copilot with OpenAI models and hosts OpenAI's operations on Azure, is now exploring internal and external alternatives, such as models from Alphabet or Anthropic. While some analysts question the profitability of massive investments in AI, Microsoft is demonstrating with this initiative a strategy focused on measurable societal impact. The company thus intends to assert its position as a technology leader, while promoting a structured and useful vision of superintelligence.
[16]
Mustafa Suleyman's AI plan for Microsoft beyond OpenAI: What it means
Human-centered AI: Suleyman's plan to balance innovation and ethics
When Mustafa Suleyman took charge of Microsoft AI in early 2024, it marked more than just another high-profile tech hire. The DeepMind and Inflection AI co-founder brought with him a deep philosophical stance on what artificial intelligence should become. Now, with his latest essay titled "Towards Humanist Superintelligence," published on Microsoft's official AI portal, Suleyman has outlined a vision that could redefine how Microsoft approaches AI - one that moves beyond its collaboration with OpenAI and towards a more values-driven, human-centric model of progress. Suleyman's concept of Humanist Superintelligence (HSI) reframes the traditional narrative around artificial general intelligence. Instead of machines surpassing human cognition and autonomy, he envisions highly advanced systems that remain fundamentally tethered to human purpose. In his words, "HSI offers an alternative vision anchored on a non-negotiable human-centrism and a commitment to accelerating technological innovation, but in that order." The essay stresses that the goal is not to build a single, omniscient artificial mind, but rather multiple specialised superintelligences, each designed for domains like medicine, energy, and education. This approach, Suleyman argues, ensures both safety and alignment. The challenge of containment and alignment - keeping systems perpetually in check even as they surpass human understanding - is one of his central preoccupations. "We need to contain and align it, not just once, but constantly, in perpetuity," he writes. This is a subtle but significant shift. Where the global AI race has often been framed as a competition for dominance, Suleyman's vision seeks to establish a framework for coexistence: AI as a collaborator, not a conqueror. The timing of this announcement is deliberate.
Microsoft remains OpenAI's largest investor and closest partner, but the emergence of Suleyman's division, Microsoft AI, marks a step toward independence. Over the past year, the company has consolidated its consumer and enterprise AI operations under this new unit, overseeing Copilot, Bing Chat, and other generative products. Suleyman's essay signals that Microsoft is no longer content with being a patron of OpenAI's technology. It now wants to develop its own models, research pipelines, and long-term vision for artificial intelligence. Microsoft is increasingly capable of pursuing advanced AI, and perhaps even AGI, on its own terms. That autonomy comes with strategic value. It allows Microsoft to align AI development with its broader corporate identity: a company that builds tools, not threats. By emphasizing humanist rather than general superintelligence, Suleyman reframes the company's goals away from a "race to AGI" and toward a more pragmatic mission: creating AI systems that demonstrably improve quality of life. Central to Suleyman's proposal is a focus on high-impact applications rather than abstract capability. One of the most striking promises in his essay is that "everyone who wants one will have a perfect and cheap AI companion helping you learn, act, be productive and feel supported." These AI companions are envisioned as personal assistants that adapt to each user's needs, offering cognitive support without replacing human connection. Another pillar of the plan is what he calls medical superintelligence. Suleyman claims Microsoft's internal systems have already shown "expert-level performance" on difficult diagnostic challenges, far exceeding human averages. The goal is to make world-class medical knowledge universally available, regardless of geography or income. The third domain of focus is clean energy and scientific innovation.
"Energy drives the cost of everything," he writes, predicting that by 2040, abundant renewable generation and storage will be within reach - with AI playing a pivotal role in accelerating discovery. From new battery materials to more efficient grids, AI could help solve the engineering bottlenecks that have long slowed sustainability transitions. Taken together, these three pillars - companionship, health, and energy - sketch a roadmap for Microsoft's next decade of AI. They represent a fusion of idealism and pragmatism, grounded in technologies that can both uplift society and sustain the company's business model. Microsoft's partnership with OpenAI remains one of the most consequential alliances in tech. Yet, Suleyman's essay underscores a subtle decoupling. He rejects the "race to AGI" narrative that OpenAI helped popularize, calling instead for a long-term, coordinated approach among governments, labs, and startups. For Microsoft, this philosophical shift doubles as a business strategy. By championing its own "humanist" model of AI, the company positions itself not as a follower of OpenAI's breakthroughs but as a parallel innovator with distinct goals. That also gives Microsoft flexibility to explore new architectures, training paradigms, and applications that may not align perfectly with OpenAI's roadmap. In effect, Suleyman is crafting a new identity for Microsoft AI - one that retains the resources of a trillion-dollar corporation but borrows the moral language of an academic movement. It is both a branding exercise and a re-orientation of purpose. Still, Suleyman's vision leaves tough questions unanswered. The notion of "perpetual alignment" sounds noble, but maintaining ethical and technical control over self-improving systems remains an unresolved research problem. Commercial viability will also matter: while AI companions and medical tools are socially compelling, their monetization paths are unclear. Then there is the matter of credibility. 
Suleyman's leadership style has drawn scrutiny in the past, and his dismissal of certain research directions, like exploring AI consciousness, may clash with broader scientific curiosity. Critics may see "humanist superintelligence" as more branding than blueprint. And finally, there is the question of cooperation. Suleyman calls for "every commercial lab, every startup, every government" to coordinate on alignment. But global AI development has always been competitive, fragmented, and secretive. Building consensus on how to contain intelligence that continuously improves is, for now, more aspiration than policy. For India and other emerging economies, Suleyman's plan could have tangible ripple effects. An AI-powered healthcare system could democratize diagnostics in regions short on doctors. Personalised educational companions could support students across languages and learning levels. AI-enhanced energy infrastructure could help nations like India balance sustainability with growth. But beyond national interest, Humanist Superintelligence reframes what progress in AI should look like. It asks the industry to stop equating intelligence with autonomy and instead measure it by its service to humanity. That redefinition could mark the beginning of a new chapter in the story of artificial intelligence - one where power is measured not in computation, but in compassion.
Microsoft forms new AI Superintelligence team under Mustafa Suleyman, focusing on developing controlled AI systems with human oversight. The initiative aims to create practical AI solutions for healthcare and education while explicitly avoiding autonomous systems that could threaten humanity.
Microsoft has announced the formation of a new AI Superintelligence team under the leadership of Mustafa Suleyman, CEO of Microsoft AI, marking a significant shift in the company's approach to advanced artificial intelligence development [1]. The initiative, dubbed "Humanist Superintelligence" (HSI), represents Microsoft's commitment to developing AI systems that remain under human control while solving real-world problems [2].
Source: Gadgets 360
Suleyman's vision for HSI stands in stark contrast to the broader industry pursuit of artificial general intelligence (AGI). According to his announcement, Microsoft's approach will be "carefully calibrated, contextualized, within limits" rather than creating "an unbounded and unlimited entity with high degrees of autonomy" [1]. The company explicitly states it is "not building an ill-defined and ethereal superintelligence" but rather "a practical technology explicitly designed only to serve humanity". Suleyman has established three fundamental principles for HSI that echo Isaac Asimov's laws of robotics: superintelligent AI cannot have total autonomy, the capacity for self-improvement, or the ability to set its own goals [2]. This approach deliberately sacrifices performance optimization in favor of safety and human interpretability.
Source: TechRadar
The new team is focusing on specific domains where AI can provide immediate value while maintaining human oversight. In healthcare, Microsoft has developed MAI-DxO, a diagnostic system that reportedly achieves an 85% success rate in complex medical challenges, surpassing human performance [5]. Suleyman envisions systems with "expert level performance at the full range of diagnostics, alongside highly capable planning and prediction in operational clinical settings". Education represents another key focus area, with Microsoft developing AI companions that can adjust to individual learning styles and work alongside teachers to create customized lessons and exercises [5]. The company also sees potential applications in renewable energy production and materials research [4].
This announcement comes at a time when Microsoft's relationship with OpenAI has become increasingly strained. Under a new agreement, Microsoft can now "independently pursue AGI alone or in partnership with third parties," giving the company legal rights to use OpenAI's intellectual property for its own AGI development [1]. Microsoft has already begun reducing its dependence on OpenAI by incorporating models from Google and Anthropic into its products. The formation of this team follows similar moves by other tech giants, with Meta recently establishing its own Superintelligence Labs unit. However, Microsoft's emphasis on human-centric development distinguishes its approach from competitors focused primarily on achieving AGI.
Microsoft's HSI ambitions will require significant computational resources, with the company acknowledging that electricity consumption could rise by more than 30% by 2050, driven partly by expanding AI infrastructure [5]. The company argues that AI will help design more efficient batteries and manage energy grids, though the net environmental impact remains uncertain.
Source: SiliconANGLE
Suleyman has emphasized that Microsoft will require AIs to communicate in human-understandable formats rather than in "vector space," even if this approach proves less efficient [2]. This commitment to transparency and interpretability reflects the company's broader philosophy of prioritizing safety over performance maximization.