3 Sources
[1]
Microsoft is building its own AI model
Microsoft $MSFT wants out of the "powered by someone else" business. The company's AI chief, Mustafa Suleyman, told the Financial Times that the company is pushing toward AI "self-sufficiency." That means developing its own advanced foundation models and continuing to reduce its reliance on OpenAI, even as the two companies keep their relationship intact.

Microsoft's October 2025 reset with OpenAI preserved the core perks: Microsoft says OpenAI remains its "frontier model partner," and Microsoft's IP rights and Azure API exclusivity run "through 2032," including models "post-AGI." So this is Microsoft buying itself even more room to negotiate, route, and replace. When your flagship AI product sits inside Microsoft 365, "single supplier" starts sounding like a vulnerability you have to explain on earnings calls. Microsoft can keep selling "Copilot everywhere," but the real prize is making sure the underlying compute, security, and billing stay Microsoft-shaped, no matter which model is hot this quarter.

Microsoft is also trying to prove it's not just talking. In August 2025, Microsoft AI previewed MAI-1-preview, calling it "an in-house mixture-of-experts model" that was "pre-trained and post-trained on ~15,000 NVIDIA H100 GPUs," with plans to roll it into certain Copilot text use cases. That's a clear marker of intent: Microsoft is building models at meaningful scale, on the same hardware reality as everyone else.

Microsoft's new Maia 200 chip is positioned as an inference accelerator "engineered to dramatically improve the economics of AI token generation" -- or, essentially, to take aim at Nvidia $NVDA, pairing custom silicon with a software package meant to loosen CUDA's grip. Inference is where the bills stack up -- and where hyperscalers most want leverage.

Meanwhile, Microsoft is widening its menu on purpose, hosting models from xAI, Meta $META, Mistral, and Black Forest Labs in its data centers.
It has also been willing to use Anthropic models in Microsoft 365 Copilot experiences after internal testing found them better for certain Office tasks, a shift that even involved paying AWS for access. So yes, Microsoft wants to be the place where every winning model runs -- and it wants at least one winner to have a Microsoft badge on it.
[2]
Microsoft eyes AI 'self-sufficiency' after OpenAI deal rejig
Microsoft's (MSFT) AI head Mustafa Suleyman said that the company is pursuing "true self-sufficiency" in AI by developing its own powerful models and reducing its dependence on OpenAI, The Financial Times reported. Suleyman told the outlet that Microsoft wants advanced, internally developed AI in order to capture more of the enterprise market and reduce reliance on external partners like OpenAI. Investors worry that large AI infrastructure spending may be creating a bubble and weighing on big tech stock performance, despite long-term expectations for revenue growth. Microsoft is accelerating its in-house AI development alongside investments in external companies to compete for enterprise AI deals, even as rivals like Anthropic lead in key areas.
[3]
Microsoft Aims at AI Self-Sufficiency, Plots to Dump OpenAI and ChatGPT
Mustafa Suleyman, one of the earliest pioneers in neural network models at DeepMind, which was acquired by Google in 2014, and currently the AI lead at Microsoft, has suggested that the Big Tech giant plans to progressively reduce its reliance on OpenAI and build self-sufficiency in the artificial intelligence space. "We have to develop our own foundation models, which are at the absolute frontier, with gigawatt-scale compute and some of the very best AI training teams in the world," Suleyman was quoted as saying in an article published in the Financial Times.

The question now is whether the move will put OpenAI's future in further doubt, especially with a potential IPO in 2026. However, a report published by Windows Central says the move is perhaps less surprising than one might think. Right now, Microsoft's entire AI operation is powered by ChatGPT and other OpenAI models, including DALL-E 3. With the company witnessing growing demand for enterprise-grade AI tools, including Microsoft 365 Copilot and GitHub Copilot, the need to chart a unique AI course sounds quite logical, the report says.

Moreover, the two companies have had a none-too-happy relationship ever since Microsoft first pumped $10 billion into Sam Altman's OpenAI, after it surprised many with its early efforts at training large language models (LLMs) and their ability to write leave letters (just kidding). Things went sour when Altman announced that OpenAI, founded as a non-profit, would restructure around a for-profit arm controlled by the original non-profit.

What caused the issues between Microsoft and OpenAI?

Last September the duo smoked the peace pipe after a year-long tussle over changes in their partnership, signing a non-binding MOU for the next phase of their relationship. "Together, we remain focused on delivering the best AI tools for everyone, grounded in our shared commitment to safety," they said at the time.
Today, Microsoft holds 27% of the new for-profit arm of OpenAI and maintains IP rights for OpenAI models through 2032. As part of the larger deal, OpenAI was freed to seek compute from competing cloud firms, while Microsoft could divest some of the risk by aligning with other AI companies, the first of which was Anthropic, which came in to power Copilot.

Suleyman's concern may be related to the near-trillion-dollar spending commitments that OpenAI has built up without clear revenue growth plans. Late in January, OpenAI shared details linking its growth to computing capacity: while capacity grew 9.5 times in the two years to 2025, annualised recurring revenue expanded ten times over the same period, to $20 billion last year. In spite of all this, Microsoft has invested in competitors like Anthropic in the past. Of course, the company doesn't want to ruffle feathers yet, as was evident from a post on X by Microsoft's head of communications, Frank X. Shaw, who noted that OpenAI has a huge role to play and that "we are building frontier models for specific things we want to do as well."

The time for a reset is now, feels Microsoft

Still, Suleyman seems to think that it is time for a reset. "We have to reset that and make the assumption that we should only bring a system like that into the world, that we are sure we can control and operates in a subordinate way to us. These tools, like any other past technology, are designed to enhance human wellbeing and serve humanity, not exceed," he told FT. The great emphasis the company has placed on AI agents in its Azure toolkit, which lets users automate prompts and workflows while maintaining corporate and legal compliance, is yet another marker of Microsoft's AI strategy. Suleyman told FT that white-collar work will get automated within two years, but suggests such workers should be thrilled at the news, as AI would give them ways to grow faster in their jobs.
Given this narrative, maybe all Microsoft wants is to achieve self-reliance, or at least reduce its reliance on OpenAI and keep other options open, preferably ones its own engineers create in the future.
Microsoft's AI chief Mustafa Suleyman announced that the company is pursuing AI self-sufficiency by developing its own advanced foundation models. The strategic shift aims to reduce reliance on OpenAI while maintaining their partnership through 2032. Microsoft previewed MAI-1-preview, trained on ~15,000 NVIDIA H100 GPUs, and launched the Maia 200 inference chip to control AI economics.
Microsoft AI is charting a new course toward what its chief Mustafa Suleyman calls "true self-sufficiency" in artificial intelligence, marking a strategic shift that could reshape the company's relationship with OpenAI [1]. Suleyman, a DeepMind pioneer who now leads Microsoft's AI efforts, told the Financial Times that the company must develop its own foundation models "at the absolute frontier, with gigawatt-scale compute and some of the very best AI training teams in the world" [3]. The announcement signals Microsoft's intent to reduce reliance on OpenAI, even as their partnership remains legally intact through 2032, including IP rights and Azure API exclusivity for models "post-AGI" [1].
Source: CXOToday
The push for in-house AI development isn't just talk. In August 2025, Microsoft AI previewed MAI-1-preview, described as "an in-house mixture-of-experts model" that was "pre-trained and post-trained on ~15,000 NVIDIA H100 GPUs," with plans to integrate it into certain Microsoft 365 Copilot text use cases [1]. This represents a clear marker of intent at meaningful scale, demonstrating Microsoft's commitment to develop in-house AI models using the same hardware infrastructure as competitors. The company's entire AI operation currently runs on ChatGPT and other OpenAI models, including DALL-E 3, powering enterprise AI tools like Microsoft 365 Copilot and GitHub Copilot [3].

Microsoft's new Maia 200 chip positions the company to control the economics of AI at the inference layer, where costs accumulate rapidly. Engineered as an inference accelerator to "dramatically improve the economics of AI token generation," the custom silicon aims to challenge NVIDIA's CUDA dominance [1]. These custom AI inference chips represent Microsoft's attempt to gain leverage where hyperscalers need it most: at the point where AI infrastructure spending translates into actual token generation costs. The move addresses investor concerns that large AI infrastructure spending may be creating a bubble that weighs on big tech stock performance, despite long-term expectations for revenue growth [2].
The relationship between Microsoft and OpenAI has been strained since Microsoft's initial $10 billion investment in Sam Altman's company. Tensions escalated when OpenAI announced plans to transition from a non-profit to a for-profit structure, leading to a year-long dispute that ended with a non-binding MOU signed in September [3]. Microsoft now holds 27% of OpenAI's for-profit arm, while OpenAI gained the freedom to seek compute from competing cloud firms [3]. Microsoft has diversified aggressively, hosting models from xAI, Meta, Mistral, and Black Forest Labs in its data centers. The company even pays AWS for access to Anthropic models used in Microsoft 365 Copilot experiences after internal testing found them superior for certain Office tasks [1].

For Microsoft, having its flagship AI product embedded in Microsoft 365 while depending on a single supplier creates a vulnerability that requires explanation on earnings calls [1]. The company aims to capture more of the enterprise market by ensuring that underlying compute, security, and billing remain "Microsoft-shaped, no matter which model is hot this quarter" [1]. Suleyman's emphasis on AI agents in the Azure toolkit, which allows users to automate prompts and workflows while maintaining corporate and legal compliance, signals where Microsoft sees its competitive advantage [3]. He predicts white-collar work will become automated within two years, though he frames this as an opportunity for workers to accelerate their career growth [3]. OpenAI's near-trillion-dollar spending commitments without clear revenue growth plans may be fueling Microsoft's urgency, even as OpenAI reported annualized recurring revenues of $20 billion, a ten-times expansion over two years [3].

Summarized by Navi