41 Sources
[1]
Tech giants pour billions into Anthropic as circular AI investments roll on
On Tuesday, Microsoft and Nvidia announced plans to invest in Anthropic under a new partnership that includes a $30 billion commitment by the Claude maker to use Microsoft's cloud services. Nvidia will commit up to $10 billion to Anthropic and Microsoft up to $5 billion, with both companies investing in Anthropic's next funding round. The deal brings together two companies that have backed OpenAI and connects them more closely to one of the ChatGPT maker's main competitors. Microsoft CEO Satya Nadella said in a video that OpenAI "remains a critical partner," while adding that the companies will increasingly be customers of each other. "We will use Anthropic models, they will use our infrastructure, and we'll go to market together," Nadella said. The move follows OpenAI's recent restructuring that gave the company greater distance from its non-profit origins. OpenAI has since announced a $38 billion deal to buy cloud services from Amazon.com as the company becomes less dependent on Microsoft. OpenAI CEO Sam Altman has said the company plans to spend $1.4 trillion to develop 30 gigawatts of computing resources.
[2]
Anthropic announces $50 billion data center plan | TechCrunch
Anthropic on Wednesday said it had signed an ambitious new data center partnership with U.K.-based neocloud provider, Fluidstack, committing $50 billion to building facilities across the U.S. to meet its growing compute needs. The data centers will be located in Texas and New York, and come online throughout 2026. The company described the sites as "custom built for Anthropic with a focus on maximizing efficiency for our workloads." "We're getting closer to AI that can accelerate scientific discovery and help solve complex problems in ways that weren't possible before," Anthropic's CEO and co-founder, Dario Amodei (pictured above), said in a statement. "Realizing that potential requires infrastructure that can support continued development at the frontier." Because of the intense compute demands of Anthropic's Claude family of models, the company is already engaged in significant cloud partnerships with both Google and Amazon (which is also an investor). But this is the company's first major effort to build custom infrastructure. The $50 billion outlay, while large, is in line with the company's internal revenue projections, which reportedly see Anthropic reaching $70 billion in revenue and $17 billion in positive cash flow by 2028. While $50 billion represents a massive commitment in both cash and compute power, it is nonetheless dwarfed by similar projects from Anthropic's competitors. Meta has committed to building $600 billion worth of data centers over the next three years, while the Stargate partnership between SoftBank, OpenAI and Oracle has already planned $500 billion in infrastructure spending. The spending has fueled concerns about an AI bubble due to flagging demand or even misallocated spending. The project marks a major success for Fluidstack, a relatively young neocloud company that has become a vendor of choice in the AI building boom. Founded in 2017, the company was named in February as the primary partner for a 1 gigawatt AI project backed by the French government, which represented more than $11 billion in spending. According to Forbes, the company already has partnerships in place with Meta, Black Forest Labs and France's Mistral. Fluidstack was also one of the first third-party vendors to receive Google's custom-built TPUs, a major vote of confidence for the company.
[3]
Microsoft's new Anthropic partnership brings Claude AI models to Azure
Microsoft is announcing a strategic partnership with Anthropic today that will bring the AI startup's models to Microsoft Foundry for the first time. As part of the deal, Anthropic is also committing to purchasing $30 billion of Azure compute capacity and "to contract additional compute capacity up to one gigawatt." Microsoft Foundry customers will now be able to access Anthropic's frontier Claude models including Claude Sonnet 4.5, Claude Opus 4.1, and Claude Haiku 4.5. Despite these models coming to Microsoft's AI servers, Amazon will still remain Anthropic's primary cloud provider and training partner. Nvidia and Anthropic are also partnering today as part of this deal, to optimize Anthropic's models for the best performance on future Nvidia architectures. Anthropic is committing to up to one gigawatt of compute capacity using Nvidia Blackwell and Vera Rubin systems. As part of these partnerships, Nvidia is investing up to $10 billion in Anthropic, with Microsoft also investing $5 billion. Microsoft has been increasingly embracing Anthropic's models across its Copilot services. In fact, Microsoft has been favoring Anthropic's Claude 4 over GPT-5 in its new auto AI model selector in Visual Studio Code. Microsoft also brought Anthropic's Claude Sonnet 4 and Claude Opus 4.1 to Microsoft 365 Copilot users recently.
[4]
How Anthropic's DIY data centers could accelerate AI's infrastructure frenzy
Anthropic will spend $50 billion to build its own data centers.They'll be in Texas, New York, and elsewhere across the US.It could set new standards for AI labs trying to scale. Anthropic will invest $50 billion in the construction of its own data centers, the company announced Tuesday -- a move that could have ripple effects across an industry already racing to build AI infrastructure. "Anthropic serves more than 300,000 business customers, and our number of large accounts -- customers that each represent over $100,000 in run-rate revenue -- has grown nearly sevenfold in the past year," the company wrote in the announcement, citing the demand behind the expansion. Also: AI will cause 'jobs chaos' within the next few years, says Gartner - what that means The new facilities -- slated to be built in Texas, New York, and other locations across the country that have yet to be disclosed -- will be "custom built for Anthropic with a focus on maximizing efficiency for our workloads, enabling continued research and development at the frontier," the blog post said. Tuesday's announcement marks Anthropic's first foray into building its own data centers. Vijay Gadepally, a senior scientist at the Massachusetts Institute of Technology's Lincoln Laboratory and the cofounder of software company Bay Compute, said it could be a harbinger of a broader trend throughout the tech industry, as younger companies seek to secure sustainable sources of computing power. Also: Does your chatbot have 'brain rot'? 4 ways to tell "This is, I think, definitely a direction that we're going to see happening more," Gadepally told ZDNET. "Three or four years ago, the biggest bottleneck was how many GPUs you could get your hands on, and that's why a lot of these big model developers signed strategic agreements with big cloud providers or hyperscalers to get essentially guaranteed access." The new effort from Anthropic to build its own custom data centers, Gadepally added, "is the next logical progression of that: How much of the verticalization of compute can you get your hands on?" Most AI startups don't have the funds of an OpenAI or an Anthropic at their disposal, and will have to continue relying on partnerships and leasing agreements with third-party AI infrastructure companies. But for the small subset of developers who can afford it, following in Anthropic's footsteps by building proprietary data centers could catch on. Also: Google's Private AI Compute promises good-as-local privacy in the Gemini cloud "For companies that are training massive frontier models, there is a good chance that you're going to see increasing verticalization," he said. Anthropic's new data center construction project will supposedly push the company toward achieving major new advancements with its technology. "We're getting closer to AI that can accelerate scientific discovery and help solve complex problems in ways that weren't possible before," company CEO and cofounder Dario Amodei said in the announcement. "Realizing that potential requires infrastructure that can support continued development at the frontier. These sites will help us build more capable AI systems that can drive those breakthroughs, while creating American jobs." Anthropic added that as the new sites come online throughout the next year, they'll create 800 permanent jobs and 2,400 construction jobs, thereby helping to advance the goals laid out in the Trump administration's AI Action Plan. 
Released this summer, the Action Plan focuses on ramping up infrastructure to maintain the US's competitive edge in the AI race over other countries, especially China. Also: As OpenAI hits 1 million business customers, could the AI ROI tide finally be turning? Such proclamations have become a common refrain among AI developers at a time when fears about a potential AI bubble have been simmering. While billions in investor dollars are still flowing freely into AI, some experts worry that the technology won't be able to financially deliver in the long run, and that the US economy could cave in as a result. Last week, OpenAI wrote in a blog post that "superintelligence" -- a hypothetical AI system that's far more advanced than the human brain -- could lead to "a world of widely distributed abundance" by, for example, assisting with scientific discovery and drug development. Only time will tell if such promises can be turned into reality -- and whether or not concerns about an AI bubble are justified. As the stakes of the AI race have grown, so too has demand among tech developers for energy. It takes a lot of electrical power to fuel the supercomputers behind consumer chatbots like Claude, ChatGPT, and Gemini, and those in turn require huge quantities of water so that they don't constantly overheat. Data centers also raise energy costs for residential areas surrounding them. Also: Worried about AI's soaring energy needs? Avoiding chatbots won't help - but 3 things could All of that energy-providing infrastructure is expensive, which means that deep-pocketed legacy tech companies like Meta, Amazon, and Google's parent company Alphabet have had an immediate advantage over the vast majority of startups trying to sell new AI products. But OpenAI and Anthropic -- both of which are relatively young companies that were quick to launch extremely popular chatbots -- have been two of the most successful exceptions to that rule. (Disclosure: Ziff Davis, ZDNET's parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.) Founded as a nonprofit in 2015, OpenAI started receiving billions of dollars in capital from Microsoft four years later, which enabled it to rapidly build new models and invest in the requisite infrastructure to power them. OpenAI doesn't own these facilities; however, it mostly leases them from developers specializing in data center construction. Though the company is restructuring towards an ambitious future, it still relies on partnerships to access the computing infrastructure that fuels all of its AI products. Also: The key to AI implementation might just be a healthy skepticism - here's why Earlier this month, for example, OpenAI announced it would pay Amazon Web Services $38 billion for access to that company's cloud computing services as it ramps up its efforts to build artificial general intelligence (AGI) -- and, more immediately, hit the goals that it set for itself when it kicked off Project Stargate, a collaborative data center construction project with the government and other big tech firms, in January. Thus far, Anthropic has pursued a similar course, primarily tapping into data center architecture provided by its two biggest financial backers, Alphabet and Amazon. But the company has grown explosively since its founding in 2021 -- with a reported valuation of $183 billion in September -- thanks in large part to Claude's popularity among enterprise users. 
Its options have therefore expanded.
[5]
Anthropic, Microsoft, and Nvidia swap billions in AI deal
What do you get when you combine Anthropic, Microsoft, and Nvidia? A bubble that blows itself. It wouldn't be a week of tech news without more circular exchanges of billions of dollars between AI firms. This time around, it's a $45 billion back-scratching session involving Microsoft, Anthropic, and Nvidia, announced during Redmond's Ignite conference. The three firms shared news of the deal with the publication of a joint press release on Tuesday, reporting Anthropic's commitment to invest $30 billion with Microsoft to get a new bundle of Nvidia-powered Azure compute capacity. That's still not enough to get the Claude maker to 1 GW of Azure compute, the addition of which the press release said will be included in the contract for future expansion. With Nvidia and Microsoft standing to make plenty on Anthropic's Azure investment, which will naturally require Microsoft to purchase additional Nvidia chips, the two companies decided to reward the Claude-maker. The AI arms dealer and house of Windows are tossing in their own Anthropic investments of up to $10 billion and $5 billion, respectively, as part of the circular deal. While the press release put a number on the Microsoft side of the deal, the financial benefit to Nvidia was left undefined. That said, the trio did say that Anthropic's compute commitment would be for up to one gigawatt of compute capacity derived from Nvidia's Grace Blackwell and Vera Rubin systems, and 1 GW requires a lot of chips. Along with Microsoft's need to purchase a bunch of Nvidia chips, the three-way deal will also see Anthropic and Nvidia collaborate on designing and engineering future Nvidia architectures for Anthropic workloads. Anthropic models are now available in public preview on Microsoft Foundry as part of the deal, Anthropic said in an accompanying release on Tuesday. Anthropic's presence in Microsoft 365 is being expanded as well, with Agent Mode in Excel now offering an option to use Claude alongside ChatGPT. "This partnership will make Claude the only frontier model available on all three of the world's most prominent cloud services," the trio congratulated itself. While Anthropic is clearly extending its cloud footprint into other managed services, the company noted in a footnote to its own copy of the joint press statement that Amazon Web Services "remains Anthropic's primary cloud provider and training partner." Anthropic's continued prioritization of its Amazon relationship echoes OpenAI's own changed relationship with Microsoft. Since the AI giant has turned into a for-profit firm, it has done more to separate itself from Redmond. Even with that separation, and a $38 billion agreement with AWS to expand its footprint at another hyperscaler, Microsoft still owns a 27 percent stake in OpenAI, which continues to rely on it for the bulk of its computing. Likewise, this Microsoft-Anthropic-Nvidia trio is just the latest in a string of tech companies throwing billions at each other for the promise of more AI computing resources, though this deal pales in comparison to some of the other mega commitments. OpenAI reportedly inked a $300 billion cloud compute contract with Oracle in September, which is far more than the yet-to-turn-a-profit firm has on the books. Oracle needs to borrow $25 billion a year for the next four years to construct the computing infrastructure OpenAI has asked for, according to analyst estimates.
Tuesday's Anthropic deal is likely to add further fuel to the belief we're in the midst of an AI bubble: According to sources close to the deal that spoke with CNBC, Anthropic's agreement with Microsoft and Nvidia has pumped the company's valuation to around $350 billion, just two months after Anthropic announced it had completed a Series F funding round that valued it at $183 billion. That's nearly double the valuation in two months - quite the leap. But as long as investors keep buying the dream, everything's alright forever. ®
[6]
Anthropic to invest $50bn in new US data centres
Anthropic plans to invest $50bn in building artificial intelligence infrastructure in the US over the coming years, as the start-up races to secure new computing power. The Claude chatbot maker on Wednesday said it would develop new data centres in New York and Texas with UK-based cloud computing start-up Fluidstack. The sites will bolster Anthropic's research and development as well as providing power for its existing AI tools. "We're getting closer to AI that can accelerate scientific discovery and help solve complex problems in ways that weren't possible before. Realising that potential requires infrastructure that can support continued development at the frontier," said Dario Amodei, chief executive and co-founder of Anthropic. The investment follows a flurry of deals by Anthropic's chief rival OpenAI to secure chips and computing capacity from Nvidia, AMD, Broadcom, Oracle and Google, estimated to be worth about $1.5tn. The circular arrangements between companies that act as suppliers, investors and customers of each other, combined with booming AI valuations, have added to concerns about a bubble in the sector. Anthropic has also moved to boost its computing power this year. Last month, the four-year-old start-up signed a deal to secure access to 1mn Google Cloud chips to train and run its AI models. The San Francisco-based group also has a partnership with Amazon, which is the start-up's "primary" cloud provider and a large investor. It has invested $8bn in Anthropic and is building a 2.2GW data-centre cluster in New Carlisle, Indiana, to help train its AI models. Its latest agreement will involve it partnering with Fluidstack, a small start-up that this year signed a deal with the French government to build a major computing cluster in France. Anthropic said it chose the company for its "exceptional agility". "We're proud to partner with frontier AI leaders like Anthropic to accelerate and deploy the infrastructure necessary to realise their vision," said Gary Wu, co-founder and CEO of Fluidstack. Anthropic, which was recently valued at $183bn post-money, was founded by a group of former OpenAI employees. While OpenAI has focused largely on its consumer product ChatGPT, Anthropic has targeted enterprise customers. The group's run-rate revenue -- a projection of annual revenue based on recent performance which is favoured by start-ups -- shot from $1bn to more than $5bn in September, when the company raised $13bn from investors including Iconiq Capital and Lightspeed Venture Partners.
[7]
Anthropic will invest $50 billion in building AI data centers in the US
Anthropic just announced plans to put $50 billion toward bolstering computing infrastructure in the US. As part of the initiative, the company is working with the AI cloud platform Fluidstack to build datacenters in Texas and New York, "with more sites to come." The data centers will come online throughout 2026 and create 800 jobs, according to Anthropic. "It will help advance the goals in the Trump administration's AI Action Plan to maintain American AI leadership and strengthen domestic technology infrastructure," Anthropic says in the press release. Anthropic isn't the only AI company pouring billions into building out data centers, as OpenAI and SoftBank announced the $500 billion "Stargate Project" in January, which will light up a series of AI data centers around the US, starting with Texas. Meta has also committed $600 billion to invest in US infrastructure and data centers. Anthropic says that the size of its investment "is necessary to meet the growing demand" for its AI chatbot Claude, while also allowing it to keep its research "at the frontier" of the technology.
[8]
Microsoft partners with Anthropic and Nvidia in cloud infrastructure deal
SAN FRANCISCO (AP) -- Microsoft said Tuesday it is partnering with artificial intelligence company Anthropic and chipmaker Nvidia as part of a cloud infrastructure deal that moves the software giant further away from its longtime alliance with OpenAI. Anthropic, maker of the chatbot Claude that competes with OpenAI's ChatGPT, said it is committed to buying $30 billion of computing capacity from Microsoft's Azure cloud computing platform. As part of the partnership, Nvidia will also invest up to $10 billion in Anthropic, and Microsoft will invest up to $5 billion in the San Francisco-based startup. The joint announcements by CEOs Dario Amodei of Anthropic, Satya Nadella of Microsoft, and Jensen Huang of Nvidia came just ahead of the opening of Microsoft's annual Ignite developer conference. Microsoft was, until earlier this year, the exclusive cloud provider for OpenAI. They continue to be partners but OpenAI has increasingly sought to secure its own cloud capacity through big deals with Oracle, SoftBank and other data center developers and chipmakers.
[9]
Microsoft and NVIDIA will invest up to $15 billion in Anthropic
Fresh off a reorganized deal with OpenAI, Microsoft is diversifying its AI investments. The company says it will invest up to $5 billion in Anthropic. Meanwhile, NVIDIA has pledged up to $10 billion in the Claude maker. The three-way partnership, which includes various other commitments, could be seen as further evidence that an AI bubble is about to burst. As part of the deal, Anthropic has committed to buy $30 billion of Microsoft Azure cloud computing capacity. Anthropic says it will also contract additional capacity, up to one gigawatt. In addition, Microsoft Foundry customers will gain access to several Claude models. These include Sonnet 4.5, Opus 4.1 and Haiku 4.5. Meanwhile, NVIDIA and Anthropic will work together to improve Anthropic's AI models for NVIDIA hardware. The pair will also optimize future NVIDIA architectures for Anthropic's needs. All of this is against the backdrop of Microsoft's recently renewed partnership with OpenAI, which loosens their exclusivity. It didn't take long to see the apparent fruit of that. Early this month, the ChatGPT maker signed a $38 billion cloud contract with Amazon. And last week, Anthropic said it will use AWS AI chips after Amazon invested an additional $4 billion in the Claude maker. Dizzy yet? To borrow imagery from the "two Spider-Men" meme, the AI world increasingly looks like a big circle of web-slingers, all pointing at each other. Only in this case, each index finger is flinging billions of dollars to help prop up the other Spider-Men. (Pay no mind to the AI layoffs.) It's too early to say how this all plays out, but the circular nature here makes it easier to understand why some believe we're looking at a bubble. NVIDIA's earnings tomorrow could tell us more.
[10]
Anthropic to spend $50 billion on U.S. AI infrastructure, starting with Texas, New York data centers
Anthropic announced plans Wednesday to spend $50 billion on U.S. artificial intelligence infrastructure buildout, starting with custom data centers in Texas and New York. The facilities, which will be designed to support the company's rapid enterprise growth and its long-term research agenda, will be developed in partnership with Fluidstack. Fluidstack is an AI cloud platform that supplies large-scale graphics processing unit (GPU) clusters to clients like Meta, Midjourney, and Mistral. Additional sites are expected to follow, with the first locations going live in 2026. The project is expected to create 800 permanent jobs and more than 2,000 construction roles. The investment positions Anthropic as a major domestic player in physical AI infrastructure at a moment when policymakers are increasingly focused on U.S.-based compute capacity and technological sovereignty. "We're getting closer to AI that can accelerate scientific discovery and help solve complex problems in ways that weren't possible before. Realizing that potential requires infrastructure that can support continued development at the frontier," said CEO Dario Amodei. "These sites will help us build more capable AI systems that can drive those breakthroughs, while creating American jobs." The move comes as Anthropic rival OpenAI pushes forward with an aggressive buildout of its own. The ChatGPT maker has secured more than $1.4 trillion in long-term infrastructure commitments through deals with Nvidia, Broadcom, Oracle, and the major cloud providers, including Microsoft, Google, and, most recently, Amazon. The scale of that spending has raised questions about whether the U.S. has the power capacity and industrial backbone to deliver on such promises, and whether the AI sector is drifting into bubble territory.
[11]
Microsoft to invest $5B in Anthropic, as Claude maker commits $30B to Azure in new Nvidia alliance
The frenzy of AI deals and cloud partnerships reached another zenith Tuesday morning as Microsoft, Nvidia, and Anthropic announced a surprise alliance that includes a $5 billion investment by Microsoft in Anthropic -- which, in turn, committed to spend at least $30 billion on Microsoft's Azure cloud platform. Nvidia, meanwhile, committed to invest up to $10 billion in Anthropic to ensure the Claude maker's frontier models are optimized for its next-generation Grace Blackwell and Vera Rubin chips. The deal reflects growing moves by major AI players to collaborate across the industry in an effort to build and expand capacity and access to next-generation AI models. Microsoft recently renegotiated its partnership with OpenAI and has been increasingly partnering with others in the industry. Anthropic has been closely tied to Amazon, which has committed to invest a total of $8 billion in the startup. Anthropic says in a post that Amazon remains its "primary cloud provider and training partner" for AI models. We've contacted Amazon for comment on the news. OpenAI, for its part, recently announced a seven-year, $38 billion agreement with Amazon to expand its AI footprint to the Seattle tech giant's cloud infrastructure. Beyond the massive capital flows, the Microsoft-Nvidia-Anthropic partnership expands where enterprise customers can access Anthropic's technology. According to the announcement, Microsoft customers will be able to use its Foundry platform to access Anthropic's next-generation frontier models, identified as Claude Sonnet 4.5, Claude Opus 4.1, and Claude Haiku 4.5. Microsoft also committed to continuing access for Claude across its Copilot family, ensuring the models remain available within GitHub Copilot, Microsoft 365 Copilot, and Copilot Studio. The news comes as Microsoft holds its big Ignite conference in San Francisco.
[12]
Anthropic announces $50B investment in new US data centers to meet AI demand
SAN FRANCISCO (AP) -- Artificial intelligence company Anthropic announced a $50 billion investment in computing infrastructure on Wednesday that will include new data centers in Texas and New York. Anthropic, maker of the chatbot Claude, said it is working with London-based Fluidstack to build the new computing facilities to power its AI systems. It didn't disclose their exact locations or what source of electricity they will need. A report last month from TD Cowen said that the leading cloud computing providers leased a "staggering" amount of U.S. data center capacity in the third fiscal quarter of this year, amounting to more than 7.4 gigawatts of energy, more than all of last year combined. Oracle was securing the most capacity during that time, much of it supporting AI workloads for Anthropic's chief rival OpenAI, maker of ChatGPT. Google was second and Fluidstack came in third, ahead of Meta, Amazon, CoreWeave and Microsoft. Anthropic said its projects will create about 800 permanent jobs and 2,400 construction jobs. It said in a statement that the "scale of this investment is necessary to meet the growing demand for Claude from hundreds of thousands of businesses while keeping our research at the frontier." The tech industry's huge amount of spending on computing infrastructure for AI startups that aren't yet profitable has fueled concerns about an AI investment bubble. Investors have closely watched a series of intertwined deals over recent months between top AI developers such as OpenAI and Anthropic and the companies building the costly computer chips and data centers needed to power their AI products. Anthropic said it will continue to "prioritize cost-effective, capital-efficient approaches" to scaling up its business.
[13]
Anthropic announces $50bn plan for datacenter construction in US
AI startup behind Claude chatbot working with London-based Fluidstack on building vast new computing facilities Artificial intelligence company Anthropic announced a $50bn investment in computing infrastructure on Wednesday that will include new datacenters in Texas and New York. "We're getting closer to AI that can accelerate scientific discovery and help solve complex problems in ways that weren't possible before," Anthropic's CEO, Dario Amodei, said in a press release. Building the massive information warehouses takes an average of two years in the US and requires copious amounts of energy to fuel the facilities. The company, maker of the AI chatbot Claude, popular with businesses adopting AI, said in a statement that the "scale of this investment is necessary to meet the growing demand for Claude from hundreds of thousands of businesses while keeping our research at the frontier". Anthropic said its projects will create about 800 permanent jobs and 2,400 construction jobs. The startup said it is working with London-based Fluidstack to build the new computing facilities to power its AI systems. It didn't disclose their exact locations or what source of electricity the facilities will need. The latest deals show that the tech industry is moving forward on huge spending to build energy-hungry AI infrastructure, despite lingering financial concerns about a bubble, environmental considerations and the political effects of fast-rising electricity bills in the communities where they're constructed. Another company, cryptocurrency-mining datacenter developer TeraWulf, has previously revealed it was working with Fluidstack on Google-backed datacenter projects in Texas and New York, on the shore of Lake Ontario. Microsoft also on Wednesday announced a new datacenter under construction in Atlanta, Georgia, describing it as connected to another in Wisconsin to form a "massive supercomputer" running on hundreds of thousands of Nvidia chips to power AI technology. A report last month from TD Cowen said that the leading cloud computing providers leased a "staggering" amount of US datacenter capacity in the third fiscal quarter of this year, amounting to more than 7.4GW of energy, more than all of last year combined. The tech industry's huge amount of spending on computing infrastructure for AI startups that aren't yet profitable has fueled concerns about an AI investment bubble. Investors have closely watched a series of intertwined deals over recent months between top AI developers such as OpenAI and Anthropic and the companies building the costly computer chips and datacenters needed to power their AI products. Anthropic said it will continue to "prioritize cost-effective, capital-efficient approaches" to scaling up its business.
[14]
Anthropic, Microsoft announce new AI data center projects as industry's construction push continues
Artificial intelligence company Anthropic announced a $50 billion investment in computing infrastructure on Wednesday that will include new data centers in Texas and New York. Microsoft also on Wednesday announced a new data center under construction in Atlanta, Georgia, describing it as connected to another in Wisconsin to form a "massive supercomputer" running on hundreds of thousands of Nvidia chips to power AI technology. The latest deals show that the tech industry is moving forward on huge spending to build energy-hungry AI infrastructure, despite lingering financial concerns about a bubble, environmental considerations and the political effects of fast-rising electricity bills in the communities where they're constructed. Anthropic, maker of the chatbot Claude, said it is working with London-based Fluidstack to build the new computing facilities to power its AI systems. It didn't disclose their exact locations or what source of electricity they will need. Another company, cryptocurrency mining data center developer TeraWulf, has previously revealed it was working with Fluidstack on Google-backed data center projects in Texas and New York, on the shore of Lake Ontario. TeraWulf declined comment Wednesday. A report last month from TD Cowen said that the leading cloud computing providers leased a "staggering" amount of U.S. data center capacity in the third fiscal quarter of this year, amounting to more than 7.4 gigawatts of energy, more than all of last year combined. Oracle was securing the most capacity during that time, much of it supporting AI workloads for Anthropic's chief rival OpenAI, maker of ChatGPT. Google was second and Fluidstack came in third, ahead of Meta, Amazon, CoreWeave and Microsoft. Anthropic said its projects will create about 800 permanent jobs and 2,400 construction jobs. It said in a statement that the "scale of this investment is necessary to meet the growing demand for Claude from hundreds of thousands of businesses while keeping our research at the frontier." Microsoft has branded its Atlanta data center as Fairwater 2, after the original Fairwater complex being built near Milwaukee, Wisconsin. The company said it will help power its own technology, along with OpenAI's and other AI developers. Microsoft was, until earlier this year, OpenAI's exclusive cloud computing provider before the two companies amended their partnership. OpenAI has since announced more than $1 trillion in infrastructure obligations, much of it tied to its Stargate project with partners Oracle and SoftBank. The tech industry's huge amount of spending on computing infrastructure for AI startups that aren't yet profitable has fueled concerns about an AI investment bubble. Investors have closely watched a series of intertwined deals over recent months between top AI developers such as OpenAI and Anthropic and the companies building the costly computer chips and data centers needed to power their AI products. Anthropic said it will continue to "prioritize cost-effective, capital-efficient approaches" to scaling up its business.
[15]
Anthropic says new $50B investment in data centers will create about 800 permanent jobs and 2,400 construction jobs | Fortune
Artificial intelligence company Anthropic announced a $50 billion investment in computing infrastructure on Wednesday that will include new data centers in Texas and New York. Microsoft also on Wednesday announced a new data center under construction in Atlanta, Georgia, describing it as connected to another in Wisconsin to form a "massive supercomputer" running on hundreds of thousands of Nvidia chips to power AI technology. The latest deals show that the tech industry is moving forward on huge spending to build energy-hungry AI infrastructure, despite lingering financial concerns about a bubble, environmental considerations and the political effects of fast-rising electricity bills in the communities where they're constructed. Anthropic, maker of the chatbot Claude, said it is working with London-based Fluidstack to build the new computing facilities to power its AI systems. It didn't disclose their exact locations or what source of electricity they will need. Another company, cryptocurrency mining data center developer TeraWulf, has previously revealed it was working with Fluidstack on Google-backed data center projects in Texas and New York, on the shore of Lake Ontario. TeraWulf declined comment Wednesday. A report last month from TD Cowen said that the leading cloud computing providers leased a "staggering" amount of U.S. data center capacity in the third fiscal quarter of this year, amounting to more than 7.4 gigawatts of energy, more than all of last year combined. Oracle was securing the most capacity during that time, much of it supporting AI workloads for Anthropic's chief rival OpenAI, maker of ChatGPT. Google was second and Fluidstack came in third, ahead of Meta, Amazon, CoreWeave and Microsoft. Anthropic said its projects will create about 800 permanent jobs and 2,400 construction jobs. It said in a statement that the "scale of this investment is necessary to meet the growing demand for Claude from hundreds of thousands of businesses while keeping our research at the frontier." Microsoft has branded its Atlanta data center as Fairwater 2, after the original Fairwater complex being built near Milwaukee, Wisconsin. The company said it will help power its own technology, along with OpenAI's and other AI developers. Microsoft was, until earlier this year, OpenAI's exclusive cloud computing provider before the two companies amended their partnership. OpenAI has since announced more than $1 trillion in infrastructure obligations, much of it tied to its Stargate project with partners Oracle and SoftBank. The tech industry's huge amount of spending on computing infrastructure for AI startups that aren't yet profitable has fueled concerns about an AI investment bubble. Investors have closely watched a series of intertwined deals over recent months between top AI developers such as OpenAI and Anthropic and the companies building the costly computer chips and data centers needed to power their AI products. Anthropic said it will continue to "prioritize cost-effective, capital-efficient approaches" to scaling up its business.
[16]
Nvidia, Microsoft invest $15 billion in AI startup Anthropic
Washington (AFP) - Nvidia and Microsoft announced Tuesday investments totaling $15 billion in AI startup Anthropic, creator of the Claude chatbot, amid frenzied spending on the technology and growing fears of a bubble. AI chip powerhouse Nvidia committed up to $10 billion while Microsoft -- which owns 27 percent of Anthropic rival OpenAI -- pledged up to $5 billion to the maker of Claude AI models. Microsoft also continues investing heavily in its own Copilot AI that runs on the company's Azure platform. The deal was part of a sweeping agreement that saw Anthropic commit to purchasing $30 billion in Microsoft's cloud computing capacity and adopt the latest versions of Nvidia's chip technology. "We're increasingly going to be customers of each other," said Microsoft CEO Satya Nadella in an online video announcing the deal. "We will use Anthropic models. They will use our infrastructure, and we'll go to market together to help our customers realize the value of AI." The investments mark a significant realignment in the generative AI sector, where competition has intensified between ChatGPT-maker OpenAI and rivals including Anthropic, but also Google, which released its latest Gemini model on Tuesday. California-based Anthropic was launched in 2021 by former OpenAI staff and positions itself as prioritizing safety in AI development. Its flagship product is the Claude chatbot and family of models. Tech industry rivals, which also include Amazon, Meta and Elon Musk's xAI, have been pouring tens of billions of dollars into artificial intelligence since the blockbuster launch of the first version of ChatGPT in late 2022. Nvidia, meanwhile, has become a coveted source of high-performance GPUs tailored for generative AI. With increasing talk among Wall Street analysts of an AI bubble, shares in Nvidia, the world's biggest company by market capitalization, were down slightly amid a broad sell-off in the tech sector. Microsoft's shares were down about 2.5 percent. Sources told CNBC that the fresh investment valued Anthropic at $350 billion, making it one of the world's most valuable companies. OpenAI was most recently valued at $500 billion. Investment spree The massive investment in Anthropic comes a month after Nvidia announced it will pump as much as $100 billion into OpenAI, building infrastructure for future generations of the technology. "Compute infrastructure will be the basis for the economy of the future," OpenAI chief executive Sam Altman said at the time. In a dizzying bout of deal-making, OpenAI also recently signed a $38 billion deal with Amazon's AWS cloud computing arm, continuing a major partnership spree that has included Oracle, Broadcom and AMD. Nvidia's GPUs, originally designed for gaming systems, have become the essential building blocks of artificial intelligence applications, with tech giants scrambling to secure them for their data centers and AI projects. Silicon Valley-based Nvidia also recently announced it would invest $5 billion in struggling chip rival Intel. The investment -- backed by the White House -- represents a significant commitment to Intel's turnaround efforts.
[17]
Anthropic to invest $50 billion in new data centres to meet AI demand
Anthropic said the investment is needed to meet growing demand for its AI chatbot Claude. Artificial intelligence (AI) company Anthropic announced a $50 billion (€43.3 billion) investment in computing infrastructure on Wednesday. Anthropic, maker of the chatbot Claude, said it is working with London-based Fluidstack to build the new computing facilities to power its AI systems, including new data centres in Texas and New York. It didn't disclose their exact locations or what source of electricity they will need. A report last month from TD Cowen said that the leading cloud computing providers leased a "staggering" amount of US data centre capacity in the third fiscal quarter of this year, amounting to more than 7.4 gigawatts of energy, more than all of last year combined. Oracle was securing the most capacity during that time, much of it supporting AI workloads for Anthropic's chief rival OpenAI, which makes ChatGPT. Google was second and Fluidstack came in third, ahead of Meta, Amazon, CoreWeave, and Microsoft. Anthropic said in a statement that the "scale of this investment is necessary to meet the growing demand for Claude from hundreds of thousands of businesses while keeping our research at the frontier". The tech industry's huge amount of spending on computing infrastructure for AI startups that aren't yet profitable has fueled concerns about an AI investment bubble. Investors have closely watched a series of intertwined deals over recent months between top AI developers such as OpenAI and Anthropic, as well as the companies building the costly computer chips and data centres needed to power their AI products. Anthropic said it will continue to "prioritise cost-effective, capital-efficient approaches" to scaling up its business.
[18]
Microsoft and NVIDIA to Invest Up to $15 Billion in Anthropic | AIM
At the centre of the announcement is Anthropic's decision to scale its Claude AI systems on Microsoft Azure, backed by NVIDIA infrastructure. Anthropic has committed to purchase $30 billion in Azure compute capacity, with the option to contract up to one gigawatt of additional compute. The move will expand Claude's availability for Azure enterprise users and increase model options through Microsoft Foundry. Moreover, NVIDIA and Microsoft will invest up to $10 billion and up to $5 billion, respectively, in Anthropic as part of the deal. The investment follows Anthropic's $13 billion Series F round in September, led by ICONIQ, which valued the company at $183 billion post-money. Fidelity Management & Research Company and Lightspeed Venture Partners also co-led that round. "We're working to broaden access to Claude for organisations building with AI," Anthropic CEO Dario Amodei said during a joint discussion with Microsoft's Satya Nadella and NVIDIA's Jensen Huang. For the first time, Anthropic and NVIDIA have also formed a deep technology partnership focused on co-design and engineering. The companies will work to optimise Claude models for performance and efficiency on NVIDIA's architectures, while NVIDIA will tune future chips for Anthropic workloads. Anthropic's compute commitment with NVIDIA will include Grace Blackwell and Vera Rubin systems, reaching up to one gigawatt. Microsoft and Anthropic are simultaneously expanding their existing collaboration, giving Microsoft Foundry customers access to Claude's frontier models -- Sonnet 4.5, Opus 4.1 and Haiku 4.5. Microsoft will continue integrating Claude across its Copilot products, including GitHub Copilot, Microsoft 365 Copilot and Copilot Studio. "We want developers and enterprises to have choice in the models they use," Nadella said. Amazon, however, remains Anthropic's primary cloud and training partner, the companies clarified. Huang said the expanded collaboration aims to support the next phase of AI development as demand for compute continues to rise. Anthropic recently announced a $50 billion investment in US computing infrastructure, partnering with Fluidstack to build data centres in Texas and New York, with additional sites planned. The facilities are designed specifically for Anthropic's workloads to support continued AI research and development. The project is expected to create around 800 permanent jobs and 2,400 construction jobs, with sites scheduled to come online through 2026. It aligns with the Trump administration's AI Action Plan, which aims to strengthen domestic AI leadership and technology infrastructure.
[19]
Nvidia, Microsoft to invest up to $15B in Anthropic as part of new cloud partnership - SiliconANGLE
Anthropic PBC will raise up to $15 billion in funding from Nvidia Corp. and Microsoft Corp. as part of a partnership the companies announced today. Nvidia is set to provide up to two thirds of the capital while Microsoft plans to provide the rest. Anthropic, for its part, has committed to leasing $30 billion worth of Azure infrastructure. The large language model developer expects to buy up to one gigawatt worth of computing capacity. Microsoft plans to provide the computing capacity using systems based on Nvidia's Grace Blackwell and Vera Rubin architectures. Both architectures comprise multiple chips. Nvidia's most capable Grace Blackwell system, the GB300 NVL72, includes 72 of its top-end Blackwell Ultra graphics processing units. They're integrated with 36 Grace central processing units, which each include 72 cores. The chips are supported by built-in storage equipment and cooling modules. Vera Rubin is the planned successor to Grace Blackwell. In a departure from Nvidia's previous data center architectures, it will include not one but two GPU designs. The first graphics card, Rubin, features 288 gigabytes of HBM4 memory. HBM4 is the latest iteration of the high-speed RAM that ships with Nvidia's GPUs. It's expected to provide significantly more bandwidth than the HBM3e memory in the Blackwell Ultra. The other GPU in the Vera Rubin architecture is called the Rubin CPX. The process through which an LLM generates prompt responses comprises two steps known as the prefill and decode stages. According to Nvidia, Rubin CPX is specifically optimized to perform prefill processing. The company plans to ship the two GPUs as part of systems that will provide 8 exaflops of performance. Each appliance is set to contain 144 Rubin CPX prefill accelerators, 144 Rubin chips and 36 of Nvidia's upcoming Vera CPUs. The latter processors are based on an 88-core design. Last month, Microsoft disclosed that it had recently launched its first large-scale Grace Blackwell cluster. The company didn't specify whether Anthropic will have access to the hardware. The more advanced Vera Rubin systems that the LLM developer is set to lease through the partnership will start shipping in late 2026. In a video, Anthropic Chief Executive Officer Dario Amodei stated the company will use the infrastructure "both to train our models to support Microsoft first-party products and to sell together." The Anthropic-Microsoft partnership also extends to the latter company's Azure Foundry platform. The platform, which provides tools for building AI applications, will offer access to Claude Sonnet 4.5, Claude Opus 4.1 and Claude Haiku 4.5. The integration will make Anthropic the only LLM provider whose models are available across all three of the world's most popular public clouds. Separately, the company is launching an engineering collaboration with Nvidia. One of the partnership's goals is to optimize the chipmaker's future processors for Anthropic models.
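The prefill/decode split described above is a general property of transformer inference rather than anything specific to Nvidia's or Anthropic's software. The short Python sketch below is purely illustrative: the toy "attention" step and all function names are assumptions invented for this example, not any vendor's API. It shows why the two stages stress hardware differently: prefill ingests the whole prompt in one pass (compute-heavy and highly parallel, the kind of work a prefill-focused part such as Rubin CPX targets), while decode emits tokens one at a time against a growing cache (sequential and bandwidth-bound).

# Illustrative sketch only: a toy stand-in for a transformer, showing the
# prefill vs. decode stages mentioned in the article above. Nothing here
# reflects Nvidia's or Anthropic's actual implementations.
from typing import List, Tuple

def toy_attention_step(token: int, kv_cache: List[int]) -> int:
    # Stand-in for one attention + feed-forward step over the cached context.
    return (token + sum(kv_cache)) % 50_000  # pretend vocabulary of 50,000 token IDs

def prefill(prompt_tokens: List[int]) -> Tuple[List[int], int]:
    # Prefill: process the entire prompt up front and populate the KV cache.
    # In a real model this is one large, highly parallel batch of matrix multiplies.
    kv_cache: List[int] = []
    last = 0
    for tok in prompt_tokens:
        last = toy_attention_step(tok, kv_cache)
        kv_cache.append(tok)
    return kv_cache, last

def decode(kv_cache: List[int], first_token: int, max_new_tokens: int) -> List[int]:
    # Decode: generate one token at a time, appending each to the cache before
    # producing the next; this stage is sequential and memory-bandwidth-bound.
    out = [first_token]
    for _ in range(max_new_tokens - 1):
        nxt = toy_attention_step(out[-1], kv_cache)
        kv_cache.append(out[-1])
        out.append(nxt)
    return out

if __name__ == "__main__":
    prompt = [101, 2023, 2003, 1037, 3231]  # made-up token IDs
    cache, first = prefill(prompt)          # the stage a prefill accelerator would speed up
    print("generated token IDs:", decode(cache, first, max_new_tokens=5))

Because the two loops have such different profiles, they can plausibly be mapped to different silicon, which is the rationale the article attributes to pairing a prefill-specific GPU with the standard Rubin part.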
[20]
Anthropic and Microsoft announce new AI data center projects in Texas, New York, and Georgia
Artificial intelligence company Anthropic announced a $50 billion investment in computing infrastructure on Wednesday that will include new data centers in Texas and New York. Microsoft also on Wednesday announced a new data center under construction in Atlanta, Georgia, describing it as connected to another in Wisconsin to form a "massive supercomputer" running on hundreds of thousands of Nvidia chips to power AI technology. The latest deals show that the tech industry is moving forward on huge spending to build energy-hungry AI infrastructure, despite lingering financial concerns about a bubble, environmental considerations, and the political effects of fast-rising electricity bills in the communities where they're constructed. Anthropic, maker of the chatbot Claude, said it is working with London-based Fluidstack to build the new computing facilities to power its AI systems. It didn't disclose their exact locations or what source of electricity they will need.
[21]
Anthropic commits $50B to its new data center plan
CEO Dario Amodei said advanced infrastructure is essential for developing frontier AI systems. Anthropic's cloud partnerships with Google and Amazon remain, but this marks its first major custom-compute investment. Anthropic announced a $50 billion data center partnership with U.K.-based Fluidstack on Wednesday, committing funds to build facilities across the U.S. to address compute requirements. The new data centers, described as "custom built for Anthropic with a focus on maximizing efficiency for our workloads," will be located in Texas and New York. These facilities are expected to become operational throughout 2026. Anthropic CEO and co-founder Dario Amodei stated, "Realizing that potential requires infrastructure that can support continued development at the frontier," emphasizing the need for infrastructure to advance AI that can accelerate scientific discovery and address complex problems. The company currently maintains cloud partnerships with Google and Amazon, an investor, to support the intense compute demands of its Claude family of models. This initiative represents Anthropic's initial significant effort to establish custom infrastructure. The $50 billion expenditure aligns with internal revenue projections, which forecast Anthropic achieving $70 billion in revenue and $17 billion in positive cash flow by 2028. This $50 billion commitment is smaller than projects undertaken by competitors. Meta has pledged $600 billion for data centers over the next three years, and the Stargate partnership, involving SoftBank, OpenAI, and Oracle, has allocated $500 billion for infrastructure spending. Concerns exist regarding an AI bubble due to potential flagging demand or misallocated spending. For Fluidstack, a neocloud company founded in 2017, this project signifies a notable achievement. The company was designated in February as the primary partner for a 1-gigawatt AI project backed by the French government, valued at over $11 billion.
[22]
Anthropic, Microsoft announce new AI data center projects as industry's construction push continues
Anthropic has announced a $50 billion investment in computing infrastructure, including new data centers in Texas and New York Artificial intelligence company Anthropic announced a $50 billion investment in computing infrastructure on Wednesday that will include new data centers in Texas and New York. Microsoft also on Wednesday announced a new data center under construction in Atlanta, Georgia, describing it as connected to another in Wisconsin to form a "massive supercomputer" running on hundreds of thousands of Nvidia chips to power AI technology. The latest deals show that the tech industry is moving forward on huge spending to build energy-hungry AI infrastructure, despite lingering financial concerns about a bubble, environmental considerations and the political effects of fast-rising electricity bills in the communities where they're constructed. Anthropic, maker of the chatbot Claude, said it is working with London-based Fluidstack to build the new computing facilities to power its AI systems. It didn't disclose their exact locations or what source of electricity they will need. Another company, cryptocurrency mining data center developer TeraWulf, has previously revealed it was working with Fluidstack on Google-backed data center projects in Texas and New York, on the shore of Lake Ontario. TeraWulf declined comment Wednesday. A report last month from TD Cowen said that the leading cloud computing providers leased a "staggering" amount of U.S. data center capacity in the third fiscal quarter of this year, amounting to more than 7.4 gigawatts of energy, more than all of last year combined. Oracle was securing the most capacity during that time, much of it supporting AI workloads for Anthropic's chief rival OpenAI, maker of ChatGPT. Google was second and Fluidstack came in third, ahead of Meta, Amazon, CoreWeave and Microsoft. Anthropic said its projects will create about 800 permanent jobs and 2,400 construction jobs. It said in a statement that the "scale of this investment is necessary to meet the growing demand for Claude from hundreds of thousands of businesses while keeping our research at the frontier." Microsoft has branded its Atlanta data center as Fairwater 2, after the original Fairwater complex being built near Milwaukee, Wisconsin. The company said it will help power its own technology, along with OpenAI's and other AI developers. Microsoft was, until earlier this year, OpenAI's exclusive cloud computing provider before the two companies amended their partnership. OpenAI has since announced more than $1 trillion in infrastructure obligations, much of it tied to its Stargate project with partners Oracle and SoftBank. The tech industry's huge amount of spending on computing infrastructure for AI startups that aren't yet profitable has fueled concerns about an AI investment bubble. Investors have closely watched a series of intertwined deals over recent months between top AI developers such as OpenAI and Anthropic and the companies building the costly computer chips and data centers needed to power their AI products. Anthropic said it will continue to "prioritize cost-effective, capital-efficient approaches" to scaling up its business.
[23]
Anthropic commits $76b to build AI data centres in the US
Anthropic PBC plans to spend $US50 billion ($76 billion) to build custom data centres for artificial intelligence work in several US locations, including Texas and New York, the latest expensive pledge for infrastructure to support the AI boom. The new sites, which Anthropic is developing with UK-based Fluidstack, will start coming online throughout 2026, the company said in a statement. The project marks the first major data centre build-out that the AI firm has taken on directly, rather than through cloud-computing partners such as Amazon and Alphabet's Google.
[24]
Anthropic announces $50B investment in new US data centers to meet AI demand
SAN FRANCISCO (AP) -- Artificial intelligence company Anthropic announced a $50 billion investment in computing infrastructure on Wednesday that will include new data centers in Texas and New York. Anthropic, maker of the chatbot Claude, said it is working with London-based Fluidstack to build the new computing facilities to power its AI systems. It didn't disclose their exact locations or what source of electricity they will need. A report last month from TD Cowen said that the leading cloud computing providers leased a "staggering" amount of U.S. data center capacity in the third fiscal quarter of this year, amounting to more than 7.4 gigawatts of energy, more than all of last year combined. Oracle was securing the most capacity during that time, much of it supporting AI workloads for Anthropic's chief rival OpenAI, maker of ChatGPT. Google was second and Fluidstack came in third, ahead of Meta, Amazon, CoreWeave and Microsoft. Anthropic said its projects will create about 800 permanent jobs and 2,400 construction jobs. It said in a statement that the "scale of this investment is necessary to meet the growing demand for Claude from hundreds of thousands of businesses while keeping our research at the frontier." The tech industry's huge amount of spending on computing infrastructure for AI startups that aren't yet profitable has fueled concerns about an AI investment bubble. Investors have closely watched a series of intertwined deals over recent months between top AI developers such as OpenAI and Anthropic and the companies building the costly computer chips and data centers needed to power their AI products. Anthropic said it will continue to "prioritize cost-effective, capital-efficient approaches" to scaling up its business.
[26]
Anthropic to Invest $50 Billion to Build Data Centres in the US, Create 2,400 Construction Jobs | AIM
Anthropic said its investment is aimed at meeting growing demand for its AI assistant Claude, which now serves over 300,000 business customers.
Anthropic has announced a $50 billion investment in US computing infrastructure, partnering with Fluidstack to build data centres in Texas and New York, with additional sites planned. The facilities are designed specifically for Anthropic's workloads to support continued AI research and development. The project is expected to create around 800 permanent jobs and 2,400 construction jobs, with sites scheduled to come online through 2026. It aligns with the Trump administration's AI Action Plan, which aims to strengthen domestic AI leadership and technology infrastructure. "We're getting closer to AI that can accelerate scientific discovery and help solve complex problems in ways that weren't possible before," said Dario Amodei, CEO and co-founder of Anthropic. "Realising that potential requires infrastructure that can support continued development at the frontier. These sites will help us build more capable AI systems that can drive those breakthroughs, while creating American jobs." The company reported that the number of large accounts -- those generating over $100,000 in annual revenue -- has increased nearly sevenfold in the past year. The company selected Fluidstack for its ability to rapidly deliver large-scale power infrastructure. "Fluidstack was built for this moment," said Gary Wu, co-founder and CEO of Fluidstack. "We're proud to partner with frontier AI leaders like Anthropic to accelerate and deploy the infrastructure necessary to realise their vision." Anthropic said the investment will enable it to scale efficiently while maintaining focus on safety, alignment, and interpretability research. Meanwhile, AWS, which is an investor in Anthropic, recently announced a $38 billion partnership with OpenAI to run and scale OpenAI's core AI workloads on AWS infrastructure. The deal underscores the accelerating race among leading AI companies to secure compute capacity. OpenAI's recent partnerships with NVIDIA, AMD, and Broadcom together represent more than 26 gigawatts of capacity and potential commitments exceeding $1 trillion in total infrastructure investments.
[27]
Anthropic to spend $50B on US data center infrastructure - SiliconANGLE
Anthropic PBC today announced a $50 billion plan to build data centers in the U.S. The initiative will see the artificial intelligence provider open facilities in Texas and New York next year. It plans to launch additional sites further down the line. According to Anthropic, the data centers will be based on a custom design intended to optimize the efficiency of its workloads. The company plans to carry out the project with Fluidstack Ltd., a fairly low-profile AI infrastructure startup. A few weeks ago, Forbes reported that the company had raised about $25 million in funding. According to The Information, it has also secured a credit line worth more than $10 billion from a group of institutional lenders. It's notable that Anthropic picked Fluidstack over larger AI infrastructure providers such as CoreWeave Inc., which went public in April. One motivation may have been the custom software tooling Fluidstack has developed. Fluidstack says that its software can automate much of the manual work involved in operating graphics processing unit clusters. The company has developed a custom operating system, Atlas OS, that enables administrators to launch GPU clusters in a few clicks. It also automates subsequent maintenance tasks such as restarting servers. Customers can optionally extend Atlas OS' feature set by installing orchestration tools such as Kubernetes and Slurm. Fluidstack's other flagship software tool is called Lighthouse. According to the company, it can monitor a GPU cluster for technical issues and fix them automatically. If a malfunctioning server can't be restored, Lighthouse moves its workloads to another machine. Anthropic didn't specify what chips will run in the data centers it plans to build with Fluidstack. The latter company's website states that it currently offers AI clusters based on high-end Nvidia Corp. graphics cards such as the GB200, B200 and H200. That hints that the upcoming data centers may also use Nvidia silicon. Anthropic has not yet detailed how it plans to finance the $50 billion project. The company, which is currently unprofitable, has raised about $33.7 billion from investors to date. It's possible Anthropic will raise additional funding or debt to finance the data center investments. "We're getting closer to AI that can accelerate scientific discovery and help solve complex problems in ways that weren't possible before," said Anthropic Chief Executive Officer Dario Amodei. "Realizing that potential requires infrastructure that can support continued development at the frontier." The announcement comes weeks after Amazon Web Services Inc. opened an $11 billion data center campus built for Anthropic. Project Rainier, as the site is called, will host one million AWS Trainium2 chips by year's end. In the longer term, the cloud giant plans to construct 23 additional buildings on the campus that will boost its computing capacity to 2.3 gigawatts.
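As an illustration of the kind of automation described above, the following minimal Python sketch mimics a monitor-restart-migrate loop: unhealthy GPU nodes are restarted, and if recovery fails their workloads are moved to a spare machine. All class and function names here are hypothetical and invented for this sketch; they do not represent Fluidstack's Atlas OS or Lighthouse APIs.
```python
# Hypothetical sketch of a monitor-restart-migrate loop of the kind the article
# attributes to Lighthouse. Every name below is invented for illustration and is
# not a Fluidstack API.
from dataclasses import dataclass, field


@dataclass
class GpuNode:
    name: str
    healthy: bool = True
    jobs: list = field(default_factory=list)


def try_restart(node: GpuNode) -> bool:
    """Stand-in for a management-plane restart call; returns True on recovery."""
    return False  # pessimistic placeholder so the migration path below is exercised


def migrate_jobs(failed: GpuNode, spare: GpuNode) -> None:
    """Move all workloads off a node that could not be restored."""
    spare.jobs.extend(failed.jobs)
    failed.jobs.clear()


def remediate(cluster: list, spare: GpuNode) -> None:
    """One monitoring pass: restart unhealthy nodes, or migrate their work."""
    for node in cluster:
        if node.healthy:
            continue
        if try_restart(node):
            node.healthy = True
        else:
            migrate_jobs(node, spare)


cluster = [GpuNode("gpu-0", jobs=["training-shard-3"]),
           GpuNode("gpu-1", healthy=False, jobs=["eval-run-7"])]
spare = GpuNode("gpu-spare")
remediate(cluster, spare)
print(spare.jobs)  # ['eval-run-7'] -- the failed node's workload was rehomed
```
In a production cluster the restart and migration steps would call out to the provider's management plane and a scheduler such as Kubernetes or Slurm, which the article notes can be layered on top of Atlas OS.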
[28]
Anthropic announces $50B investment in new US data centers to meet AI demand
Anthropic has announced a $50 billion investment in computing infrastructure.
SAN FRANCISCO -- Artificial intelligence company Anthropic announced a $50 billion investment in computing infrastructure on Wednesday that will include new data centers in Texas and New York. Anthropic, maker of the chatbot Claude, said it is working with London-based Fluidstack to build the new computing facilities to power its AI systems. It didn't disclose their exact locations or what source of electricity they will need. A report last month from TD Cowen said that the leading cloud computing providers leased a "staggering" amount of U.S. data center capacity in the third fiscal quarter of this year, amounting to more than 7.4 gigawatts of energy, more than all of last year combined. Oracle was securing the most capacity during that time, much of it supporting AI workloads for Anthropic's chief rival OpenAI, maker of ChatGPT. Google was second and Fluidstack came in third, ahead of Meta, Amazon, CoreWeave and Microsoft. Anthropic said its projects will create about 800 permanent jobs and 2,400 construction jobs. It said in a statement that the "scale of this investment is necessary to meet the growing demand for Claude from hundreds of thousands of businesses while keeping our research at the frontier." The tech industry's huge amount of spending on computing infrastructure for AI startups that aren't yet profitable has fueled concerns about an AI investment bubble. Investors have closely watched a series of intertwined deals over recent months between top AI developers such as OpenAI and Anthropic and the companies building the costly computer chips and data centers needed to power their AI products. Anthropic said it will continue to "prioritize cost-effective, capital-efficient approaches" to scaling up its business.
[29]
Anthropic AI deal explained: Microsoft & Nvidia's bet on Claude, diversifying beyond OpenAI - here's the investment breakdown
Anthropic AI deal breakdown: Microsoft and Nvidia are investing billions into AI startup Anthropic, diversifying their AI strategies beyond OpenAI. Anthropic will purchase $30 billion in Azure compute capacity, while Nvidia will collaborate on engineering to optimize AI models. This move signals Microsoft's intent to expand its cloud AI offerings and reduce reliance on a single partner.
Microsoft announced on Tuesday a major new partnership with artificial intelligence startup Anthropic, joining forces with Nvidia in a move that signals the tech giant's efforts to diversify its AI investments beyond OpenAI. Under the agreements, Microsoft will invest $5 billion into Anthropic, while Nvidia will commit $10 billion, as per a CNBC report. In return, Anthropic has agreed to purchase $30 billion worth of Azure compute capacity from Microsoft and has contracted for additional compute resources, initially up to one gigawatt, as per the report. Nvidia CEO Jensen Huang said in a video announcement that "This is a dream come true for us," adding, "You know, we've admired the work of Anthropic and Dario for a long time, and this is the first time we are going to deeply partner with that Anthropic to accelerate Claude," as quoted by CNBC. The partnership is not just about investment. Nvidia and Anthropic will work closely on engineering and design to optimize Anthropic's AI models for performance and efficiency, while tailoring Nvidia architectures to Anthropic's specific workloads, as per the report. This "deep technology partnership" marks the first time the two companies are collaborating to support Anthropic's growth at scale. Microsoft CEO Satya Nadella said in the video that "As an industry, we really need to move beyond any type of zero sum narrative or winner take all hype," adding, "What's required now is the hard work of building broad, durable capabilities together so that this technology can deliver real, tangible local success for every country, every sector and every customer. The opportunity is simply too big to approach any other way," as quoted by CNBC. Anthropic's $30 billion Azure commitment highlights Microsoft's continued dominance in cloud computing for AI workloads. Nvidia's $10 billion investment signals its growing influence in the AI chip market and its role in powering next-generation AI models, as per the report. Microsoft is reducing its reliance on OpenAI, despite holding a 27% stake in the company's for-profit business, currently valued at $135 billion, reported CNBC. OpenAI, backed by Microsoft since 2019, has grown rapidly, especially following the launch of ChatGPT. The company recently completed a recapitalization, further cementing Microsoft's involvement. Now, Microsoft is betting on Anthropic as another cornerstone of its AI strategy.
Partnership details breakdown at a glance, as compiled by The Kobeissi Letter:
1. Anthropic to buy $30B of Azure compute capacity
2. Nvidia to invest $10 billion in Anthropic
3. Microsoft to invest $5 billion in Anthropic
4. Nvidia and Anthropic to collaborate on design and engineering
5. Nvidia and Anthropic establish "deep technology partnership"
What is this Microsoft, Anthropic, Nvidia deal about? It's a multi-billion-dollar partnership to invest in Anthropic and scale its AI models. Why is Microsoft investing in Anthropic if it already works with OpenAI? Microsoft wants to diversify its AI strategy and reduce dependence on a single partner while expanding its AI cloud business.
[30]
Microsoft CEO Satya Nadella Reveals Real AI Bottleneck After $35 Billion Anthropic Partnership -- And It's Not GPUs - Microsoft (NASDAQ:MSFT), NVIDIA (NASDAQ:NVDA)
Microsoft Corp. (NASDAQ:MSFT) Chairman and CEO Satya Nadella has revealed that the true bottleneck for the burgeoning AI industry isn't a shortage of advanced GPUs, but rather the fundamental physical infrastructure required to power them.
Nadella Exposes AI's Hidden Crisis After $35 Billion Anthropic Deal
This insight comes amidst a massive new strategic partnership with AI startup Anthropic, valued at approximately $35 billion. The deal sees Microsoft investing $5 billion into Anthropic, a prominent competitor to OpenAI, while Anthropic commits to spending $30 billion on Microsoft Azure compute capacity. Despite this monumental demand, Nadella's comments suggest Microsoft's challenge lies not in attracting customers, but in scaling the physical "token factories" - his term for the data centers that process AI computations. "We are supply-constrained on powered shells," Nadella said in a Nov. 18 episode of the Cheeky Pint podcast, drawing a stark contrast with the dotcom-era bust. A "powered shell" is a data center building that has been fully connected to the power grid, complete with massive electrical and cooling infrastructure, ready for server racks.
Why AI Infrastructure Boom Is 'Different' From Dotcom Bubble
He emphasized that, unlike the dotcom bubble, where "dark fiber" sat unused, today's demand for AI compute is immediate and overwhelming. "I don't have a utilization problem," Nadella explained, highlighting that existing capacity is fully subscribed. "My problem is, I got to bring more supply."
AI Not About Smart Algorithms, But Who Can Plug Them In
Securing land, obtaining permits, and integrating with power grids for gigawatt-scale operations is a lengthy, complex, multi-year process. Nadella underscored this challenge, stating, "If I don't have enough shells that are powered that I can then roll in my racks and then make them operational," the advanced chips cannot be deployed. As Nadella articulated, the opportunity in AI is "simply too big to approach any other way," underscoring his belief that a platform should be open to all, even rivals. The Anthropic partnership, following significant investments in OpenAI, exemplifies this "non-zero-sum" philosophy, ensuring that Azure remains the foundational "AI server" regardless of which frontier model ultimately dominates.
Microsoft Outperforms Nasdaq In 2025
Shares of MSFT have risen 17.97% year-to-date, whereas the Nasdaq Composite and Nasdaq 100 indices have returned 16.35% and 16.82%, respectively. On Tuesday, the shares closed 2.70% lower at $493.79 apiece and dropped by 0.35% in premarket on Wednesday. The stock has gained 18.19% over the year. It maintains a weaker price trend over the short and medium terms but a strong trend in the long term, with a strong quality ranking. The futures of the S&P 500, Nasdaq 100, and Dow Jones indices were mixed on Wednesday, after closing lower for the second consecutive day this week on Tuesday.
[31]
Microsoft, Anthropic & NVIDIA Unite to Turbocharge Enterprise AI in 2026
What happens when three of the most influential names in technology, Microsoft, Anthropic, and NVIDIA, join forces? The answer could redefine the future of artificial intelligence. In a bold move to push the boundaries of AI innovation, these industry giants have announced a strategic partnership aimed at making AI more accessible, scalable, and secure. With Anthropic's innovative AI models, NVIDIA's unparalleled computing power, and Microsoft's vast cloud infrastructure, this collaboration is set to address the growing demands of enterprise AI applications while unlocking new possibilities across industries. It's a rare alignment of expertise that could transform how businesses harness the potential of AI. This quick overview will explore the intricate details of this new partnership, from Anthropic's integration into Microsoft's ecosystem to NVIDIA's role in optimizing AI performance. Readers will discover how these companies are tackling challenges like computational scalability and AI safety, while also setting new benchmarks for innovation. Whether you're curious about the future of enterprise AI or the technology driving these advancements, this collaboration offers a glimpse into what's possible when leaders in hardware, software, and cloud computing unite. At its core, this partnership isn't just about technology, it's about reshaping the way industries adapt and thrive in an AI-driven world.
Microsoft, Anthropic, NVIDIA Partnership
Anthropic and Microsoft: Expanding AI Model Integration
Anthropic's Claude models, renowned for their advanced natural language processing capabilities, are now being integrated into Microsoft's ecosystem. These models will be available to Microsoft Foundry customers and embedded into Microsoft's Copilot tools, enhancing enterprise productivity and decision-making. By incorporating Claude models, Microsoft seeks to deliver AI-driven solutions that empower businesses to streamline operations and improve outcomes. To support this integration, Anthropic will use Microsoft's Azure cloud infrastructure. Azure's scalability and robust capacity will ensure Anthropic can meet the increasing demand for AI-powered tools. This collaboration also includes a joint go-to-market strategy, combining Anthropic's expertise in AI model development with Microsoft's extensive enterprise reach. Together, they aim to address complex business challenges, unlock new opportunities for innovation, and drive the adoption of innovative AI solutions.
NVIDIA and Anthropic: Enhancing AI Model Development
NVIDIA plays a pivotal role in this partnership by providing state-of-the-art accelerators and computing resources to support Anthropic's AI model development. This includes co-optimizing Anthropic's models for NVIDIA architectures such as Grace Blackwell and Vera Rubin to enhance performance and efficiency. The commitment of up to a gigawatt of capacity on NVIDIA systems ensures that Anthropic has the computational power required to scale its AI solutions effectively. This collaboration underscores the importance of aligning hardware and software development to maximize AI capabilities. By working together, NVIDIA and Anthropic aim to push the boundaries of AI innovation, delivering solutions that are both powerful and efficient. Their combined efforts are expected to set new benchmarks for AI performance, allowing businesses to tackle complex challenges with greater precision and speed.
Anthropic, Microsoft, and NVIDIA Announce Partnerships
Microsoft and NVIDIA: Strengthening Cloud AI Infrastructure
Building on their longstanding collaboration, Microsoft and NVIDIA are integrating NVIDIA's advanced technology into Microsoft's Azure platform. This integration is designed to advance enterprise and industrial AI applications by combining Microsoft's expertise in cloud infrastructure with NVIDIA's leadership in high-performance computing systems. The result is a robust platform capable of handling complex AI workloads and delivering scalable solutions to businesses worldwide. This partnership highlights the critical role of cloud AI integration in meeting the growing computational demands of modern enterprises. By using Azure's scalability and NVIDIA's innovative hardware, the collaboration aims to provide cost-effective, high-performance AI solutions that deliver measurable results. This integration not only enhances the capabilities of Azure but also ensures businesses have access to the tools needed to remain competitive in an increasingly AI-driven world.
Shared Vision: Advancing AI Accessibility and Safety
At the core of this partnership is a shared vision to advance AI across all levels of the technology stack. This includes innovations in hardware, systems, models, and applications, all aimed at improving AI safety, efficiency, and scalability. By prioritizing collaboration over competition, Microsoft, Anthropic, and NVIDIA are working to create durable, scalable AI capabilities that benefit industries and enterprises on a global scale. The partnership also emphasizes the importance of making AI accessible to a broader range of industries and regions. By addressing the unique challenges faced by businesses in different sectors, the collaboration seeks to deliver reliable AI solutions that empower organizations to harness the potential of AI technology. This shared commitment to accessibility and safety ensures that AI adoption is both responsible and impactful, fostering trust and confidence in its deployment.
Addressing the Challenges of AI Growth
The rapid adoption of AI has brought with it increasing demands for computational resources and infrastructure. This partnership recognizes the need for cost-effective, high-performance solutions capable of scaling to meet these growing demands. By combining their expertise, Microsoft, Anthropic, and NVIDIA are well-positioned to address these challenges and drive progress in the AI landscape. This collaboration represents a significant step forward in the development and deployment of AI technologies. By working together, these industry leaders are not only advancing the state of AI but also making sure its benefits are accessible to businesses and industries worldwide. Their efforts are expected to shape the future of AI, allowing organizations to innovate, grow, and thrive in an era defined by technological transformation.
[32]
NVIDIA & Anthropic Strike a Surprising Deal Worth $10 Billion Despite Their CEOs Previously Taking Shots at Each Other Over 'Who Does AI Better'
NVIDIA has announced a partnership with Anthropic, one of the leading 'closed-source' AI firms in the world, indicating that the deal will push the AI build-out even further. There's no doubt that NVIDIA has partnered with almost all AI giants, including CSPs, neoclouds, and firms such as OpenAI and Amazon's AWS. The one name missing from the list was indeed Anthropic, as there had been friction between NVIDIA and the AI firm in the past, but that now appears to be behind them. In a new blog post by Team Green, it is disclosed that NVIDIA and Anthropic have entered an AI-focused partnership, with Claude's creator committing to 1 GW of compute capacity built around Blackwell and Rubin systems, while NVIDIA will invest $10 billion into the AI giant.
Anthropic's compute commitment will initially be up to 1 gigawatt of compute capacity with NVIDIA Grace Blackwell and Vera Rubin systems. As part of the partnership, NVIDIA and Microsoft are committing to invest up to $10 billion and up to $5 billion respectively in Anthropic. - NVIDIA
Well, the conventional route of analyzing this deal would be to associate it with the rising computing demands of AI companies and how they need the financing to achieve breakthroughs, but with NVIDIA and Anthropic, there's a twist in the story. For those unaware, Anthropic became one of the first customers to adopt Google's 7th-generation 'Ironwood' TPUs a few weeks ago, marking one of the largest deals for custom silicon from someone other than NVIDIA. The development made headlines all over the industry, as the Google-Anthropic deal was seen as an act to target NVIDIA's AI dominance. Not just this, but NVIDIA's CEO Jensen Huang and Anthropic's Dario Amodei have traded criticism over how the other approaches the 'AI frenzy', with Jensen criticizing Anthropic's 'closed-source' AI approach, while Amodei's firm has spoken against NVIDIA's pursuit of getting its AI chips to China. There has been apparent friction between the two firms over the years, but with the recent deal, apparently brokered with Microsoft acting as the 'mediator', it is evident that the AI world is in desperate need of financing and is willing to let go of past controversies.
[33]
Amazon-Backed Anthropic Inks Microsoft, Nvidia Deals: $30B Azure Commitment, Vera Rubin Support
The commitments are part of 'new strategic partnerships' the three companies jointly announced on Tuesday, with Microsoft and Nvidia committing to invest up to $5 billion and up to $10 billion, respectively, in Anthropic, an OpenAI rival backed by Amazon.
OpenAI competitor Anthropic has committed to spend $30 billion on Microsoft Azure compute capacity and use up to 1 gigawatt of data center infrastructure based on Nvidia's latest rack-scale AI platforms. The commitments are part of "new strategic partnerships" the three companies jointly announced on Tuesday, with Microsoft and Nvidia also committing to invest up to $5 billion and up to $10 billion, respectively, in Anthropic, a well-funded AI startup founded by former OpenAI employees that is known for its family of Claude LLMs. The announcement marks a shift of sorts for Anthropic, which received $8 billion in investments from Amazon across deals that culminated last year with the AI model provider naming Amazon Web Services as its primary AI training partner. Google has also invested $3 billion into Anthropic, which uses Google Cloud as another cloud provider. It's also the latest in so-called circular spending by large and influential AI companies, where funds often flow back and forth between two or more parties, with suppliers in some cases investing in customers in exchange for buying their products. Recent examples include deals struck by OpenAI with Nvidia, Oracle and CoreWeave. In Anthropic's new partnership with Microsoft, the two companies are expanding upon their existing relationship to make Anthropic's leading-edge Claude models available to customers of the Azure AI Foundry. The list of applicable models includes Claude Sonnet 4.5, Claude Opus 4.1 and Claude Haiku 4.5. The three companies noted that this move will make Claude the "only frontier LLM model available on all three of the world's most prominent cloud services." In another part of the deal, Microsoft is "committed to continuing access for Claude across Microsoft's Copilot family, including GitHub Copilot and Copilot Studio," they said. Anthropic is expected to "contract an additional compute capacity" of up to 1 gigawatt from Azure in addition to its initial $30 billion commitment with Microsoft. As for Nvidia's end of the agreement, the AI infrastructure giant is forming a "deep technology partnership" with Anthropic for the first time, the three companies said. With Nvidia and Anthropic expected to collaborate on design and engineering, their aim is to optimize Anthropic's models "for the best possible performance, efficiency" and total cost of ownership. They also plan to optimize "future Nvidia architectures for Anthropic workloads." The 1 gigawatt of compute capacity Anthropic plans to use from Nvidia will consist of the vendor's Grace Blackwell systems that debuted last year and its next-generation Vera Rubin systems that are expected to launch in the second half of next year.
[34]
Nvidia and Microsoft Pledge Up to $15 Billion to Anthropic | PYMNTS.com
Under this collaboration, Anthropic will scale its Claude AI model on the Nvidia-powered Microsoft Azure, the companies announced Tuesday (Nov. 18). Anthropic has agreed to purchase $30 billion of Azure compute capacity and to contract additional compute capacity up to one gigawatt, while Microsoft and Nvidia have pledged to invest up to $5 billion and $10 billion, respectively, in Anthropic. According to the announcement, Microsoft and Anthropic are also expanding their existing partnership to widen access to Claude for businesses, giving Microsoft Foundry customers use of Anthropic's frontier Claude models, while continuing to allow access for Claude across Microsoft's Copilot family of products. The partnership will make Claude the only frontier model available on the world's three biggest cloud services, the announcement added. In other Anthropic news, PYMNTS wrote last week about the implications of a cyber-espionage incident involving the company, in which its Claude Code model was manipulated into spying on around 30 finance, technology, manufacturing and government groups. Anthropic said in its disclosure that the mid-September incident marks the first confirmed case in which an AI agent handled most steps of an intrusion normally performed by human hackers. AI industry insiders interviewed by PYMNTS said the incident illustrates that fraudsters are evolving along with AI technology, posing outside threats to automated systems and requiring new safeguards. Eva Nahari, chief product officer at AI solutions provider Vectara, told PYMNTS that the case demonstrates how automation changes the threat landscape. "With automation comes velocity and scale," Nahari said, adding that attackers are now picking up the same knowledge and creative advantages AI gives enterprises, and calling the campaign "global, industry-agnostic and growing." She added that security teams have been anticipating this change since the earliest days of popular large language models.
[35]
Nvidia CEO Says Anthropic Has 'Revolutionized' Agentic AI Space As Jensen Huang, Satya Nadella, Dario Amodei Talk About New Partnership - NVIDIA (NASDAQ:NVDA)
Nvidia Corp. (NASDAQ:NVDA), Microsoft Corp. (NASDAQ:MSFT) and Anthropic on Tuesday detailed a sweeping new alliance involving major investments.
Microsoft, Nvidia, Anthropic Outline Multi-Billion-Dollar AI Pact
Microsoft and Nvidia announced a new strategic partnership with Anthropic that includes a combined $15 billion investment in the AI startup. Anthropic has also agreed to purchase $30 billion worth of Azure compute capacity as part of the deal. The agreement broadens access to Claude models on Microsoft Azure and launches joint work to optimize Nvidia's next-generation Grace Blackwell and Vera Rubin architectures for Anthropic's workloads.
Satya Nadella Highlights Deepening AI Stack Integration
In a joint video statement, Microsoft CEO Satya Nadella said the companies are aligning across every layer of the AI stack, noting that Microsoft will integrate Claude models across its Copilot products while Anthropic will rely on Azure for large-scale training and deployment. "We're increasingly going to be customers of each other," Nadella said, adding that the collaboration is aimed at delivering model choice and better performance for enterprises.
Anthropic Becomes First AI Lab Spanning All Major Clouds
Anthropic CEO Dario Amodei highlighted that Claude will now be available on the three biggest cloud platforms, calling it a step toward accelerating enterprise adoption. He said the partnership will give Anthropic additional compute to train future models and support Microsoft's first-party AI products. Amodei spoke about a long-term roadmap with Nvidia focused on co-optimizing models on upcoming GPU architectures.
Nvidia Praises Anthropic's Research, AI Safety Work
Nvidia CEO Jensen Huang said Anthropic has "revolutionized" agentic AI through its work on model context protocols and code-generation systems. He said Nvidia expects significant speed gains as Claude is optimized on Grace Blackwell GPUs, adding that demand for cost-efficient compute is rising sharply. The smarter the AI becomes, the greater the adoption, Huang said.
[36]
Anthropic, Microsoft announce new AI data center projects as industry's construction push continues
Artificial intelligence company Anthropic announced a US$50 billion investment in computing infrastructure on Wednesday that will include new data centers in Texas and New York. Microsoft also on Wednesday announced a new data center under construction in Atlanta, Georgia, describing it as connected to another in Wisconsin to form a "massive supercomputer" running on hundreds of thousands of Nvidia chips to power AI technology. The latest deals show that the tech industry is moving forward on huge spending to build energy-hungry AI infrastructure, despite lingering financial concerns about a bubble, environmental considerations and the political effects of fast-rising electricity bills in the communities where the massive buildings are constructed. Anthropic, maker of the chatbot Claude, said it is working with London-based Fluidstack to build the new computing facilities to power its AI systems. It didn't disclose their exact locations or what source of electricity they will need. Another company, cryptocurrency mining data center developer TeraWulf, has previously revealed it was working with Fluidstack on Google-backed data center projects in Texas and New York, on the shore of Lake Ontario. TeraWulf declined comment Wednesday. A report last month from TD Cowen said that the leading cloud computing providers leased a "staggering" amount of U.S. data center capacity in the third fiscal quarter of this year, amounting to more than 7.4 gigawatts of energy, more than all of last year combined. Oracle was securing the most capacity during that time, much of it supporting AI workloads for Anthropic's chief rival OpenAI, maker of ChatGPT. Google was second and Fluidstack came in third, ahead of Meta, Amazon, CoreWeave and Microsoft. Anthropic said its projects will create about 800 permanent jobs and 2,400 construction jobs. It said in a statement that the "scale of this investment is necessary to meet the growing demand for Claude from hundreds of thousands of businesses while keeping our research at the frontier." Microsoft has branded its two-story Atlanta data center as Fairwater 2 and said it will be connected across a "high-speed network" with the original Fairwater complex being built south of Milwaukee, Wisconsin. The company said the facility's densely packed Nvidia chips will help power Microsoft's own AI technology, along with OpenAI's and other AI developers. Microsoft was, until earlier this year, OpenAI's exclusive cloud computing provider before the two companies amended their partnership. OpenAI has since announced more than US$1 trillion in infrastructure obligations, much of it tied to its Stargate project with partners Oracle and SoftBank. Microsoft, in turn, spent nearly US$35 billion in the July-September quarter on capital expenditures to support its AI and cloud demand, nearly half of that on computer chips. Anthropic has made its own computing partnerships with Amazon and, more recently, Google. The tech industry's big spending on computing infrastructure for AI startups that aren't yet profitable has fueled concerns about an AI investment bubble. Investors have closely watched a series of circular deals over recent months between AI developers and the companies building the costly chips and data centers needed to power their AI products. Anthropic said it will continue to "prioritize cost-effective, capital-efficient approaches" to scaling up its business. 
OpenAI had to backtrack last week after its chief financial officer, Sarah Friar, made comments at a tech conference suggesting the U.S. government could help in financing chips needed for data centers. The White House's top AI official, David Sacks, responded on social media platform X that there "will be no federal bailout for AI" and if one of the leading companies fails, "others will take its place," though he also added he didn't think "anyone was actually asking for a bailout." OpenAI CEO Sam Altman later confirmed in a lengthy statement that "we do not have or want government guarantees" for the company's data centers and also sought to address concerns about whether it will be able to pay for all the infrastructure it has signed up for. "We are looking at commitments of about US$1.4 trillion over the next 8 years," Altman wrote. "Obviously this requires continued revenue growth, and each doubling is a lot of work! But we are feeling good about our prospects there."
[37]
Amazon-backed Anthropic commits $50B to build US data centers
AI startup Anthropic said Wednesday it would invest $50 billion in building data centers in the US, the latest multi-billion-dollar outlay in the industry as companies race to expand their artificial intelligence infrastructure. The company behind the Claude AI models said it would set up the facilities with infrastructure provider Fluidstack in Texas and New York, with more sites coming online in the future. The data centers are custom-built for Anthropic. Tech companies have announced massive spending plans this year, with many focusing on expanding their US footprint, as President Trump pushes for investments on American soil to maintain the country's edge in the AI sector. Trump ordered his administration in January to produce an AI Action Plan that would make "America the world capital in artificial intelligence." As part of the push, several American companies rolled out a series of big-ticket AI and energy investment pledges at Trump's tech and AI summit in July. Anthropic said the project is expected to create about 800 permanent jobs and 2,400 construction jobs in the US as the data centers come online throughout 2026. The outlay "will help advance the goals in the Trump administration's AI Action Plan to maintain American AI leadership and strengthen domestic technology infrastructure," Anthropic said. The San Francisco-based company, which is backed by Amazon and Google-parent Alphabet, was valued at $183 billion in early September. Formed in 2021 by a group of former OpenAI employees, Anthropic serves more than 300,000 enterprise customers. Its Claude large language models are widely regarded as among the most powerful frontier models on the market.
[38]
Microsoft, NVIDIA and Anthropic forge major AI partnerships By Investing.com
Investing.com -- Microsoft, NVIDIA, and Anthropic announced new strategic partnerships on Tuesday to expand access to Anthropic's Claude AI models and enhance their capabilities. Anthropic will scale its Claude AI model on Microsoft Azure, powered by NVIDIA technology. The company has committed to purchase $30 billion of Azure compute capacity and contract additional compute capacity up to 1 gigawatt. In a first-time deep technology partnership, NVIDIA and Anthropic will collaborate on design and engineering to optimize Anthropic models for performance, efficiency, and total cost of ownership, while also optimizing future NVIDIA architectures for Anthropic workloads. Anthropic's compute commitment will initially utilize up to 1 gigawatt of compute capacity with NVIDIA Grace Blackwell and Vera Rubin systems. Microsoft and Anthropic are expanding their existing partnership to provide broader access to Claude for businesses. Microsoft Azure AI Foundry customers will gain access to Anthropic's frontier Claude models including Claude Sonnet 4.5, Claude Opus 4.1, and Claude Haiku 4.5. This makes Claude the only frontier large language model available on all three major cloud services. Microsoft has also committed to continuing Claude access across its Copilot family, including GitHub Copilot and Copilot Studio. As part of these agreements, NVIDIA has committed to invest up to $10 billion in Anthropic, while Microsoft will invest up to $5 billion.
[39]
Anthropic reaches $350 billion valuation after massive investments from Microsoft and Nvidia
Anthropic, a young American artificial intelligence startup, has seen its valuation jump to $350bn following a major strategic partnership with Microsoft and Nvidia. The new agreement, made official on Tuesday, includes a $5bn investment from Microsoft and $10bn from Nvidia, along with a commitment from Anthropic to spend $30bn on Azure for computing capabilities. The company also plans to contract up to one gigawatt of computing power, notably mobilizing Nvidia's Grace Blackwell and Vera Rubin architectures. This partnership marks a strategic shift for Microsoft, which is seeking to diversify its alliances within the AI ecosystem, hitherto dominated by its historic collaboration with OpenAI. Anthropic, founded in 2021 by former OpenAI executives, is establishing itself as a serious competitor with its Claude model range. The partnership with Nvidia and Microsoft aims to optimize the performance of these models by adapting their operation to next-generation hardware infrastructures. According to Satya Nadella, this cooperation is part of a shared desire to build robust and interoperable AI capabilities. Anthropic's valuation, which stood at $183bn in September, has nearly doubled in a matter of months, reflecting increased investor confidence in its ability to compete with OpenAI and Google's models. Nvidia CEO Jensen Huang hailed the partnership as an unprecedented collaboration, while Nvidia consolidates its position as a key supplier to the industry. On the eve of the publication of its quarterly results, the California-based group is thus confirming its role as a technological linchpin in the global race for artificial intelligence.
[40]
Nvidia, Microsoft and Anthropic Commit to Roughly $45 Billion in AI Partnership
Anthropic said it plans to purchase at least $30 billion of Microsoft Azure's computing capacity to scale and train its Claude model. In addition, Nvidia and Anthropic are partnering to optimize Nvidia's technology for Claude's platform, the three companies said Tuesday. Nvidia said it would invest up to $10 billion in Anthropic, while Microsoft pledged up to $5 billion. Anthropic also agreed to buy 1 gigawatt of computing power from Nvidia's Grace Blackwell and Vera Rubin systems. "We are increasingly going to be customers of each other," Microsoft Chief Executive Satya Nadella said in a video statement with Anthropic CEO Dario Amodei and Nvidia Chief Jensen Huang. "This partnership of the three of us will be able to bring AI--bring Claude--to every enterprise, every industry around the world," Huang added. The companies said Tuesday that Anthropic would initially buy $30 billion of compute power from Microsoft and contract an additional 1 gigawatt of compute capacity. In turn, Microsoft will provide access to advanced Claude models for Copilot users and customers of its Azure AI Foundry. Nadella said the partnership builds on Microsoft's roughly $135 billion stake in OpenAI by providing Microsoft customers with more choice. Huang described the deal as a "dream come true for us." "Now your business is on a rocket ship," Huang told Amodei. "I'm really, really hoping for an order-of-magnitude speed-up, and that's going to help you scale even faster, drive down token economics, and really make it possible for us to spread AI everywhere." Amodei said Microsoft's emphasis on enterprise platforms was a natural fit for Claude, which is frequently used in business applications.
[41]
Anthropic launches a $50bn plan to build US AI infrastructure
The start-up Anthropic will invest $50bn in a national infrastructure dedicated to artificial intelligence, with inaugural data centers in Texas and New York, built in partnership with Fluidstack. This deployment aims to support the growth of its enterprise customer base and its research ambitions. The first facilities, operational by 2026, are expected to generate 800 permanent jobs and over 2,000 in construction. This project is part of a strategy to anchor AI physically on American soil, as the debate over technological sovereignty and national computing capabilities intensifies. Anthropic CEO Dario Amodei emphasizes the importance of having the right infrastructure in place to design advanced AI systems. Faced with competition from OpenAI, which has secured over $1.4 trillion in commitments from major partners, Anthropic intends to accelerate its development while aiming for profitability by 2028. The initiative comes amid political and economic uncertainty surrounding public funding for AI infrastructure. While Amazon has already commissioned an $11bn Anthropic campus in Indiana, the startup is also strengthening its ties with Alphabet. Pressure is mounting on US industrial resources, with some observers fearing a speculative bubble. OpenAI recently requested an expansion of public aid through the CHIPS Act, illustrating the growing tensions surrounding the funding model for this new technological race.
Anthropic announces major partnerships with Microsoft and Nvidia worth $45 billion, bringing Claude AI models to Azure while committing to $30 billion in cloud spending. The deal includes separate $50 billion data center construction plans across the US.
Anthropic has announced a landmark $45 billion partnership with Microsoft and Nvidia, marking one of the largest AI infrastructure deals of 2025. The agreement brings together three major players in the artificial intelligence ecosystem, with Anthropic committing $30 billion to Microsoft's Azure cloud services while receiving up to $15 billion in combined investments from both Microsoft and Nvidia [1][5].
The partnership enables Anthropic's Claude AI models to be available on Microsoft Foundry for the first time, including Claude Sonnet 4.5, Claude Opus 4.1, and Claude Haiku 4.5. Microsoft CEO Satya Nadella emphasized the mutual benefits, stating that "we will use Anthropic models, they will use our infrastructure, and we'll go to market together" [3]. Under the agreement, Microsoft will invest up to $5 billion in Anthropic, while Nvidia contributes up to $10 billion. In return, Anthropic commits to purchasing $30 billion worth of Azure compute capacity and has agreed to contract additional compute capacity up to one gigawatt [3]. The deal also includes optimization work between Anthropic and Nvidia to enhance Claude models' performance on future Nvidia architectures, specifically the Grace Blackwell and Vera Rubin systems [5].
Despite this significant partnership with Microsoft, Amazon Web Services remains Anthropic's primary cloud provider and training partner, reflecting the company's strategy to diversify its infrastructure dependencies while maintaining existing relationships [5].
In a parallel development, Anthropic announced a separate $50 billion commitment to build custom data centers across the United States through a partnership with UK-based neocloud provider Fluidstack. The facilities will be located in Texas and New York, with additional sites planned but not yet disclosed [2][4]. This marks Anthropic's first major effort to build custom infrastructure, representing a significant shift from relying solely on third-party cloud providers. CEO Dario Amodei stated that the infrastructure investment is necessary because "we're getting closer to AI that can accelerate scientific discovery and help solve complex problems in ways that weren't possible before" [2].
The deals have reportedly doubled Anthropic's valuation from $183 billion to approximately $350 billion in just two months, according to sources close to the agreement [5]. This dramatic increase has intensified discussions about potential AI bubble concerns, particularly given the circular nature of investments, where companies invest in each other while committing to purchase services from their investors. The infrastructure spending aligns with Anthropic's internal revenue projections, which reportedly anticipate reaching $70 billion in revenue and $17 billion in positive cash flow by 2028 [2]. However, these commitments pale in comparison to competitors' plans, with Meta committing $600 billion for data centers over three years and the Stargate partnership planning $500 billion in infrastructure spending [2].
The partnership reflects broader trends in the AI industry, where major players are increasingly forming strategic alliances to secure computing resources and market access. Microsoft has been expanding its use of Anthropic's models across its Copilot services, even favoring Claude 4 over GPT-5 in some applications like Visual Studio Code's auto AI model selector [3]. Experts suggest that Anthropic's move toward building custom data centers could signal a broader industry trend, as AI companies seek greater control over their computing infrastructure. MIT's Vijay Gadepally noted that this represents "the next logical progression" of verticalization in AI computing [4].