2 Sources
[1]
Amazon's $38B OpenAI Deal Proves NVIDIA's Monopoly Is Already Breaking
The fact that OpenAI needed $660 billion of commitments across five cloud providers to avoid vendor lock-in tells you everything about NVIDIA's pricing-power problem.

Amazon.com Inc's (NASDAQ:AMZN) $38 billion OpenAI deal sent AMZN shares up 5% and NVIDIA Corporation (NASDAQ:NVDA) up 3% this week. Wall Street interpreted the announcement as validation that AWS could compete in the AI infrastructure race. The narrative seemed simple: OpenAI secures hundreds of thousands of cutting-edge NVIDIA GPUs, and Amazon solidifies its AI cloud position.

But investors are missing a critical detail from five days earlier. Amazon revealed that Anthropic (OpenAI's biggest rival and a company Amazon has invested $8 billion in) is now running on 500,000 of Amazon's custom Trainium2 chips, scaling to over 1 million chips by year-end. While OpenAI commits to NVIDIA's premium GPUs, Anthropic is demonstrating that Amazon's custom silicon can train frontier AI models at a fraction of the cost. If that bet succeeds, it fundamentally reshapes who controls AI infrastructure economics and threatens NVIDIA's $5 trillion market cap.

The Tale of Two Strategies: Why Amazon Is Playing Both Sides

Amazon is executing a dual strategy:

Strategy A (OpenAI): Sell NVIDIA GPUs through AWS cloud services. Amazon captures infrastructure revenue, but NVIDIA keeps the fat profit margins on the chips themselves.

Strategy B (Anthropic): Deploy Amazon's custom Trainium2 chips. Amazon captures both the infrastructure revenue and the chip margins, cutting NVIDIA out entirely.

AWS claims Trainium2 delivers 30-40% better price-performance than GPU-based instances for training workloads. For a company like Anthropic, which spends billions annually on compute, that translates to hundreds of millions in savings. Anthropic's revenue grew from approximately $1 billion at the beginning of 2025 to over $5 billion by August, powered largely by this cost advantage.

Amazon wins either way.
If OpenAI succeeds with NVIDIA's ecosystem, AWS books $38 billion in revenue. If Anthropic succeeds with Trainium2, Amazon proves custom silicon can compete, and every other AI lab will demand the same economics.

[Chart: The idea behind Amazon's position. Whether OpenAI succeeds with NVIDIA GPUs or Anthropic proves custom silicon viable, Amazon wins.]

The Technical Bet That Changes Everything

What makes Anthropic's deployment remarkable is that Trainium2 is specifically optimized for Anthropic's reinforcement learning workloads, which are more memory-bandwidth-bound than compute-bound. NVIDIA builds Ferrari engines designed for maximum horsepower. Anthropic needed fuel-efficient trucks optimized for long-haul routes. Amazon built exactly that, and Anthropic was heavily involved in the chip design process, essentially using Amazon's Annapurna Labs as a custom silicon partner.

This is the same hardware-software co-design strategy that made Apple Inc's (NASDAQ:AAPL) M-series chips so dominant. Alphabet Inc's (NASDAQ:GOOGL) Google pioneered it with TPUs for DeepMind. Now Anthropic and Amazon are executing it at massive scale. Project Rainier delivers five times the compute power Anthropic used for previous model generations. As Ron Diamant, AWS vice president and engineer, told reporters: "When we build our own devices, we get to optimize across the entire stack to really compress engineering time and the time to get to massive scale."

The NVIDIA Vulnerability Markets Are Missing

For two decades, NVIDIA's moat was CUDA, the proprietary software ecosystem that made switching away prohibitively expensive. Developers spent years mastering CUDA, and rewriting production code for alternative chips meant months of engineering work. But that lock-in is breaking. OpenAI's Triton compiler and frameworks like PyTorch 2.0 now allow developers to write code that runs on both NVIDIA GPUs and competing chips without modification.
The switching cost that was once measured in millions of dollars is becoming a six-month engineering project. More critically, leading AI cloud providers, including Amazon and Google, have ramped up their in-house chip efforts rather than relying on NVIDIA. This represents systematic replacement. When your two largest customers (Microsoft Corporation (NASDAQ:MSFT) and Amazon, together representing 39% of NVIDIA's revenue according to recent SEC filings) are building competing alternatives, you face a structural reset.

[Chart: A projected timeline in which NVIDIA's adoption declines from 95% in 2024 to 60% in 2027 while Trainium2 grows from near zero to 38% market share, with 40-50% cost advantages.]

OpenAI's Multi-Cloud Escape Plan

OpenAI's true strategy reveals itself in the numbers. The company now has commitments with Microsoft ($250 billion), Oracle Corporation (NYSE:ORCL) ($300 billion), Google (tens of billions), AWS ($38 billion), and CoreWeave ($22.4 billion): over $660 billion in total infrastructure spending.

Until recently, Microsoft had exclusive cloud partnership rights with OpenAI. Last week, those exclusivity provisions expired. Days later, OpenAI signed with Amazon. This is about breaking free from any single vendor's pricing power, especially NVIDIA's. By distributing workloads across clouds with different hardware ecosystems, OpenAI gains access to NVIDIA GPUs, Google's TPUs, AWS's Trainium chips, and future custom silicon from Broadcom Inc (NASDAQ:AVGO). When your infrastructure commitments exceed $1.4 trillion and you're burning $8-10 billion annually, vendor lock-in becomes an existential risk.

As Mike Krieger, Anthropic's chief product officer, told CNBC: "There is such demand for our models that I think the only way we would have been able to serve as much as we've been able to serve so far this year is this multi-chip strategy." Translation: the AI labs have figured out that dependence on NVIDIA's pricing power is unsustainable. Custom silicon is already here.
The Circular Economy Problem That Could Sink Everything

While Amazon announces the OpenAI deal, there's an uncomfortable truth underneath: a significant portion of AI infrastructure "demand" is circular. Amazon invested $8 billion in Anthropic. Anthropic uses AWS infrastructure. AWS revenue grows, justifying Amazon's massive capex. That capex validates AI infrastructure investments, attracting more customers and perpetuating the cycle.

Similarly, OpenAI pays AWS $38 billion for infrastructure. AWS uses that revenue to build more data centers and develop Trainium3. OpenAI's ability to deploy $1.4 trillion in infrastructure commitments justifies its $500 billion valuation, which attracts investor capital, which funds more infrastructure deals.

Wall Street analysts have become concerned about recent circular deals among leading artificial intelligence companies, in which AI infrastructure providers like Amazon and NVIDIA invest in their customers, who then turn around and buy more of their products. As Jeremy Grantham's firm GMO warned, this looks eerily similar to Cisco Systems Inc (NASDAQ:CSCO) in the late 1990s: lending money to startups to buy Cisco routers, then booking those sales as revenue. When the bubble popped, Cisco lost 78% of its value.

The critical question: how much of AWS's 20% growth is organic customer demand versus circular ecosystem revenue from companies Amazon has invested billions in? If 15-20% of AWS growth comes from circular deals, the organic growth rate might actually be 12-15%. Still healthy, but dramatically different from the headline number.
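The decomposition behind that back-of-the-envelope claim can be sketched with one line of arithmetic. The sketch below assumes, as a simplification, that circular revenue scales the headline growth rate down linearly; note that under this assumption a 12-15% organic rate actually corresponds to a circular share of roughly 25-40% of reported growth, somewhat higher than the 15-20% the article cites.

```python
def organic_growth(headline_growth: float, circular_share: float) -> float:
    """Headline growth rate with the circular-ecosystem share stripped out.

    headline_growth: reported year-over-year growth, e.g. 0.20 for 20%
    circular_share:  fraction of that growth attributed to circular deals
    """
    return headline_growth * (1.0 - circular_share)

# AWS's ~20% headline growth under a range of assumed circular shares:
for share in (0.15, 0.25, 0.40):
    print(f"circular share {share:.0%} -> organic {organic_growth(0.20, share):.1%}")
# circular share 15% -> organic 17.0%
# circular share 25% -> organic 15.0%
# circular share 40% -> organic 12.0%
```

The point of the exercise is less the exact numbers than the sensitivity: small changes in the assumed circular share move the "organic" growth figure by several percentage points.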
The Cascade Scenario Worth Watching

The systemic risk Wall Street analysts aren't modeling:

1. AI productivity gains disappoint or arrive slower than expected.
2. Infrastructure utilization drops from 95% to 60-70%.
3. OpenAI and Anthropic can't sustain $1.4 trillion in combined commitments.
4. AWS, Azure, and Oracle face revenue shortfalls.
5. NVIDIA GPU demand craters (the stock currently trades at 50x earnings).
6. Circular financing breaks; the ecosystem can no longer prop up interdependent valuations.
7. AI infrastructure becomes stranded assets, forcing writedowns across the sector.

As the Brookings Institution warned, if AI productivity gains are "limited or delayed, a sharp correction in tech stocks, with negative knock-ons for the real economy, would be very likely."

Why Amazon Still Wins (Even If the Bubble Deflates)

Despite the circular financing concerns, Amazon is positioned better than almost anyone:

1. Immediate revenue recognition: OpenAI is accessing AWS capacity immediately and paying now, not deferred over seven years.
2. Hardware optionality: By supporting both NVIDIA (OpenAI) and custom silicon (Anthropic), Amazon wins regardless of which architecture dominates.
3. Customer diversification: Unlike Microsoft, which is heavily dependent on OpenAI's success, Amazon has a broader enterprise cloud business plus Anthropic as a hedge.
4. Infrastructure execution: As Krieger noted, "These deals all sound great on paper, but they only materialize when they're actually racked and loaded and usable by the customer. And Amazon is incredible at that." AWS added more than 3.8 gigawatts of power capacity in the past 12 months (more than any other cloud provider) and plans to double total capacity by 2027.

What Investors Should Watch

The $38 billion deal is real, but sustainability depends on whether the circular financing can continue and whether AI delivers the productivity gains that justify a trillion dollars in infrastructure spending. Three critical signals:

1. Anthropic's Trainium2 success metrics. Anthropic recently launched a "latency-optimized mode" for Claude 3.5 Haiku that runs 60% faster on Trainium2. If Anthropic can train and deploy frontier models on custom chips at half of NVIDIA's cost, the entire GPU premium pricing structure collapses.

2. OpenAI's path to profitability. The company is burning $8-10 billion annually, with total projected burn of $115 billion through 2029. If there's no clear path to positive cash flow by 2028, these massive infrastructure commitments become unsustainable.

3. AWS organic growth decomposition. Monitor how much of AWS's 20% growth comes from OpenAI and Anthropic versus traditional enterprise customers. If more than 20% comes from the circular ecosystem, the quality of growth is suspect.

The Investment Implications

Amazon's $38 billion OpenAI deal represents the opening move in a war over who controls AI infrastructure economics. On one side: OpenAI with NVIDIA GPUs, representing the status quo where hyperscalers pay premium prices to the chip monopoly. On the other: Anthropic with Trainium2, representing a future where hyperscalers build cost-efficient custom silicon and reclaim pricing power.

For NVIDIA shareholders, the signs are clear. The company will remain dominant and profitable, but it's transforming from "irreplaceable monopoly" to "leading semiconductor company with normalizing margins." When that perception shift completes (likely within 12-18 months), NVIDIA's valuation multiple will compress from 50x earnings to 25-30x, which at flat earnings implies roughly 40-50% downside.

For Amazon shareholders, this dual strategy (serving both the NVIDIA ecosystem and the custom silicon revolution) positions AWS as the Switzerland of AI infrastructure. That strategic ambiguity works in their favor.

The AI revolution is real. But the winners are being determined not by who builds the best models, but by who controls the most cost-efficient infrastructure to run them. Right now, that battle is just beginning.
And Monday's $38 billion deal was never really about OpenAI and Amazon at all. It was always about the war to break NVIDIA's monopoly, one custom chip at a time.

Benzinga Disclaimer: This article is from an unpaid external contributor. It does not represent Benzinga's reporting and has not been edited for content or accuracy.
[2]
Amazon: OpenAI Deal Ignites AI Battle as Nvidia Retreats From the Frontline | Investing.com UK
While headlines focus on another mega-deal, the real story is Amazon's bet that custom chips can break NVIDIA's monopoly: the deal is really about ending chip monopolies.

When Amazon announced its $38 billion deal with OpenAI this Monday, Wall Street saw it as confirmation that AWS could compete in the AI infrastructure wars. Amazon's stock rose 5%. NVIDIA climbed 3%. The narrative seemed straightforward: OpenAI gets hundreds of thousands of cutting-edge NVIDIA GPUs, and Amazon cements its position as an AI cloud powerhouse.

But a parallel story is unfolding that mainstream coverage overlooked, one that threatens to upend NVIDIA's $5 trillion empire and rewrite the economics of artificial intelligence. Just five days before the OpenAI announcement, Amazon revealed something far more consequential: Anthropic (OpenAI's biggest rival and a company Amazon has invested $8 billion in) is now running on 500,000 of Amazon's custom Trainium2 chips, scaling to over 1 million chips by year's end. While OpenAI gets NVIDIA's premium GPUs, Anthropic is proving that Amazon's custom silicon can train frontier AI models at a fraction of the cost. And if that bet succeeds, it fundamentally changes who controls the future of AI.

Amazon is executing a dual strategy:

Strategy A (OpenAI): Sell NVIDIA GPUs through AWS cloud services. Amazon captures infrastructure revenue, but NVIDIA keeps the big profit margins on the chips themselves.

Strategy B (Anthropic): Deploy Amazon's custom Trainium2 chips. Amazon captures both the infrastructure revenue and the chip margins, cutting NVIDIA out entirely.

AWS claims Trainium2 delivers 30-40% better price-performance than GPU-based instances for training workloads. For a company like Anthropic, which spends billions annually on compute, that translates to hundreds of millions in savings. Anthropic's revenue grew from approximately $1 billion at the beginning of 2025 to over $5 billion by August, powered largely by this cost advantage.
Amazon wins either way. If OpenAI succeeds with NVIDIA's ecosystem, AWS books $38 billion in revenue. If Anthropic succeeds with Trainium2, Amazon proves custom silicon can compete, and every other AI lab will demand the same economics.

[Chart: The strategic idea of Amazon's position. Whether OpenAI succeeds with NVIDIA GPUs or Anthropic proves custom silicon viable, Amazon wins.]

What makes Anthropic's deployment remarkable is that Trainium2 is specifically optimized for Anthropic's reinforcement learning workloads, which are more memory-bandwidth-bound than compute-bound. NVIDIA builds Ferrari engines designed for maximum horsepower. Anthropic needed fuel-efficient trucks optimized for long-haul routes. Amazon built exactly that, and Anthropic was heavily involved in the chip design process, essentially using Amazon's Annapurna Labs as a custom silicon partner.

This is the same hardware-software co-design strategy that made Apple's M-series chips so dominant. Google pioneered it with TPUs for DeepMind. Now Anthropic and Amazon are executing it at massive scale. Project Rainier delivers five times the compute power Anthropic used for previous model generations. As Ron Diamant, AWS vice president and engineer, told reporters: "When we build our own devices, we get to optimize across the entire stack to really compress engineering time and the time to get to massive scale."

[Chart: A projected timeline in which NVIDIA's adoption declines from 95% in 2024 to 60% in 2027 while Trainium2 grows from near zero to 38% market share, with 40-50% cost advantages.]

For two decades, NVIDIA's moat was CUDA, the proprietary software ecosystem that made switching away prohibitively expensive. Developers spent years mastering CUDA, and rewriting production code for alternative chips meant months of engineering work. But that lock-in is breaking. OpenAI's Triton compiler and frameworks like PyTorch 2.0 now allow developers to write code that runs on both NVIDIA GPUs and competing chips without modification.
The switching cost that was once measured in millions of dollars is becoming a six-month engineering project. More critically, leading AI cloud providers, including Amazon and Google, have ramped up their in-house chip efforts rather than relying on NVIDIA. This represents systematic replacement. When your two largest customers (Microsoft and Amazon, together representing 39% of NVIDIA's revenue according to recent SEC filings) are building competing alternatives, you face a structural reset.

OpenAI's true strategy reveals itself in the numbers. The company now has commitments with Microsoft ($250 billion), Oracle ($300 billion), Google (tens of billions), AWS ($38 billion), and CoreWeave ($22.4 billion): over $660 billion in total infrastructure spending.

Until recently, Microsoft had exclusive cloud partnership rights with OpenAI. Last week, those exclusivity provisions expired. Days later, OpenAI signed with Amazon. This is about breaking free from any single vendor's pricing power, especially NVIDIA's. By distributing workloads across clouds with different hardware ecosystems, OpenAI gains access to NVIDIA GPUs, Google's TPUs, AWS's Trainium chips, and future custom silicon from Broadcom. When your infrastructure commitments exceed $1.4 trillion and you're burning $8-10 billion annually, vendor lock-in becomes an existential risk.

As Mike Krieger, Anthropic's chief product officer, told CNBC: "There is such demand for our models that I think the only way we would have been able to serve as much as we've been able to serve so far this year is this multi-chip strategy." Translation: the AI labs have figured out that dependence on NVIDIA's pricing power is unsustainable. Custom silicon is already here.

While Amazon announces the OpenAI deal, there's an uncomfortable truth underneath: a significant portion of AI infrastructure "demand" is circular. Amazon invested $8 billion in Anthropic. Anthropic uses AWS infrastructure. AWS revenue grows, justifying Amazon's massive capex.
That capex validates AI infrastructure investments, attracting more customers and perpetuating the cycle. Similarly, OpenAI pays AWS $38 billion for infrastructure. AWS uses that revenue to build more data centers and develop Trainium3. OpenAI's ability to deploy $1.4 trillion in infrastructure commitments justifies its $500 billion valuation, which attracts investor capital, which funds more infrastructure deals.

Wall Street analysts have become concerned about recent circular deals among leading artificial intelligence companies, in which AI infrastructure providers like Amazon and NVIDIA invest in their customers, who then turn around and buy more of their products. As Jeremy Grantham's firm GMO warned, this looks eerily similar to Cisco in the late 1990s: lending money to startups to buy Cisco routers, then booking those sales as revenue. When the bubble popped, Cisco lost 78% of its value.

The critical question: how much of AWS's 20% growth is organic customer demand versus circular ecosystem revenue from companies Amazon has invested billions in? If 15-20% of AWS growth comes from circular deals, the organic growth rate might actually be 12-15%. Still healthy, but dramatically different from the headline number.

The systemic risk Wall Street analysts aren't modeling: as the Brookings Institution warned, if AI productivity gains are "limited or delayed, a sharp correction in tech stocks, with negative knock-ons for the real economy, would be very likely."

Despite the circular financing concerns, Amazon is positioned better than almost anyone: AWS added more than 3.8 gigawatts of power capacity in the past 12 months (more than any other cloud provider) and plans to double total capacity by 2027. The $38 billion deal is real, but sustainability depends on whether the circular financing can continue and whether AI delivers the productivity gains that justify a trillion dollars in infrastructure spending.
Three critical signals will determine how this plays out: Anthropic's Trainium2 success metrics, OpenAI's path to profitability, and the share of AWS's growth that is organic rather than circular.

Amazon's $38 billion OpenAI deal represents the opening move in a war over who controls AI infrastructure economics. On one side: OpenAI with NVIDIA GPUs, representing the status quo where hyperscalers pay premium prices to the chip monopoly. On the other: Anthropic with Trainium2, representing a future where hyperscalers build cost-efficient custom silicon and reclaim pricing power.

For NVIDIA, the signs are clear. The company will remain dominant and profitable, but it's transforming from "irreplaceable monopoly" to "leading semiconductor company with normalizing margins." When that perception shift completes (likely within 12-18 months), NVIDIA's valuation multiple will compress from 50x earnings to 25-30x.

For Amazon, this dual strategy (serving both the NVIDIA ecosystem and the custom silicon revolution) positions AWS as the Switzerland of AI infrastructure. That strategic ambiguity works in their favor.

The AI revolution is real. But the winners are determined not by who builds the best models, but by who controls the most cost-efficient infrastructure to run them. Right now, that battle is just beginning. And Monday's $38 billion deal was never really about OpenAI and Amazon at all. It was always about the war to slowly break NVIDIA's monopoly.

***
Amazon's dual strategy with OpenAI and Anthropic reveals a calculated approach to break NVIDIA's AI chip monopoly, using custom Trainium2 chips to offer cost-effective alternatives while maintaining revenue streams from traditional GPU partnerships.
Amazon's announcement of a $38 billion deal with OpenAI sent ripples through the tech industry this week, with Amazon shares rising 5% and NVIDIA climbing 3%.
1
While Wall Street interpreted this as validation of AWS's competitive position in AI infrastructure, the deal represents a more complex strategic maneuver that could fundamentally reshape the AI chip landscape. The partnership will see OpenAI utilizing hundreds of thousands of cutting-edge NVIDIA GPUs through AWS cloud services, positioning Amazon to capture significant infrastructure revenue. However, this announcement masks a parallel development that occurred just five days earlier, revealing Amazon's true strategic intent in the AI infrastructure wars.
Amazon simultaneously revealed that Anthropic, OpenAI's primary competitor and a company in which Amazon has invested $8 billion, is now operating on 500,000 of Amazon's custom Trainium2 chips.
2
This deployment is set to scale to over 1 million chips by year-end, representing a significant bet on custom silicon alternatives to NVIDIA's dominant GPU ecosystem. AWS claims Trainium2 delivers 30-40% better price-performance than GPU-based instances for training workloads. For Anthropic, which spends billions annually on compute resources, this translates to hundreds of millions in potential savings. The cost advantage has contributed to Anthropic's remarkable growth trajectory, with revenue expanding from approximately $1 billion at the beginning of 2025 to over $5 billion by August.
Amazon's approach reveals a calculated dual strategy designed to maximize outcomes regardless of which technological path proves superior. Strategy A involves selling NVIDIA GPUs through AWS cloud services, allowing Amazon to capture infrastructure revenue while NVIDIA retains the substantial profit margins on chip sales. Strategy B deploys Amazon's custom Trainium2 chips, enabling Amazon to capture both infrastructure revenue and chip margins while completely bypassing NVIDIA.

Source: Benzinga
The Trainium2 chips represent a hardware-software co-design approach, with Anthropic heavily involved in the chip design process through Amazon's Annapurna Labs. This collaboration mirrors successful strategies employed by Apple with its M-series chips and Google's TPU development for DeepMind. Ron Diamant, AWS vice president and engineer, emphasized the advantages: "When we build our own devices, we get to optimize across the entire stack to really compress engineering time and the time to get to massive scale."
For two decades, NVIDIA's competitive moat relied heavily on CUDA, the proprietary software ecosystem that made switching to alternative chips prohibitively expensive for developers. The switching costs, once measured in millions of dollars and months of engineering work, are now diminishing due to technological advances.
OpenAI's Triton compiler and frameworks like PyTorch 2.0 now enable developers to write code that runs seamlessly on both NVIDIA GPUs and competing chips without modification. This development transforms what was once a six-month engineering project into a more manageable transition, significantly reducing the barriers to adopting alternative chip architectures.
OpenAI's infrastructure strategy extends far beyond the Amazon deal, encompassing commitments totaling over $660 billion across five major cloud providers. These include Microsoft ($250 billion), Oracle ($300 billion), Google (tens of billions), AWS ($38 billion), and CoreWeave ($22.4 billion). This diversification strategy became possible after Microsoft's exclusive cloud partnership rights with OpenAI expired last week, immediately followed by the Amazon announcement.
This multi-cloud approach represents OpenAI's deliberate effort to avoid vendor lock-in and maintain negotiating power against any single provider's pricing strategies, particularly targeting NVIDIA's premium pricing model. The strategy reflects broader industry concerns about concentration risk in AI infrastructure dependencies.
Summarized by Navi