15 Sources
[1]
Google and Intel deepen AI infrastructure partnership | TechCrunch
Google and Intel announced an expanded multi-year partnership on Thursday for Google Cloud to continue utilizing Intel AI infrastructure and to keep developing processors together. Google Cloud will use Intel's Xeon processors, including Intel's latest Xeon 6 chips, for AI, cloud, and inference tasks. The company has used Intel's various Xeon processors for decades. The companies will also expand the co-development of custom infrastructure processing units (IPUs), which help accelerate and manage data center tasks by offloading them from CPUs. This chip development partnership, which started in 2021, will focus on custom ASIC-based IPUs. Intel declined to share any information regarding pricing for the deal. This expansion comes as the industry is hungry for CPUs. While GPUs are used for developing and training AI models, CPUs are crucial for running AI models and within general AI infrastructure. "AI is reshaping how infrastructure is built and scaled," Intel chief executive Lip-Bu Tan said in a company press release. "Scaling AI requires more than accelerators -- it requires balanced systems. CPUs and IPUs are central to delivering the performance, efficiency and flexibility modern AI workloads demand." More companies have been turning their focus to CPUs in recent months as there is a growing shortage for the chips. SoftBank-owned Arm Holdings recently announced the Arm AGI CPU, the first chip that the semiconductor giant has produced itself, amid a worldwide crunch for CPUs.
[2]
Intel and Google announce multi-year chip deal -- Google will deploy Intel Xeon with custom IPUs for next-gen AI, cloud infrastructure
Intel and Google on Thursday announced a multi-year collaboration under which Google will continue deploying Intel Xeon platforms for its next generation of AI and cloud infrastructure. These platforms will rely not only on Intel's upcoming Xeon CPUs, but also on custom infrastructure processing units (IPUs) co-designed by Intel and Google. The announcement comes amid the accelerating adoption of custom Arm-based processors for AI workloads. "Scaling AI requires more than accelerators - it requires balanced systems. CPUs and IPUs are central to delivering the performance, efficiency and flexibility modern AI workloads demand," said Lip-Bu Tan, CEO of Intel. Google currently employs Intel Xeon 5 and Intel Xeon 6 processors for a variety of workloads, including large-scale AI training coordination, latency-sensitive inference, and general-purpose computing. For example, Intel's latest Xeon CPUs power C4 and N4 instances. Although Google's custom Armv9-based Axion processors provide the cloud giant more control and efficiency at lower cost, many workloads that are run in Google's data centers need to either be backwards compatible with x86 or just need maximum single-thread performance offered by Intel Xeon CPUs. This is something that is expected to continue for years to come, which is why the two companies inked the deal. In a bid to make Intel Xeon platforms more efficient and suitable for its hyperscale data centers, Google will also co-develop custom IPUs together with Intel to offload networking, storage, and security functions from host CPUs. Ultimately, Intel Xeon platforms will combine x86 architecture with high single-thread performance and custom-built infrastructure processing, which will make them more competitive in Google's highly customized environments. "CPUs and infrastructure acceleration remain a cornerstone of AI systems -- from training orchestration to inference and deployment," said Amin Vahdat, SVP & Chief Technologist, AI Infrastructure, Google. The announcement comes at a time when hyperscalers and AI platform developers are accelerating the adoption of their own custom CPUs based on the Arm instruction set architecture. Just a week ago, Counterpoint Research released a note claiming that 90% of AI servers running custom-silicon processors will rely on the Arm ISA, leaving x86 and RISC-V about 10%. The announcement by Intel and Google clearly states that Xeon CPUs with custom IPUs will continue to be used for AI and other demanding workloads for years to come, which is something to be expected anyway. Intel's Xeon processors have powered cloud infrastructure since its inception in the 2000s, and Google's own servers before that, so x86 in general and Xeon in particular will not leave Google's data center premises any time soon. Nonetheless, the announcement clearly reemphasizes the relevance of Intel's Xeon CPUs, and when such a message comes from Google -- which has been deploying special-purpose custom accelerators for years across virtually all of its services -- it gets amplified significantly. "Intel has been a trusted partner for nearly two decades, and their Xeon roadmap gives us confidence that we can continue to meet the growing performance and efficiency demands of our workloads," Vahdat added. Follow Tom's Hardware on Google News, or add us as a preferred source, to get our latest news, analysis, & reviews in your feeds.
[3]
Google taps Intel for another round of custom network chips
Google will continue to work with Intel, buying SmartNICs for its public cloud rather than blazing its own trail as AWS has done with its Nitro NICs. Like most hyperscalers today, Google employs SmartNICs, or as Intel prefers to call them, infrastructure processing units (IPUs). These devices are essentially a computer on a network card and are designed to offload networking, security, and storage operations, freeing up CPU resources for tenant workloads. While Amazon employs custom ASICs from its Annapurna Labs team and Microsoft uses custom logic running on FPGAs, Google tapped Intel to develop an ASIC-based IPU called Mount Evans, which launched alongside its C3 instances in 2022. On Thursday, Intel announced Google had expanded this collaboration to develop new IPUs in a press release that reads like a desperate attempt to convince the public that its Datacenter and Networking divisions are still relevant. Intel CFO David Zinsner had alluded to increased demand for these services during the company's Q4 earnings call in January, touting that the company's custom ASIC biz grew more than 50 percent in 2025 and exited Q4 at an annualized revenue run rate above $1 billion. Intel didn't elaborate on what Google's next-gen IPUs might look like, but given the demand for high speed networking for AI compute clusters, there's a good chance it'll be significantly faster than its 200 Gbps Mount Evans IPUs. Alongside the expanded IPU collab, Intel was also keen to note that the Chocolate Factory wasn't giving up on its Xeon processors, which will power a variety of general purpose and AI workloads. In other words, it is business as usual. Like many hyperscalers, Google now has its own Arm-based CPU, codenamed Axion, which runs both internal and customer facing workloads. However, just as Graviton and Cobalt have replaced Xeon or Epyc processors in Amazon's or Microsoft's clouds, neither Intel nor AMD is at risk of being ejected by Axion any time soon. In fact, since many customers prefer to run their workloads on x86 cores, either for performance or compatibility reasons, Intel remains useful for putting pricing pressure on AMD and vice versa. In any case, Intel felt it was necessary to reassure everyone that Xeon remained a key part of Google Cloud, particularly when it comes to AI. But again, this is nothing new. Xeons have been the CPU of choice for Nvidia's 8-GPU DGX reference designs going back to the H100 in 2022. In other words, many of Google's GPU instances already had Intel inside. And, while AI workloads also need CPUs to orchestrate agents and execute the code generated by Google's GPUs and TPUs, there's nothing that makes Xeon inherently better for this job than Epyc or Axion. In fact, at cloud scale, the best CPU for agentic AI is probably whatever happens to be sitting idle at any given moment. ®
[4]
Intel Wins Google Commitment to Use Xeon Chips in Data Centers
Intel Corp., trying to promote the use of its technology in data centers, said Alphabet Inc.'s Google has committed to using future generations of its Xeon processors and other chips. As part of the multiyear agreement, announced Thursday, the search engine giant will customize Intel's IPUs, or infrastructure processing units. These chips handle functions such as networking, security and storage. The companies didn't disclose financial details or purchase commitments. The deal is part of a push by Intel to better capitalize on the build-out of artificial intelligence infrastructure. Getting a bigger slice of data center spending is critical to a comeback bid under Chief Executive Officer Lip-Bu Tan. Xeon once commanded a market share of more than 99%, making it the chief source of profit for what was then the world's largest semiconductor maker. In the last few years, Intel has lost ground to rivals such as Advanced Micro Devices Inc. and in-house efforts by customers -- including Google. One trend is working in Intel's favor: Central processing units, its hallmark product, are increasingly seen as critical to artificial intelligence computing. Though Nvidia Corp.'s AI processors are still the workhorses of this infrastructure, there's growing demand for general-purpose CPUs to help everything run smoothly. CPUs can help orchestrate the training of AI systems and handle inference -- the stage where models are put into use -- according to Amin Vahdat, senior vice president and chief technologist of AI infrastructure at Google. "Intel has been a trusted partner for nearly two decades, and their Xeon road map gives us confidence that we can continue to meet the growing performance and efficiency demands of our workloads," he said in the statement. The two companies said their collaboration will enable "a more balanced approach to AI system design -- one that improves utilization, reduces complexity and scales more efficiently."
[5]
Intel and Google to double down on AI CPUs with expanded partnership
April 9 (Reuters) - Intel and Google have expanded their partnership to advance the use of artificial intelligence-focused central processing units and to develop custom infrastructure processors, as shifting use of AI drives renewed demand for traditional computing chips. Companies are increasingly moving away from using AI for training models to deploying them, fueling the need for generalist CPU chips designed to handle heavy workloads. Under the agreement, announced on Thursday, Alphabet's Google (GOOGL.O) unit will continue to deploy Intel's (INTC.O) Xeon processors that support a broad range of workloads such as inference and general-purpose computing. The company will also use Intel's latest Xeon 6 chips. Intel and Google will also expand the co-development of custom infrastructure processing units (IPUs), which can handle tasks traditionally managed by the CPU, enabling more efficient computing. "Scaling AI requires more than accelerators - it requires balanced systems. CPUs and IPUs are central to delivering the performance, efficiency and flexibility modern AI workloads demand," said Intel CEO Lip-Bu Tan. Surging demand for agentic AI systems - which perform complex, multi-step operations beyond simple chatbot functionality - has boosted the requirement for significantly more CPU processing power. The surge in demand for CPUs could help Intel to strengthen its balance sheet and acquire new customers after the chip manufacturer lost market share to rivals during the early years of the AI boom. The company said on Tuesday it will join Elon Musk's Terafab AI chip complex project with SpaceX and Tesla to power the billionaire's robotics and data center ambitions. Intel also plans to take full ownership of its Ireland manufacturing facility, where it makes Xeon server processors, by buying back the stake it had sold to Apollo Global Management. (Reporting by Zaheer Kachwala in Bengaluru; Editing by Sriraj Kalluvila)
[6]
Google expands partnership with Intel for AI chips
Intel Xeon 6 processors are shown to CNBC at Intel's advanced packaging facility in Chandler, Arizona, on November 17, 2025. Google has committed to using multiple generations of Intel central processing units in its artificial intelligence data centers, an expansion of an existing partnership. The internet giant has long relied on Intel processors, dating back to its earliest server rack ambitions nearly three decades ago. Intel's newest Xeon 6 CPUs will now run AI training and inference workloads, potentially giving the chipmaker a stronger position in an AI market that's so far been dominated by Nvidia. "Their Xeon roadmap gives us confidence that we can continue to meet the growing performance and efficiency demands of our workloads," Amin Vahdat, Google's chief technologist for AI infrastructure, said in a statement Thursday. No financial terms were disclosed, nor did the companies provide a timeline for the agreement. The deal lands as the CPU takes center stage in the next phase of the AI race. Dion Harris, Nvidia's head of AI infrastructure, told CNBC in March that CPUs are "becoming the bottleneck" as agentic workloads move compute needs beyond the graphics processing units that have ruled AI thus far. "Scaling AI requires more than accelerators -- it requires balanced systems," Intel CEO Lip-Bu Tan said in a statement about the Google deal on Thursday. Intel, which has been struggling for years to keep pace with new trends in technology, sold a 10% stake to the U.S. government in August, with the Trump administration touting the chipmaker's ability to make advanced chips on U.S. soil. The following month, Nvidia said it would purchase a $5 billion stake in Intel. Shares of Intel have nearly tripled in the past year, fueled by those investments. Intel makes the latest Xeon processor on its most advanced 18A technology at its Arizona chip fabrication plant that opened last year. Despite pouring billions into the foundry side of its business, Intel's own processors remain the largest customer at the new fab. But Tan posted on LinkedIn earlier this week that Elon Musk has tapped Intel to design, fabricate and package custom chips for SpaceX, xAI and Tesla at his ambitious Terafab project in Texas, though no financial details or timeline was announced. As part of Thursday's announcement, Google and Intel reiterated that they're collaborating on another type of chip, the infrastructure processing unit, or IPU, which the two companies have worked on together since 2022. In a press release, Intel said this programmable accelerator is used to "offload networking, storage and security functions from host CPUs." Google told CNBC in an email that the IPU was a first-of-its-kind chip when the companies first collaborated on it four years ago. Google said it's designed to help customers better utilize the main CPU in a traditional data center by taking over "overhead" tasks, such as routing network traffic, managing storage, encrypting data and running virtualization software. For over a decade, Google has also developed its own custom AI accelerator called the tensor processing unit, or TPU. In 2024, Google also started making its own custom CPU, Axion, choosing an Arm-based design over Intel's leading x86 architecture. -- CNBC's Kristina Partsinevelos contributed to this report.
[7]
Google Cloud deepens AI infrastructure partnership with Intel across Xeon and custom chips
In short: Google Cloud and Intel have announced a deepened multi-year AI infrastructure partnership covering both CPU deployment and custom chip co-development. Google Cloud will continue adopting Intel's Xeon 6 processors across its global infrastructure for C4 and N4 instances, while the two companies are expanding their joint development of custom Infrastructure Processing Units designed to offload networking, storage, and security from host CPUs in hyperscale AI environments. The announcement arrives as Intel's stock surged approximately 33% on the week and two days after the company signed on as the foundry partner for Tesla's Terafab megaproject. The central argument of the partnership, as framed by both companies, is that GPU accelerators alone are not sufficient to handle the demands of modern AI infrastructure. In a statement accompanying the announcement, Lip-Bu Tan, Intel's chief executive, said: "AI is reshaping how infrastructure is built and scaled. Scaling AI requires more than accelerators -- it requires balanced systems. CPUs and IPUs are central to delivering the performance, efficiency and flexibility modern AI workloads demand." The language is deliberate. Intel has spent much of the past two years repositioning from the general-purpose computing market it once dominated toward a more specific thesis: that the CPU and custom infrastructure silicon have a structural role in AI deployments that GPU-centric narratives have consistently underestimated. Amin Vahdat, Google's senior vice president and chief technologist for AI infrastructure, made the case from the demand side. "CPUs and infrastructure acceleration remain a cornerstone of AI systems -- from training orchestration to inference and deployment," he said. "Intel has been a trusted partner for nearly two decades, and their Xeon roadmap gives us confidence that we can continue to meet the growing performance and efficiency demands of our workloads." The framing of the partnership as a multi-generational CPU roadmap commitment, rather than a one-cycle procurement agreement, is significant: it implies Google has made decisions about its infrastructure architecture several years out on the basis of Intel's product trajectory, and that trajectory includes both the Xeon line and the custom IPU co-development effort. The CPU component of the partnership centres on Intel's Xeon 6 processor family, which Google Cloud has deployed across its workload-optimised C4 and N4 instance types. Google says the C4 instances deliver more than 2.0 times the total cost of ownership benefit compared with predecessor configurations, a figure that captures the combination of performance uplift and power efficiency that Intel has positioned as Xeon 6's core competitive claim. The agreement extends beyond the current generation: Google has committed to multi-generational alignment with Intel's Xeon roadmap, meaning its infrastructure planning incorporates Intel's future CPU releases as a known variable rather than a contingent one. Google has simultaneously been deepening its custom silicon commitments on the accelerator side, supplying Anthropic with approximately one gigawatt of TPU capacity through Broadcom in a deal that anchors Anthropic's AI infrastructure through 2027 and beyond -- a parallel track that reflects how Google is building out its infrastructure portfolio across both standard and custom silicon simultaneously. The CPU architecture context matters for understanding why this commitment is being made public now. 
As AI workloads shift from the training phase, which is GPU-intensive and relatively concentrated among a small number of hyperscalers, toward inference at scale, which is distributed, latency-sensitive, and runs continuously across large server fleets, the cost structure of AI infrastructure changes. Inference places sustained demands on CPU resources for orchestration, data pre-processing, and system management that training pipelines do not. Google's bet on Xeon 6 for its C4 and N4 instances is, in part, a bet that inference economics will make CPU efficiency a first-order concern in the years ahead. The more strategically significant element of the partnership is the expanded co-development of Infrastructure Processing Units. IPUs are custom ASIC-based programmable accelerators designed to take over the networking, storage, and security functions that would otherwise run on host CPUs, freeing those CPUs to focus entirely on application and AI workload processing. In hyperscale environments, where these infrastructure tasks consume a substantial and growing fraction of available compute, offloading them to a dedicated accelerator can significantly improve utilisation rates, energy efficiency, and the consistency of workload performance. Intel and Google have been collaborating on IPU development, and the announcement signals that this work is expanding in scope rather than narrowing. The specific technical details of the expanded programme -- die design, process node, performance targets, and deployment timeline -- have not been disclosed publicly. Nvidia, whose fourth-quarter 2025 revenue reached $68.1 billion on 73% year-on-year growth and which used its GTC 2026 conference in March to position its full-stack platform as the default environment for AI infrastructure, is the implicit competitive reference point for both components of the Intel-Google partnership. Intel is not attempting to displace Nvidia's GPU accelerators in training workloads; it is arguing that the system around those accelerators -- the CPUs managing orchestration, the IPUs managing network and storage overhead, and the interconnects tying everything together -- is where efficiency gains are increasingly available. That argument has a natural ally in Google, which has both the infrastructure scale to validate it empirically and commercial incentives to diversify away from a single-vendor accelerator dependency. The Google partnership arrives at a moment when Intel's industrial position is changing rapidly. Two days before the Google announcement, Intel signed on as the primary foundry partner for Terafab, the $25 billion joint venture between Tesla, SpaceX, and xAI targeting one terawatt of AI compute per year, committing its 18A process node -- the company's most advanced logic manufacturing technology -- to the project. The two announcements taken together suggest Intel is pursuing a two-track strategy: deepening its hyperscale cloud partnerships for CPU and IPU deployment while simultaneously building out its foundry business for the custom AI silicon market that Nvidia, AMD, and the hyperscalers' in-house chip programmes have driven into existence. The stock market responded to the week's announcements with a roughly 33% gain in Intel's share price, the sharpest weekly move the company has recorded in years. Whether the strategic repositioning is durable depends on execution. 
Intel's 18A process node is the same technology that underpins its foundry credibility with customers like Tesla, and its delay history has been a persistent source of investor concern. The Xeon 6 deployment in Google Cloud and the IPU co-development programme are both contingent on Intel shipping what its roadmap promises on the timelines Vahdat's statement implies Google has factored into its own planning. The AI infrastructure market that Intel is trying to enter has become one of the most heavily capitalised segments in technology, with deals such as Meta's $27 billion agreement with Nebius in March 2026 illustrating the scale of commitments being made across the industry. The year 2025 shifted the centre of gravity in AI from model development to infrastructure deployment, establishing capital expenditure scale and infrastructure access as the primary competitive variables -- and Intel, for the first time in several years, is making a credible case that it belongs in that competition on multiple fronts simultaneously.
[8]
Can Intel's Long-Term AI Cloud Deal With Google Drive Future Profit?
Intel Corporation (INTC) is strengthening its position in artificial intelligence (AI) infrastructure through a multi-year partnership with Alphabet Inc. (GOOGL). The deal highlights Intel's focus on powering next-generation cloud systems by combining Intel's compute capabilities with specialized acceleration technologies to improve efficiency and performance. Under the agreement, Intel's Xeon processors, including the latest Intel Xeon 6 CPUs (central processing units), will continue to power Google Cloud's data centers. These CPUs handle key tasks like managing workloads, processing data and supporting AI training and inference. They help Google Cloud run different types of workloads efficiently, from machine learning to fast-response applications. In addition, Intel and Google are expanding the co-development of custom application-specific integrated circuit (ASIC)-based Infrastructure Processing Units (IPUs). These custom chips are designed to offload networking, storage and security tasks from the CPU, improving overall system efficiency and freeing up compute resources for more critical operations. By integrating Xeon CPUs with IPUs, Intel is enabling a more balanced architecture that optimizes both general-purpose and specialized processing. Through this collaboration, Intel has positioned itself as a key enabler of scalable, efficient AI infrastructure for the future.
[9]
Intel inks multiyear data center chip partnership with Google - SiliconANGLE
Google LLC will adopt multiple future iterations of Intel Corp.'s Xeon processor series as part of a collaboration announced today. Shares of the chipmaker closed 4.7% higher on the news. Google will deploy the chips in its cloud platform. They will power artificial intelligence models and general-purpose workloads. Google Cloud already uses Xeon 6, Intel's newest series of central processing units, to run some of its general-purpose C4 instances. The virtual machines can achieve a maximum clock speed of 3.9 gigahertz when all the cores of the underlying CPU are active. That frequency can rise to 4.2 gigahertz when only the fastest cores are online. Google's C4 instances use a specific variant of Xeon 6 called Granite Rapids. It's based on a core design called P-core that includes multiple AI-focused optimizations. One of those optimizations is AMX, a set of extensions to the machine language in which Intel chips express computations. AMX speeds up a calculation called multiply-accumulate that AI models run frequently during inference. Intel also offers a second collection of Xeon 6 chips called Sierra Forest. Those CPUs are based on a core design called E-core that trades off some of P-core's performance for increased efficiency. Intel debuted its most advanced E-core server processor in March. It includes 288 cores, or 160 more than the largest Granite Rapids processor. The chip is based on the company's latest Intel 18A manufacturing process, which offers up to 15% better performance per watt than the Intel 3 node that underpins earlier Xeon 6 chips. "Scaling AI requires more than accelerators - it requires balanced systems," said Intel chief executive officer Lip-Bu Tan. "CPUs and IPUs are central to delivering the performance, efficiency and flexibility modern AI workloads demand." The companies' new partnership also extends to Intel's IPU, or infrastructure processing unit, product family. The chips in the lineup are optimized to perform infrastructure management tasks such as encrypting data traffic and coordinating storage hardware. IPUs offload those tasks from a server's CPU, which leaves more computing capacity for user workloads. Intel and Google plan to expand "their co-development of custom ASIC-based IPUs." An ASIC, or application-specific integrated circuit, is a processor designed from the ground up for a specific set of use cases. That suggests Google will commission IPUs optimized for its cloud data centers.
[10]
Intel and Google Expand AI Infrastructure Collaboration with Xeon CPUs and IPUs
Intel and Google have announced a multi-year collaboration focused on advancing AI and cloud infrastructure, with an emphasis on combining general-purpose compute and specialized acceleration. The partnership highlights the role of Intel Xeon processors alongside custom infrastructure processing units (IPUs) in addressing the increasing complexity of modern AI systems. As AI workloads continue to scale, data center architectures are evolving toward more heterogeneous designs. In this context, CPUs remain a critical component, handling orchestration, data processing, and system-level coordination. Google Cloud continues to deploy Intel Xeon processors across its infrastructure, including the latest Xeon 6 series powering its C4 and N4 instances. These platforms support a broad spectrum of workloads, ranging from AI training coordination to inference and traditional compute tasks. A key element of the collaboration is the joint development of custom ASIC-based IPUs. These accelerators are designed to offload infrastructure-related tasks such as networking, storage management, and security processing from the CPU. By shifting these responsibilities, IPUs improve resource utilization and enable more predictable performance across large-scale deployments. The integration of Xeon CPUs and IPUs forms a balanced system architecture that separates general-purpose compute from infrastructure-specific processing. This approach allows data centers to scale more efficiently while maintaining flexibility across different workload types. It also helps reduce overhead and optimize performance in environments where AI workloads place significant demands on both compute and data movement. From a broader perspective, the collaboration reflects a shift toward more modular and specialized infrastructure in hyperscale environments. Rather than relying solely on accelerators such as GPUs, modern AI systems require coordination between multiple types of processors. CPUs and IPUs together provide the foundation for managing these complex systems, ensuring efficient operation across training, inference, and deployment stages. The partnership builds on a long-standing relationship between Intel and Google and aligns with ongoing industry trends toward open and scalable infrastructure. As AI adoption continues to expand, this type of collaboration is expected to play a key role in shaping the next generation of cloud platforms. Driving Performance and Efficiency at Scale "AI is reshaping how infrastructure is built and scaled," said Lip-Bu Tan, CEO of Intel. "Scaling AI requires more than accelerators - it requires balanced systems. CPUs and IPUs are central to delivering the performance, efficiency and flexibility modern AI workloads demand." "CPUs and infrastructure acceleration remain a cornerstone of AI systems -- from training orchestration to inference and deployment," said Amin Vahdat, SVP & Chief Technologist, AI Infrastructure, Google. "Intel has been a trusted partner for nearly two decades, and their Xeon roadmap gives us confidence that we can continue to meet the growing performance and efficiency demands of our workloads." Building the Foundation for the Next Wave of AI The expanded collaboration reflects a shared commitment to advancing open, scalable infrastructure for the AI era. 
By combining general-purpose compute with purpose-built infrastructure acceleration, Intel and Google are enabling a more balanced approach to AI system design - one that improves utilization, reduces complexity and scales more efficiently. Together, the companies are strengthening the foundation for the next generation of AI-driven cloud services -- supporting continued innovation across enterprises, developers and users worldwide.
[11]
Intel and Google announce multi-year collaboration to advance AI infrastructure
On Thursday, Intel and Google announced a multi-year collaboration to advance AI infrastructure. Under this program, Google will continue to deploy Intel Xeon systems for its AI and cloud infrastructure going forward. Moreover, both companies agreed to work together on the expanded co-development of custom silicon, especially targeted towards AI applications. According to Intel's announcement, this collaboration is essential for the future of AI infrastructure, as Google and Intel will work together to improve performance and energy efficiency, and to optimize costs for Google's AI and cloud deployments. In simple terms, both tech giants are going to work together to best capitalize on the AI boom. The outlined program has two major parts: first, Google will continue to use Intel's Xeon processors for its Cloud, AI, inference, and general-purpose workloads. This will include both current Xeon models and future models that will likely be developed with Google's specific needs in mind. The second aspect of the deal is a bit more interesting. Intel states that the two companies will co-develop custom ASIC-based infrastructure processing units (IPUs) targeted squarely at AI applications. One can think of these "IPUs" as Intel's response to AI-specific compute products that are making their way into the mainstream, such as Arm's AGI CPU. From what we gather, these ASIC-based IPUs will work hand in hand with host CPUs and offload certain networking, security, and storage functions to improve efficiency at scale. Regardless, it seems Intel is sticking with x86 in its data centers and has no plans to change instruction sets amid rising competition. Both Intel and Google expressed confidence in the future of the Xeon platform. It remains to be seen how this partnership affects the AI ecosystem moving forward, and whether it will elicit a response from competitors such as OpenAI, AMD, and Arm. By the looks of it, however, Google seems pretty locked in with Intel for the foreseeable future when it comes to its internal infrastructure.
[12]
US Stocks: Intel and Google to double down on AI CPUs with expanded partnership
Intel and Google have expanded their partnership to advance the use of artificial intelligence-focused central processing units and to develop custom infrastructure processors, as shifting use of AI drives renewed demand for traditional computing chips. Companies are increasingly moving away from using AI for training models to deploying them, fueling the need for generalist CPU chips designed to handle heavy workloads. Under the agreement, announced on Thursday, Alphabet's Google unit will continue to deploy Intel's Xeon processors that support a broad range of workloads such as inference and general-purpose computing. The company will also use Intel's latest Xeon 6 chips. Intel and Google will also expand the co-development of custom infrastructure processing units (IPUs), which can handle tasks traditionally managed by the CPU, enabling more efficient computing. "Scaling AI requires more than accelerators - it requires balanced systems. CPUs and IPUs are central to delivering the performance, efficiency and flexibility modern AI workloads demand," said Intel CEO Lip-Bu Tan. Surging demand for agentic AI systems - which perform complex, multi-step operations beyond simple chatbot functionality - has boosted the requirement for significantly more CPU processing power. The surge in demand for CPUs could help Intel to strengthen its balance sheet and acquire new customers after the chip manufacturer lost market share to rivals during the early years of the AI boom. The company said on Tuesday it will join Elon Musk's Terafab AI chip complex project with SpaceX and Tesla to power the billionaire's robotics and data center ambitions. Intel also plans to take full ownership of its Ireland manufacturing facility, where it makes Xeon server processors, by buying back the stake it had sold to Apollo Global Management.
[13]
Intel Stock Rises On Google AI Infrastructure Partnership - Intel (NASDAQ:INTC)
Intel stock is trending higher. What's pulling INTC shares higher?
AI Infrastructure Partnership
Intel and Google said the collaboration will focus on advancing AI and cloud infrastructure, with Intel's Xeon processors continuing to power Google Cloud workloads across AI, inference and general-purpose computing. The companies will align across multiple generations of Intel Xeon processors to improve performance, energy efficiency and total cost of ownership across Google's global infrastructure.
Expansion Of Custom IPUs
As part of the agreement, Intel and Google are expanding co-development of custom ASIC-based infrastructure processing units (IPUs), designed to offload networking, storage and security tasks from CPUs. The companies said IPUs can improve utilization, increase efficiency and enable more predictable performance across large-scale AI environments.
Focus On Scalable AI Systems
Intel said the collaboration reflects a broader shift toward more complex, heterogeneous AI systems, where CPUs and specialized accelerators work together to deliver performance and scalability. CEO Lip-Bu Tan said, "Scaling AI requires more than accelerators - it requires balanced systems. CPUs and IPUs are central to delivering the performance, efficiency and flexibility modern AI workloads demand." Google's AI infrastructure lead Amin Vahdat added that Intel's Xeon roadmap supports the company's ability to meet growing performance and efficiency needs.
Intel Shares Pop
INTC Price Action: At the time of publication, Intel shares are trading 0.73% higher at $59.38, according to data from Benzinga Pro.
[14]
Intel stock gains after Google partnership on AI infrastructure By Investing.com
Investing.com -- Intel Corporation (NASDAQ:INTC) rose 1% Thursday following the announcement of a multiyear collaboration with Google to advance AI and cloud infrastructure, focusing on Intel Xeon processors and custom infrastructure processing units. Under the agreement, Intel Xeon processors will continue powering Google Cloud infrastructure across AI, inference and general-purpose workloads. The companies will also expand co-development of custom ASIC-based IPUs designed to improve efficiency, utilization and performance at scale. Google Cloud currently deploys Intel Xeon processors across its workload-optimized instances, including the latest Intel Xeon 6 processors powering C4 and N4 instances. These platforms support workloads ranging from large-scale AI training coordination to latency-sensitive inference and general-purpose computing. The custom ASIC-based IPUs will offload networking, storage and security functions from host CPUs, improving utilization and enabling more predictable performance across hyperscale AI environments. By handling infrastructure tasks traditionally managed by CPUs, the IPUs aim to unlock greater compute capacity and allow cloud providers to scale more efficiently. "AI is reshaping how infrastructure is built and scaled," said Lip-Bu Tan, CEO of Intel. "Scaling AI requires more than accelerators - it requires balanced systems. CPUs and IPUs are central to delivering the performance, efficiency and flexibility modern AI workloads demand." Amin Vahdat, SVP & Chief Technologist, AI Infrastructure at Google, commented that CPUs and infrastructure acceleration remain a cornerstone of AI systems, adding that Intel has been a trusted partner for nearly two decades. The collaboration aligns Intel and Google across multiple generations of Intel Xeon processors to improve performance, energy efficiency and total cost of ownership across Google's global infrastructure. This article was generated with the support of AI and reviewed by an editor. For more information see our T&C.
[15]
Intel and Google to double down on AI CPUs with expanded partnership
April 9 (Reuters) - Intel and Google have expanded their partnership to advance the use of artificial intelligence-focused central processing units and to develop custom infrastructure processors, as shifting use of AI drives renewed demand for traditional computing chips. Companies are increasingly moving away from using AI for training models to deploying them, fueling the need for generalist CPU chips designed to handle heavy workloads. Under the agreement, announced on Thursday, Alphabet's Google unit will continue to deploy Intel's Xeon processors that support a broad range of workloads such as inference and general-purpose computing. The company will also use Intel's latest Xeon 6 chips. Intel and Google will also expand the co-development of custom infrastructure processing units (IPUs), which can handle tasks traditionally managed by the CPU, enabling more efficient computing. "Scaling AI requires more than accelerators - it requires balanced systems. CPUs and IPUs are central to delivering the performance, efficiency and flexibility modern AI workloads demand," said Intel CEO Lip-Bu Tan. Surging demand for agentic AI systems - which perform complex, multi-step operations beyond simple chatbot functionality - has boosted the requirement for significantly more CPU processing power. The surge in demand for CPUs could help Intel to strengthen its balance sheet and acquire new customers after the chip manufacturer lost market share to rivals during the early years of the AI boom. The company said on Tuesday it will join Elon Musk's Terafab AI chip complex project with SpaceX and Tesla to power the billionaire's robotics and data center ambitions. Intel also plans to take full ownership of its Ireland manufacturing facility, where it makes Xeon server processors, by buying back the stake it had sold to Apollo Global Management. (Reporting by Zaheer Kachwala in Bengaluru; Editing by Sriraj Kalluvila)
Google and Intel announced an expanded multi-year collaboration to advance AI infrastructure using Xeon processors and custom infrastructure processing units. The partnership addresses growing CPU demand as companies shift from AI training to deployment, with Google Cloud continuing to use Intel's latest Xeon 6 chips for AI workloads alongside co-developed custom IPUs that offload networking and security tasks.
Google and Intel announced an expanded multi-year collaboration on Thursday that deepens their longstanding relationship in AI infrastructure development[1]. Google Cloud will continue deploying Intel's Xeon processors, including the latest Xeon 6 chips, for AI, cloud, and inference tasks across its data centers[2]. The partnership comes at a critical moment as the industry faces a growing shortage of CPUs, driven by companies shifting from AI training to deployment phases that require more general-purpose computing power[5].
The companies will expand co-development of custom Infrastructure Processing Units (IPUs), building on a chip development partnership that started in 2021[1]. These custom ASIC-based IPUs are designed to offload networking, storage, and security functions from host CPUs, enabling data center efficiency improvements[3]. Google previously deployed Intel's Mount Evans IPU alongside its C3 instances in 2022, and the next generation is expected to deliver significantly faster performance than the current 200 Gbps capabilities[3]. By combining x86 architecture with high single-thread performance and custom-built infrastructure processing, Intel Xeon platforms will become more competitive in Google's highly customized environments[2].
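To make the offloading argument concrete, the back-of-envelope sketch below models how much host-CPU capacity an IPU could reclaim for tenant and AI workloads. The core count and overhead fractions are illustrative assumptions, not figures disclosed by Intel or Google.

```python
# Back-of-envelope model of host-CPU capacity reclaimed by offloading
# infrastructure work (networking, storage, encryption) to an IPU.
# The core count and overhead fractions below are illustrative
# assumptions, not figures from Intel or Google.

def tenant_capacity(total_cores: int, infra_overhead_fraction: float) -> float:
    """Cores left for tenant/AI workloads after infrastructure overhead."""
    return total_cores * (1.0 - infra_overhead_fraction)

CORES = 128  # hypothetical dual-socket host

# Without an IPU: assume roughly a quarter of host cycles go to packet
# processing, storage management, encryption, and virtualization bookkeeping.
without_ipu = tenant_capacity(CORES, 0.25)

# With an IPU: most of that overhead runs on the card; assume about 5% remains.
with_ipu = tenant_capacity(CORES, 0.05)

print(f"Tenant-usable cores without IPU: {without_ipu:.0f}")        # 96
print(f"Tenant-usable cores with IPU:    {with_ipu:.0f}")           # 122
print(f"Relative capacity gain: {with_ipu / without_ipu - 1:.1%}")  # ~26.7%
```

Even a modest per-host gain of this kind compounds across a hyperscale fleet, which is the utilization case both companies cite for pairing Xeon CPUs with custom IPUs.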
Intel CEO Lip-Bu Tan emphasized that "scaling AI requires more than accelerators -- it requires balanced systems. CPUs and IPUs are central to delivering the performance, efficiency and flexibility modern AI workloads demand"[1]. While GPUs remain essential for developing and training AI models, CPUs are crucial for running AI models and managing general AI infrastructure[1]. Surging demand for agentic AI systems, which perform complex, multi-step operations beyond simple chatbot functionality, has boosted requirements for significantly more CPU processing power[5]. Google's Amin Vahdat, SVP and Chief Technologist for AI Infrastructure, noted that "CPUs and infrastructure acceleration remain a cornerstone of AI systems -- from training orchestration to inference and deployment"[2].
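As a rough illustration of why agentic workloads shift demand toward CPUs, the sketch below separates the accelerator-bound model call from the CPU-bound orchestration around it. The function names and loop structure are hypothetical stand-ins, not any API from Google or Intel.

```python
# Illustrative sketch of why agentic workloads lean on CPUs: the model's
# forward pass runs on an accelerator (GPU/TPU), while every step of the
# surrounding loop (prompt assembly, tool execution, parsing, bookkeeping)
# is ordinary host-CPU work. All functions are hypothetical stand-ins.

def call_model_on_accelerator(prompt: str) -> str:
    """Stand-in for an inference request served by a GPU/TPU backend."""
    return f"PLAN: look up '{prompt[:24]}'"

def run_tool_on_cpu(plan: str) -> str:
    """Stand-in for CPU-side work: tool execution, parsing, data movement."""
    return f"RESULT for [{plan}]"

def agent_loop(task: str, max_steps: int = 3) -> list:
    """One user request fans out into several model calls plus CPU-side work."""
    transcript = []
    context = task
    for _ in range(max_steps):
        plan = call_model_on_accelerator(context)       # accelerator-bound
        observation = run_tool_on_cpu(plan)             # CPU-bound
        context = f"{context}\n{plan}\n{observation}"   # CPU-bound bookkeeping
        transcript.append(observation)
    return transcript

if __name__ == "__main__":
    for step in agent_loop("summarize the Intel and Google announcement"):
        print(step)
```

Every extra step in such a loop adds CPU-side parsing, tool execution, and bookkeeping per user request, which is the dynamic the sources describe when they tie agentic AI to renewed CPU demand.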
The deal represents a critical win for Intel as it attempts to capitalize on the AI infrastructure buildout under CEO Lip-Bu Tan. Xeon once commanded over 99% market share, but Intel has lost ground to rivals like AMD and in-house efforts by hyperscalers, including Google's own Armv9-based Axion processors. Intel's custom ASIC business grew more than 50% in 2025 and exited Q4 at an annualized revenue run rate above $1 billion[3]. Despite Google developing its own Arm-based CPU, many workloads running in Google's data centers need backwards compatibility with x86 or require the maximum single-thread performance offered by Intel Xeon CPUs[2]. Intel currently powers Google's C4 and N4 instances for large-scale AI training coordination, latency-sensitive inference, and general-purpose computing[2]. Vahdat confirmed that "Intel has been a trusted partner for nearly two decades, and their Xeon roadmap gives us confidence that we can continue to meet the growing performance and efficiency demands of our workloads". The partnership signals that offloading tasks from CPUs through specialized accelerators and IPUs will remain central to next-generation AI and cloud infrastructure strategies, even as the industry sees accelerating adoption of Arm-based processors for general-purpose and AI workloads[2].