Curated by THEOUTPOST
On Thu, 17 Oct, 1:06 PM UTC
3 Sources
[1]
AMD and Intel Form a Rare Chip Coalition to Win the A.I. Arms Race
In an unexpected yet groundbreaking move, Intel and AMD -- longtime semiconductor rivals -- have joined forces in a bid to strengthen their foothold in the A.I. arms race. At this year's Lenovo Tech World event on Tuesday (Oct. 15) in Seattle, Wash., the two companies announced a coalition called the x86 Ecosystem Advisory Group. The group will act as a collaboration hub to ensure that upcoming AMD and Intel chips are "consistent and compatible" across computing applications and meet the growing demands of generative A.I. (genAI) -- whether in data centers, cloud platforms, or consumer devices. While the coalition is centered around AMD and Intel, it will seek insights and feedback from a slew of tech companies, including Broadcom, Dell, Google, Hewlett Packard Enterprise, Lenovo, Meta, Microsoft, Oracle and Red Hat.
"With the rise of genAI and advancements in system design, the demand for computing power is evolving faster than ever. This is the first time I and Lisa [Su, CEO of AMD] have agreed on something," Intel CEO Pat Gelsinger said during an onstage presentation, humorously acknowledging the collaboration. "But this time, it's for the greater good."
Despite their fierce competition in the CPU and GPU markets, Intel and AMD's history of working together on key computing standards like PCI, PCIe and USB sets a strong foundation for the latest initiative. x86 is a chip architecture that has been the backbone of modern computing for over 40 years. Intel processors like the Pentium and Core i3/i5/i7/i9 and AMD's Ryzen and Athlon are some of the most well-known x86-based CPUs. The new advisory group is intended to set a precedent for expanding the x86 architecture's influence across the tech landscape and enhancing its computing prowess for use cases where industry-wide standardization is crucial, particularly in genAI.
Founding members, including Broadcom CEO Hock Tan, termed the collaboration a "crossroads in the history of computing." In a video message played at the event, Google Cloud CEO Thomas Kurian said that the partnership will unlock "new levels of performance and efficiency."
"The future of A.I. is personalized and on-device," Gelsinger said during his presentation. "Rather than relying on cloud servers for data sharing, the next generation of A.I. will utilize powerful, centralized A.I. processing directly on devices, eliminating the dependency on internet connection while maintaining the same processing capabilities and responsiveness."
After Gelsinger's presentation, AMD CEO Lisa Su took the stage to unveil a new range of AMD EPYC 9005 Series CPUs and Instinct MI325X processors. Su claimed that the new processors will deliver 28 percent faster data processing, ensuring GPUs can handle even the most demanding A.I. workloads. She said Instinct processors have now been fine-tuned for popular genAI models like Stable Diffusion 3 and Meta Llama 3, supporting over a million models available through Hugging Face.
"With our 5th Gen AMD EPYC, we are pushing the boundaries of computational and A.I. workload handling capability in the market," Su said onstage. She added that leading A.I. models in the industry, such as OpenAI's GPT and Meta's Llama, are now powered by AMD's latest A.I. accelerators. She also announced AMD's collaboration with Lenovo on making A.I.-powered PCs to speed up the widespread adoption of personalized, hybrid A.I. technologies for consumers. The recent collaboration between Intel and AMD marks a strategic alignment between the two industry giants, to position them as the top producers of x86-based chips and outpace competitors like Nvidia. Interestingly, it appears Intel is now betting on AMD's recent success to remain at the core of computing innovation for years to come -- highlighting a significant shift in the dynamics of the tech world rivalry.
[2]
AMD, Intel Team Up to Challenge Qualcomm's ARM Push
In an unlikely turn of events, two chip rivals, AMD and Intel, have come together to form the x86 Ecosystem Advisory Group. The advisory group will focus on expanding the x86 ecosystem by simplifying software development and improving platform interoperability. Intel and AMD will provide developers with tools to create scalable solutions and identify architectural enhancements that can meet the demands of modern computing, such as AI workloads, custom chiplets, and advancements in 3D packaging.
"We are on the cusp of one of the most significant shifts in the x86 architecture and ecosystem in decades - with new levels of customisation, compatibility and scalability needed to meet current and future customer needs," said Intel chief Pat Gelsinger. "We proudly stand together with AMD and the founding members of this advisory group, as we ignite the future of compute, and we deeply appreciate the support of so many industry leaders."
"Establishing the x86 Ecosystem Advisory Group will ensure that the x86 architecture continues evolving as the compute platform of choice for developers and customers," said AMD chief Lisa Su.
AMD and Intel believe that x86 is still relevant in the era of AI. In an exclusive revelation at the Advancing AI 2024 event, an AMD executive told AIM that the company was the first to bring neural processors to the x86 environment. "In the x86 world, we introduced those first neural processors in 2023 with a product we call 'Phoenix Point' delivering 10 TOPS of neural processing performance and enabling several workloads, such as Windows Studio effects and many other third-party ISVs that were supporting those early chatbots and assistants on the device," shared the executive. 'Phoenix Point' is the world's first fully accelerated AI inference engine on x86 processor silicon, built on the new XDNA architecture.
This move by Intel to partner with AMD comes at a time when the company is fighting for survival. According to a recent report, Qualcomm is likely to wait until after the US presidential election in November before deciding whether to pursue an offer to buy Intel. By acquiring Intel, Qualcomm could strengthen its hold in the PC market and add Intel's x86-based Lunar Lake to its portfolio. Intel, meanwhile, appears in no hurry to move away from the x86 architecture, even as its competitors shift to ARM.
To advance the x86 advisory group, AMD and Intel garnered support from major PC manufacturers, including HP, Microsoft, Dell, and Lenovo.
During the unveiling of Lunar Lake, Gelsinger sought to end the ARM vs x86 debate, saying, "The final nail in the coffin of this discussion is that some claim x86 can't win on power efficiency. Lunar Lake busts this myth. This radical new SoC architecture and design delivers unprecedented power efficiency -- up to 40% lower power consumption than Meteor Lake, which was already very good."
According to Statista, in the third quarter of 2024, Intel processors accounted for 63 percent of x86 computer processor tests, while AMD processors represented 33 percent. When focusing solely on laptop CPUs, Intel is the clear winner, capturing 71 percent of laptop CPU benchmark results in the second quarter of 2024, while AMD processors made up 21 percent of the laptop CPUs tested.
Meanwhile, Qualcomm is challenging Intel with its ARM-based processors. The company recently launched Snapdragon X Elite, its latest ARM-based processor designed for Windows laptops, to compete with Apple's M-series and Intel's x86 processors. Microsoft recently introduced a new category called 'Copilot+ PC', which can run generative AI models directly on the device without relying on cloud support. Interestingly, during the announcement, Microsoft appeared to favour Qualcomm processors over Intel and AMD for AI capabilities. Qualcomm's Snapdragon X Elite and X Plus are set to launch in new Windows Surface PCs, along with offerings from Dell, Lenovo, HP, Asus, Acer, and other major OEMs in the coming months. These processors feature NPUs capable of 45 TOPS, slightly exceeding Microsoft's minimum requirement.
On the other hand, Intel claims that Lunar Lake processors are 30% faster than AMD chips and 68% faster than Qualcomm's offerings, although these claims have yet to be validated through real-world testing. Intel's Lunar Lake features 40 NPU tera operations per second (TOPS) and over 60 GPU TOPS, resulting in more than 100 platform TOPS.
"It does seem like Lunar Lake is the kiss of death for Snapdragon X Elite. Similar battery life, but with broad app compatibility of x86, and an actually usable GPU. However, long term though, I hope this isn't the death of Windows-on-ARM. It's always good to have more silicon vendors, and hence more competition," a user posted on Reddit.
Similarly, AMD recently launched the new Ryzen AI PRO 300 Series, based on the AMD 'Zen 5' architecture, which is AMD's implementation of the x86-64 instruction set. The processor offers over 50 NPU TOPS of AI processing power, exceeding Microsoft's Copilot+ AI PC requirements.
Much like Qualcomm, Apple, which previously used x86, has also moved to the ARM architecture. Last year, Apple signed a new deal with ARM for chip technology extending beyond 2040. Apple is already using ARM's V9 architecture for its latest M4 chips, which it announced in May.
The collaboration between AMD and Intel is intriguing, given that they are competitors in the AI data centre market as well. Recently, AMD launched the 5th Gen AMD EPYC processors, formerly codenamed 'Turin', which it markets as the world's best server CPUs for enterprise, AI, and cloud applications. At Advancing AI 2024, AMD claimed that its EPYC is better than Intel's Xeon. "We did a 50-50 comparison between general compute workloads and AI workloads, and EPYC still outperforms Xeon, even with AMX," said an AMD executive in an exclusive interview with AIM. The company claimed that AMD EPYC 9965 processor-based servers will significantly improve over Intel's 5th Gen Xeon 8592+ CPU-based servers: customers can expect 4X faster results in business applications like video transcoding, 3.9X quicker insights in scientific and HPC applications, and 1.6X better performance per core in virtualised environments.
However, earlier this year, Intel also announced its sixth-generation Xeon servers. The new processors are optimised for AI workloads. The new offerings include support for GenAI solutions like retrieval-augmented generation (RAG) and feature Intel Advanced Matrix Extensions (Intel AMX) to improve AI performance. Moreover, the company will release the complementary Xeon 6900E and 6700P series CPUs in Q1 2025. It will be interesting to see what customers choose and how long this new brewing friendship lasts.
Also, this is not the first time AMD has partnered with a rival. Recently, it teamed up with NVIDIA to provide EPYC CPUs that can be integrated into NVIDIA's GPU systems. "We've shown a 20% improvement in training and 15% improvement in inference when connecting EPYC CPUs to NVIDIA's H100 GPUs," said Ravi Kuppuswamy, senior vice president and general manager at AMD, at Advancing AI 2024, citing a Llama 3.1 inference workload on eight H100 GPUs, where the CPU provided significant value in large-scale GPU clusters.
[3]
AMD Makes NVIDIA GPUs EPYC
"We've shown a 20% improvement in training and 15% improvement in inference when connecting EPYC CPUs to NVIDIA's H100 GPUs." Even as competition soars between the two leading chip manufacturers, AMD and NVIDIA, a new partnership is quietly emerging between the two rivals in the AI and computing world. Few realise that AMD's EPYC CPUs are critical in powering NVIDIA's GPUs for large-scale AI workloads. Case in point: As AI models grow in complexity, the demand for both GPU and CPU performance is increasing, with AMD CPUs showing significant potential to enhance NVIDIA's GPUs. "We've shown a 20% improvement in training and 15% improvement in inference when connecting EPYC CPUs to NVIDIA's H100 GPUs," said Ravi Kuppuswamy, senior vice president & general manager at AMD. At the Advancing AI 2024 event, Kuppuswamy cited the Llama 3.1 inference model with 8 H100 GPUs, where the CPU provided significant value in large-scale GPU clusters. He added that this is a joint collaboration between AMD and NVIDIA, where they have identified the best EPYC CPUs to optimise the configuration between CPU and GPU. AIM noted this interesting synergy as the chip manufacturers partnered to integrate AMD's EPYC CPUs into NVIDIA's HGX and MGX GPU systems. It optimises AI and data center performance by leveraging AMD's high-core processors alongside NVIDIA's parallel computing GPUs, while promoting open standards for greater flexibility and scalability. "We don't want to force choices on our customers... We will continue to push open standards and interoperate with vendors across the industry," said Madhu Rangarajan, corporate VP at AMD, EPYC Products. He emphasised AMD's open approach, supporting diverse customer needs. AMD's 5th-generation EPYC processors, with NVIDIA's HGX and MGX GPU clusters, are likely to optimise the performance of data centers and enterprise tasks to the next level. "even the fiercest of rivals can come together when it benefits their customers," said AMD, underscoring the importance of this partnership in advancing AI and high-performance computing. Previously, AMD claimed its EPYC processors deliver twice the performance of NVIDIA's Grace Hopper Superchip across multiple data center workloads, showcasing significant advantages in general-purpose computing and energy efficiency. "Our EPYC processors provide a lower total cost of ownership due to their performance, energy efficiency, and extensive x86-64 software compatibility." It highlighted how NVIDIA's Arm-based CPUs lag in non-AI workloads compared to AMD's Zen 4 EPYC processors. "This is good news, as NVIDIA is currently using the vastly inferior Intel Xeon in its systems," A reddit user posted, suggesting that AMD should leverage its strengths to capture a larger share of HPC CPU. After NVIDIA, AMD Partners with Intel In an unlikely turn of events, AMD today partnered with Intel to create an x86 ecosystem advisory group bringing together technology leaders to establish the world's most widely used computing architecture. "We are on the cusp of one of the most significant shifts in the x86 architecture and ecosystem in decades - with new levels of customisation, compatibility and scalability needed to meet current and future customer needs," said Pat Gelsinger, CEO of Intel. AMD's chief, Lisa Su, said that this collaboration brings the industry together to pave the way for future architectural enhancements and extend the success of x86 for decades. 
"Establishing the x86 Ecosystem Advisory Group will ensure that the x86 architecture continues evolving as the compute platform of choice for developers and customers," she added. Both AMD and Intel believe that x86 is still relevant in the era of AI. an AMD executive told AIM at the Advancing AI 2024 event that the company was the first to bring neural processors to the x86 environment. "In the x86 world, we introduced those first neural processors in 2023 with a product we call 'Phoenix Point' delivering 10 TOPS of neural processing performance and enabling several workloads, such as Windows Studio effects and many other third-party ISVs that were supporting those early chatbots and assistants on the device," shared the executive.
AMD and Intel, longtime rivals in the semiconductor industry, have joined forces to create the x86 Ecosystem Advisory Group, aiming to strengthen their position in the AI market and challenge competitors like Qualcomm and Nvidia.
In a surprising turn of events, longtime semiconductor rivals AMD and Intel have formed an unprecedented alliance to strengthen their position in the rapidly evolving AI computing landscape. The two companies announced the creation of the x86 Ecosystem Advisory Group at this year's Lenovo Tech World event in Seattle [1].
The newly formed coalition aims to ensure that upcoming AMD and Intel chips are "consistent and compatible" across computing applications, particularly focusing on meeting the growing demands of generative AI (genAI) [1]. The group will seek insights and feedback from various tech giants, including Broadcom, Dell, Google, Hewlett Packard Enterprise, Lenovo, Meta, Microsoft, Oracle, and Red Hat [1].
Both AMD and Intel believe that the x86 architecture remains relevant in the AI era. AMD claims to have introduced the first neural processors to the x86 environment with their "Phoenix Point" product, delivering 10 TOPS of neural processing performance [2][3]. Intel's CEO, Pat Gelsinger, emphasized the importance of on-device AI processing, stating, "The future of AI is personalized and on-device" [1].
This collaboration comes at a crucial time as both companies face increasing competition from rivals like Qualcomm and Nvidia. Qualcomm has been making strides with its ARM-based processors, recently launching the Snapdragon X Elite for Windows laptops [2]. Meanwhile, AMD's EPYC CPUs have shown significant potential in enhancing Nvidia's GPUs for large-scale AI workloads, demonstrating a 20% improvement in training and 15% improvement in inference when connected to Nvidia's H100 GPUs [3].
During the event, AMD CEO Lisa Su unveiled a new range of AMD EPYC 9005 Series CPUs and Instinct MI325X processors, claiming 28% faster data processing and fine-tuned performance for popular genAI models like Stable Diffusion 3 and Meta Llama 3 [1]. Intel, on the other hand, touted its upcoming Lunar Lake processors, claiming they are 30% faster than AMD chips and 68% faster than Qualcomm's offerings [2].
The collaboration has garnered support from major PC manufacturers and tech companies. Broadcom CEO Hock Tan called it a "crossroads in the history of computing," while Google Cloud CEO Thomas Kurian stated that the partnership will unlock "new levels of performance and efficiency" [1].
This alliance marks a significant shift in the dynamics of the tech industry, with Intel and AMD setting aside their rivalry to focus on advancing x86 architecture and maintaining their dominance in the face of growing competition from ARM-based processors and specialized AI chips [1][2][3].
As the AI arms race intensifies, this collaboration between AMD and Intel could potentially reshape the landscape of AI computing, setting new standards for performance, efficiency, and compatibility in the years to come.
References
[1] AMD and Intel Form a Rare Chip Coalition to Win the A.I. Arms Race
[2] AMD, Intel Team Up to Challenge Qualcomm's ARM Push
[3] AMD Makes NVIDIA GPUs EPYC