Curated by THEOUTPOST
On Wed, 25 Sept, 12:06 AM UTC
3 Sources
[1]
Intel Xeon 6900P 'Granite Rapids' CPU series with up to 128 P-Cores announced
Intel on Tuesday introduced its Xeon 6 processors with Performance-cores (P-cores), alongside its Gaudi 3 AI accelerators. The launch advances Intel's push to deliver powerful AI systems with improved performance per watt and reduced total cost of ownership (TCO). The Xeon 6900P "Granite Rapids" P-core CPUs offer up to 128 cores, positioning them directly against AMD's EPYC lineup. This follows the June release of the Xeon 6700E series, which scales up to 144 cores, with a planned upgrade to 288 cores in early 2025.

The Xeon 6900 series uses a chiplet design, with up to five chiplets per P-core CPU. The compute dies are built on the "Intel 3" process node and contain the Redwood Cove P-cores and the integrated memory controller (IMC), while the I/O dies, based on the "Intel 7" process node, house the various I/O controllers and accelerator engines. The higher-end Xeon 6900 "Granite Rapids" CPUs use the LGA 7529 socket platform (Birch Stream), enabling 1S/2S configurations with up to 500W TDP per CPU.

Intel claims the Xeon 6900P delivers significant performance improvements over AMD's EPYC CPUs: a 60% improvement in performance per watt and a 30% reduction in TCO. In specific benchmarks, it shows a 34% boost in summarization, a 2.15x improvement in chatbot performance, and an 18% increase in translation tasks. Compared to AMD's EPYC Genoa and upcoming Turin chips, the Xeon 6900P delivers a 3.65x average gain in AI performance and excels in vector database and scalable vector search workloads. Intel launched five SKUs in the Xeon 6900P series. Overall, the Xeon 6900P lineup represents a strong comeback for Intel in the server market, aiming to compete closely with AMD's upcoming Turin CPUs.

Intel also announced the general availability of the Gaudi 3 AI accelerator, designed for large-scale generative AI.
With 64 Tensor Processor Cores (TPCs) and eight Matrix Multiplication Engines (MMEs), it accelerates deep neural network computations. It features 128GB of HBM2e memory and 24 x 200Gb Ethernet ports for scalability and performance, and it is fully compatible with PyTorch and Hugging Face models.

Intel and IBM have partnered to deploy Gaudi 3 AI accelerators on IBM Cloud, a collaboration that aims to reduce TCO for enterprises and improve AI scalability and performance. Intel is also working with Dell Technologies to co-engineer AI solutions, focusing on moving generative AI prototypes into production; Dell's retrieval-augmented generation (RAG) systems leverage Intel's technologies to address scalability and security challenges.

Intel's Tiber Developer Cloud offers early access to Xeon 6 and Gaudi 3 systems for developers to test AI workloads. Gaudi 3 clusters will be available for large-scale deployments next quarter, alongside Intel's SeekrFlow platform, which supports AI applications. Updated AI tools, including Intel Gaudi software and PyTorch 2.4, offer enhanced acceleration. Intel continues to pursue AI leadership through partnerships and advanced technology offerings, delivering flexible, scalable, and cost-effective AI solutions for enterprises.

Speaking about the launch, Justin Hotard, Intel Executive Vice President and General Manager of the Data Center and Artificial Intelligence Group, said: "Demand for AI is leading to a massive transformation in the data center, and the industry is asking for choice in hardware, software and developer tools."
[2]
Intel introduces top-end Xeon 6900P server processors with up to 128 cores - SiliconANGLE
Intel Corp. today introduced a new line of server processors, the Xeon 6900P series, designed for use in demanding environments such as artificial intelligence clusters. The product family promises about twice the performance per watt of the chipmaker's previous-generation silicon. According to Intel, the Xeon 6900P series is also significantly better at running AI workloads: chips in the lineup can complete some inference tasks 2.3 times faster than their predecessors.

"Demand for AI is leading to a massive transformation in the data center, and the industry is asking for choice in hardware, software and developer tools," said Justin Hotard, the executive vice president and general manager of Intel's Data Center and Artificial Intelligence Group.

Intel's Xeon server chip portfolio, of which the new Xeon 6900P series is part, implements two types of cores: one variety is optimized for power efficiency, while the other prioritizes performance. Whereas some Xeon chip families include a mix of performance- and efficiency-optimized cores, the new Xeon 6900P series features only the performance-optimized variety to boost processing speeds.

The flagship processor in the series, the Xeon 6980P, ships with 128 cores that operate at a base frequency of 2.0 gigahertz. They can nearly double that speed, to 3.9 gigahertz, for short periods when running demanding workloads. The cores are supported by a 504-megabyte L3 cache in which the chip stores data actively used by the applications it runs.

The Xeon 6900P series also includes four other processors with less computing capacity. They have 72 to 120 cores, configured with higher base frequencies than the flagship 6980P. The five chips in the series share a common design.
The cores in a 6900P processor are implemented on three different pieces of silicon, or chiplets, that also contain the chip's cache and certain related components. The cores can use the cache as a shared storage environment or split it up and keep their data in separate memory pools. Intel produces the chiplets with its latest Intel 3 manufacturing process, the second from the company to use extreme ultraviolet lithography, or EUV, technology. The process provides 18% better performance per watt than Intel's first-generation EUV implementation.

The three chiplets that contain a Xeon 6900P processor's cores and cache are integrated with two other semiconductor modules made using the earlier Intel 7 node. Those two modules help speed tasks such as compressing and encrypting data. They also contain the I/O, or input and output, circuits that allow the chip to connect to the other components of the server in which it's installed.

Another selling point of the Xeon 6900P series is that it allows servers to be equipped with MRDIMM memory, a faster version of the DDR5 technology widely used in data centers. MRDIMM promises up to 39% more bandwidth than earlier technologies. It also eases system building in certain respects: it comes in a so-called tall form factor, or TFF, configuration that doubles the maximum amount of memory a processor can accommodate, and it does so without requiring complex chip packaging components that increase manufacturing costs.

Alongside the debut of the Xeon 6900P series, Intel today officially launched the Gaudi 3 machine learning accelerator that it introduced in April. The chip is positioned as an alternative to Nvidia Corp.'s market-leading graphics processing units. Intel says the Gaudi 3 can perform inference 30% faster than Nvidia's previous-generation H200 GPU. Under the hood, Intel's new AI chip comprises two sets of computing modules.
It features eight so-called MME modules optimized to run relatively simple machine learning tasks. There are also 64 TPC units, which are designed to power advanced AI workloads such as large language models. Intel has published a reference architecture for an AI appliance that can hold up to 256 Gaudi 3 chips. According to the company, companies with advanced requirements may link together multiple such appliances into a single cluster. AI clusters assembled in this manner can be equipped with up to 8,000 Gaudi 3 chips.
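The scale-out numbers above (up to 256 Gaudi 3 chips per reference appliance, clusters of up to 8,000 chips) imply a simple piece of capacity arithmetic. A minimal sketch, using only the appliance size and chip counts quoted in the article (the helper name is hypothetical):

```python
# Rough scale-out math for the Gaudi 3 reference architecture described
# above: up to 256 accelerators per appliance, and clusters of up to
# roughly 8,000 chips built by linking appliances together.
import math

CHIPS_PER_APPLIANCE = 256  # per Intel's published reference architecture

def appliances_needed(total_chips: int) -> int:
    """Minimum number of appliances required to host total_chips accelerators."""
    return math.ceil(total_chips / CHIPS_PER_APPLIANCE)

print(appliances_needed(8000))  # 32 appliances for an 8,000-chip cluster
print(appliances_needed(256))   # a single fully populated appliance
```

So an 8,000-chip cluster corresponds to a few dozen fully populated appliances networked together.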
[3]
Intel Launches Xeon 6900P Series
Building off the launch earlier this year of the first Xeon 6 processors, the Xeon 6700E "Sierra Forest" parts, today Intel is taking the wraps off the much anticipated Xeon 6900P "Granite Rapids" processors. Whereas Sierra Forest is optimized for power efficiency and core density, the Intel Xeon 6 P-core processors are optimized for per-core performance and have shown some very strong generational uplift -- and against the AMD competition -- as we'll show today in the first Xeon 6980P Linux benchmarks.

Intel held briefings in Oregon last week to go over the initial Xeon 6900P processors as part of their Enterprise Tech Tour. A lot of the Xeon 6 information was previously disclosed by Intel, but they also revealed Granite Rapids benchmark results, the SKU table for the Xeon 6900P processors, and more. Intel is claiming up to 5.5x higher AI inferencing performance over AMD EPYC and up to 2.1x higher HPC performance over AMD EPYC. With the very lofty AI inferencing claims, it comes down to software being able to effectively leverage Advanced Matrix Extensions (AMX). As we have shown with the prior Xeon Sapphire Rapids / Emerald Rapids processors, AMX can make quite an impact for capable AI workloads. Intel's numbers show big generational gains going from the prior Emerald Rapids processors to Granite Rapids. I ended up being very impressed with the Granite Rapids performance in the lab and have my own benchmarks to share today as well. Particularly for HPC and technical computing workloads, Granite Rapids is especially strong.

Xeon 6900P processors support up to DDR5-6400 memory, up to 8800 MT/s MRDIMMs, up to 128 cores, six UPI 2.0 links, up to 96 lanes of PCI Express 5.0, and L3 cache sizes up to 504MB. MRDIMMs with Xeon 6900P are quite exciting for really helping increase system memory bandwidth... Check out my benchmark results for a ton more benchmarks. I'll have DDR5-6400 vs. MRDIMM-8800 benchmarks up in the coming days on Phoronix.
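The DDR5-6400 vs. MRDIMM-8800 comparison above comes down to simple peak-bandwidth arithmetic across the platform's 12 channels. A back-of-the-envelope sketch, assuming a 64-bit (8-byte) data path per channel and ignoring protocol overhead (the helper name is illustrative, not any official tool):

```python
# Theoretical peak memory bandwidth for a 12-channel Xeon 6900P platform,
# comparing DDR5-6400 against MRDIMM-8800. Peak only: real sustained
# bandwidth is lower once refresh, turnarounds, and protocol overhead bite.

def peak_bandwidth_gbs(channels: int, mt_per_s: int, bytes_per_transfer: int = 8) -> float:
    """Theoretical peak bandwidth in GB/s (decimal): channels * MT/s * bytes."""
    return channels * mt_per_s * 1e6 * bytes_per_transfer / 1e9

ddr5 = peak_bandwidth_gbs(12, 6400)    # 614.4 GB/s
mrdimm = peak_bandwidth_gbs(12, 8800)  # 844.8 GB/s
uplift = mrdimm / ddr5 - 1             # 0.375 -> 37.5% more peak bandwidth

print(f"DDR5-6400:   {ddr5:.1f} GB/s")
print(f"MRDIMM-8800: {mrdimm:.1f} GB/s")
print(f"Uplift:      {uplift:.1%}")
```

The ~37.5% uplift from the transfer-rate ratio alone (8800/6400) lines up with the "up to 39% more bandwidth" figure quoted for MRDIMM elsewhere in this roundup.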
The Intel Xeon 6900P series features CXL 2.0 support. Intel engineers have also been doing much of the CXL enablement upstreaming within the Linux kernel.

And what most of you are eager to see... the Xeon 6900P SKU table. It's much smaller than we are used to seeing for Intel Xeon launches, where there can be dozens of different SKUs. Keep in mind though this is just the Xeon 6900P line-up; the Xeon 6700P and others will come later. The flagship Granite Rapids processor is the Xeon 6980P, offering 128 cores with a 2.0GHz base clock, a 3.9GHz maximum turbo clock, and a 3.2GHz all-core turbo. The Xeon 6980P has 504MB of L3 cache and a 500 Watt TDP rating. The Xeon 6900P series currently bottoms out at the Xeon 6960P with 72 cores, an all-core turbo of 3.8GHz, and 432MB of L3 cache, while maintaining a 500 Watt TDP. All of the Xeon 6900P processors support up to 12-channel DDR5-6400 / MRDIMM-8800 memory, 2S scalability, 4/4/4/4 DSA / IAA / QAT / DLB accelerators, 1024 TDX keys, six UPI links, and 96 PCIe lanes. Nice seeing all of the uniformity between the Xeon 6900P models for a pleasant change.

Intel hasn't publicly disclosed Xeon 6900P pricing as of writing. Then again, pricing can be highly volatile, and the major CSPs/hyperscalers often see steep discounts. So for now there's nothing to share on the pricing front.

Intel barely talked at all about their accelerators with Granite Rapids -- quite a stark difference from when they were introduced with the Sapphire Rapids launch. The accelerators are there, but the software ecosystem support remains less than ideal, and the performance benefits can vary greatly depending upon the software and data set size. From what I'm hearing, the Intel accelerator story is most convincing around networking/telco and select edge use-cases. Another reason the IAA / DSA accelerators may have received little coverage during the Granite Rapids event... they can't be assigned securely to VMs.
Back in August I wrote about how "Intel's Current IAA & DSA Accelerators Aren't Safe For VMs Due To A Security Issue." In there I noted that the new device IDs for marking the IAA and DSA accelerators as secure are just for Diamond Rapids and Granite Rapids D. Indeed, I was able to confirm that the Xeon 6900P Granite Rapids processors suffer from this same security issue as Sapphire Rapids / Emerald Rapids / Sierra Forest. So without being able to assign the accelerators to VMs, there's likely not much of a cloud play this generation for the accelerator IP, at least for public CSPs.

Early next year Intel is still on pace to introduce the Xeon 6900E series Sierra Forest processors at up to 288 cores per socket, as well as the Xeon 6700P / Xeon 6500P / Xeon 6300P Granite Rapids processors and the Xeon 6 SoC Granite Rapids D offering.
Intel has announced its latest high-performance server processors, the Xeon 6900P series, featuring up to 128 cores and significant performance improvements. This launch marks a major step in Intel's efforts to compete in the data center market.
Intel has officially unveiled its latest addition to the server processor market, the Xeon 6900P series, codenamed "Granite Rapids." This new line of CPUs represents a significant advancement in Intel's data center offerings, aiming to solidify its position in the highly competitive server market [1].
The flagship model of the Xeon 6900P series boasts an impressive 128 cores, a substantial increase from previous generations. This leap in core count is accompanied by a host of other improvements, including 12-channel memory support and 96 PCIe Gen5 lanes [3]. The processors are built on Intel's latest manufacturing process, promising enhanced efficiency and performance.
Intel has incorporated several cutting-edge technologies into the Xeon 6900P series. These include Advanced Matrix Extensions (AMX) for AI workloads, the Data Streaming Accelerator (DSA) for improved data movement, and Quick Assist Technology (QAT) for cryptography and data compression [1]. These features are designed to cater to the evolving needs of modern data centers, particularly in areas such as artificial intelligence and machine learning.
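Software only benefits from the AMX units mentioned above if it detects and dispatches to them. On Linux, AMX support surfaces as the `amx_tile`, `amx_bf16`, and `amx_int8` flags in `/proc/cpuinfo`; a minimal sketch of a detection helper follows (the function is a hypothetical illustration, not an official Intel API):

```python
# Detect AMX tile support from /proc/cpuinfo-style text. Frameworks such
# as PyTorch do this kind of feature check internally before selecting
# AMX-accelerated kernels; this standalone parser just shows the idea.

def has_amx(cpuinfo_text: str) -> bool:
    """Return True if the cpuinfo 'flags' line advertises AMX tile support."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            return "amx_tile" in flags
    return False

# Example against a snippet of a Granite Rapids-style cpuinfo dump:
sample = "flags\t\t: fpu sse2 avx512f amx_bf16 amx_tile amx_int8"
print(has_amx(sample))  # True

# On a live system you would pass the real file contents instead:
# has_amx(open("/proc/cpuinfo").read())
```

The check matters in practice: the big AI inferencing claims in this roundup assume the software stack actually routes matrix math through AMX rather than plain AVX-512.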
Early benchmarks suggest that the Xeon 6900P series offers significant performance improvements over its predecessors. Intel claims roughly double the performance per watt of the previous generation of Xeon processors [2]. Independent tests have shown promising results, with the new CPUs demonstrating strong capabilities in various workloads, including scientific computing and server-side Java applications [3].
The launch of the Xeon 6900P series is seen as Intel's response to increasing competition in the server processor market, particularly from AMD's EPYC processors. With this release, Intel aims to reclaim its leadership position in the data center space, offering a compelling option for enterprises looking to upgrade their server infrastructure [2].
Despite the increased core count and performance, Intel has placed a strong emphasis on energy efficiency with the Xeon 6900P series. The company has implemented various power-saving features and optimizations to ensure that the processors deliver high performance without excessive power consumption, addressing the growing demand for energy-efficient data center solutions [1].
Intel has announced that the Xeon 6900P processors will be available through major server manufacturers and system integrators. The company is working closely with software vendors to ensure optimized support for the new processors, enabling customers to fully leverage the capabilities of these advanced CPUs in their existing and future applications [2].
© 2025 TheOutpost.AI All rights reserved