Intel unveils Xeon 6+ with 288 cores on 18A process, targeting edge AI and 6G networks


Intel introduced its Xeon 6+ processor, codenamed Clearwater Forest, built on the advanced 18A process with 288 Darkmont efficiency cores. The chip targets AI inference in telecom networks and data centers, delivering 38% lower rack power and 60% better performance per watt. Major carriers including Ericsson, Vodafone, and Rakuten Mobile are already deploying the technology for 5G and early 6G infrastructure.

Intel Debuts Advanced 18A Process in Data Center CPU

Intel announced its Xeon 6+ processor lineup at MWC 2026 in Barcelona, marking the commercial debut of the company's 18A process technology in data center workloads [1]. Codenamed Clearwater Forest, the new chips pack up to 288 cores into a single processor, representing the first server-grade CPU in the 1.8-nanometer class [1]. The launch signals a strategic shift for Intel toward efficiency and integration rather than raw clock speed, particularly for cloud infrastructure and telecom networks preparing for 6G [2].

Source: TechRadar


Multi-Tile Architecture Drives Unprecedented Core Density

Clearwater Forest employs a complex multi-tile architecture that combines 12 compute tiles, each containing 24 Darkmont efficiency cores fabricated on the 18A process [3]. The chiplet design connects these tiles using Foveros Direct 3D stacking and EMIB (Embedded Multi-Die Interconnect Bridge) links, the same packaging innovations used in Intel's high-end GPUs [1]. Two input/output tiles built on Intel 7 handle memory, PCIe, and network interfaces, while three base tiles fabricated on Intel 3 anchor the structure [1]. This disaggregated approach keeps data close to the cores while minimizing power consumption, a critical requirement for data center workloads where every watt matters [1].

Efficiency Gains Target AI Inference and Telecom Networks

Testing by Ericsson demonstrated that a single 288-core Xeon 6990E+ Clearwater Forest processor reduced runtime rack power by 38 percent while delivering more than 60 percent better performance per watt than a dual-socket 288-core Xeon 6780E Sierra Forest system [2]. Overall performance improved by 30 percent in the same comparison [2]. These efficiency metrics matter particularly for edge AI deployments, where power budgets are constrained and cooling infrastructure is limited.
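Performance-per-watt comparisons like Ericsson's divide useful throughput by power drawn, then compare the two systems' ratios. The sketch below shows that arithmetic; the sample numbers are hypothetical placeholders for illustration, not Intel's or Ericsson's published measurements.

```python
# Illustrative perf/watt comparison math. The inputs below are
# hypothetical placeholders, NOT published benchmark figures.

def perf_per_watt_gain(old_throughput, old_power_w,
                       new_throughput, new_power_w):
    """Fractional perf/watt improvement of the new system over the old."""
    old_eff = old_throughput / old_power_w
    new_eff = new_throughput / new_power_w
    return new_eff / old_eff - 1.0

# Hypothetical: 20% more throughput at 20% lower power.
gain = perf_per_watt_gain(100.0, 1000.0, 120.0, 800.0)
print(f"perf/watt improvement: {gain:.0%}")  # prints "perf/watt improvement: 50%"
```

Note that the article's 38-percent power and 60-percent perf/watt figures come from a cross-generation comparison of a single-socket chip against a dual-socket system, so the headline numbers are not expected to reconcile through this formula alone.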

Integrated Accelerators Enable AI Inference Without Dedicated GPUs

Intel equipped the Xeon 6+ with accelerators designed to handle AI inference and network processing directly on the CPU. Each chip includes Advanced Matrix Extensions (AMX) for AI workloads, QuickAssist Technology (QAT) for cryptographic and compression tasks, and vRAN Boost tailored for virtualized Radio Access Network (RAN) deployments [1]. A single Xeon 6+ CPU features 16 accelerators across its I/O tiles: four Intel Dynamic Load Balancers, four QAT units, four Data Streaming Accelerators, and four In-Memory Analytics Accelerators [3]. Kevork Kechichian, executive vice president of Intel's Data Center Group, emphasized that "AI in networks isn't 'CPU vs. GPU' -- it's right compute for the workload" [2]. This approach lets operators scale AI inference efficiently without redesigning server racks [1].

Major Carriers Commit to Deployment for 5G and 6G Networks

Rakuten Mobile is working with Intel to train and deploy AI models for low-latency RAN workloads on Xeon 6+ processors [2]. Vodafone has committed to adopting the chips for Open RAN and vRAN modernization projects across Europe [2]. Intel is also expanding its partnership with Ericsson to jointly develop "AI-native 6G solutions" that combine intelligent, programmable networks with advanced compute and real-time sensing capabilities [3]. The collaboration aims to advance high-performance, energy-efficient compute architectures both for AI-driven network optimization and for networks designed to carry AI workloads [3].

Platform Specifications Support Massive Scale

The Xeon 6+ platform maintains compatibility with existing Xeon sockets, easing deployment for system builders [1]. It supports 12 channels of DDR5 memory at up to 8,000 MT/s and up to 96 PCIe 5.0 lanes, 64 of which support Compute Express Link (CXL) 2.0 for coherent memory or device expansion [1]. Dual-socket configurations double available compute to 576 Darkmont cores in a single server [1]. The cache architecture was completely re-engineered: four Darkmont cores share a 4 MB L2 cache, and total last-level cache exceeds 1,152 MB [1]. This large cache lets hundreds of cores access frequently used data without heavy reliance on external memory bandwidth [1]. Cloud providers will be able to run hundreds of virtual machines on a single Xeon 6+ CPU [3]. Intel expects systems featuring Xeon 6+ processors to ship in the first half of 2026 [3].
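The headline figures above are consistent with simple arithmetic over the cited tile and cache layout. The sketch below checks them; the inputs are the article's own numbers, and the totals are back-of-the-envelope derivations, not independently verified specifications.

```python
# Back-of-the-envelope check of the core and L2 cache figures cited
# for Clearwater Forest. Inputs come from the article; totals are
# derived arithmetic, not verified specs.

COMPUTE_TILES = 12
CORES_PER_TILE = 24
CORES_PER_L2_CLUSTER = 4   # four Darkmont cores share one L2 slice
L2_PER_CLUSTER_MB = 4

cores = COMPUTE_TILES * CORES_PER_TILE                       # per socket
dual_socket_cores = 2 * cores                                # 2S server
l2_total_mb = (cores // CORES_PER_L2_CLUSTER) * L2_PER_CLUSTER_MB

print(cores, dual_socket_cores, l2_total_mb)  # prints "288 576 288"
```

The 288 MB of L2 is separate from the last-level cache; the article's ">1,152 MB" figure refers to the latter.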
