Curated by THEOUTPOST
On Fri, 11 Oct, 12:04 AM UTC
4 Sources
[1]
AMD launches Epyc 9005 "Turin" processors with up to 192 Zen 5c cores
In a nutshell: AMD has launched the fifth generation of its Epyc server processors, codenamed "Turin." The lineup comprises 27 SKUs and introduces the new Zen 5 and Zen 5c core architectures. These chips will compete in the data center market against Intel's Granite Rapids and Sierra Forest offerings.

Launched under the Epyc 9005 branding, the 5th-gen Epyc family is compatible with AMD's SP5 socket, just like the Zen 4-based "Genoa" and "Bergamo" processors, and comes in two distinct designs. The first is the "Scale-Up" variant, which uses 4nm Zen 5 cores with up to 16 CCDs for optimal single-threaded performance. The second is the "Scale-Out" variant, which leverages the 3nm Zen 5c core design with up to 12 CCDs for improved multi-core throughput.

The lineup is led by the Epyc 9965, featuring 192 Zen 5c cores, 384 threads, a base clock of 2.25GHz, and a boost clock of up to 3.7GHz. It offers 384MB of L3 cache, carries a default TDP of 500W, and is priced at $14,813. The flagship Zen 5 product is the Epyc 9755, which boasts 128 cores, 256 threads, 512MB of L3 cache, a 2.7GHz base clock, a 4.1GHz boost clock, and a 500W TDP; it retails for $12,984. At the other end of the spectrum, the entry-level Turin processor is the Epyc 9015, featuring 8 Zen 5 cores, a 3.6GHz base clock, a 4.1GHz boost clock, a 125W TDP, and 64MB of L3 cache. It has an MSRP of $527.

AMD claims the Epyc 9965 offers up to 3.7x faster performance than the Xeon Platinum 8592+ in end-to-end AI workloads such as TPCx-AI (derivative). In generative AI models like Meta's Llama 3.1-8B, the Epyc 9965 is said to deliver 1.9x the throughput of the Xeon Platinum 8592+. According to AMD, the Zen 5 cores enable Turin to deliver significant performance gains over the previous generation: up to a 17 percent increase for Enterprise and Cloud platforms, and up to a 37 percent improvement for HPC and AI platforms.
The chips also feature boost frequencies of up to 5GHz (Epyc 9575F and 9175F) and AVX-512 support with the full 512-bit data path. The Epyc 9575F, AMD's purpose-built AI host node CPU, leverages its 5GHz boost clock to enable a 1,000-node AI cluster to handle up to 700,000 more inference tokens per second. Beyond the shift to Zen 5 core architecture, Turin introduces several key advancements, including support for up to 12 channels of DDR5-6400 MT/s memory, 6TB memory capacities per socket, and 128 PCIe 5.0/CXL 2.0 lanes. Another notable feature is Dynamic Post Package Repair (PPR) for x4 and x8 ECC RDIMMs, improving memory reliability.
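The 12-channel DDR5-6400 configuration implies a theoretical peak memory bandwidth that is easy to derive. A minimal sketch, using the channel count and transfer rate from the article plus the standard 8-byte DDR5 data bus (real-world throughput will be lower):

```python
# Back-of-the-envelope peak memory bandwidth for one Turin socket.
# 12 channels and DDR5-6400 are from the article; the 8-byte bus width
# is standard DDR5. Actual sustained bandwidth will be lower.
CHANNELS = 12             # memory channels per socket
TRANSFERS_PER_SEC = 6400  # MT/s for DDR5-6400
BYTES_PER_TRANSFER = 8    # 64-bit data bus per channel

peak_mb_s = CHANNELS * TRANSFERS_PER_SEC * BYTES_PER_TRANSFER  # MB/s
print(f"theoretical peak: {peak_mb_s / 1000:.1f} GB/s per socket")
```

That works out to roughly 614 GB/s per socket at the rated speed.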
[2]
AMD 5th Gen EPYC Turin CPUs Launched: Up To 37% IPC Increase, Up To 192 Cores, 500W TDP, 5 GHz Clocks & Significantly Outperforming Xeon
AMD's 5th Gen EPYC CPUs, codenamed Turin, are now official and bring major uplifts across the board with the Zen 5 core architecture.

AMD Zen 5 Now Launched For Data Centers With 5th Gen EPYC "Turin" Family: Up To 192 Cores, 500W TDPs and 5 GHz Clock Speeds

The day is finally here: it's the start of a new chapter of EPYC, with a brand-new core architecture that once again delivers substantial generational uplifts and further strengthens AMD's hold on the data center segment. Since launch, AMD's EPYC CPUs have amassed a 34% market-share hold in the server segment, up from 2% back in 2018. The AMD EPYC platform is used by some of the world's biggest tech companies, with over 950 cloud instances and over 350 OEM platforms, and now it's time for an upgrade.

Meet Turin, the 5th Gen EPYC CPU family, branded under the "EPYC 9005" series. The lineup sets out to achieve three goals: extend server CPU leadership, propel efficient modernization, and offer an end-to-end AI leadership platform. To achieve these goals, AMD is using its latest Zen 5 core architecture to power the family; we have covered the Zen 5 architecture in detail here.

For the 5th Gen EPYC lineup, there will be two solutions. The 4nm version of Turin with up to 16 "Zen 5" CCDs, offering up to 128 cores and 256 threads, is referred to as the "Scale-Up" variant, while the "Scale-Out" variant utilizes the 3nm "Zen 5C" cores with up to 12 CCDs, offering up to 192 cores and 384 threads. Turin packs up to 17 chiplets with a total of 150 billion transistors for the full chip. The CPUs come with AVX-512 support with a full 512b data path and up to 5 GHz clock speeds, and can be configured in 1P or 2P servers. In terms of IPC improvement, AMD states that Zen 5 delivers "exceptional uplifts" over the previous generation, with up to a 17% increase for Enterprise and Cloud platforms and up to a 37% increase for HPC and AI platforms.
The lineup scales from 8 cores up to 192 cores, and TDPs scale from 155W up to 500W. As for the platform itself, AMD is relying on the same SP5 socket for both variants of Turin, which makes it an easy drop-in upgrade from the previous Genoa and Bergamo "Zen 4" releases. The platform still offers a 12-channel memory solution but now supports DDR5 speeds of up to 6400 MT/s with ECC, 6 TB capacities per socket, and 128 PCIe 5.0/CXL 2.0 lanes. New on Turin is support for Dynamic Post Package Repair (PPR) for x4 and x8 ECC RDIMMs. On the security front, you get Trusted I/O, FIPS 140-3 certification in process, and Hardware Root-of-Trust support.

The 5th Gen AMD EPYC "Turin" lineup consists of a total of 27 SKUs, including the EPYC 9965 as the 192-core "Zen 5C" flagship, the EPYC 9755 as the 128-core "Zen 5" flagship, and the EPYC 9575F "Zen 5" chip as the first 5 GHz EPYC SKU.

Moving to the flagship SKUs, the AMD EPYC 9965 features 192 cores, 384 threads, and 384 MB of L3 cache, with a 2.25 GHz base clock and a 3.7 GHz boost clock. The CPU is configured at a default TDP of 500W and priced at $14,813, significantly lower than Intel's top Xeon 6900P offering, which is priced at $17,800 US. That makes Intel's 128-core flagship roughly 20% more expensive (or, equivalently, AMD's part about 17% cheaper). The EPYC 9755, based on the standard Zen 5 cores, comes configured with 128 cores, 256 threads, 512 MB of L3 cache, a base clock of 2.7 GHz, a boost clock of 4.1 GHz, and a TDP of 500W. It carries a price of $12,984 US, again much lower than Intel's 6980P chip, marking a -27% difference.

AMD also has several frequency-optimized variants in 64, 48, 32, 24, and 16-core flavors. The top part is the EPYC 9575F, which offers 64 cores, 128 threads, 256 MB of L3 cache, a 400W TDP, a base clock of 3.3 GHz, and a boost clock of 5.0 GHz. This chip is priced at $11,791 US.
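The quoted price gaps depend on the direction of comparison, which is worth making explicit. A quick arithmetic check using the 1KU list prices from this article (illustrative only):

```python
# Price-gap arithmetic using the list prices quoted in the article.
epyc_9965 = 14_813   # 192-core EPYC 9965
epyc_9755 = 12_984   # 128-core EPYC 9755
xeon_6980p = 17_800  # Intel's 128-core flagship Xeon

intel_premium = (xeon_6980p - epyc_9965) / epyc_9965 * 100   # Intel costs ~20% more
amd_discount = (epyc_9965 - xeon_6980p) / xeon_6980p * 100   # AMD is ~17% cheaper
delta_9755 = (epyc_9755 - xeon_6980p) / xeon_6980p * 100     # the "-27%" figure

print(f"{intel_premium:+.0f}% / {amd_discount:+.0f}% / {delta_9755:+.0f}%")
```

So "17%" describes how much cheaper the EPYC 9965 is relative to Intel's price, while Intel's chip is about 20% more expensive relative to AMD's; the -27% figure for the EPYC 9755 checks out either way it is stated.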
Lastly, we should mention the entry-level 8-core SKU, the EPYC 9015, which is configured with a 125W TDP, a base clock of 3.6 GHz, a boost clock of 4.1 GHz, and 64 MB of L3 cache. This chip will cost $527 US. The full lineup and their respective specs/prices can be seen in the table below: AMD 5th Gen EPYC Turin CPU Lineup Specs:

Now let's talk about performance. Throughout its slides, AMD compares its 5th Gen EPYC CPUs against the 4th Gen EPYC and 5th Gen Intel Xeon lineups. The red team starts by presenting a world record in the SPEC CPU 2017 Integer Throughput tests, leading Intel by 2.7x and 4th Gen EPYC by almost 60%. In terms of per-core performance, measured using 32-core parts in the SPECrate 2017 INT base test, the 5th Gen EPYC CPUs deliver a 40% improvement over Intel's 5th Gen Emerald Rapids and a 27% uplift over the 4th Gen EPYC SKU. AMD even highlights strong performance at the same licensing cost in the virtualized segment.

In terms of workload performance, the 192-core AMD EPYC 9965 offers up to a 4x increase in video transcoding (FFmpeg raw to VP9), a 2.3x increase in business app performance (SPECjbb), a 3.9x increase in open-source databases (MySQL OLTP), and a 3x increase in image rendering (V-Ray 5) performance versus the Intel 5th Gen Xeon SKU. The EPYC 9965 does offer 3x more cores than the 64-core Intel Xeon 8592+. So how does performance compare at the same core count? AMD also showcased 64-core EPYC 9575F comparisons against the EPYC 9554 and Xeon 8592+. The Zen 5 part with the same core count still leads in performance by up to 1.6x across a range of enterprise HPC workloads such as Ansys LS-DYNA, Altair Radioss, Ansys Fluent, and Altair AcuSolve. Open-source HPC performance sees a big gain too across Dense Linear Solver and Modeling & Simulation workloads, with the EPYC 9965 delivering anywhere from 2.1x to 3.9x gains over the Intel Xeon CPU and over 2x gains against the 4th Gen EPYC "Genoa" CPUs.
For AI performance, AMD is also touting some big gains, mostly coming from those AVX-512 512b capabilities, which yield up to an impressive 3.8x gain in performance. Faster SKUs such as the 5 GHz EPYC 9575F deliver a 28% speedup in GPU orchestration tasks.

AMD's EPYC platform is known for delivering superb performance at great value, and that continues with Turin. AMD says that data center firms can move from 1,000 servers based on older CPU platforms to just 131 modern servers equipped with EPYC 9965 CPUs. This roughly 7-to-1 consolidation will allow data center firms to easily migrate to the latest chips while retaining the same x86 architecture set, the same mature ecosystem, and the same robust tools at their disposal. The AMD EPYC Turin platform can provide up to a 68% reduction in power requirements and up to an 87% reduction in server space, and lead to 67% lower TCO over 3 years. AMD also proposes that the space savings can be used to grow the AI and compute capabilities of data centers by over 1.1 million AI TOPS using just 416 GPUs, with a 2.5x compute performance increment coming from those new 640 CPU racks.

AMD is also positioning its EPYC platform as an AI host CPU for AMD Instinct and NVIDIA MGX/HGX platforms. The solution can be equipped with up to 8 OAM MI300X or MI325X GPUs, and configurations that use the 5 GHz EPYC 9575F can see up to a 20% performance increase in AI inferencing and up to a 15% uplift in training. For NVIDIA, the MGX solutions can be equipped with up to 16 AI accelerators (Hopper/Blackwell), and HGX configurations can get up to 8 accelerators with up to 2 EPYC CPUs. AMD and NVIDIA have announced a technical partnership, recommending EPYC CPUs in 32-, 48-, and 64-core configurations, as seen below. Overall, AMD's 5th Gen EPYC "Turin" family looks to be another disruptive launch, especially given the performance and value on offer.
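The consolidation pitch is simple arithmetic, and the quoted figures are self-consistent. A quick check of the 1,000-to-131 server claim (the "7-to-1" wording in the text is the round-down of the exact ratio):

```python
# Server-consolidation figures from AMD's claim: 1,000 legacy servers
# replaced by 131 EPYC 9965 servers (numbers from the article).
old_servers, new_servers = 1_000, 131

ratio = old_servers / new_servers                  # ~7.6 old servers per new one
fewer_pct = (1 - new_servers / old_servers) * 100  # ~87% fewer servers

print(f"{ratio:.1f}-to-1 consolidation, {fewer_pct:.0f}% fewer servers")
```

The ~87% figure here matches the "87% reduction in server space" claim above.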
AMD isn't sharing the performance figures against Intel's Xeon 6900P for now but we can expect a few updates in the coming months as both chips become widely available.
[3]
AMD Launches EPYC 'Turin' 9005 Series: Our benchmarks of fifth-gen Zen 5 chips with up to 192 cores, 500W TDP
AMD launched its fifth-gen EPYC 'Turin' processors here in San Francisco at its Advancing AI 2024 event, whipping the covers off the deep-dive details of its new Zen 5-powered server CPU family for enterprise, AI and cloud use cases. We also ran some of our own benchmarks in preparation for our review, but decided to share a preview of the impressive results below.

AMD has unified its standard scale-up optimized models with full-fat Zen 5 cores and its scale-out optimized models with dense Zen 5c cores into one stack that flies under the EPYC 9005 Turin banner, and made several impressive performance claims against Intel's competing Xeon processors. AMD claims that its flagship 192-core EPYC 9965 is 2.7X faster than Intel's competing flagship Platinum 8592+, with notable speed-up claims including 4X faster video transcoding, 3.9X faster performance in HPC applications, and up to 1.6X the performance per core in virtualized environments. AMD also announced its new high-frequency 5GHz EPYC 9575F, which it claims is up to 28% faster than Zen 4 EPYC models when used to accelerate AI GPU workloads. We'll break down the product stack and features, and then work our way to the benchmarks.

Notably, AMD isn't introducing its X-series models with stacked L3 cache for this generation, instead relying upon its Genoa-X lineup for now. AMD says its X-series might get an upgrade every other generation, though that currently remains under consideration.

AMD's new series scales from eight cores up to the $14,813 192-core 384-thread EPYC 9965, a 500W behemoth that leverages TSMC's 3nm node for the ultimate in compute density with dense Zen 5c cores. AMD also has other Zen 5c-powered models that scale well for high-density applications, with 96-, 128-, 144- and 160-core options. AMD also has standard models with Zen 5 cores fabbed on the 4nm node that top out at 128 cores and 256 threads with the $12,984 EPYC 9755.
This stack has a total of 22 models that begin at a mere eight cores - a new low core-count entry point for AMD that it created in response to customer demand. AMD also has four single-socket 'P' series models interspersed throughout its product stack. AMD's standard Zen 5 lineup now includes new high-frequency SKUs that top out at 5.0 GHz, a new high watermark for AMD's datacenter CPU lineup that will maximize performance in GPU orchestration workloads. AMD has a total of five F-series models for various levels of performance and core counts.

The standard Zen 5 models employ up to 16 4nm CCDs (chiplets) paired with a large central I/O die (eight cores per compute chiplet), while the Zen 5c models employ up to 12 3nm CCDs with 16 Zen 5c cores per chiplet paired with the same I/O die. AMD claims a 17% increase in IPC for the EPYC 9005 series, borne of the new Zen 5 architecture. Zen 5 also brings the notable addition of full 512b datapath support for AVX-512, though users have the option to run the chips in a 'double-pumped' AVX-512 mode that issues 512b instructions as two sets of 256b, thus lowering power requirements and improving efficiency in some workloads.

With the exception of the flagship 192-core model, all Turin processors can drop into existing server platforms with the SP5 socket. The 192-core model also uses the SP5 socket, but it requires special power accommodations, so newer motherboards are needed for that top-end model. This marks the second generation of SP5-compatible EPYC chips, with the previous-generation Genoa also utilizing the platform. This meshes well with AMD's strategy to speed time to market and reduce upgrade friction for its customers and OEM partners; for instance, the first three generations (Naples, Rome and Milan) all utilized a common platform as well.
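The full-width versus double-pumped distinction is easiest to see in terms of vector lanes. A minimal sketch of the lane arithmetic (illustrative only; exact issue rates are core-dependent and not stated here):

```python
# Lane counts behind full-width vs. 'double-pumped' AVX-512 execution.
# Illustrative arithmetic, not a statement about Zen 5's exact pipelines.
VEC_BITS = 512

fp32_lanes = VEC_BITS // 32   # 16 single-precision elements per 512b op
fp64_lanes = VEC_BITS // 64   # 8 double-precision elements per 512b op

# Double-pumped mode splits one 512b instruction into two 256b halves,
# each carrying half the lanes, trading peak issue rate for lower power.
fp32_per_half = (VEC_BITS // 2) // 32  # 8 elements per 256b half

print(fp32_lanes, fp64_lanes, fp32_per_half)
```

Either way the architectural result of an instruction is identical; double-pumping only changes how the work is fed through the execution units.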
TDPs span from 155W to 500W, with the highest-power models often utilizing new dense water coolers that resemble standard AIO coolers - the radiator is integrated inside the chassis, as pictured in our sample Turin server above (we have a review in progress). The Turin family is only available with 12 channels of DDR5 memory support, with up to 12TB of memory capacity per server (6TB per socket). AMD originally spec'd Turin at DDR5-6000 but has now increased that to DDR5-6400 for qualified platforms. AMD's platform supports only 1 DIMM per channel (DPC). Each CPU hosts 128 PCIe 5.0 lanes for single-socket servers, and a dual-socket configuration exposes 160 PCIe lanes. AMD also supports CXL 2.0 (caveats apply).

We ran short on time before we traveled here to the event, so we didn't have time to finish all of our tests - the testing for the 192-core model isn't yet done. However, we do have plenty of our own results we can share below in advance of our review, which will be posted with the complete results in the coming days. Here's a preview of our results in key areas:

AMD shared a series of benchmarks to solidify its performance claims, but as with all vendor-provided benchmarks, we should wait for third-party verification. We are currently testing an EPYC Turin server, so stay tuned for our benchmarks. We included AMD's test notes in an album at the end of the article. AMD made all of its comparisons against Intel's fifth-gen Xeon, though Intel recently began shipping its Xeon 6 'Granite Rapids' lineup. AMD says it hasn't been able to secure those systems for testing yet, so keep in mind that these benchmarks aren't against Intel's current flagship. AMD claims a new world record in the industry-standard SPEC CPU 2017 integer throughput benchmark with the EPYC 9965, with a 2.7X advantage over Intel's fifth-gen flagship.
AMD also claims a 1.4X advantage in per-core performance, which is key to effectively utilizing expensive software licenses that often cost more than the CPU itself - a core value prop for AMD's Turin. In fact, AMD claims 60% more performance at the same licensing costs. Naturally, AMD also included a spate of benchmarks in general compute workloads like video transcoding, business apps, database, and rendering workloads, with a 4X, 2.3X, 3.9X and 3X advantage over fifth-gen Xeon, respectively. AMD also provided plenty of HPC benchmarks that you can see in the above album.

AMD shared plenty of benchmarks to back up its assertion that Turin is the best choice for the full range of AI workloads, with those workloads falling into three different buckets. In AI inference workloads that fully saturate the CPU, Intel has held a distinct advantage through its AMX (Advanced Matrix Extensions) instructions. However, AMD claims that Turin changes that equation, with its 192-core EPYC 9965 delivering 3.0X to 3.8X faster AI inference across a range of AI workloads. Many AI implementations rely upon the CPU to orchestrate GPU AI workloads, pushing the GPUs along as they handle the heavy inference and training work. Here AMD claims advantages ranging from 1.08X to 1.2X with its new high-frequency 5GHz EPYC 9575F. AMD shared a list of Nvidia recommendations for pairings with its HGX and MGX systems, along with optimum pairings for its own MI300X systems.

AMD also argues that Intel's AMX advantage is only applicable in 100% saturated AI throughput workloads, opining that most AI workloads occur in mixed environments where general-purpose compute workloads are also active. Here AMD claims advantages in a range of mixed general-purpose and AI compute workloads running concurrently on the CPU, with a claimed doubling of performance per dollar over Intel's fifth-gen Xeon.
AMD notes that many of its customers are keeping their existing servers for longer periods of time now, with some keeping servers deployed for five or even six years. However, the company points out that you can take 1,000 older Xeon Platinum 8280 servers and consolidate them down to 131 Turin servers, yielding up to 68% less power consumption with up to 87% fewer servers.

AMD started with roughly two percent of the datacenter revenue market share back in 2017, when it launched the first-gen EPYC Naples chips, but it has now expanded to an impressive 34% of the revenue share throughout the first half of the year on the strength of its fourth-gen Genoa ($2.8 billion last quarter alone). Much of that success comes not only from performance and pricing advantages, but also from on-time, predictable execution - a mantra AMD has repeated incessantly since the first-gen launch. AMD says the intervening years have found it offering 6 times more cores and 11 times more performance over its first-gen Naples chips, which Turin naturally adds to. It also touts its double-digit IPC increase (~14%) with each successive generation. Those generational improvements have built up to an exceedingly impressive lineup for Turin. As you can see above, we have both the Granite Rapids and Turin systems in-house and will share our further analysis of our own test results in our review soon.
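Compounding the ~14% per-generation IPC figure cited above gives a rough sense of how per-core gains accumulate independently of core-count growth. The per-step number is from the text; the uniform-compounding assumption is ours, and real generations varied:

```python
# Compound a ~14% per-generation IPC gain over the four generational
# steps from first-gen EPYC to Turin. Illustrative assumption only:
# the gain is treated as uniform each step, which it was not in practice.
per_gen_gain = 0.14
steps = 4

cumulative = (1 + per_gen_gain) ** steps
print(f"~{cumulative:.2f}x cumulative IPC uplift")
```

That lands around 1.69x cumulative IPC, before any contribution from higher core counts or clocks.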
[4]
AMD Launches 5th Gen AMD EPYC CPUs, Maintaining Leadership Performance and Features for the Modern Data Center - Advanced Micro Devices (NASDAQ:AMD)
-- New EPYC processors deliver record-breaking performance and efficiency for a wide range of data center workloads --

-- AMD EPYC CPUs continue momentum, with more than 950 AMD EPYC-powered public instances available globally and more than 350 platforms from OxMs --

SAN FRANCISCO, Oct. 10, 2024 (GLOBE NEWSWIRE) -- AMD today announced the availability of the 5th Gen AMD EPYC™ processors, formerly codenamed "Turin," the world's best server CPU for enterprise, AI and cloud1. Using the "Zen 5" core architecture, compatible with the broadly deployed SP5 platform2 and offering a broad range of core counts spanning from 8 to 192, the AMD EPYC 9005 Series processors extend the record-breaking performance3 and energy efficiency of the previous generations, with the top-of-stack 192-core CPU delivering up to 2.7X the performance4 compared to the competition.

New to the AMD EPYC 9005 Series CPUs is the 64-core AMD EPYC 9575F, tailor-made for GPU-powered AI solutions that need the ultimate in host CPU capabilities. Boosting up to 5GHz5, compared to the 3.8GHz processor of the competition, it provides up to 28% faster processing needed to keep GPUs fed with data for demanding AI workloads.

"From powering the world's fastest supercomputers, to leading enterprises, to the largest hyperscalers, AMD has earned the trust of customers who value demonstrated performance, innovation and energy efficiency," said Dan McNamara, senior vice president and general manager, server business, AMD. "With five generations of on-time roadmap execution, AMD has proven it can meet the needs of the data center market and give customers the standard for data center performance, efficiency, solutions and capabilities for cloud, enterprise and AI workloads."
The World's Best CPU for Enterprise, AI and Cloud Workloads

Modern data centers run a variety of workloads, from supporting corporate AI-enablement initiatives, to powering large-scale cloud-based infrastructures, to hosting the most demanding business-critical applications. The new 5th Gen AMD EPYC processors provide leading performance and capabilities for the broad spectrum of server workloads driving business IT today.

The new "Zen 5" core architecture provides up to 17% better instructions per clock (IPC) for enterprise and cloud workloads and up to 37% higher IPC in AI and high performance computing (HPC) compared to "Zen 4."6 With AMD EPYC 9965 processor-based servers, customers can expect significant impact in their real-world applications and workloads compared to Intel Xeon® 8592+ CPU-based servers, with:

- Up to 4X faster time to results on business applications such as video transcoding.7
- Up to 3.9X faster time to insights for science and HPC applications that solve the world's most challenging problems.8
- Up to 1.6X the performance per core in virtualized infrastructure.9

In addition to leadership performance and efficiency in general-purpose workloads, 5th Gen AMD EPYC processors enable customers to drive fast time to insights and deployments for AI, whether they are running a CPU or a CPU + GPU solution. Compared to the competition:

- The 192-core EPYC 9965 CPU has up to 3.7X the performance on end-to-end AI workloads, like TPCx-AI (derivative), which are critical for driving an efficient approach to generative AI.10
- In small and medium size enterprise-class generative AI models, like Meta's Llama 3.1-8B, the EPYC 9965 provides 1.9X the throughput performance compared to the competition.11
- Finally, the purpose-built AI host node CPU, the EPYC 9575F, can use its 5GHz max frequency boost to help a 1,000-node AI cluster drive up to 700,000 more inference tokens per second - accomplishing more, faster.12

By modernizing to a data center powered by these new processors to achieve 391,000 units of SPECrate®2017_int_base general-purpose computing performance, customers receive impressive performance for various workloads while gaining the ability to use an estimated 71% less power and ~87% fewer servers13. This gives CIOs the flexibility to either benefit from the space and power savings or add performance for day-to-day IT tasks while delivering impressive AI performance.

AMD EPYC CPUs - Driving the Next Wave of Innovation

The proven performance and deep ecosystem support across partners and customers have driven widespread adoption of EPYC CPUs to power the most demanding computing tasks. With leading performance, features and density, AMD EPYC CPUs help customers drive value in their data centers and IT environments quickly and efficiently.

5th Gen AMD EPYC Features

The entire lineup of 5th Gen AMD EPYC processors is available today, with support from Cisco, Dell, Hewlett Packard Enterprise, Lenovo and Supermicro as well as all major ODMs and cloud service providers, providing a simple upgrade path for organizations seeking compute and AI leadership.
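The server-count figure in the modernization claim above can be reproduced from AMD's own numbers: dividing the 391,000-unit SPECrate target by the published 2P EPYC 9965 score of 3,000 (listed in the footnotes below) and rounding up. A quick check:

```python
import math

# Data-center modernization arithmetic: reach a 391,000-unit
# SPECrate2017_int_base target with 2P EPYC 9965 servers (published
# score of 3,000 per server), vs. a 1,000-server legacy baseline.
target = 391_000
score_per_new_server = 3_000
legacy_servers = 1_000

new_servers = math.ceil(target / score_per_new_server)
fewer_pct = (1 - new_servers / legacy_servers) * 100

print(new_servers, f"{fewer_pct:.0f}% fewer servers")
```

That yields 131 servers, i.e. roughly 87% fewer, matching the "~87% fewer servers" figure in the text.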
High level features of the AMD EPYC 9005 series CPUs include:

- Leadership core count options from 8 to 192 per CPU
- "Zen 5" and "Zen 5c" core architectures
- 12 channels of DDR5 memory per CPU
- Support for up to DDR5-6400 MT/s14
- Leadership boost frequencies up to 5GHz5
- AVX-512 with the full 512b data path
- Trusted I/O for Confidential Computing, and FIPS certification in process for every part in the series

| Model (AMD EPYC) | Cores | CCD | Base/Boost5 (up to GHz) | Default TDP (W) | L3 Cache (MB) | Price (1 KU, USD) |
|---|---|---|---|---|---|---|
| 9965 | 192 | "Zen 5c" | 2.25 / 3.7 | 500 | 384 | $14,813 |
| 9845 | 160 | "Zen 5c" | 2.1 / 3.7 | 390 | 320 | $13,564 |
| 9825 | 144 | "Zen 5c" | 2.2 / 3.7 | 390 | 384 | $13,006 |
| 9755 | 128 | "Zen 5" | 2.7 / 4.1 | 500 | 512 | $12,984 |
| 9745 | 128 | "Zen 5c" | 2.4 / 3.7 | 400 | 256 | $12,141 |
| 9655 | 96 | "Zen 5" | 2.6 / 4.5 | 400 | 384 | $11,852 |
| 9655P | 96 | "Zen 5" | 2.6 / 4.5 | 400 | 384 | $10,811 |
| 9645 | 96 | "Zen 5c" | 2.3 / 3.7 | 320 | 384 | $11,048 |
| 9565 | 72 | "Zen 5" | 3.15 / 4.3 | 400 | 384 | $10,486 |
| 9575F | 64 | "Zen 5" | 3.3 / 5.0 | 400 | 256 | $11,791 |
| 9555 | 64 | "Zen 5" | 3.2 / 4.4 | 360 | 256 | $9,826 |
| 9555P | 64 | "Zen 5" | 3.2 / 4.4 | 360 | 256 | $7,983 |
| 9535 | 64 | "Zen 5" | 2.4 / 4.3 | 300 | 256 | $8,992 |
| 9475F | 48 | "Zen 5" | 3.65 / 4.8 | 400 | 256 | $7,592 |
| 9455 | 48 | "Zen 5" | 3.15 / 4.4 | 300 | 192 | $5,412 |
| 9455P | 48 | "Zen 5" | 3.15 / 4.4 | 300 | 192 | $4,819 |
| 9365 | 36 | "Zen 5" | 3.4 / 4.3 | 300 | 256 | $4,341 |
| 9375F | 32 | "Zen 5" | 3.8 / 4.8 | 320 | 256 | $5,306 |
| 9355 | 32 | "Zen 5" | 3.55 / 4.4 | 280 | 256 | $3,694 |
| 9355P | 32 | "Zen 5" | 3.55 / 4.4 | 280 | 256 | $2,998 |
| 9335 | 32 | "Zen 5" | 3.0 / 4.4 | 210 | 256 | $3,178 |
| 9275F | 24 | "Zen 5" | 4.1 / 4.8 | 320 | 256 | $3,439 |
| 9255 | 24 | "Zen 5" | 3.25 / 4.3 | 200 | 128 | $2,495 |
| 9175F | 16 | "Zen 5" | 4.2 / 5.0 | 320 | 512 | $4,256 |
| 9135 | 16 | "Zen 5" | 3.65 / 4.3 | 200 | 64 | $1,214 |
| 9115 | 16 | "Zen 5" | 2.6 / 4.1 | 125 | 64 | $726 |
| 9015 | 8 | "Zen 5" | 3.6 / 4.1 | 125 | 64 | $527 |

Supporting Resources

- Watch the full AMD Advancing AI Keynote
- Learn more about 5th Gen AMD EPYC Processors
- Follow AMD on X
- Connect with AMD on LinkedIn

About AMD

For more than 50 years AMD has driven innovation in high-performance computing, graphics, and visualization technologies.
Billions of people, leading Fortune 500 businesses, and cutting-edge scientific research institutions around the world rely on AMD technology daily to improve how they live, work, and play. AMD employees are focused on building leadership high-performance and adaptive products that push the boundaries of what is possible. For more information about how AMD is enabling today and inspiring tomorrow, visit the AMD website, blog, LinkedIn and X pages.

Cautionary Statement

This press release contains forward-looking statements concerning Advanced Micro Devices, Inc. (AMD) such as the features, functionality, performance, availability, timing and expected benefits of AMD products including AMD EPYC™ processors, which are made pursuant to the Safe Harbor provisions of the Private Securities Litigation Reform Act of 1995. Forward-looking statements are commonly identified by words such as "would," "may," "expects," "believes," "plans," "intends," "projects" and other terms with similar meaning. Investors are cautioned that the forward-looking statements in this press release are based on current beliefs, assumptions and expectations, speak only as of the date of this press release and involve risks and uncertainties that could cause actual results to differ materially from current expectations. Such statements are subject to certain known and unknown risks and uncertainties, many of which are difficult to predict and generally beyond AMD's control, that could cause actual results and other future events to differ materially from those expressed in, or implied or projected by, the forward-looking information and statements.
Material factors that could cause actual results to differ materially from current expectations include, without limitation, the following: Intel Corporation's dominance of the microprocessor market and its aggressive business practices; Nvidia's dominance in the graphics processing unit market and its aggressive business practices; the cyclical nature of the semiconductor industry; market conditions of the industries in which AMD products are sold; loss of a significant customer; competitive markets in which AMD's products are sold; economic and market uncertainty; quarterly and seasonal sales patterns; AMD's ability to adequately protect its technology or other intellectual property; unfavorable currency exchange rate fluctuations; ability of third party manufacturers to manufacture AMD's products on a timely basis in sufficient quantities and using competitive technologies; availability of essential equipment, materials, substrates or manufacturing processes; ability to achieve expected manufacturing yields for AMD's products; AMD's ability to introduce products on a timely basis with expected features and performance levels; AMD's ability to generate revenue from its semi-custom SoC products; potential security vulnerabilities; potential security incidents including IT outages, data loss, data breaches and cyberattacks; uncertainties involving the ordering and shipment of AMD's products; AMD's reliance on third-party intellectual property to design and introduce new products; AMD's reliance on third-party companies for design, manufacture and supply of motherboards, software, memory and other computer platform components; AMD's reliance on Microsoft and other software vendors' support to design and develop software to run on AMD's products; AMD's reliance on third-party distributors and add-in-board partners; impact of modification or interruption of AMD's internal business processes and information systems; compatibility of AMD's products with some or all 
industry-standard software and hardware; costs related to defective products; efficiency of AMD's supply chain; AMD's ability to rely on third party supply-chain logistics functions; AMD's ability to effectively control sales of its products on the gray market; long-term impact of climate change on AMD's business; impact of government actions and regulations such as export regulations, tariffs and trade protection measures; AMD's ability to realize its deferred tax assets; potential tax liabilities; current and future claims and litigation; impact of environmental laws, conflict minerals related provisions and other laws or regulations; evolving expectations from governments, investors, customers and other stakeholders regarding corporate responsibility matters; issues related to the responsible use of AI; restrictions imposed by agreements governing AMD's notes, the guarantees of Xilinx's notes and the revolving credit agreement; impact of acquisitions, joint ventures and/or investments on AMD's business and AMD's ability to integrate acquired businesses; impact of any impairment of the combined company's assets; political, legal and economic risks and natural disasters; future impairments of technology license purchases; AMD's ability to attract and retain qualified personnel; and AMD's stock price volatility. Investors are urged to review in detail the risks and uncertainties in AMD's Securities and Exchange Commission filings, including but not limited to AMD's most recent reports on Forms 10-K and 10-Q. AMD, the AMD Arrow logo, EPYC and combinations thereof are trademarks of Advanced Micro Devices, Inc. Other names are for informational purposes only and may be trademarks of their respective owners. 1 EPYC-029C: Comparison based on thread density, performance, features, process technology and built-in security features of currently shipping servers as of 10/10/2024. 
EPYC 9005 series CPUs offer the highest thread density [EPYC-025B], leads the industry with 500+ performance world records [EPYC-023F] with performance world record enterprise leadership Java® ops/sec performance [EPYCWR-20241010-260], top HPC leadership with floating-point throughput performance [EPYCWR-2024-1010-381], AI end-to-end performance with TPCx-AI performance [EPYCWR-2024-1010-525] and highest energy efficiency scores [EPYCWR-20241010-326]. The 5th Gen EPYC series also has 50% more DDR5 memory channels [EPYC-033C] with 70% more memory bandwidth [EPYC-032C] and supports 70% more PCIe® Gen5 lanes for I/O throughput [EPYC-035C], has up to 5x the L3 cache/core [EPYC-043C] for faster data access, uses advanced 3-4nm technology, and offers Secure Memory Encryption + Secure Encrypted Virtualization (SEV) + SEV Encrypted State + SEV-Secure Nested Paging security features. See the AMD EPYC Architecture White Paper (https://library.amd.com/l/3f4587d147382e2/) for more information. 2 AMD EPYC™ 9005 processors utilize the SP5 socket. Many factors determine system compatibility. Check with your server manufacturer to determine if this processor is supported in systems configured with previously launched AMD EPYC 9004 family CPUs. 3 EPYC-022F: For a complete list of world records see: http://amd.com/worldrecords. 4 9xx5-002C: SPECrate®2017_int_base comparison based on published scores from www.spec.org as of 10/10/2024. 
2P AMD EPYC 9965 (3000 SPECrate®2017_int_base, 384 Total Cores, 500W TDP, $14,813 CPU $): 6.060 SPECrate®2017_int_base/CPU W, 0.205 SPECrate®2017_int_base/CPU $, https://www.spec.org/cpu2017/results/res2024q3/cpu2017-20240923-44833.html
2P AMD EPYC 9755 (2720 SPECrate®2017_int_base, 256 Total Cores, 500W TDP, $12,984 CPU $): 5.440 SPECrate®2017_int_base/CPU W, 0.209 SPECrate®2017_int_base/CPU $, https://www.spec.org/cpu2017/results/res2024q4/cpu2017-20240923-44837.pdf
2P AMD EPYC 9754 (1950 SPECrate®2017_int_base, 256 Total Cores, 360W TDP, $11,900 CPU $): 5.417 SPECrate®2017_int_base/CPU W, 0.164 SPECrate®2017_int_base/CPU $, https://www.spec.org/cpu2017/results/res2023q2/cpu2017-20230522-36617.html
2P AMD EPYC 9654 (1810 SPECrate®2017_int_base, 192 Total Cores, 360W TDP, $11,805 CPU $): 5.028 SPECrate®2017_int_base/CPU W, 0.153 SPECrate®2017_int_base/CPU $, https://www.spec.org/cpu2017/results/res2024q1/cpu2017-20240129-40896.html
2P Intel Xeon Platinum 8592+ (1130 SPECrate®2017_int_base, 128 Total Cores, 350W TDP, $11,600 CPU $): 3.229 SPECrate®2017_int_base/CPU W, 0.097 SPECrate®2017_int_base/CPU $, http://spec.org/cpu2017/results/res2023q4/cpu2017-20231127-40064.html
2P Intel Xeon 6780E (1410 SPECrate®2017_int_base, 288 Total Cores, 330W TDP, $11,350 CPU $): 4.273 SPECrate®2017_int_base/CPU W, 0.124 SPECrate®2017_int_base/CPU $, https://spec.org/cpu2017/results/res2024q3/cpu2017-20240811-44406.html
SPEC®, SPEC CPU®, and SPECrate® are registered trademarks of the Standard Performance Evaluation Corporation. See www.spec.org for more information. Intel CPU TDP at https://ark.intel.com/.
5 GD-150: Boost Clock Frequency is the maximum frequency achievable on the CPU running a bursty workload. Boost clock achievability, frequency, and sustainability will vary based on several factors, including but not limited to: thermal conditions and variation in applications and workloads.
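Footnote 4's perf-per-watt and perf-per-dollar figures above are simple ratios of each published SPECrate®2017_int_base score to the listed TDP and CPU price. A minimal Python sketch (figures copied from the footnote; an illustration of the arithmetic, not AMD's tooling) reproduces two of the rows:

```python
# Reproduce footnote 4's perf/W and perf/$ ratios from the published figures.
systems = {
    # name: (SPECrate2017_int_base, TDP in watts, CPU price in USD)
    "2P AMD EPYC 9755": (2720, 500, 12984),
    "2P Intel Xeon Platinum 8592+": (1130, 350, 11600),
}

for name, (score, tdp, price) in systems.items():
    perf_per_watt = round(score / tdp, 3)      # SPECrate/CPU W
    perf_per_dollar = round(score / price, 3)  # SPECrate/CPU $
    print(f"{name}: {perf_per_watt} perf/W, {perf_per_dollar} perf/$")
```

This yields 5.44 perf/W and 0.209 perf/$ for the EPYC 9755 and 3.229 perf/W and 0.097 perf/$ for the Xeon 8592+, matching the footnote. (The EPYC 9965 row's published 6.060 perf/W does not exactly equal 3000/500, so AMD's wattage basis there may differ slightly from the listed default TDP.)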
6 9xx5-001: Based on AMD internal testing as of 9/10/2024, geomean performance improvement (IPC) at fixed frequency.
- 5th Gen EPYC CPU Enterprise and Cloud server workloads generational IPC uplift of 1.170x (geomean), using a select set of 36 workloads: the geomean of estimated scores for total and all subsets of SPECrate®2017_int_base (geomean), estimated scores for total and all subsets of SPECrate®2017_fp_base (geomean), scores for server-side Java multi-instance max ops/sec, representative cloud server workloads (geomean), and representative enterprise server workloads (geomean).
- 5th Gen EPYC generational ML/HPC server workloads IPC uplift of 1.369x (geomean), using a select set of 24 workloads: the geomean of representative ML server workloads (geomean) and representative HPC server workloads (geomean).
Configurations (all NPS1): "Genoa" config: EPYC 9654, BIOS TQZ1005D, 12c12t (1c1t/CCD in 12+1), FF 3GHz, 12x DDR5-4800 (2Rx4 64GB), 32Gbps xGMI. "Turin" config: EPYC 9V45, BIOS RVOT1000F, 12c12t (1c1t/CCD in 12+1), FF 3GHz, 12x DDR5-6000 (2Rx4 64GB), 32Gbps xGMI. Utilizing Performance Determinism and the Performance governor on Ubuntu® 22.04 with the 6.8.0-40-generic kernel for all workloads, except LAMMPS, HPCG, NAMD, OpenFOAM and Gromacs in the ML/HPC set, which utilize Ubuntu 24.04 with the 6.8.0-40-generic kernel. SPEC® and SPECrate® are registered trademarks of the Standard Performance Evaluation Corporation. Learn more at spec.org.
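Footnote 7 below reports 3.99x and 1.90x FFMPEG transcode speedups; those figures are simply each AMD system's median frames/hour divided by the Intel baseline. A quick Python check using the medians from that footnote:

```python
# Verify the FFMPEG speedup claims in footnote 7 (median frames/hour).
baseline = 2712701.754    # 2P Intel Xeon Platinum 8592+
epyc_9965 = 10825484.25   # 2P AMD EPYC 9965
epyc_9654 = 5154133.333   # 2P AMD EPYC 9654

print(round(epyc_9965 / baseline, 2))  # 3.99x vs. the Intel system
print(round(epyc_9654 / baseline, 2))  # 1.9x vs. the Intel system
```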
7 9xx5-006: AMD internal testing as of 09/01/2024, on FFMPEG (Raw to VP9, 1080P, 302 Frames, 1 instance/thread, video source: https://media.xiph.org/video/derf/y4m/ducks_take_off_1080p50.y4m). System configurations:
2P AMD EPYC™ 9965 reference system (2 x 192C), 1.5TB 24x64GB DDR5-6400 running at 6000MT/s, SAMSUNG MZWLO3T8HCLS-00A07, NPS=4, Ubuntu 22.04.3 LTS, Kernel Linux 5.15.0-119-generic, BIOS RVOT1000C (determinism enable=power): 10825484.25 Frames/Hour median
2P AMD EPYC™ 9654 production system (2 x 96C), 1.5TB 24x64GB DDR5-5600, SAMSUNG MO003200KYDNC, NPS=4, Ubuntu 22.04.3 LTS, Kernel Linux 5.15.0-119-generic, BIOS 1.56 (determinism enable=power): 5154133.333 Frames/Hour median
2P Intel Xeon Platinum 8592+ production system (2 x 64C), 1TB 16x64GB DDR5-5600, 3.2 TB NVME, Ubuntu 22.04.3 LTS, Kernel Linux 6.5.0-35-generic, BIOS ESE122V-3.10: 2712701.754 Frames/Hour median
For 3.99x the performance with the AMD EPYC 9965 vs. the Intel Xeon Platinum 8592+ system, and 1.90x the performance with the AMD EPYC 9654 vs. the Intel Xeon Platinum 8592+ system. Results may vary based on factors including but not limited to BIOS and OS settings and versions, software versions and data used.
8 9xx5-022: Source: https://www.amd.com/content/dam/amd/en/documents/epyc-technical-docs/performance-briefs/amd-epyc-9005-pb-gromacs.pdf
9 9xx5-071: VMmark® 4.0.1 host/node FC SAN comparison based on independently published results as of 10/10/2024.
Configurations: 2 node, 2P AMD EPYC 9575F (128 total cores) powered server running VMware ESXi8.0 U3, 3.31 @ 4 tiles, https://www.infobellit.com/BlueBookSeries/VMmark4-FDR-1003 2 node, 2P AMD EPYC 9554 (128 total cores) powered server running VMware ESXi 8.0 U3, 2.64 @ 3 tiles, https://www.infobellit.com/BlueBookSeries/VMmark4-FDR-1002 2 node, 2P Intel Xeon Platinum 8592+ (128 total cores) powered server running VMware ESXi 8.0 U3, 2.06 @ 2.4 Tiles, https://www.infobellit.com/BlueBookSeries/VMmark4-FDR-1001 VMmark is a registered trademark of VMware in the US or other countries. 10 9xx5-012: TPCxAI @SF30 Multi-Instance 32C Instance Size throughput results based on AMD internal testing as of 09/05/2024 running multiple VM instances. The aggregate end-to-end AI throughput test is derived from the TPCx-AI benchmark and as such is not comparable to published TPCx-AI results, as the end-to-end AI throughput test results do not comply with the TPCx-AI Specification. 2P AMD EPYC 9965 (384 Total Cores), 12 32C instances, NPS1, 1.5TB 24x64GB DDR5-6400 (at 6000 MT/s), 1DPC, 1.0 Gbps NetXtreme BCM5720 Gigabit Ethernet PCIe, 3.5 TB Samsung MZWLO3T8HCLS-00A07 NVMe®, Ubuntu® 22.04.4 LTS, 6.8.0-40-generic (tuned-adm profile throughput-performance, ulimit -l 198096812, ulimit -n 1024, ulimit -s 8192), BIOS RVOT1000C (SMT=off, Determinism=Power, Turbo Boost=Enabled) 2P AMD EPYC 9755 (256 Total Cores), 8 32C instances, NPS1, 1.5TB 24x64GB DDR5-6400 (at 6000 MT/s), 1DPC, 1.0 Gbps NetXtreme BCM5720 Gigabit Ethernet PCIe, 3.5 TB Samsung MZWLO3T8HCLS-00A07 NVMe®, Ubuntu 22.04.4 LTS, 6.8.0-40-generic (tuned-adm profile throughput-performance, ulimit -l 198096812, ulimit -n 1024, ulimit -s 8192), BIOS RVOT0090F (SMT=off, Determinism=Power, Turbo Boost=Enabled) 2P AMD EPYC 9654 (192 Total cores) 6 32C instances, NPS1, 1.5TB 24x64GB DDR5-4800, 1DPC, 2 x 1.92 TB Samsung MZQL21T9HCJR-00A07 NVMe, Ubuntu 22.04.3 LTS, BIOS 1006C (SMT=off, Determinism=Power) Versus 2P Xeon Platinum 8592+ (128 
Total Cores), 4 32C instances, AMX On, 1TB 16x64GB DDR5-5600, 1DPC, 1.0 Gbps NetXtreme BCM5719 Gigabit Ethernet PCIe, 3.84 TB KIOXIA KCMYXRUG3T84 NVMe, Ubuntu 22.04.4 LTS, 6.5.0-35-generic (tuned-adm profile throughput-performance, ulimit -l 132065548, ulimit -n 1024, ulimit -s 8192), BIOS ESE122V (SMT=off, Determinism=Power, Turbo Boost=Enabled). Results:
CPU                  Median     Relative   Generational
Turin 192C, 12 inst  6067.531   3.775      2.278
Turin 128C, 8 inst   4091.85    2.546      1.536
Genoa 96C, 6 inst    2663.14    1.657      1
EMR 64C, 4 inst      1607.417   1          NA
Results may vary due to factors including system configurations, software versions and BIOS settings. TPC, TPC Benchmark and TPC-C are trademarks of the Transaction Processing Performance Council.
11 9xx5-009: Llama3.1-8B throughput results based on AMD internal testing as of 09/05/2024. Llama3-8B configurations: IPEX.LLM 2.4.0, NPS=2, BF16, batch size 4. Use case input/output token configurations: [Summary = 1024/128, Chatbot = 128/128, Translate = 1024/1024, Essay = 128/1024, Caption = 16/16].
2P AMD EPYC 9965 (384 Total Cores), 6 64C instances, 1.5TB 24x64GB DDR5-6400 (at 6000 MT/s), 1DPC, 1.0 Gbps NetXtreme BCM5720 Gigabit Ethernet PCIe, 3.5 TB Samsung MZWLO3T8HCLS-00A07 NVMe®, Ubuntu® 22.04.3 LTS, 6.8.0-40-generic (tuned-adm profile throughput-performance, ulimit -l 198096812, ulimit -n 1024, ulimit -s 8192), BIOS RVOT1000C (SMT=off, Determinism=Power, Turbo Boost=Enabled), NPS=2
2P AMD EPYC 9755 (256 Total Cores), 4 64C instances, 1.5TB 24x64GB DDR5-6400 (at 6000 MT/s), 1DPC, 1.0 Gbps NetXtreme BCM5720 Gigabit Ethernet PCIe, 3.5 TB Samsung MZWLO3T8HCLS-00A07 NVMe®, Ubuntu 22.04.3 LTS, 6.8.0-40-generic (tuned-adm profile throughput-performance, ulimit -l 198096812, ulimit -n 1024, ulimit -s 8192), BIOS RVOT1000C (SMT=off, Determinism=Power, Turbo Boost=Enabled), NPS=2
2P AMD EPYC 9654 (192 Total Cores), 4 48C instances, 1.5TB 24x64GB DDR5-4800, 1DPC, 1.0 Gbps NetXtreme BCM5720 Gigabit Ethernet PCIe, 3.5 TB Samsung MZWLO3T8HCLS-00A07 NVMe®, Ubuntu® 22.04.4 LTS, 5.15.85-051585-generic (tuned-adm profile throughput-performance, ulimit -l 1198117616, ulimit -n 500000, ulimit -s 8192), BIOS RVI1008C (SMT=off, Determinism=Power, Turbo Boost=Enabled), NPS=2
Versus 2P Xeon Platinum 8592+ (128 Total Cores), 2 64C instances, AMX On, 1TB 16x64GB DDR5-5600, 1DPC, 1.0 Gbps NetXtreme BCM5719 Gigabit Ethernet PCIe, 3.84 TB KIOXIA KCMYXRUG3T84 NVMe®, Ubuntu 22.04.4 LTS, 6.5.0-35-generic (tuned-adm profile throughput-performance, ulimit -l 132065548, ulimit -n 1024, ulimit -s 8192), BIOS ESE122V (SMT=off, Determinism=Power, Turbo Boost=Enabled).
Results:
CPU                                        2P EMR 64c  2P Turin 192c  2P Turin 128c  2P Genoa 96c
Average Aggregate Median Total Throughput  99.474      193.267        182.595        138.978
Competitive                                1           1.943          1.836          1.397
Generational                               NA          1.391          1.314          1
Results may vary due to factors including system configurations, software versions and BIOS settings.
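The "Competitive" and "Generational" rows in footnote 11's results above are each system's median throughput normalized to a baseline: the 2P EMR 64c system for the competitive comparison and the 2P Genoa 96c system for the generational one. A small Python sketch using the medians from the footnote:

```python
# Normalize Llama3.1-8B aggregate throughput medians (footnote 11)
# against the Intel (competitive) and Genoa (generational) baselines.
medians = {
    "2P EMR 64c": 99.474,
    "2P Turin 192c": 193.267,
    "2P Turin 128c": 182.595,
    "2P Genoa 96c": 138.978,
}

competitive = {k: round(v / medians["2P EMR 64c"], 3) for k, v in medians.items()}
generational = {k: round(v / medians["2P Genoa 96c"], 3) for k, v in medians.items()}

print(competitive["2P Turin 192c"])   # 1.943 -- basis of the headline ~1.9x Llama claim
print(generational["2P Turin 192c"])  # 1.391
```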
12 9xx5-087: As of 10/10/2024; this scenario contains several assumptions and estimates and, while based on AMD internal research and best approximations, should be considered an example for information purposes only, and not used as a basis for decision making over actual testing. Referencing 9XX5-056A: "2P AMD EPYC 9575F powered server and 8x AMD Instinct MI300X GPUs running Llama3.1-70B select inference workloads at FP8 precision vs 2P Intel Xeon Platinum 8592+ powered server and 8x AMD Instinct MI300X GPUs has ~8% overall throughput increase across select inference use cases", and 8763.52 tokens/s (9575F) versus 8048.48 tokens/s (8592+) at 128 input / 2048 output tokens, 500 prompts, for 1.089x the tokens/s, or 715.04 more tokens/s. 1 node = 2 CPUs and 8 GPUs. Assuming a 1000 node cluster: 1000 * 715.04 = 715,040, for ~700,000 more tokens/s. Results may vary due to factors including system configurations, software versions and BIOS settings. 13 9xx5TCO-001a: This scenario contains many assumptions and estimates and, while based on AMD internal research and best approximations, should be considered an example for information purposes only, and not used as a basis for decision making over actual testing. The AMD Server & Greenhouse Gas Emissions TCO (total cost of ownership) Estimator Tool - version 1.12, compares the selected AMD EPYC™ and Intel® Xeon® CPU based server solutions required to deliver a TOTAL_PERFORMANCE of 39100 units of SPECrate2017_int_base performance as of October 10, 2024. This scenario compares a legacy 2P Intel Xeon 28 core Platinum_8280 based server with a score of 391 versus a 2P EPYC 9965 (192C) powered server with a score of 3030 (https://spec.org/cpu2017/results/res2024q3/cpu2017-20240923-44833.pdf), along with a comparison upgrade to a 2P Intel Xeon Platinum 8592+ (64C) based server with a score of 1130 (https://spec.org/cpu2017/results/res2024q3/cpu2017-20240701-43948.pdf).
Actual SPECrate®2017_int_base score for 2P EPYC 9965 will vary based on OEM publications. Environmental impact estimates made leveraging this data, using the country/region specific electricity factors from the 2024 International Country Specific Electricity Factors 10 - July 2024, and the United States Environmental Protection Agency 'Greenhouse Gas Equivalencies Calculator'. For additional details, see https://www.amd.com/en/claims/epyc5#9xx5TCO-001a
14 9xx5-083: 5th Gen EPYC processors support DDR5-6400 MT/s for targeted customers and configurations. 5th Gen production SKUs support up to DDR5-6000 MT/s to enable a broad set of DIMMs across all OEM platforms and maintain SP5 platform compatibility.
A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/3bb614ee-e307-43a7-a36b-f5bd02ed1335
Media Contacts:
Aaron Grabein, AMD Communications, +1 512-602-8950, aaron.grabein@amd.com
Mitch Haws, AMD Investor Relations, +1 512-944-0790, mitch.haws@amd.com
AMD has unveiled its 5th generation EPYC 'Turin' server processors, featuring up to 192 cores, clock speeds of up to 5 GHz on select SKUs, and significant performance gains over previous generations and competing chips.
AMD has launched its highly anticipated 5th generation EPYC server processors, codenamed 'Turin', marking a significant leap in data center computing technology. The new lineup, branded as the EPYC 9005 series, introduces substantial advancements with the Zen 5 and Zen 5c core architectures [1][2][3].
The Turin family spans 27 SKUs built around two designs: "Scale-Up" parts using 4nm Zen 5 cores with up to 16 CCDs for maximum per-core performance, and "Scale-Out" parts using 3nm Zen 5c cores with up to 12 CCDs for multi-core throughput [1][2].
The top model, the EPYC 9965, features 192 Zen 5c cores, 384 threads, and 384MB of L3 cache, with a base clock of 2.5 GHz and a boost clock of up to 3.7 GHz. Priced at $14,813, it offers a compelling alternative to Intel's top-tier offerings [1][2]. At the other end of the range, the entry-level EPYC 9015 provides 8 Zen 5 cores and a 125W TDP for $527 [1].
AMD claims significant performance improvements over both its previous generation and Intel's competing products: the Zen 5 cores deliver up to a 17 percent uplift for enterprise and cloud platforms and up to 37 percent for HPC and AI platforms compared with the prior generation [1].
In specific workloads, AMD reports up to 3.7x the end-to-end AI performance of Intel's Xeon Platinum 8592+ in a TPCx-AI-derived test, and 1.9x the throughput in generative AI inference with Meta's Llama 3.1-8B [1].
The Turin processors also introduce several key platform advancements: 50 percent more DDR5 memory channels and 70 percent more memory bandwidth than the prior generation, 70 percent more PCIe Gen5 lanes, up to 5x the L3 cache per core, and an expanded confidential-computing feature set spanning Secure Memory Encryption, SEV, SEV-ES, and SEV-SNP.
AMD's EPYC processors have gained significant market share, now holding 34% of the server segment. The new Turin lineup is expected to further strengthen AMD's position in the data center market [2].
The processors are available immediately, with support from major OEMs including Cisco, Dell, HPE, Lenovo, and Supermicro. Cloud service providers are also expected to offer instances powered by these new chips [4].
With the launch of the 5th Gen EPYC 'Turin' processors, AMD continues to push the boundaries of server CPU performance and efficiency. As data centers increasingly focus on AI and high-performance computing workloads, these new processors are poised to play a crucial role in shaping the future of enterprise and cloud computing infrastructure.