4 Sources
[1]
AMD launches Radeon AI Pro R9700 to challenge Nvidia's AI market dominance
AMD has been busy at Computex 2025, where the chipmaker unveiled the Radeon RX 9060 XT and the Ryzen Threadripper 9000 series. To cap off its series of announcements, AMD introduced the Radeon AI Pro R9700, a PCIe 5.0 graphics card designed specifically for professional and workstation users.

RDNA 4 is an architecture geared towards gaming, but that doesn't mean AMD can't apply it to professional-grade graphics cards. For instance, RDNA 3 saw the mainstream Radeon RX 7000 series coexist with the Radeon Pro W7000 series, and the same will happen with RDNA 4. AMD has already unveiled four RDNA 4-powered gaming graphics cards, yet the Radeon AI Pro R9700 is the first RDNA 4 professional graphics card to enter the market. The new workstation card aims to replace the RDNA 3-powered Radeon Pro W7800, which has been serving professionals since 2023.

The Radeon AI Pro R9700 utilizes the Navi 48 silicon, the largest RDNA 4 die to date at 357 mm² and home to 53.9 billion transistors. Navi 48 also powers the Radeon RX 9070 series. It is substantially smaller than the last-generation Navi 31, which measures 529 mm² with 57.7 billion transistors; it's nothing short of impressive that Navi 48 is roughly 33% smaller yet still carries 93% of Navi 31's transistors. Navi 48, built on TSMC's N4P (4nm) FinFET process node, uses a monolithic design. By contrast, Navi 31 is an MCM (Multi-Chip Module) design that pairs a central graphics die with memory cache chiplets, which is why it is so large overall: the GCD (Graphics Complex Die) alone measures 304.35 mm², while each of the six MCDs (Memory Cache Dies) measures 37.52 mm². With Navi 48, AMD returned to a monolithic die and, with N4P's help, reduced the die size by 33%. At the same time, Navi 48 is up to 38% denser than Navi 31: 151 million transistors per mm² versus 109.1 million transistors per mm².

In terms of composition, Navi 48 features 64 RDNA 4 Compute Units (CUs), for a maximum of 4,096 Streaming Processors (SPs). In contrast, Navi 31 is equipped with 96 RDNA 3 CUs, for a total of 6,144 SPs. More CUs don't necessarily mean more performance, since RDNA 4 delivers a considerable generation-over-generation uplift over RDNA 3.

As usual, AMD didn't reveal the Radeon AI Pro R9700's full specifications. However, the chipmaker did boast about the graphics card's 128 AI accelerators, meaning it leverages the full Navi 48 silicon. That means the Radeon AI Pro R9700 packs 4,096 SPs, 9% fewer than the Radeon Pro W7800, and correspondingly 9% fewer AI accelerators. In the Radeon AI Pro R9700's defense, the CUs are RDNA 4 and the AI accelerators are second generation. Regarding FP16 performance, the Radeon AI Pro R9700 peaks at 96 TFLOPS, 6% faster than the Radeon Pro W7800, and AMD rates the card at 1,531 TOPS of AI performance. AMD claims the Radeon AI Pro R9700 offers 2X the performance of the Radeon Pro W7800 in DeepSeek R1 Distill Llama 8B. Curiously, AMD also compared the Radeon AI Pro R9700 to the GeForce RTX 5080; tested in a few large AI models, the Radeon AI Pro R9700 delivered up to 5X higher performance than the RTX 5080. The Radeon AI Pro R9700 is equipped with 32GB of GDDR6 memory.
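As a quick sanity check, the size and density deltas quoted above can be reproduced directly from the published transistor counts and die areas; this is simple arithmetic on the article's own numbers, nothing AMD-specific.

```python
# Reproduce the die-size and density comparison from the quoted figures.
navi48_transistors, navi48_area_mm2 = 53.9e9, 357
navi31_transistors, navi31_area_mm2 = 57.7e9, 529

navi48_density = navi48_transistors / navi48_area_mm2 / 1e6   # ~151 Mtr/mm^2
navi31_density = navi31_transistors / navi31_area_mm2 / 1e6   # ~109 Mtr/mm^2

print(f"Navi 48 density: {navi48_density:.1f} Mtr/mm^2")
print(f"Navi 31 density: {navi31_density:.1f} Mtr/mm^2")
print(f"Navi 48 is {(1 - navi48_area_mm2 / navi31_area_mm2) * 100:.0f}% smaller, "
      f"{(navi48_density / navi31_density - 1) * 100:.0f}% denser, "
      f"and keeps {navi48_transistors / navi31_transistors * 100:.0f}% of the transistors")
```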
AMD has not disclosed the speed of the memory chips or the width of the memory interface. Given that the Radeon Pro W7800 features 18 Gbps GDDR6, it is reasonable to expect the Radeon AI Pro R9700 to use faster memory chips (a rough bandwidth estimate under a few assumed speeds follows at the end of this piece). With 32GB of onboard memory, the Radeon AI Pro R9700 can tackle most AI models; it matches the capacity of the Radeon Pro W7800 32GB but falls short of the 48GB variant.

The Radeon AI Pro R9700's typical blower-type design will let users run up to four of them inside a single system, such as AMD's Ryzen Threadripper platform, which has good multi-GPU support. With four cards, users get access to 128GB of memory, enough for heavy models that exceed 100GB of VRAM usage. The Radeon AI Pro R9700 has a 300W TBP (Total Board Power), 15% higher than the Radeon Pro W7800 32GB and 7% higher than the Radeon Pro W7800 48GB.

Like most workstation-grade graphics cards, the Radeon AI Pro R9700 has its power connector at the rear. However, AMD has not indicated the type of power connector it employs, and it's not visible in the provided renders. Considering the 300W rating, we would anticipate two 8-pin PCIe power connectors. The renders show the graphics card with four DisplayPort outputs; since it uses the RDNA 4 architecture, these should conform to the DisplayPort 2.1a standard.

AMD has announced that the Radeon AI Pro R9700 will launch in July, but it has not revealed pricing details. For reference, the Radeon Pro W7800 debuted at $2,499 two years ago and has maintained most of its value, currently priced at $2,399. We will learn the price of the Radeon AI Pro R9700 as its launch approaches in just a couple of months. AMD anticipates a healthy supply of the Radeon AI Pro R9700 from its partners, including ASRock, Asus, Gigabyte, PowerColor, Sapphire, XFX, and Yeston.
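Since the memory speed and bus width are left open here, the sketch below estimates bandwidth under the assumption of the 256-bit bus reported in other coverage of the card and a few plausible GDDR6 speeds; none of these values are confirmed by AMD.

```python
# GDDR6 bandwidth in GB/s = (bus width in bits / 8) * per-pin data rate in Gbps.
# The 256-bit bus comes from other coverage of the card; the speeds are assumptions.
bus_width_bits = 256
for speed_gbps in (18, 20, 22):
    bandwidth_gbs = bus_width_bits / 8 * speed_gbps
    print(f"{speed_gbps} Gbps GDDR6 on a {bus_width_bits}-bit bus -> {bandwidth_gbs:.0f} GB/s")
# For reference, the Radeon Pro W7800's 18 Gbps GDDR6 on a 256-bit bus works out to 576 GB/s.
```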
[2]
AMD Announces Multi-GPU Capable Radeon AI PRO R9700 32 GB
When you set up a multi-GPU rig with the Radeon AI PRO R9700, you're essentially pooling the memory and compute power of each card into one big resource that your workstation can tap into. Imagine two or three of these cards working side by side: you instantly get 64 GB or 96 GB of combined frame buffer, letting you handle monster-sized AI models, real-time 3D scenes, or parallel simulations without breaking a sweat. It's a plug-and-play way to ramp up your system's AI research or graphics pipeline -- just slot in another R9700 and you're ready for more demanding projects.

Under the hood, the R9700 packs AMD's second-generation AI accelerators built into RDNA 4, which excel at tensor math, matrix operations, and shader workloads. You also get a hefty 32 GB of GDDR6 memory onboard, plus PCI Express Gen 5 lanes for extra throughput. If you've ever had to wait for huge datasets to move back and forth between the CPU and GPU, Gen 5's doubled bandwidth cuts those delays, so your local model inference and fine-tuning loops run faster and more smoothly. No more throttling performance because of a data-transfer bottleneck.

On the software side, the card is fully compatible with ROCm on Linux right now, and AMD says a Windows driver is coming soon. Running everything locally means you keep your data under lock and key -- no pushing sensitive info out to the cloud and hoping the network behaves. For many organizations with strict security rules, that's a big win. And since the workloads live on your machine, you avoid the hiccups that can come with online services, so what you see in your benchmarks is what you get in production.
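To make the pooling idea concrete, here is a minimal sketch of how a multi-card box looks from Python, assuming PyTorch's ROCm build (which surfaces AMD GPUs through the regular torch.cuda API). The enumeration is generic rather than R9700-specific, and actually sharding a model across the cards would be left to tooling such as Hugging Face Accelerate's device_map="auto".

```python
import torch

# Under a ROCm build of PyTorch, AMD GPUs show up through the torch.cuda namespace.
num_gpus = torch.cuda.device_count()
total_vram_gb = 0.0
for i in range(num_gpus):
    props = torch.cuda.get_device_properties(i)
    vram_gb = props.total_memory / 1024**3
    total_vram_gb += vram_gb
    print(f"GPU {i}: {props.name}, {vram_gb:.0f} GB")

# With two or three 32 GB cards this reports roughly 64 GB or 96 GB of combined frame buffer.
print(f"Combined frame buffer across {num_gpus} GPU(s): {total_vram_gb:.0f} GB")
```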
[3]
AMD Radeon AI PRO R9700 GPU announced, RDNA 4 and 32GB of memory
AMD has announced its latest GPU for AI workstations, the RDNA 4-powered Radeon AI PRO R9700, which arrives with 32GB of GDDR6 memory. The new AMD Radeon AI PRO R9700 was announced at Computex 2025, bringing RDNA 4 to AI-powered workstations. With 32GB of GDDR6 memory, 96 TFLOPS of peak half-precision (FP16) compute, and 1,531 TOPS of INT4 Sparse AI performance, you're looking at up to 2X better performance than the previous generation's AMD Radeon PRO W7800 32GB GPU.

With 32GB of VRAM, AMD notes that the Radeon AI PRO R9700 is better equipped for running advanced local text-to-image AI models and LLMs like DeepSeek R1 Distill Qwen 32B Q6 and Mistral Small 3.1 24B Instruct 2503 Q8. With expanded AMD ROCm on Radeon, there'll be support for a "broader range of AI and compute workloads." The first chart in AMD's presentation compares average tokens-per-second performance against the GeForce RTX 5080 16GB, showing an increase of up to 496%.

The Radeon AI PRO R9700 is also scalable: up to four GPUs in a single workstation offer enough memory to handle 123-billion and 70-billion-parameter models, which is some serious AI capacity. AMD notes that the Radeon AI PRO R9700 will launch in July 2025 (pricing is TBC), with several models from its partners - ASRock, ASUS, GIGABYTE, PowerColor, Sapphire, XFX, and Yeston - set to become available.
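As a rough illustration of why those particular models fit in 32GB, weight memory can be estimated as parameter count x bits per weight / 8, leaving headroom for the KV cache and activations. The bits-per-weight values below are ballpark figures for Q6/Q8 GGUF-style quantization, not numbers published by AMD.

```python
# Ballpark weight-memory estimates for quantized LLMs (assumed bits/weight, not official figures).
def weight_gib(params_billion: float, bits_per_weight: float) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1024**3

models = {
    "DeepSeek R1 Distill Qwen 32B @ ~Q6": (32, 6.5),
    "Mistral Small 3.1 24B Instruct @ ~Q8": (24, 8.5),
}
for name, (params_b, bpw) in models.items():
    print(f"{name}: ~{weight_gib(params_b, bpw):.0f} GiB of weights, "
          f"leaving room for the KV cache within a 32 GB card")
```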
[4]
AMD Unleashes Radeon AI PRO R9700 GPU With 32 GB VRAM, 128 AI Cores & 300W TDP: 2x Faster Than Last-Gen W7800 In DeepSeek R1
AMD is unleashing its brand-new Radeon AI PRO R9700 GPU, aimed purely at the AI segment with 32 GB of VRAM and performance twice that of the previous generation. As expected, this is AMD's first RDNA 4 GPU with 32 GB of VRAM, and with it AMD is also unveiling its new product branding, Radeon AI PRO, which replaces the older Radeon Pro WX and Radeon PRO offerings.

The first product in the lineup is the AMD Radeon AI PRO R9700, which features the same GPU specifications as the RX 9070 XT but is optimized for AI workloads. The chip in use is Navi 48, which comes with 64 compute units or 4,096 stream processors. The GPU is loaded with 128 AI accelerators and has a TBP of up to 300W. In terms of memory, the AMD Radeon AI PRO R9700 is equipped with 32 GB of GDDR6 running across a 256-bit bus, essentially doubling the VRAM of the RX 9070 XT. Other performance figures shared by AMD include 96 TFLOPS of FP16 compute and 1,531 TOPS of INT4 (Sparse).

The goal of the AMD Radeon AI PRO R9700 is to let high-quality AI workloads run efficiently on a workstation. That's why it has been equipped with 32 GB of VRAM, an optimal amount for most advanced local AI workloads, such as DeepSeek R1 Distill Qwen 32B Q6, Mistral Small 3.1 24B Instruct 2503 Q8, FLUX.1 Schnell, and SD 3.5 Medium. As for performance, AMD states that the Radeon AI PRO R9700 is twice as fast as the Radeon PRO W7800 32 GB in DeepSeek R1, and the company also shows a few measurements against the RTX 5080, which features a 16 GB VRAM buffer. That 16 GB may not be enough for AI models that require more memory, which is why the R9700 is shown to be up to 5x faster.

But it doesn't end there: the AMD Radeon AI PRO R9700 can also be scaled in 4-way multi-GPU configurations on a modern PCIe 5.0 platform. This lets users harness a massive 128 GB pool, which can handle large models such as Mistral 123B and DeepSeek R1 70B; these models can consume up to 112-116 GB of VRAM.

Lastly, for availability, the AMD Radeon AI PRO R9700 will be available in July this year through leading partners such as ASUS, ASRock, Gigabyte, PowerColor, Sapphire, XFX, and Yeston. The card will be a dual-slot design with a blower cooler.
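For context, the 96 TFLOPS FP16 figure lines up with a full Navi 48 configuration if you assume RDNA 4's dual-issue FP32 ALUs, packed FP16 at twice the FP32 rate, and a boost clock near 2.95 GHz; AMD has not published the R9700's clocks, so the clock below is an assumption rather than a spec.

```python
# Back-of-the-envelope peak FP16 estimate for a full Navi 48 GPU.
# Assumptions (not AMD-published for the R9700): FMA counts as 2 ops, dual-issue
# doubles FP32 throughput, packed FP16 doubles it again, ~2.95 GHz boost clock.
stream_processors = 4096
boost_clock_ghz = 2.95                     # assumed boost clock
fp16_ops_per_sp_per_clock = 2 * 2 * 2      # FMA x dual-issue x packed FP16

fp16_tflops = stream_processors * fp16_ops_per_sp_per_clock * boost_clock_ghz / 1000
print(f"Estimated peak FP16: ~{fp16_tflops:.0f} TFLOPS")  # ~97, in line with AMD's 96 TFLOPS figure
```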
AMD unveils the Radeon AI Pro R9700, a high-performance GPU designed for AI workloads, featuring 32GB VRAM and impressive performance gains over previous generations and competitors.
AMD has announced its latest high-performance GPU, the Radeon AI Pro R9700, at Computex 2025. This new offering is specifically designed for AI workloads and professional users, marking AMD's entry into the competitive AI GPU market currently dominated by Nvidia [1][2].
The Radeon AI Pro R9700 is built on AMD's RDNA 4 architecture, utilizing the Navi 48 silicon. The chip's key specifications include:
- 64 RDNA 4 Compute Units with 4,096 stream processors
- 128 second-generation AI accelerators
- 32GB of GDDR6 memory on a 256-bit bus
- PCIe 5.0 interface
- 300W Total Board Power
The Navi 48 silicon is a significant advancement, featuring 53.9 billion transistors on a 357 mm² die. This represents a 33% reduction in size compared to the previous generation Navi 31, while maintaining 93% of its transistor count [1].
AMD claims the Radeon AI Pro R9700 offers substantial performance improvements:
- 96 TFLOPS of peak FP16 compute and 1,531 TOPS of INT4 sparse AI performance
- Up to 2X the performance of the Radeon Pro W7800 32GB in DeepSeek R1 Distill Llama 8B
- Up to roughly 5X (496%) higher tokens-per-second than the GeForce RTX 5080 16GB in large, memory-hungry models
The 32GB of VRAM makes the R9700 suitable for running advanced local text-to-image AI models and Large Language Models (LLMs) such as DeepSeek R1 Distill Qwen 32B Q6 and Mistral Small 3.1 24B Instruct 2503 Q8 [3].
One of the R9700's key features is its multi-GPU support. Users can combine up to four cards in a single system, providing:
- Up to 128GB of combined GDDR6 memory
- Enough capacity for 70-billion to 123-billion-parameter models, which can consume 112-116GB of VRAM
This scalability makes the R9700 particularly attractive for organizations requiring significant AI computing power.
The Radeon AI Pro R9700 is fully compatible with ROCm on Linux, with Windows driver support coming soon. This local processing capability offers enhanced data security, allowing organizations to keep sensitive information on-premises rather than relying on cloud services [2].
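For readers planning a Linux deployment, a quick way to confirm that a ROCm-backed stack is in place is sketched below; this is a generic check assuming the ROCm build of PyTorch is installed, not an AMD-specified procedure.

```python
import torch

# A ROCm build of PyTorch reports a HIP runtime version; CUDA builds report None here.
print("HIP runtime:", torch.version.hip)
print("GPU backend available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device 0:", torch.cuda.get_device_name(0))
```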
AMD is positioning the Radeon AI Pro R9700 as a direct competitor to Nvidia's offerings in the AI GPU market. The card is scheduled for release in July 2025, with various models available from partners including ASRock, ASUS, GIGABYTE, PowerColor, Sapphire, XFX, and Yeston [3][4].
While pricing details have not been disclosed, the previous generation Radeon Pro W7800 launched at $2,499, which may provide a reference point for the R9700's potential price range [1].
The introduction of the Radeon AI Pro R9700 represents a significant move by AMD to challenge Nvidia's dominance in the AI GPU sector. With its impressive specifications and performance claims, the R9700 could disrupt the market and provide AI researchers and professionals with a powerful alternative for their computing needs [1][2][3][4].