5 Sources
[1]
AMD launches Radeon AI Pro R9700 to challenge Nvidia's AI market dominance
AMD has been busy at Computex 2025, where the chipmaker unveiled the Radeon RX 9060 XT and the Ryzen Threadripper 9000 series. To cap off its series of announcements, AMD introduced the Radeon AI Pro R9700, a PCIe 5.0 graphics card designed specifically for professional and workstation users.

RDNA 4 is an architecture geared toward gaming, but that doesn't mean AMD can't apply it to professional-grade graphics cards. With RDNA 3, for instance, the mainstream Radeon RX 7000 series successfully coexisted with the Radeon Pro W7000 series, and the same will happen with RDNA 4. AMD has already unveiled four RDNA 4-powered gaming graphics cards, yet the Radeon AI Pro R9700 is the first RDNA 4 professional graphics card to enter the market. The new workstation card is set to replace the RDNA 3-powered Radeon Pro W7800, which has been serving professionals since 2023.

The Radeon AI Pro R9700 utilizes the Navi 48 silicon, currently the largest RDNA 4 die, measuring 357 mm² and housing 53.9 billion transistors. Navi 48 is also found in the Radeon RX 9070 series. It's substantially smaller than the last-generation Navi 31 silicon, which spans 529 mm² with 57.7 billion transistors. It's nothing short of impressive that Navi 48 is roughly 33% smaller yet still packs 93% of Navi 31's transistors. Navi 48, a product of TSMC's N4P (4nm) FinFET process node, adheres to a monolithic design. Navi 31, by contrast, features an MCM (Multi-Chip Module) design in which memory cache chiplets surround and connect to a central graphics die, which is why it is so large: the GCD (Graphics Compute Die) alone measures 304.35 mm², and each of the six MCDs (Memory Cache Dies) adds 37.52 mm². With Navi 48, AMD returned to a monolithic die and, with N4P's help, reduced the die size by 33%. At the same time, Navi 48 is roughly 38% denser than Navi 31: 151 million transistors per mm² versus 109.1 million (see the arithmetic sketch after this article).

In terms of composition, Navi 48 features 64 RDNA 4 Compute Units (CUs) for a maximum of 4,096 Streaming Processors (SPs). Navi 31, by comparison, is equipped with 96 RDNA 3 CUs for a total of 6,144 SPs. More CUs don't necessarily mean more performance, since RDNA 4 delivers a considerable generation-over-generation uplift over RDNA 3.

As usual, AMD didn't reveal the Radeon AI Pro R9700's complete specifications. However, the chipmaker did highlight the graphics card's 128 AI accelerators, which means it uses the full Navi 48 silicon. That puts the Radeon AI Pro R9700 at 4,096 SPs, roughly 9% fewer than the Radeon Pro W7800, and correspondingly about 9% fewer AI accelerators. In the Radeon AI Pro R9700's defense, the CUs are RDNA 4 and the AI accelerators are second generation. In FP16 performance, the Radeon AI Pro R9700 peaks at 96 TFLOPS, 6% faster than the Radeon Pro W7800, and AMD rates the card at 1,531 TOPS of AI performance. AMD claims the Radeon AI Pro R9700 offers 2X the performance of the Radeon Pro W7800 in DeepSeek R1 Distill Llama 8B. Curiously, AMD also compared the Radeon AI Pro R9700 to the GeForce RTX 5080: tested in a few large AI models, the Radeon AI Pro R9700 delivered up to 5X higher performance than the RTX 5080. The Radeon AI Pro R9700 is equipped with 32GB of GDDR6 memory.
AMD has not disclosed the speed of the memory chips or the width of the memory interface. Given that the Radeon Pro W7800 features 18 Gbps GDDR6, it's reasonable to expect the Radeon AI Pro R9700 to use faster memory chips. With 32GB of onboard memory, the Radeon AI Pro R9700 can tackle most AI models. That matches the capacity of the Radeon Pro W7800 32GB, though it falls short of the 48GB variant.

The Radeon AI Pro R9700's blower-style design lets users install up to four cards in a single system, such as AMD's Ryzen Threadripper platform, which has good multi-GPU support. With four cards, users get access to 128GB of combined memory, enough for heavy models that exceed 100GB of VRAM usage.

The Radeon AI Pro R9700 has a 300W TBP (Total Board Power), 15% higher than the Radeon Pro W7800 32GB and 7% higher than the Radeon Pro W7800 48GB. Like most workstation-grade graphics cards, the Radeon AI Pro R9700 has its power connector at the rear. AMD has not indicated the type of power connector, and it's not visible in the provided renders; considering the 300W rating, we would expect two 8-pin PCIe power connectors. The renders show the graphics card with four DisplayPort outputs, and since it utilizes the RDNA 4 architecture, these should conform to the DisplayPort 2.1a standard.

AMD has announced that the Radeon AI Pro R9700 will launch in July, but it has not revealed pricing. For reference, the Radeon Pro W7800 debuted at $2,499 two years ago and has retained most of its value, currently selling for $2,399. We will learn the Radeon AI Pro R9700's price as its launch approaches in just a couple of months. AMD anticipates a healthy supply of the Radeon AI Pro R9700 from its partners, including ASRock, Asus, Gigabyte, PowerColor, Sapphire, XFX, and Yeston.
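As a quick sanity check of the die figures quoted in the article above (357 mm² and 53.9 billion transistors for Navi 48 versus 529 mm² and 57.7 billion for Navi 31), the short Python sketch below reproduces the percentage comparisons. It is only arithmetic over the published numbers; nothing here comes from AMD beyond those figures.

```python
# Back-of-envelope check of the die comparisons quoted above.
# Inputs are the published totals for Navi 48 and Navi 31.

navi48_area_mm2 = 357.0          # monolithic die, TSMC N4P
navi48_transistors = 53.9e9

navi31_area_mm2 = 529.0          # GCD + 6 MCDs combined
navi31_transistors = 57.7e9

# Die size reduction (~33% smaller)
size_reduction = 1 - navi48_area_mm2 / navi31_area_mm2       # ~0.33

# Transistor retention (~93% of Navi 31's budget)
transistor_ratio = navi48_transistors / navi31_transistors   # ~0.93

# Density in millions of transistors per mm²
navi48_density = navi48_transistors / navi48_area_mm2 / 1e6  # ~151.0
navi31_density = navi31_transistors / navi31_area_mm2 / 1e6  # ~109.1
density_gain = navi48_density / navi31_density - 1           # ~0.38 (38% denser)

print(f"Navi 48 is {size_reduction:.0%} smaller, keeps {transistor_ratio:.0%} of the transistors")
print(f"Density: {navi48_density:.0f} vs {navi31_density:.1f} Mtr/mm^2 (+{density_gain:.0%})")
```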
[2]
AMD reuses iconic 9700 branding for a modern workstation GPU focused on AI
The new workstation-class GPU shares its name with a 20-year-old ATI card.

At Computex 2025, AMD announced the Radeon AI Pro R9700, a workstation GPU aimed at local AI tasks and multi-GPU compute environments. For those familiar with the history of graphics cards, the name might ring a bell. Over 20 years ago, the original Radeon 9700 Pro marked a turning point for ATI. It was one of the first GPUs to beat Nvidia convincingly in both performance and delivery, and its launch back in 2002 helped shift market dynamics. Fast forward to today, and AMD, which acquired ATI for $5.4 billion in 2006, is reusing the 9700 name for a very different card. The AI Pro R9700 is not for gamers, but for developers and professionals working with large-scale AI models.

The Radeon AI Pro R9700 features 128 dedicated AI accelerators, 32GB of GDDR6 memory, and a PCIe Gen 5 interface. Power draw is rated at 300W. AMD says it can hit 96 teraflops of FP16 performance and deliver 1,531 TOPS for AI inference. Unlike GPUs built for rendering or gaming, this one is tuned for local inference and training. AMD claims it can run models with up to 32 billion parameters without cloud offload; in a system with four cards, that scales up to 123 billion. The AI Pro R9700 is optimized for multi-GPU configurations and workloads like LLM training, simulation, and AI-accelerated rendering. It ships with ROCm support on Linux, with Windows support expected later. Availability is set for July 2025.

While the AI Pro R9700 was AMD's headline release for professional AI workloads at Computex, the Ryzen Threadripper 9000 Series and RX 9060 XT GPU rounded out the line-up with options aimed at creators, enthusiasts, and gamers.
[3]
AMD Announces Multi-GPU Capable Radeon AI PRO R9700 32 GB
When you set up a multi-GPU rig with the Radeon AI PRO R9700, you're basically pooling both memory and compute power from each card into one big chunk that your workstation can tap into. Imagine two or three of these cards working side by side: you'll instantly get 64 GB or 96 GB of combined frame buffer, letting you handle monster-sized AI models, real-time 3D scenes, or parallel simulations without breaking a sweat. It's a plug-and-play way to ramp up your system's AI research or graphics pipeline -- just slot in another R9700 and you're ready for more demanding projects.

Under the hood, the R9700 packs AMD's second-gen RDNA 4 AI engines, which are really good at tensor math, matrix ops, and shader workloads. You also get a hefty 32 GB of GDDR6 memory onboard, plus PCI Express Gen 5 lanes for extra throughput. If you've ever had to wait for huge datasets to move back and forth between the CPU and GPU, Gen 5's doubled bandwidth cuts those delays, so your local model inference and fine-tuning loops run faster and more smoothly. No more throttling performance because of a data-transfer bottleneck.

On the software side, the card is fully supported by ROCm on Linux right now, and AMD says a Windows driver is coming soon. Running everything locally means you keep your data under lock and key -- no pushing sensitive info out to the cloud and hoping the network behaves. For many organizations with strict security rules, that's a big win. And since the workloads live on your machine, you avoid the hiccups that can come from online services, so what you see in your benchmarks is exactly what you get in production.
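On ROCm-enabled Linux systems, AMD GPUs are typically exposed to PyTorch through its torch.cuda (HIP) interface, so one quick way to confirm how much pooled memory a multi-R9700 box actually offers is to enumerate the devices. This is a minimal sketch assuming a ROCm build of PyTorch is installed; it is not AMD-provided tooling, and the 128 GiB figure in the comment simply reflects four 32 GB cards as described above.

```python
# Minimal sketch: enumerate ROCm-visible GPUs and total their memory.
# Assumes a ROCm build of PyTorch on Linux; AMD GPUs show up through
# PyTorch's torch.cuda (HIP) interface.
import torch

if not torch.cuda.is_available():
    raise SystemExit("No ROCm/HIP devices visible to PyTorch")

total_bytes = 0
for idx in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(idx)
    total_bytes += props.total_memory
    print(f"GPU {idx}: {props.name}, {props.total_memory / 2**30:.0f} GiB")

# With four 32 GB R9700s this should report roughly 128 GiB of pooled VRAM,
# which a framework can then shard a large model across (one shard per card).
print(f"Combined VRAM: {total_bytes / 2**30:.0f} GiB across {torch.cuda.device_count()} GPUs")
```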
[4]
AMD Radeon AI PRO R9700 GPU announced, RDNA 4 and 32GB of memory
AMD has announced its latest GPU for AI workstations, the RDNA 4-powered Radeon AI PRO R9700, which arrives with 32GB of GDDR6 memory.

The new AMD Radeon AI PRO R9700 GPU has been announced at Computex 2025, bringing RDNA 4 to AI-powered workstations. With 32GB of GDDR6 memory, 96 TFLOPS of peak half-precision (FP16) compute, and 1,531 TOPS of INT4 Sparse AI performance, you're looking at up to 2X better performance than the previous generation's AMD Radeon PRO W7800 32GB GPU. The AMD Radeon AI PRO R9700 GPU will become available in July 2025.

With 32GB of VRAM, AMD notes that the Radeon AI PRO R9700 is better equipped for running advanced local text-to-image AI models and LLMs like DeepSeek R1 Distill Qwen 32B Q6 and Mistral Small 3.1 24B Instruct 2503 Q8. With expanded AMD ROCm on Radeon, there'll be support for a "broader range of AI and compute workloads." The first chart in the presentation for the new AI workstation GPU compares average tokens-per-second performance against the GeForce RTX 5080, showing the R9700 delivering up to 496% of the RTX 5080's performance (chart: Radeon AI PRO R9700 32GB versus GeForce RTX 5080 16GB, image credit: AMD).

The Radeon AI PRO R9700 is also scalable: up to four GPUs in a single workstation offer enough memory to handle 123 billion and 70 billion parameter models, which is some serious AI performance. AMD notes that the Radeon AI PRO R9700 will launch in July 2025 (pricing is TBC), with several models from its partners - ASRock, ASUS, GIGABYTE, PowerColor, Sapphire, XFX, and Yeston - set to become available.
[5]
AMD Unleashes Radeon AI PRO R9700 GPU With 32 GB VRAM, 128 AI Cores & 300W TDP: 2x Faster Than Last-Gen W7800 In DeepSeek R1
AMD is unleashing its brand-new Radeon AI PRO R9700 GPU, aimed purely at the AI segment with 32 GB of VRAM and performance twice that of the previous generation.

As expected, AMD has introduced its first RDNA 4 GPU with 32 GB of VRAM, aimed squarely at the AI segment. With this new offering, AMD is also unveiling its brand-new product branding, Radeon AI PRO, which replaces the older Radeon WX & Radeon PRO offerings. The first product in the lineup is the AMD Radeon AI PRO R9700, which features the same specifications as the RX 9070 XT but is optimized for AI workloads. The chip being used is Navi 48, which comes with 64 compute units or 4,096 stream processors. The GPU is loaded with 128 AI accelerators and has a TBP of up to 300W. In terms of memory, the AMD Radeon AI PRO R9700 is equipped with 32 GB of GDDR6 running across a 256-bit bus, essentially doubling the VRAM featured on the RX 9070 XT. Other performance figures shared by AMD include 96 TFLOPs of FP16 compute and 1,531 TOPS INT4 (Sparse).

The goal of the AMD Radeon AI PRO R9700 GPU is to let high-quality AI models run efficiently on local hardware. That's why it has been equipped with 32 GB of VRAM, an optimal amount for most advanced local AI workloads, such as DeepSeek R1 Distill Qwen 32B Q6, Mistral Small 3.1 24B Instruct 2503 Q8, Flux 1 Schnell, and SD 3.5 Medium. As for performance, AMD states that the Radeon AI PRO R9700 is twice as fast as the Radeon PRO W7800 32 GB GPU in DeepSeek R1, while the company also shows a few measurements against the RTX 5080, which features a 16 GB VRAM buffer. 16 GB of VRAM might not be suitable for AI models that require more memory, and that's why the R9700 is shown to be up to 5x faster.

But it doesn't end here: the AMD Radeon AI PRO R9700 can also be scaled in 4-way multi-GPU configurations on a modern PCIe 5.0 platform. This enables users to harness a massive 128 GB memory pool, which can handle larger models such as Mistral 123B & DeepSeek R1 70B; these models can consume up to 112-116 GB of VRAM. Lastly, for availability, the AMD Radeon AI PRO R9700 GPU will be available in July this year through leading partners such as ASUS, ASRock, Gigabyte, PowerColor, Sapphire, XFX, and Yeston. The card is going to be a dual-slot design with a blower cooler.
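For a rough sense of why those models land in the 112-116 GB range, model weights take roughly parameters × bits-per-weight ÷ 8 bytes. The sketch below applies that rule of thumb; the ~7.5-bit figure for Mistral 123B is an assumption chosen only to illustrate how the quoted range could arise, and real usage also adds KV cache and activation overhead on top of the weights.

```python
# Weights-only VRAM estimate: params * bits-per-weight / 8 bytes. Actual usage
# also includes KV cache and activations, so treat these as lower bounds.

CARD_GB = 32  # per Radeon AI PRO R9700

def weights_gb(params_billions: float, bits_per_weight: float) -> float:
    # 1e9 parameters at (bits/8) bytes each is roughly params_billions * bits/8 GB
    return params_billions * bits_per_weight / 8

models = [
    # (name, parameters in billions, bits per weight)
    ("DeepSeek R1 Distill Qwen 32B, Q6", 32, 6),     # quantization named by AMD
    ("Mistral Small 3.1 24B Instruct, Q8", 24, 8),   # quantization named by AMD
    ("Mistral 123B, ~7.5-bit (assumed)", 123, 7.5),  # assumption matching the ~112-116 GB claim
]

for name, params_b, bits in models:
    gb = weights_gb(params_b, bits)
    cards = int(-(-gb // CARD_GB))  # ceiling division: how many 32 GB cards are needed
    print(f"{name}: ~{gb:.0f} GB of weights -> {cards} x {CARD_GB} GB card(s)")
```

Run as written, the two single-card models come out around 24 GB of weights (comfortably inside 32 GB), while the 123B example lands near 115 GB, which is why it needs the four-card, 128 GB pool described above.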
AMD unveils the Radeon AI Pro R9700, a powerful RDNA 4-based GPU designed for AI workloads, featuring 32GB GDDR6 memory and impressive performance gains over its predecessor and NVIDIA's offerings.
AMD has made a significant move in the AI GPU market with the announcement of its Radeon AI Pro R9700 at Computex 2025. This new graphics card, based on the RDNA 4 architecture, is specifically designed for professional and workstation users, with a focus on AI workloads [1][2].
The Radeon AI Pro R9700 is built on TSMC's N4P (4nm) FinFET process and features the Navi 48 silicon, which is notably smaller yet denser than its predecessor. Key specifications include:
- 64 RDNA 4 Compute Units (4,096 Streaming Processors) and 128 second-generation AI accelerators
- 32GB of GDDR6 memory
- PCIe 5.0 interface
- 300W Total Board Power
In terms of performance, AMD claims the R9700 delivers:
- 96 TFLOPS of peak FP16 compute
- 1,531 TOPS of INT4 (sparse) AI performance
- Up to 2X the performance of the Radeon Pro W7800 32GB in DeepSeek R1 Distill Llama 8B
- Up to 5X the performance of the GeForce RTX 5080 in large AI models that exceed its 16GB memory
The Radeon AI Pro R9700 is optimized for local AI tasks and multi-GPU compute environments. It can handle models with up to 32 billion parameters without cloud offload, and in a system with four cards, this scales up to 123 billion parameters [2][3].
The card's multi-GPU capabilities are particularly noteworthy. Users can combine multiple R9700s to pool both memory and compute power, enabling the handling of larger AI models, real-time 3D scenes, or parallel simulations [3].
AMD has announced that the R9700 will ship with ROCm support on Linux, with Windows support expected later. This local processing capability is crucial for organizations with strict security requirements, as it allows sensitive data to remain on-premises rather than being sent to the cloud [3][5].
The Radeon AI Pro R9700 is positioned to challenge NVIDIA's dominance in the AI GPU market. It's particularly well-suited for workloads like LLM training, simulation, and AI-accelerated rendering [2].
AMD has announced that the R9700 will be available in July 2025, with partners including ASRock, Asus, Gigabyte, PowerColor, Sapphire, XFX, and Yeston set to offer models [1][4].
Interestingly, AMD has chosen to revive the iconic "9700" branding, which was last used over 20 years ago for the ATI Radeon 9700 Pro. That card was a turning point for ATI, successfully challenging NVIDIA's market position. With this new release, AMD seems to be aiming for a similar impact in the AI GPU market [2].
As the AI hardware landscape continues to evolve rapidly, the Radeon AI Pro R9700 represents AMD's strong push into the professional AI GPU space, offering a compelling alternative to NVIDIA's offerings for organizations looking to implement or expand their AI capabilities.