4 Sources
[1]
Nvidia CUDA gets RISC-V support
Nvidia is officially bringing its CUDA software stack to RISC-V CPUs. CUDA is Nvidia's high-level software abstraction layer for apps to interact with its GPUs - without CUDA support on a CPU architecture, GPU functionality is limited at best. The AI arms dealer announced RISC-V support on stage during the RISC-V Summit in China on Friday, and will eventually enable processors based on the open instruction set architecture (ISA) to serve as a host CPU for Nvidia GPUs.
Interest in RISC-V as an alternative to Arm and x86-based cores has gained momentum in recent years. However, high-performance RISC-V processors appropriate for the datacenter remain few and far between. In this respect, Nvidia's decision to announce CUDA support for the ISA in China is fitting. Over the past few years, the Middle Kingdom has made a concerted effort to end its reliance on Western CPUs, with RISC-V playing a central role.
Back in March, Alibaba's R&D wing XuanTie unveiled a new CPU core called the C930 aimed at server, PC, and automotive applications. Meanwhile, the Xiangshan project teased a high-performance RISC-V processor core, which it claims is within spitting distance of Arm's two-year-old Neoverse N2 cores. Whether and when we'll see an Nvidia GPU strapped to any of these chips remains to be seen, but at least it's now a possibility, especially now that Nvidia has convinced Uncle Sam to resume shipments of its China-spec H20 accelerators.
Nvidia's decision to extend support for CUDA to the RISC-V instruction set isn't all that surprising. It's not the first or even second time its devs have worked with RISC-based systems. Today, CUDA runs on both x86 and Arm64-based processors, with Nvidia's in-house Grace CPUs being the most notable. Prior to Arm, CUDA was also supported on IBM Power-based systems. You may recall that the Department of Energy's Sierra and Summit supercomputers paired Power9 processors with V100 GPUs.
Nvidia has also employed RISC-V cores in its GPUs for years now. In 2024 alone, RISC-V International estimates that Nvidia shipped more than a billion RISC-V cores, with anywhere from 10 to 40 baked into every GPU sold. As you might expect, these cores are built into the microcontrollers responsible for low-level functionality like video codecs, chip-to-chip interconnects, power management, and security, rather than orchestrating GPU workloads.
How Nvidia intends to support development on RISC-V based systems remains to be seen. One possibility is a RISC-V single board computer (SBC) melding RISC-V cores with an Nvidia GPU; it currently offers several Arm-based SBCs in this vein. Nvidia could also partner with RISC-V-based chip or server vendors to build reference designs, similar to its tie-up with Ampere back in 2019.
The Register reached out to Nvidia for comment; we'll let you know if we hear anything back. ®
[2]
Nvidia unlocks CUDA for RISC-V processors, pushing AI innovation forward
What just happened? Since its introduction in 2006, CUDA has been a proprietary technology running exclusively on Nvidia's own GPU hardware. Now, the GeForce maker appears ready to open CUDA to at least one additional processor architecture. Just don't expect to run CUDA-optimized code on AMD or Intel GPUs anytime soon.
Nvidia has officially ported its Compute Unified Device Architecture (CUDA) to RISC-V, a move announced at a recent RISC-V summit in China. According to Nvidia's Frans Sijstermans, this port enables a RISC-V CPU to act as the central application processor in CUDA-based AI systems. RISC-V International shared a slide from his presentation illustrating an ideal CUDA AI setup with GPU, CPU, and DPU (data-processing unit) components, highlighting the platform's potential to reshape AI hardware.
Sijstermans explained that a RISC-V CPU will soon be able to orchestrate an AI system by managing application software and operating system tasks. A CUDA-compatible GPU - meaning an Nvidia GPU or AI accelerator - would handle multithreaded CUDA kernel execution, while the DPU would take care of networking operations. Nvidia has yet to announce a release date for the RISC-V CUDA port.
With enough investment and development, RISC-V could eventually rival Arm in performance and efficiency. Its royalty-free model makes it particularly appealing in China and other regions affected by US restrictions on advanced chip technologies. Based on an open-source standard, the RISC-V architecture has become a popular choice for microcontrollers and other embedded systems. Several major Linux distributions now offer official support for the ISA, while open-source developers are making a significant push toward consumer-oriented applications like gaming.
Despite generating billions from AI hardware and other proprietary technologies, Nvidia has supported the RISC-V architecture for years. The company has used RISC-V cores in specialized microcontroller units since 2015, developing both hardware and software solutions in-house. Although Nvidia fully backs RISC-V as a legitimate CPU architecture, it isn't ready to bring CUDA to other third-party processors yet. Optimized code designed for CUDA still requires Nvidia hardware to run - though efforts are underway to enable CUDA binaries on non-Nvidia GPUs.
[3]
Nvidia CUDA Adds RISC‑V Support for AI and HPC Platforms
Nvidia has just made a significant change: you can now run CUDA on RISC‑V processors. Previously, CUDA needed x86 or Arm CPUs to handle system tasks and coordinate GPU work. Now, RISC‑V cores can step in as the "brains" of a CUDA system -- starting CUDA drivers, running your applications, and even managing the operating system.
Here's how it works in practice. Imagine a three‑part team: the GPU tackles heavy parallel computations, the RISC‑V CPU takes care of control and logic, and a DPU handles networking and data transfers. Together, they form a flexible, heterogeneous computing setup. You get clear separation: the CPU orchestrates, the GPU crunches numbers, and the DPU moves data -- ideal for AI inference at the edge, like on Nvidia's Jetson devices, but also potentially useful in larger data centers down the road.
RISC‑V is an open‑source ISA, giving hardware makers more freedom to design custom chips without licensing fees. By supporting RISC‑V, Nvidia taps into a growing ecosystem where companies -- especially those building specialized or region‑specific silicon -- can include CUDA acceleration without being locked into proprietary host platforms. It's a way to bridge Nvidia's established GPU ecosystem with the flexibility of open‑source hardware.
At the RISC‑V Summit China, Nvidia's vice president of hardware engineering outlined this vision, showing how CUDA components run natively on RISC‑V. The move suggests Nvidia is preparing for a future where open ISAs play a bigger role in AI and high‑performance computing. Even if RISC‑V isn't in hyperscale data centers just yet, this announcement opens the door.
For developers, the change is practical. Jetson modules that already use CUDA for edge AI workloads can now be designed with RISC‑V host chips. That means custom boards, new system‑on‑chip designs, and potentially lower costs in regions with strong RISC‑V adoption. And by bringing an open ISA into the CUDA fold, Nvidia may inspire system architects to experiment with more diverse hardware setups.
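To make that host/device split concrete, here is a minimal, illustrative CUDA C++ sketch (generic example code, not anything from Nvidia's announcement). The host-side calls are the part that runs on the CPU - today x86 or Arm, and on a RISC-V host once the port ships - while the __global__ kernel executes on the GPU:

#include <cstdio>
#include <cuda_runtime.h>

// Device code: executed by the GPU regardless of which ISA the host CPU uses.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float *buf = nullptr;

    // Host code: this is the part a RISC-V CPU would run once the toolchain
    // and driver are available; today it runs on x86 or Arm hosts.
    cudaMallocManaged(&buf, n * sizeof(float));
    for (int i = 0; i < n; ++i) buf[i] = 1.0f;

    // The CPU only orchestrates: it launches the kernel and synchronizes.
    scale<<<(n + 255) / 256, 256>>>(buf, 2.0f, n);
    cudaDeviceSynchronize();

    printf("buf[0] = %f\n", buf[0]);  // expect 2.0
    cudaFree(buf);
    return 0;
}

Nothing in the host code above is tied to a particular CPU instruction set; supporting RISC-V hosts is presumably mostly a matter of Nvidia providing the driver, runtime libraries, and compiler toolchain for riscv64 targets.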
[4]
Nvidia's CUDA platform now officially supports RISC-V CPUs
Nvidia announced CUDA platform support for the RISC-V instruction set architecture (ISA) at the 2025 RISC-V Summit in China, allowing RISC-V to serve as the primary processor in CUDA-based systems - a role previously exclusive to x86 and Arm cores. While immediate integration into hyperscale datacenters is not anticipated, the compatibility extends to CUDA-enabled edge devices, specifically Nvidia's Jetson modules.
Nvidia's engagement with RISC-V appears significant, evidenced by Frans Sijstermans, Vice President of Hardware Engineering at Nvidia, delivering the keynote address at the RISC-V Summit China. Sijstermans' presentation detailed how CUDA components integrate with RISC-V. A representative diagram showed a typical configuration in which the Graphics Processing Unit (GPU) manages parallel workloads while a RISC-V Central Processing Unit (CPU) executes the CUDA system drivers, application logic, and the operating system. This arrangement puts the CPU in full control of orchestrating GPU computations within the CUDA environment. The specific nature of these workloads was not explicitly confirmed as AI-related, though it aligns with Nvidia's focus on artificial intelligence.
The depicted system also incorporated a Data Processing Unit (DPU) dedicated to handling networking tasks. This configuration - GPU for compute, CPU for orchestration, and DPU for data movement - indicates Nvidia's strategic direction towards heterogeneous compute platforms in which a RISC-V CPU can assume a central role in managing workloads, while Nvidia's GPUs, DPUs, and networking chips handle other functions.
This move bridges Nvidia's proprietary CUDA stack with an open architecture. The integration of RISC-V expands CUDA's applicability within systems that prefer open instruction sets or require tailored processor implementations, including custom silicon designs. It also broadens the options available to Nvidia Jetson developers working on specialized or embedded computing platforms.
Nvidia announces CUDA support for RISC-V processors, enabling them to serve as host CPUs for GPU-accelerated systems. This move opens new possibilities for AI and high-performance computing, particularly in regions focusing on open-source architectures.
In a significant development for the AI and high-performance computing (HPC) landscape, Nvidia has officially announced support for its CUDA (Compute Unified Device Architecture) software stack on RISC-V CPUs. This announcement, made at the RISC-V Summit in China, marks a pivotal shift in Nvidia's strategy and opens up new possibilities for AI and HPC platforms [1][2].
Source: The Register
CUDA, Nvidia's high-level software abstraction layer for GPU interaction, has traditionally been limited to x86 and Arm-based processors. With this new support, RISC-V processors can now serve as host CPUs for Nvidia GPUs, enabling a more diverse range of system configurations [1][3].
The integration allows for a three-part heterogeneous computing setup:
GPU: handles the heavy parallel computation, executing CUDA kernels.
RISC-V CPU: runs the operating system, application logic, and CUDA drivers, orchestrating the GPU.
DPU: manages networking and data movement.
This configuration provides clear separation of tasks, making it ideal for AI inference at the edge and potentially useful in larger data centers [3][4].
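As a rough illustration of what "orchestration" means at the API level, the sketch below uses only standard, host-side CUDA runtime calls - the kind of driver initialization and device enumeration a host CPU performs before any kernels are launched. It is generic CUDA code, not anything specific to the RISC-V port:

#include <cstdio>
#include <cuda_runtime.h>

// Host-only orchestration: enumerate GPUs and report their properties.
// Calls like these are issued by the host CPU (x86, Arm, or in future
// RISC-V) to the CUDA driver; no device code is involved.
int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("No CUDA-capable GPU visible to this host.\n");
        return 1;
    }
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        printf("GPU %d: %s, %d SMs, %.1f GB memory\n",
               d, prop.name, prop.multiProcessorCount,
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}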
Nvidia's decision to support RISC-V has several significant implications:
Expanded Ecosystem: By supporting RISC-V, Nvidia taps into a growing ecosystem of open-source hardware, potentially increasing its market reach [2].
Flexibility for Hardware Makers: Companies can now include CUDA acceleration without being locked into proprietary host platforms, offering more freedom in chip design [3].
Regional Impact: This move is particularly significant for regions like China, which has been pushing to end reliance on Western CPUs. RISC-V plays a central role in this effort [1][4].
Edge Computing and IoT: The integration opens new possibilities for Nvidia's Jetson modules and other edge AI applications [3][4].
Source: Dataconomy
While high-performance RISC-V processors for datacenters are still relatively scarce, there's growing momentum in the field. Notable developments include:
Alibaba's XuanTie C930, a RISC-V CPU core aimed at server, PC, and automotive applications [1].
The Xiangshan project's high-performance RISC-V core, which its developers claim comes close to Arm's Neoverse N2 [1].
Nvidia's support for RISC-V could accelerate the development and adoption of these processors in more demanding computing environments.
It's worth noting that Nvidia's engagement with RISC-V isn't new. The company has been using RISC-V cores in its GPUs for years, primarily in microcontrollers responsible for low-level functionality. In 2024 alone, Nvidia reportedly shipped over a billion RISC-V cores integrated into its GPUs [1][2].
Source: Guru3D.com
While the immediate impact may be more visible in edge computing and specialized applications, this move sets the stage for potential broader adoption in the future. As RISC-V matures and gains performance parity with established architectures, Nvidia's early support could prove to be a strategic advantage in the evolving computing landscape [2][3][4].
The integration of CUDA with RISC-V represents a bridge between Nvidia's proprietary technology and the open-source hardware movement, potentially reshaping the future of AI and high-performance computing platforms.