Nvidia opens orders for DGX Station, a desktop supercomputer running trillion-parameter AI models


Nvidia has started accepting orders for its DGX Station, a desktop-class AI workstation powered by the GB300 Superchip. The system delivers 20 PFLOPS of compute and 748GB of unified memory, enabling developers to run trillion-parameter AI models locally without cloud infrastructure. Six major manufacturers are now offering variants of the system.

Nvidia DGX Station Now Available After Delayed Launch

Nvidia has officially opened orders for the DGX Station, a desktop supercomputer designed to bring data center performance directly to developers' desks [1]. Announced at the company's annual GTC event in San Jose, the system marks a significant shift in local AI development, allowing enterprises and researchers to run trillion-parameter AI models without relying on cloud infrastructure [2]. The DGX Station was originally scheduled for release last year but was delayed while Nvidia worked to fit the GB300 Superchip and its motherboard into a desktop case [1].

Source: VentureBeat

Six PC manufacturers are now accepting orders: Asus, Dell, HP, Gigabyte, MSI, and server provider Supermicro [1]. Each vendor has designed its own variant housed in a desktop tower case, though pricing remains undisclosed. Interested customers must fill out contact forms, and vendors are expected to follow up with details. Nvidia indicated that systems should begin shipping within weeks [1].

Source: PC Magazine

GB300 Superchip Delivers 20 PFLOPS and 748GB Unified Memory

At the heart of the DGX Station sits the GB300 Grace Blackwell Ultra Desktop Superchip, which pairs a 72-core Grace CPU based on Arm's Neoverse V2 architecture with a Blackwell Ultra GPU featuring 20,480 CUDA cores [3][4]. The system delivers up to 20 PFLOPS of AI compute at FP4 precision with sparsity, a dramatic leap from the DGX Spark's 1 PFLOP [3].

Source: Guru3D
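To make the FP4 figure concrete: 4-bit precision means each weight is stored in half a byte, trading accuracy for throughput and capacity. The sketch below is our own illustration of the general idea using a symmetric 4-bit integer grid; Nvidia's actual FP4 is a floating-point format, so treat this as a conceptual toy, not the hardware's arithmetic.

```python
# Illustrative sketch of 4-bit weight quantization (16 signed levels, -8..7).
# NOT Nvidia's FP4 format, which is floating point; this just shows how
# weights shrink to 4 bits each via a shared per-tensor scale.

def quantize_4bit(weights):
    """Map each weight to one of 16 signed integer levels (-8..7)."""
    scale = max(abs(w) for w in weights) / 7  # 7 = largest positive level
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Approximate reconstruction of the original weights."""
    return [v * scale for v in q]

weights = [0.12, -0.5, 0.33, 0.01, -0.27]
q, scale = quantize_4bit(weights)
print(q)                     # 4-bit integer codes
print(dequantize(q, scale))  # approximate originals
```

At this precision a weight costs 0.5 bytes, which is what makes trillion-parameter models plausible in under a terabyte of memory.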

The memory subsystem represents a critical breakthrough for local AI development. The DGX Station features 748GB of unified memory, comprising 252GB of HBM3e attached to the GPU and 496GB of LPDDR5X attached to the CPU [3]. Through the NVLink-C2C interconnect, both processors access the full memory pool at 1.8 terabytes per second of coherent bandwidth, seven times faster than PCIe Gen 6 [2]. This architecture eliminates the traditional bottleneck between system and graphics memory, enabling developers to load trillion-parameter models entirely into memory without latency penalties [2].
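A quick back-of-envelope calculation, ours rather than Nvidia's, shows why 748GB is the headline number for trillion-parameter work: at FP4, the weights alone of a one-trillion-parameter model occupy about 500GB, leaving headroom for the KV cache and activations that inference also needs.

```python
# Back-of-envelope check (our illustration, not an Nvidia figure):
# do the FP4 weights of a trillion-parameter model fit in 748GB?

PARAMS = 1_000_000_000_000   # one trillion parameters
BYTES_PER_PARAM_FP4 = 0.5    # 4 bits per weight
UNIFIED_MEMORY_GB = 748      # 252GB HBM3e + 496GB LPDDR5X

model_gb = PARAMS * BYTES_PER_PARAM_FP4 / 1e9   # weight storage in GB
headroom_gb = UNIFIED_MEMORY_GB - model_gb      # left for KV cache, activations

print(f"FP4 weights: {model_gb:.0f} GB, headroom: {headroom_gb:.0f} GB")
```

At FP16 the same model would need roughly 2TB for weights alone, which is why reduced precision and the unified pool go hand in hand here.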

AI Workstation Targets Autonomous Agent Development

Nvidia designed the DGX Station explicitly for autonomous AI agents that reason, plan, and execute tasks continuously rather than simply responding to prompts [2]. The company argues that always-on agents require persistent compute, persistent memory, and persistent state, capabilities better suited to a machine under your desk than to rented cloud GPU instances [2].

To support this vision, Nvidia announced NemoClaw, an open-source stack that bundles the company's Nemotron models with OpenShell, a secure runtime enforcing policy-based security, network, and privacy guardrails for autonomous agents [1]. CEO Jensen Huang called OpenClaw "the operating system for personal AI," comparing its significance to HTML and Linux [1]. NemoClaw addresses known issues with AI agents going rogue, such as deleting emails without permission or collecting sensitive data [1].

Seamless Path From Desktop to Data Center

One strategic advantage of the DGX Station lies in architectural continuity: applications built on the desktop system migrate to Nvidia's GB300 NVL72 data center systems without rearchitecting code [2]. This vertical integration eliminates engineering time lost to rewriting code for different hardware configurations, a hidden cost that often exceeds raw compute expenses in AI development [2].

The platform includes robust expansion options: PCIe 5.0 slots for add-in cards, four M.2 Gen5 interfaces for high-speed storage, and advanced networking with dual 400 Gbit/s ConnectX-8 adapters alongside 10GbE connectivity [3]. The system operates at a 1600W TDP and supports air-gapped configurations for classified or regulated environments where data cannot leave the building [2][4].

Unlike previous DGX products, the DGX Station will not be sold as a Founders Edition. Instead, Nvidia supplies the core motherboard with integrated CPU and GPU, leaving partners to deliver complete workstation solutions [3]. While pricing remains undisclosed, the specifications alone suggest a price of at least several thousand dollars, positioning the system for professionals working with large-scale AI models who need substantial local compute [4].

TheOutpost.ai

© 2026 Triveous Technologies Private Limited