AMD introduces Agent Computer category to run AI agents locally without cloud dependence


AMD unveils a new product category called the Agent Computer, designed to run AI agents locally on dedicated hardware rather than relying on cloud infrastructure. The company released the OpenClaw framework with two configurations—RyzenClaw and RadeonClaw—targeting developers and early adopters who prioritize user control and privacy over cloud-based AI services.

AMD Introduces Agent Computer as Next Evolution of AI PCs

AMD is pushing beyond conventional AI PC concepts with a new product category it calls the Agent Computer, a dedicated machine designed to run AI agents locally without depending on cloud infrastructure.[1] The chipmaker argues that while most people access chatbots and AI tools online through services like ChatGPT or Google's Gemini, a growing market exists for users who want to run AI agents on their own hardware.[1] "A personal computer runs your apps. An Agent Computer runs your agents so they can run the apps for you. That is the shift," AMD stated in its blog post.[1]

Source: PCWorld

The timing appears strategic: AMD unveiled the concept days before rival Nvidia kicks off GTC, its annual AI developer conference.[1] Nvidia already sells the DGX Spark, a $3,999 mini PC supporting up to 128GB of RAM, with a more powerful DGX Station slated for a spring release.[1]

OpenClaw Framework Powers RyzenClaw and RadeonClaw Configurations

To demonstrate how the Agent Computer works in practice, AMD released the OpenClaw framework with two distinct hardware configurations: RyzenClaw and RadeonClaw.[2] The framework runs on Windows using WSL2, with local inference handled by LM Studio through the llama.cpp backend.[2] This setup lets users run large language models such as Qwen 3.5 35B A3B directly on their own hardware.[2]
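LM Studio serves locally loaded models through an OpenAI-compatible HTTP API, which is how an agent framework like OpenClaw can talk to the model without any cloud dependency. The sketch below builds a chat-completion request payload for such a local endpoint; the endpoint URL and model identifier are illustrative assumptions, not taken from AMD's documentation.

```python
import json

# LM Studio's local server typically listens on localhost (default port 1234)
# and accepts OpenAI-style chat-completion requests. URL and model name below
# are assumptions for illustration.
LOCAL_ENDPOINT = "http://localhost:1234/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "qwen3.5-35b-a3b") -> str:
    """Serialize an OpenAI-style chat-completion request for a local model."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
        "stream": False,  # set True to receive tokens as they are generated
    }
    return json.dumps(payload)

body = build_chat_request("Summarize today's inbox.")
print(body)
```

Because the request never leaves the machine, the prompt and the model's response stay entirely on local hardware, which is the privacy argument AMD is making.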

Source: Wccftech

The RyzenClaw configuration centers on AMD's Ryzen AI Max+ processor paired with 128GB of unified memory, with roughly 96GB allocated as variable graphics memory to keep LLM inference running efficiently.[2] Using Qwen 3.5 35B A3B, this configuration generates about 45 tokens per second and processes a 10,000-token input in approximately 19.5 seconds.[2] Its 260,000-token context window makes it suitable for multi-agent workflows, and AMD claims the setup can run up to six local AI agents concurrently.[2]

RadeonClaw shifts the compute load to the discrete Radeon AI PRO R9700 GPU with 32GB of dedicated VRAM, significantly increasing throughput.[2] Generation climbs to around 120 tokens per second, and processing a 10,000-token input drops to about 4.4 seconds.[5] However, the maximum context window shrinks to 190,000 tokens, and concurrent agent capacity falls to two.[2] These trade-offs underscore AMD's strategy of offering distinct tuning paths depending on whether developers prioritize context depth or inference speed.[2]
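Note that the two figures quoted per configuration measure different things: prompt processing (prefill) and token generation run at different rates, so end-to-end latency depends on both. A rough estimate, using only the numbers from the article (the function and the 500-token reply length are our own illustration):

```python
def estimated_latency(prompt_tokens, output_tokens, prefill_tps, gen_tps):
    """Rough end-to-end latency: prompt processing (prefill) plus generation."""
    return prompt_tokens / prefill_tps + output_tokens / gen_tps

# Prefill rates implied by the article (10,000 tokens in 19.5s and 4.4s):
ryzenclaw_prefill = 10_000 / 19.5   # ~513 tokens/s
radeonclaw_prefill = 10_000 / 4.4   # ~2,273 tokens/s

# Ingest a 10,000-token prompt, then generate a 500-token reply:
ryzen = estimated_latency(10_000, 500, ryzenclaw_prefill, 45)     # ~30.6s
radeon = estimated_latency(10_000, 500, radeonclaw_prefill, 120)  # ~8.6s
print(round(ryzen, 1), round(radeon, 1))
```

On these numbers, RadeonClaw's advantage comes mostly from prefill speed (~2,270 vs ~510 tokens/s), which matters most for the long-context, document-heavy workloads agents tend to run.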

Source: TechSpot

User Control and Privacy Drive Local AI Hardware Push

AMD's Agent Computer initiative argues that not every AI workload belongs in a hyperscaler's data center.[1] "People and businesses want control over their data, affordable AI they can use every day without limits, and the confidence that their AI works for them," AMD stated.[1] The company casts local, privacy-centric, always-on agentic compute as a real and growing need for consumers, creators, developers, startups, and SMEs.[1]

The system supports Memory.md, an embedding-based memory framework that stores local context without relying on cloud synchronization.[2] AMD says the full stack can be configured in under an hour, though the target audience remains developers, enthusiasts, and early adopters rather than mainstream consumers.[4]
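The article does not detail Memory.md's internals, but embedding-based memory generally means storing text snippets alongside vector representations and retrieving the closest match to a query. A toy sketch of that pattern, with a bag-of-words vector standing in for a real embedding model (all names here are our own, not AMD's):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: lowercase bag-of-words counts (stands in for a real model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class LocalMemory:
    """Store snippets on the local machine; recall the best match for a query."""
    def __init__(self):
        self.entries = []  # (text, vector) pairs, never synced to the cloud

    def remember(self, text: str):
        self.entries.append((text, embed(text)))

    def recall(self, query: str) -> str:
        qv = embed(query)
        return max(self.entries, key=lambda e: cosine(qv, e[1]))[0]

mem = LocalMemory()
mem.remember("User prefers dark mode in all apps")
mem.remember("Weekly report is due every Friday")
print(mem.recall("when is the report due"))  # → "Weekly report is due every Friday"
```

The privacy point is structural: because both the store and the retrieval run locally, an agent's accumulated context never needs to leave the machine.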

High Costs and Complex Setup Challenge Consumer Adoption

Neither configuration targets casual users. A Framework Desktop built around the Ryzen AI Max+ 395 chip with 128GB of memory starts at approximately $1,959, though recent reports indicate prices have climbed to $2,700 without storage.[1][3] The HP Z2 Mini G1a, configurable with the Ryzen AI Max+ 395 chip and 128GB of RAM, costs $3,309.[1] The Radeon AI PRO R9700 GPU alone retails for about $1,299.[2]

Critics note that AMD's OpenClaw instructions, while straightforward, are daunting in length, and the cost puts the Agent Computer out of reach for many.[3] With RAM and storage prices skyrocketing and IDC lowering its PC market forecasts, average consumers facing rising costs may think twice about spending an extra two grand on local AI hardware when cloud alternatives exist.[3]

What This Means for Developers and the AI Ecosystem

AMD is betting that developers will value autonomy and privacy over raw scale, and that local agents running on consumer-grade silicon can bridge the gap between personal computing and distributed AI.[2] As a near-term mainstream product category, it remains a hard sell, but as a preview of where high-end local AI computing may be headed, AMD's framing offers more concrete direction than most AI PC messaging seen so far.[4] If the idea gains traction among workstation users and enthusiasts experimenting with multi-agent workflows, AMD could carve out a distinct role in the rapidly evolving AI ecosystem.[2]

TheOutpost.ai

© 2026 Triveous Technologies Private Limited