5 Sources
[1]
AMD Pushes a New Category of PCs: The Agent Computer
Time will tell if the term takes off, but AMD wants to create a new product category called the "Agent Computer." The chipmaker points out that while people mainly access chatbots and AI tools online, some also run AI agents locally on their own hardware, as evidenced by OpenClaw, an open-source project that runs on a laptop or mini PC. For the best performance, AMD says its latest AI Max processors, including the AMD Ryzen AI Max+ 395, are ready to address this niche but potentially growing market.

"Powerful agents need powerful compute, and that's what AMD does. They need a new class of machine," the company wrote in a blog post. "A personal computer runs your apps. An Agent Computer runs your agents so they can run the apps for you. That is the shift."

The blog post envisions a near future in which people run agents locally, acting as dutiful assistants that help them complete a wide range of tasks throughout the day. "Not every AI workload belongs in a hyperscaler's data center," AMD adds, alluding to online services such as ChatGPT or Google's Gemini. "People and businesses want control over their data, affordable AI they can use every day without limits, and the confidence that their AI works for them. That makes local, privacy-centric, always-on agentic compute a real and growing need for consumers, creators, developers, startups, and SMEs (small and medium enterprises)."

The post directs users to check out AMD's "Agent Computers for Windows," which include the HP Z2 Mini G1a, a compact desktop configurable with the Ryzen AI Max+ 395 chip and a whopping 128GB of RAM. We reviewed it last month and found it to offer impressive, although pricey, computing suited for AI development; our review model costs $3,309. Another product shown is Corsair's AI Workstation 300 Desktop PC, which currently starts at $2,199, and Framework Computer's Framework Desktop, with the Ryzen AI Max+ 395 model starting at $1,959. Both can also be configured with 128GB of RAM.
AMD talked up the new product category days before rival Nvidia kicks off GTC, its annual AI developer conference. Nvidia will no doubt discuss its roadmap for future AI chips for data centers. But there's a good chance the GPU maker will debut hardware that can also run AI models locally at home or in the office. Last year, Nvidia began selling the DGX Spark, a $3,999 mini PC that also supports up to 128GB of RAM. Its partners, including Dell, developed their own versions built using the same GB10 Nvidia chip. A more powerful, larger DGX Station is slated to arrive this spring. So it's possible AMD is talking up Agent Computers to counter Nvidia's GTC announcements. To promote its own offerings, AMD created a site dedicated to the new product category, which includes a guide on running OpenClaw on AMD Ryzen AI Max+ Processors and the company's Radeon GPUs.
[2]
AMD unveils OpenClaw to run AI agents locally on Ryzen and Radeon hardware
Serving tech enthusiasts for over 25 years. TechSpot means tech analysis and advice you can trust.

The takeaway: AMD is pushing the idea that artificial intelligence agents don't need to live in the cloud. Its new OpenClaw framework - now equipped with two hardware configurations dubbed RyzenClaw and RadeonClaw - is designed to help developers and early adopters run sophisticated large language models entirely on local machines. The aim is clear: bring generative AI performance into the home and reduce dependence on data centers.

The effort is part of AMD's broader Agent Computer initiative, which argues that the future of AI isn't limited to remote infrastructure. Instead, it envisions a world where users control both their data and their computing environment - where AI assistants operate continuously with reduced network dependence, fewer external subscriptions, and fewer privacy concerns. OpenClaw is AMD's latest attempt to turn that principle into a tangible, developer-accessible platform.

At a technical level, OpenClaw runs on Windows using the Windows Subsystem for Linux (WSL2), with local inference handled by LM Studio through the llama.cpp backend. This setup allows users to run models such as Qwen 3.5 35B A3B directly on their own hardware. The system also supports Memory.md, an embedding-based memory framework that stores local context without relying on cloud synchronization. AMD presents the guide as a streamlined way for developers to configure a full OpenClaw environment on Windows when testing AI agent architectures, though it does not specify an expected setup time.

The two configurations represent different paths to the same idea: high-performance, on-device AI. The RyzenClaw configuration is built around AMD's Ryzen AI Max+ processor paired with 128GB of unified memory. AMD recommends allocating roughly 96GB of that memory to variable graphics usage to keep LLM inference running efficiently.
In this configuration, Qwen 3.5 35B A3B generates about 45 tokens per second and can process a 10,000-token input in approximately 19.5 seconds. Its 260,000-token context window is expansive, making it suitable for multi-agent workflows or "agent swarm" testing environments. AMD says the setup can run up to six local AI agents concurrently - a notable figure for a non-datacenter system.

RadeonClaw, by contrast, shifts the computing load to a discrete GPU: the Radeon AI PRO R9700. This workstation card comes with 32GB of dedicated VRAM, which significantly increases throughput. Using the same model, performance climbs to around 120 tokens per second, reducing the time needed to process 10,000 tokens to about 4.4 seconds. That gain, however, comes with limits: the maximum context window drops to 190,000 tokens, and concurrent agent capacity falls to two. These trade-offs underscore AMD's strategy of offering distinct tuning paths depending on whether developers prioritize context depth or inference speed.

Neither configuration is built for casual users. A desktop built around the Ryzen AI Max+ 395 chip and 128GB of memory, such as a Framework Desktop configuration, is cited as starting at around $2,700. The RadeonClaw option adds further expense, as the Radeon AI PRO R9700 GPU alone retails for about $1,299. For now, AMD acknowledges that OpenClaw targets early adopters and engineers experimenting with local AI agents rather than mainstream consumers.

Still, the message behind OpenClaw extends beyond its hardware. AMD is betting that developers will value autonomy and privacy over raw scale, and that local agents running on consumer-grade silicon can bridge the gap between personal computing and distributed AI. If that idea gains traction, the company could carve out a distinct role in the rapidly evolving AI ecosystem - one that blurs the line between workstation and datacenter performance.
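Taken at face value, AMD's figures describe two distinct throughput regimes: fast prompt ingestion (prefill) versus slower token generation. A quick back-of-envelope sketch of what those numbers imply for end-to-end latency - the formulas are standard; the specific figures are AMD's claims, not independent measurements:

```python
# Back-of-envelope throughput math from AMD's reported figures
# for Qwen 3.5 35B A3B on the two configurations.

def prefill_rate(tokens: int, seconds: float) -> float:
    """Input-processing (prefill) throughput in tokens/second."""
    return tokens / seconds

def total_time(prompt_tokens: int, prefill_s: float,
               output_tokens: int, gen_tps: float) -> float:
    """Rough end-to-end latency: prefill time plus generation time."""
    return prefill_s + output_tokens / gen_tps

# RyzenClaw: 10,000 tokens prefilled in ~19.5 s, ~45 tokens/s generation.
ryzen_prefill = prefill_rate(10_000, 19.5)   # ~513 tokens/s
# RadeonClaw: 10,000 tokens in ~4.4 s, ~120 tokens/s generation.
radeon_prefill = prefill_rate(10_000, 4.4)   # ~2273 tokens/s

# A 10k-token prompt answered with a 1,000-token reply:
ryzen_total = total_time(10_000, 19.5, 1_000, 45)    # ~41.7 s
radeon_total = total_time(10_000, 4.4, 1_000, 120)   # ~12.7 s
```

The sketch makes the trade-off concrete: RadeonClaw's advantage shows up mostly in prefill, which dominates when prompts are long.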
[3]
AMD wants you to buy a $2,000 'agent PC' just for AI
High component costs and complex installation processes currently limit consumer adoption, with alternatives like Raspberry Pi potentially more practical.

You already have a laptop or desktop PC, but now AMD thinks you need another one -- an "agent PC" to support your main machine. AMD has responded to the growing success of OpenClaw's AI agents with a new suggestion: customers should buy "agent PCs," which would take the power of the Ryzen AI Max+ processor (surprise!) and repurpose it to run an agent swarm. AMD's idea is that you should use a "normal" PC plus a secondary agent PC for running AI apps. To help out, AMD published a guide to running OpenClaw locally on an AMD processor. But AMD's argument reads less like a pitch to the industry and more like a manifesto.

"An Agent Computer is a new category of device built to run your AI agents full-time," AMD said in a blog post. "It can sit in your home or office, always on, always available, always working." AMD goes on: "You do not operate it like a PC. You delegate to it. You send a message on WhatsApp. Your agent gets moving. You drop a task into Slack. Your agent takes it from there. You ask for an update in Message. Your agent reports back. A personal computer runs your apps. An Agent Computer runs your agents so they can run the apps for you. That is the shift." [Emphasis is AMD's.]

AMD's argument is that only the Ryzen AI Max+ is suited to these types of PCs, given the enormous potential amount of memory (128GB) such systems can come with. Much of that memory can be configured as VRAM, the space in which AI algorithms work. OpenClaw can be launched with just a single line of code on Windows, macOS, and Linux, then linked to everything from LLMs to Gmail to Spotify to work independently. OpenClaw's agents then work together to perform tasks like researching and writing presentations, tracking down the details of trips, and more.
OpenClaw can be allowed access to the PC as a whole or run in sandboxed mode for more security. In any event, AMD believes that a Ryzen AI Max+ platform is a superior offering. OpenClaw can be run on a variety of platforms, but Mac Minis -- which combine powerful and power-efficient Apple M-series silicon with a compact form factor -- have anecdotally become a popular choice for the platform. The current Mac Mini, however, maxes out at 64GB of RAM. On paper, at least, that might give AMD's Ryzen AI Max+ chip an advantage... at least until Apple updates the Mac Mini once again.

Is now really the time for this? It's difficult not to be skeptical. For one thing, IDC has lowered its PC market forecasts yet again, predicting that the days of inexpensive PCs might be over for now. A Ryzen AI Max+ box like the Framework Desktop we reviewed once cost $2,515, but now costs $2,700 -- and that's without any storage. You already know that RAM and storage prices are skyrocketing. AI developers with cash to spare might be able to cash out a Bitcoin or two and buy themselves a box to sit on their desk, but the average consumer staring down rapidly rising gas prices might think twice about spending an extra two grand on a local AI box when the cloud exists.

Granted, local agentic AI is the strongest argument yet for local AI hardware, since AI art and LLMs can easily be run in the cloud for free. A dedicated "Agent PC" box that you can disconnect and reformat in a pinch makes more sense to me than agents roaming through the cloud on your behalf. If AMD is trying to convince the average consumer to buy and set up an Agent PC, however, the company might want to think about a more streamlined installation process. AMD's OpenClaw instructions go on and on -- they're straightforward, but the length is daunting. Not to mention all the concerns about OpenClaw itself.
The problem is that AMD is trying to lure users in with the premise that an OpenClaw-powered Agent PC is for everyone, when the cost and complexity put it out of reach for many. That doesn't mean it's a bad idea! But starting OpenClaw with a cheaper Raspberry Pi or waiting until the technology matures might be a safer bet right now.
[4]
AMD pushes Agent Computers as the next evolution of AI PCs
AMD is laying out a more ambitious vision for the AI PC, and it goes beyond the usual mix of operating-system assistants and on-device inference demos. The company is now promoting what it calls the "Agent Computer," a local system designed to run AI agents directly on client hardware without depending on cloud infrastructure. To show how that works in practice, AMD has published guidance for running OpenClaw locally on Windows through two hardware paths built on its own silicon: RyzenClaw and RadeonClaw.

The concept is fairly easy to understand. AMD's view is that not every AI workload belongs in a remote data center, especially when users want better privacy, fixed costs, always-on access, and direct control over their models and data. In AMD's recommended setup, OpenClaw runs through WSL2, while LM Studio and llama.cpp handle local large language model inference. Memory.md is supported through local embeddings, so the environment stays self-contained rather than relying on external cloud services. AMD says the full stack can be configured in under an hour, which makes it clear the target audience is developers, enthusiasts, and early adopters rather than mainstream consumers.

RyzenClaw is the more memory-focused route. AMD bases it on a Ryzen AI Max+ platform with 128 GB of unified memory and recommends reserving 96 GB of that pool as variable graphics memory for AI workloads. Using the Qwen 3.5 35B A3B model, AMD reports performance of around 45 tokens per second, with 10,000 input tokens processed in roughly 19.5 seconds. The more notable part is capacity: AMD says this configuration supports a 260K-token context window and can run up to six agents concurrently. That makes it a better fit for users experimenting with multi-agent workflows, where memory footprint and concurrency matter more than outright inference speed.

RadeonClaw takes a different approach.
Instead of relying on a large unified memory pool, this path is built around the Radeon AI PRO R9700, a workstation-class graphics card with 32 GB of VRAM. With the same Qwen model, AMD claims about 120 tokens per second and roughly 4.4 seconds to process 10,000 input tokens, making it much faster than the Ryzen AI Max+ setup in direct throughput terms. The compromise is a smaller 190K-token context window and support for only two concurrent agents. So while RadeonClaw offers better performance, it gives up some of the flexibility that makes RyzenClaw attractive for broader agent experimentation.

In effect, AMD is presenting two local AI machine profiles. One favors larger working memory and more simultaneous agents, while the other favors raw inference speed. Both support AMD's broader argument that a higher-end PC can become a self-contained AI environment rather than just a terminal for cloud services. It is also a more specific pitch than the generic "AI PC" label that has been stretched across much of the market.

The biggest issue, at least for now, is cost. A Ryzen AI Max+ 395 system with 128 GB of memory starts around $2,700, and the Radeon AI PRO R9700 starts at $1,299 before the rest of the system is considered. That means AMD's Agent Computer idea is technically credible, but still priced for developers, workstation users, and enthusiasts. As a near-term mainstream product category, it is a hard sell. As a preview of where high-end local AI computing may be headed, however, AMD's framing is much more concrete than most AI PC messaging seen so far.
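The differing context-window and concurrency ceilings fall naturally out of a memory budget: once the model weights are resident, whatever accelerator memory remains must hold the per-token KV cache. A rough sketch of that accounting, where the weight size (20 GB) and per-token cache cost (0.3 MB) are purely illustrative assumptions; the resulting token counts will not reproduce AMD's exact 260K/190K figures, which depend on implementation details:

```python
# Illustrative memory-budget sketch: remaining accelerator memory
# after loading weights caps the total KV-cache tokens, which in
# turn caps context length x number of concurrent agents.
# Weight size and per-token cache cost below are assumed values.

def max_total_context(mem_gb: float, weights_gb: float,
                      kv_bytes_per_token: float) -> int:
    """Tokens of KV cache that fit after the weights are resident."""
    free_bytes = (mem_gb - weights_gb) * 1024**3
    return int(free_bytes // kv_bytes_per_token)

# RyzenClaw-style budget: 96 GB of variable graphics memory.
tokens_unified = max_total_context(96, 20, 0.3 * 1024**2)
# RadeonClaw-style budget: 32 GB of dedicated VRAM.
tokens_vram = max_total_context(32, 20, 0.3 * 1024**2)
print(tokens_unified, tokens_vram)  # larger pool -> far more total context
```

Whatever the exact constants, the shape of the trade-off is the same one AMD describes: the unified-memory pool buys capacity, the discrete GPU buys speed.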
[5]
AMD Ryzen AI MAX APUs & Radeon AI PRO GPUs Offer Stunning Capabilities In OpenClaw AI Agent
AMD has published a new blog on enabling the OpenClaw AI agent on its powerful Ryzen AI MAX APUs and Radeon AI PRO GPUs.

AMD Shows You How To Run OpenClaw AI Agent on Ryzen AI MAX APUs & Radeon AI PRO GPUs, Plus Also Reveals Strong Performance Capabilities

AI agents such as OpenClaw are the talk of the town, and AMD has decided to offer a guide on how to run them on its latest hardware. For this purpose, AMD has set up two unique OpenClaw configurations: RadeonClaw, which is based on its Radeon AI PRO GPUs, and RyzenClaw, which is based on its Ryzen AI MAX SoCs.

As we know, AMD's Ryzen AI MAX+ APUs support 128 GB of fast memory on a single platform. This allows them to tackle big LLMs such as Qwen 3.5 122B with ease. With 128 GB of system memory and the ability to allocate up to 112 GB of VRAM to the Radeon 8000S GPUs, the systems, which come in the form of laptops and mini PCs, offer plenty of local AI performance.

Starting with the first example, the AMD Ryzen AI MAX+ APUs offer up to 19 tokens/s on a single agent, and up to two multi-agents with 95K context concurrency, in Qwen 3.5 122B A10B. The AMD Ryzen AI MAX+ systems can also be linked together for even faster AI workstation capabilities. For standard Qwen 3.5 35B A3B workloads, the AMD Ryzen AI MAX+ offers 45 tokens/s and takes just 19.5 seconds to process 10,000 input tokens. The chips have a max context window of 260K, and with a multi-agent use case, this can be expanded to a 6x95K concurrency range.

For RadeonClaw, AMD demonstrates its Radeon AI PRO R9700 graphics card, which is based on its fastest 32 GB RDNA 4 GPU. A single AI PRO R9700 GPU can crunch 10,000 input tokens in just 4.4 seconds and offers 120 tokens/s. The max context window is 190K, and a multi-agent application rate of 2x95K is listed.
Users can also combine up to four of these Radeon AI PRO R9700 GPUs in workstation setups for 128 GB of VRAM, giving them the ability to run larger 128B models locally with ease.

AMD also provides users with a BKC (Best Known Configuration) for OpenClaw via WSL2. It provides the following:

* Fully local LLM provisioning
* Functional Memory.md (local embedding)
* Powered by LM Studio (llama.cpp)
* Browser control (inside WSL2)
* Estimated setup time: under 1 hour
* Designed for early adopters of personal agents

We have tested both an HP ZBook Ultra G1a laptop and GMKtec's EVO X2 mini PC featuring the Ryzen AI MAX+ 395, and found their AI capabilities to be very disruptive. Sure, the products come at a high price, above $2,000 for the mini PC with 64 GB of memory and over $4,000 for the HP ZBook with 128 GB of memory, but they are truly compact workstation beasts.

It's great to see that companies are not just pushing the AI narrative by launching new hardware, but also putting out handy guides to help consumers utilize their hardware's capabilities in new ways. AI agents have lots of use cases, not just for professionals or business-oriented users, but also for regular PCs and users. You can check out AMD's full guide here on how to enable OpenClaw on your system.
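A common rule of thumb makes the four-GPU claim plausible: weight memory scales with parameter count times bits per weight. The sketch below applies that rule to the weights only, ignoring KV cache, activations, and runtime overhead, so real headroom would be smaller than these numbers suggest:

```python
# Rule-of-thumb weight footprint: parameters x bits-per-weight / 8 bytes,
# expressed in GB. Weights only; caches and overhead are not counted.

def weight_footprint_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate memory needed for model weights alone, in GB."""
    return params_billion * bits_per_weight / 8

# A 128B-parameter model against a 4 x 32 GB = 128 GB VRAM pool:
q4 = weight_footprint_gb(128, 4)     # 64.0 GB -> fits with headroom
q8 = weight_footprint_gb(128, 8)     # 128.0 GB -> weights alone fill the pool
fp16 = weight_footprint_gb(128, 16)  # 256.0 GB -> does not fit
```

So a 128B model is realistic on such a rig at 4-bit quantization, tight at 8-bit, and out of reach at full 16-bit precision.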
AMD unveils a new product category called the Agent Computer, designed to run AI agents locally on dedicated hardware rather than relying on cloud infrastructure. The company published two reference configurations for the open-source OpenClaw framework, RyzenClaw and RadeonClaw, targeting developers and early adopters who prioritize user control and privacy over cloud-based AI services.
AMD is pushing beyond conventional AI PC concepts with a new product category it calls the Agent Computer, a dedicated machine designed to run AI agents locally without depending on cloud infrastructure [1]. The chipmaker argues that while most people access chatbots and AI tools online through services like ChatGPT or Google's Gemini, a growing market exists for those who want to run AI agents locally on their own hardware [1]. "A personal computer runs your apps. An Agent Computer runs your agents so they can run the apps for you. That is the shift," AMD stated in its blog post [1].
Source: PCWorld
The timing appears strategic, as AMD unveiled this concept days before rival Nvidia kicks off GTC, its annual AI developer conference [1]. Nvidia already sells the DGX Spark, a $3,999 mini PC supporting up to 128GB of RAM, with a more powerful DGX Station slated for a spring release [1].

To demonstrate how an Agent Computer works in practice, AMD published two distinct hardware configurations for the OpenClaw framework: RyzenClaw and RadeonClaw [2]. In AMD's setup, OpenClaw runs on Windows using WSL2, with local inference handled by LM Studio through the llama.cpp backend [2]. This allows users to run large language models such as Qwen 3.5 35B A3B directly on their own hardware [2].
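LM Studio typically exposes an OpenAI-compatible HTTP server on localhost (port 1234 by default), which is how an agent framework can drive a locally loaded model. A minimal stdlib-only sketch of such a request; the endpoint default is LM Studio's documented convention, while the model identifier shown is an assumption that will depend on what is actually loaded:

```python
import json
from urllib import request

# Minimal sketch of querying a local LM Studio server through its
# OpenAI-compatible API. Port 1234 is LM Studio's default; the model
# name is illustrative and machine-specific.

ENDPOINT = "http://localhost:1234/v1/chat/completions"

def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Assemble an OpenAI-style chat-completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "stream": False,
    }

def send(payload: dict) -> dict:
    """POST the payload to the local server and return the parsed JSON."""
    req = request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example (requires a running LM Studio server with a model loaded):
# reply = send(build_chat_request("qwen3.5-35b-a3b", "Summarize my notes."))
# print(reply["choices"][0]["message"]["content"])
```

Because the interface mimics OpenAI's API, agent tooling written against cloud endpoints can often be pointed at the local server with only a base-URL change.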
Source: Wccftech
The RyzenClaw configuration centers on AMD's Ryzen AI Max+ processor paired with 128GB of unified memory, with roughly 96GB allocated as variable graphics memory to keep LLM inference running efficiently [2]. Using Qwen 3.5 35B A3B, this configuration generates about 45 tokens per second and processes a 10,000-token input in approximately 19.5 seconds [2]. Its 260,000-token context window makes it suitable for multi-agent workflows, with AMD claiming the setup can run up to six local AI agents concurrently [2].

RadeonClaw shifts the computing load to the discrete Radeon AI PRO R9700 GPU with 32GB of dedicated VRAM, significantly increasing throughput [2]. Performance climbs to around 120 tokens per second, reducing the time needed to process 10,000 tokens to about 4.4 seconds [5]. However, the maximum context window drops to 190,000 tokens, and concurrent agent capacity falls to two [2]. These trade-offs underscore AMD's strategy of offering distinct tuning paths depending on whether developers prioritize context depth or inference speed [2].

Source: TechSpot
AMD's Agent Computer initiative argues that not every AI workload belongs in a hyperscaler's data center [1]. "People and businesses want control over their data, affordable AI they can use every day without limits, and the confidence that their AI works for them," AMD stated [1]. This makes local, privacy-centric, always-on agentic compute a real and growing need for consumers, creators, developers, startups, and SMEs [1].

The system supports Memory.md, an embedding-based memory framework that stores local context without relying on cloud synchronization [2]. AMD says the full stack can be configured in under an hour, though the target audience remains developers, enthusiasts, and early adopters rather than mainstream consumers [4].
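The sources do not detail how Memory.md stores and retrieves context, but embedding-based local memory generally means keeping (vector, text) pairs on the device and recalling the closest entry by cosine similarity. A toy stand-in for that pattern, not OpenClaw's actual implementation, using hand-made vectors in place of a real embedding model:

```python
import math

# Toy embedding-memory sketch in the spirit of Memory.md: notes are
# stored with vectors and retrieved by cosine similarity, entirely
# on-device. Illustrative only; a real setup would embed text with
# a proper model instead of the hand-made vectors below.

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

class LocalMemory:
    """Store (embedding, text) pairs and fetch the closest match."""
    def __init__(self) -> None:
        self.entries: list[tuple[list[float], str]] = []

    def add(self, embedding: list[float], text: str) -> None:
        self.entries.append((embedding, text))

    def recall(self, query: list[float]) -> str:
        return max(self.entries, key=lambda e: cosine(e[0], query))[1]

mem = LocalMemory()
mem.add([1.0, 0.0], "User prefers summaries under 100 words.")
mem.add([0.0, 1.0], "Weekly report is due on Fridays.")
print(mem.recall([0.9, 0.1]))  # nearest stored note wins
```

The point of such a design is that the agent's accumulated context never leaves the machine, which is exactly the privacy argument AMD is making.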
Neither configuration targets casual users. A Framework Desktop built around the Ryzen AI Max+ 395 chip with 128GB of memory starts at approximately $1,959, though recent reports indicate prices have climbed to $2,700 without storage [1][3]. The HP Z2 Mini G1a, configurable with the Ryzen AI Max+ 395 chip and 128GB of RAM, costs $3,309 [1]. The Radeon AI PRO R9700 GPU alone retails for about $1,299 [2].

Critics point out that AMD's OpenClaw instructions are straightforward but daunting in length, and that the cost puts the Agent Computer out of reach for many [3]. With RAM and storage prices skyrocketing and IDC lowering its PC market forecasts, the average consumer facing rising costs might think twice about spending an extra two grand on local AI hardware when cloud alternatives exist [3].

AMD is betting that developers will value autonomy and privacy over raw scale, and that local agents running on consumer-grade silicon can bridge the gap between personal computing and distributed AI [2]. As a near-term mainstream product category, it remains a hard sell, but as a preview of where high-end local AI computing may be headed, AMD's framing offers more concrete direction than most AI PC messaging seen so far [4]. If the idea gains traction among workstation users and enthusiasts experimenting with multi-agent workflows, AMD could carve out a distinct role in the rapidly evolving AI ecosystem [2].

Summarized by Navi