5 Sources
[1]
Arm says AI agents need a new CPU. Intel doesn't buy it
Interview In recent weeks, the likes of Nvidia and Arm have revealed CPUs designed expressly to run AI agents like OpenClaw. Kevork Kechichian, who runs Intel's Data Center Group and served as executive vice president of Arm's Solutions Engineering team until last summer, isn't so sure this "new" kind of CPU is really what hyperscalers or enterprises actually need.

His comments came just days after Arm unveiled its full processor design, a chip called the AGI CPU, which it proposes as an agentic AI processor. Nvidia showed off its own agentic compute platform, powered by its in-house Vera CPUs, a week earlier.

After years of GPUs and AI accelerators dominating headlines, CPUs are back in the limelight because those agentic frameworks, tools, API calls, and AI-generated code snippets need to run on something, and it's not GPUs.

Speaking at the Arm Everywhere event in San Francisco last week, Arm EVP of Cloud AI Mohamed Awad made the case that existing x86 processors weren't designed to run agents, and that their boost modes, simultaneous multithreading (SMT), specialized accelerators, and other legacy features that work for today's workloads only serve to consume die area and drive up power consumption.

"When you increase the frequency, what else do you increase? Power. That's a problem. These boost modes are not sustainable across long periods of time. They're not sustainable across a chip," Awad claimed.

Naturally, Arm argues its 300-watt, 136-core chip avoids those problems. "We don't support Lotus Notes, we just don't do it," Awad said in an apparent reference to x86 real mode. "We're focused on exactly and only what the agentic datacenter needs, performance, scale, and efficiency."

The cores Arm uses in the AGI are also surprisingly light on Single Instruction, Multiple Data (SIMD) features compared to the AVX extensions found on modern x86 server processors.
Arm's chip features a pair of 128-bit wide vector units, compared to the 512-bit wide vectors supported on most Intel and AMD server chips.

Awad went out of his way to pitch the chip's lack of SMT, which you might know as hyperthreading, as a benefit rather than a negative. "What happens when you do multithreading? You throw two jobs at the same core, that's how they get to a high thread count," he said. "The reality is that your I/O and your bandwidth don't double, so you've just moved the bottleneck elsewhere."

For Intel's Kechichian, the jury is still out on whether the optimization points highlighted in Arm's AGI CPU announcement are the ones that actually matter for agentic performance. One area where he can see the logic is SIMD. "If you look at the workloads, it's just mostly traditional data movement types of things; orchestration," he said. "That's one area where not having heavy SIMD engines is a good thing."

He also acknowledges that there are features in current CPUs, both Arm and x86, that you don't necessarily need for agent frameworks. However, he argues that many of the accelerators Intel has developed over the past several years remain relevant - for example, QuickAssist, which is designed to speed up compression, decompression, and cryptographic workloads.

Kechichian is also less than convinced by Arm's case against SMT. "While Rene talked about non-SMT and optimization, a week before, Jensen showed another CPU which has SMT."

Nvidia's Vera CPUs feature 88 of its custom Arm-based Olympus cores, which include what the GPU giant calls "spatial multithreading." As Nvidia explains it, the tech essentially splits each core's resources down the middle rather than time-slicing the way an x86 chip with SMT would.

"My view is that, if they had the option, they would have put it in," Kechichian said of Arm's AGI CPU. "They don't have the option, and none of the cores have SMT at Arm."
That said, it's also important to understand that some workloads have always benefited from SMT more than others; there's a reason IBM still ships new Power CPUs with four or even eight threads per core. Partly for that reason, Intel and AMD have long made it easy to turn SMT on or off in BIOS settings, at least on the parts that support multiple threads per core in the first place.

Alongside its Granite Rapids P-core Xeons, Intel also has its Sierra Forest and Clearwater Forest processors, which pack in plenty of its ultra-efficient cores. Clearwater Forest in particular shares many qualities with Arm's AGI CPU: it's got 288 stripped-down cores with minimal SIMD extensions and 12 channels of fast DDR5 memory. "It has the density, it has the high core count, and it also lacks SMT," Kechichian said.

Asked about the similarities between Arm's product and Intel's Clearwater Forest, Awad argued that the parts were really designed around maximizing compute density, citing the memory bandwidth per core and calling into question the relative performance of Intel's efficiency cores.

While it's true that Arm's 136-core parts deliver 6 GB/s of memory bandwidth per core, this is largely down to the ratio of compute to memory. In fact, it is common to see lower core count parts with large caches favored for memory-bound workloads like computational fluid dynamics. Fewer cores hanging off the same memory subsystem usually, but not always, translates to higher bandwidth per core.

Compared to Intel's top-specced Clearwater Forest parts, Arm's CPU offers more than twice the bandwidth per core. We don't have the full Xeon 6+ SKU list just yet, but Kechichian tells us the part will be offered in configurations ranging from 288 cores at the high end down to the low 100s. In a 136-core vs 136-core comparison, Arm's lead would likely be significantly smaller.
Despite checking many of the same boxes as Arm's AGI CPUs, Kechichian tells us Chipzilla does not see much demand for Xeon 6+ in agentic use cases. Instead, we're told the chip is most popular in networking applications like packet processing. Kechichian isn't ruling out the possibility that demand for agentic workloads will come in time. ®
[2]
ARM's first in-house AI chip draws Meta and OpenAI interest
Industry looks beyond x86 for efficient large-scale AI data center deployment

* Arm enters silicon production with a CPU designed for large-scale AI workloads
* New AGI CPU doubles rack performance compared with traditional x86 systems
* Meta and OpenAI adopt Arm chip for next-generation infrastructure

Arm has extended its compute platform into production silicon for the first time with the introduction of what it calls the "next evolution of the Arm compute platform," the AGI CPU. The company says the CPU is designed specifically for AI data centers, supporting agentic AI workloads, which involve continuously running agents capable of reasoning, planning, and acting.

The processor features up to 136 Neoverse V3 cores per CPU, with 6GB/s memory bandwidth per core and sub-100ns latency, allowing higher workload density and improved system efficiency.

Performance and capacity

The Arm AGI CPU promises deterministic performance under sustained load with a 300-watt TDP and a dedicated core per program thread. The processor supports air-cooled 1U server chassis with up to 8,160 cores per rack, and liquid-cooled deployments reaching 45,000 cores per rack.

Compared with x86 CPUs, the Arm AGI CPU can provide more than double the performance per rack, supporting larger AI workloads while remaining energy efficient. These capabilities aim to improve compute density, accelerator utilization, and overall infrastructure efficiency.

Meta serves as the lead partner and co-developer of the Arm AGI CPU, integrating it with its Meta Training and Inference Accelerator (MTIA) to optimize data center performance. Early commercial adoption also includes the likes of OpenAI, Cerebras, Cloudflare, Positron, Rebellions, SAP, and SK Telecom. Arm is collaborating with OEMs and ODMs such as Lenovo, Supermicro, Quanta Computer, and ASRock Rack to deliver early systems, with broader availability expected in the second half of 2026.
More than 50 industry leaders across hyperscale, cloud, semiconductor, memory, networking, software, and system design sectors support the CPU's rollout.

"Over the last decade, we've partnered closely with Arm in building Graviton here at AWS, and it's been a remarkable success -- the majority of compute capacity AWS added to our fleet in 2025 was powered by Graviton," said James Hamilton, SVP and Distinguished Engineer, Amazon. "This collaboration has been great for both companies, and Graviton continues to deliver better price/performance for our customers."

Industry partners also pointed to the broader infrastructure implications of the new CPU. "The new Arm AGI CPU will further unlock the Arm ecosystem for a broad range of customers, creating new opportunities for everyone..." said Charlie Kawwas, President, Semiconductor Solutions Group, Broadcom Inc. "As Broadcom builds the world's most capable XPU and networking solutions for hyperscalers...our partnership with Arm has enabled us to move with unmatched intent and speed."

The Arm AGI CPU is intended to serve as a foundation for agentic AI workloads, enabling organizations to deploy AI tools at scale while maintaining high efficiency. The processor supports large-scale deployment of AI applications, including accelerator management, control plane processing, and cloud- or enterprise-based API and task hosting.

That said, the Arm AGI CPU's success will depend on data center adoption, integration with existing accelerators and memory, and proven performance gains over alternatives.
[3]
A shot in the ARM for the humble CPU - how AI is increasing awareness of why silicon matters, according to ARM CEO Rene Haas
Anyone thinks that [AI] is something that is going to go away, it's a little bit of an ostrich syndrome. This is here with us. And it's really changed how people think about computing. However, somewhere along the way, people thought CPUs were dead, and there was a thought that the only way you handle AI is through accelerated computing, that the CPU's role in the AI world is no longer relevant.

Those people were wrong, he says:

As agentic AI becomes mainstream, all of the work required to make that happen is CPU bound and you need a CPU that has the DNA of being born to run off a battery.

That, in case you don't get the reference, alludes to ARM's own history. As Haas recalls:

The company's DNA was really born to run off batteries. [It] started in the early 1990s. It was a spin-out of a British computer company named Acorn, and that company had a mandate to build a chip. That chip had a couple of requirements. One was it had to run in a plastic package, which back then was really important, and number two, it had to be really low power.

The first part was important because of heat; the second part was important because battery life meant everything, since this was going into the world's first PDA (Personal Digital Assistant).

Today, ARM is upping its game in the silicon market with its Agentic Generalized Infrastructure (AGI) Central Processing Unit (CPU), targeting the x86-dominated server processor space. But this latest development builds on a wide foundation, notes Haas, citing that 350 billion+ ARM chips have shipped:

That is 3x the total number of humans who have ever existed on the planet. So it's not just one for every human. It's three for every human to have ever lived, seven times the total number of non ARM-based CPUs shipped combined. Just think about that number - 160 Arm chips for every global household.
Demands in the marketplace around chips have been getting more complex, Haas says:

The cycle times to build these chips are getting longer, 5-nanometer to 3-nanometer to 2-nanometer means longer fab times, longer peg times. There's a need to do more and to do it faster. We've traditionally provided IP in a stand-alone form, the CPU, the GPU, system IP. That has served us well for the first 30-plus years of the company, but we were starting to see huge demand for the need to go faster, make products better and get time to market sooner.

To meet this need, ARM introduced compute subsystems, he adds:

We did this about three or four years ago. We invested very heavily in terms of the engineering requirements to do this. What this does is it takes all the blocks of IP and puts them together in a finished way, verified, performance tested, that the end customer can then take to market. In some cases, it shaves a year, in some cases 18 months, off the time from starting design to getting into production.

What has changed in the last number of months has been an explosion of agentic AI, argues Haas:

Agents are essentially tools that act on a request and come back with a full flow of answers. So it's not just a query for an answer, but it's actually work. It's run a payroll task, do a scheduler, go off and write a number of analyses relative to a tool flow and provide me an answer...As we move to agentic query, the number of tokens per human go up by 15x, if not greater.

If you think about the why of that, it's pretty straightforward. Agents can generate requests (a) far faster than humans; and (b) they don't sleep. They're 24/7. So the agents are now pushing these requests into the cloud, into the data center, and what's happening? The data center is choking.

This comes back to the notion that an agent is a workflow, he says:

It's a payroll task, it's a scheduler task. It's asynchronous. It is a lot of work relative to scheduling. That is what CPUs do.
That is not work that can be done by an accelerator. The way to think about this is an accelerator generates the tokens, but it's almost like pushing a dump truck up and someone's got to move all that dirt. The CPUs are the pieces of equipment that move that dirt, and agentic AI only increases that. So what you see is a huge bottleneck now in terms of flow.

So what does that mean? You need more and more CPUs, lots of them. CPUs near the head node, CPUs next to the accelerator rack, more CPU racks inside the data center, you just need more. By our calculations, and we think this may be a little bit light, [it] goes up about 4x, 120 million CPU cores for that same gigawatt.

So the need now is to put 4x the amount of CPU cores in the same power envelope, he concludes:

Power is precious, obviously. The capital required for it is precious. So trying to put all those extra CPUs into a data center that is already stuffed to the brim with accelerators and CPUs doing the core work, that is a problem.

That's the context for the introduction of the AGI CPU. According to Mohamed Awad, ARM EVP of Cloud AI Business Unit, the AGI CPU is built around three simple principles:

First, performance, performance, performance. With this many threads going on, with this much work to do, with this much orchestration to happen, you can't slow down. These agents are going to be running 24 hours a day, and if they're not performing fast enough, then the rest of the infrastructure that's relying on it grinds to a halt. So we focused on performance.

Second, we focused on scale. The scale of what we're talking about here is just incredible...scale at the CPU level, scale at the [mother]board level, scale at the rack level, scale at the warehouse level, all the way up. We focused on that.
And finally, we focused on efficiency, maybe most importantly, because at the end of the day, with this much at stake, with this much compute we're trying to deploy, we're not going to get there unless we provide that performance and scale and we do it in an efficient package.

The ARM AGI CPU has been designed from the ground up to make sure that performance scales and power stays predictable, he says:

The software is ready, and we have a great product. And that's why we're seeing such great customer traction. We're seeing it in multiple areas...Ultimately, this is about architectural philosophy. We're not strapped to the past...We're focused on exactly and only what the agentic data center needs, performance, scale and efficiency.

Companies like Cerebras and Positron and Rebellions, they're joining Meta and OpenAI by using ARM AGI CPU for things like managing head nodes that they're building or managing accelerators they're building, so a head node type use case or also for agentic orchestration and fan-out. These are specific use cases that they're looking at. And then in the cloud, we see companies like SAP and SK Telecom and Cloudflare who are actively using or planning on deploying ARM as part of their infrastructure. These are just a few of the customers that are planning on using ARM AGI CPU.

According to Stefan Bäuerle, Senior Vice President, Head of HANA & Persistency, SAP:

SAP's successful deployment of SAP HANA on ARM-based AWS Graviton underscores the maturity and performance of the ARM ecosystem for enterprise workloads. The ARM AGI CPU extends that opportunity, providing scalable, efficient compute designed to support the next generation of AI-powered business solutions.

Meanwhile, Kevin Weil, head of the OpenAI Science team, notes that while GPUs have tended to get top billing in recent years, he endorses ARM's philosophy that AI offers a new age for the CPU:

One of the most common things I hear inside OpenAI: I need more compute.
It's kind of the coin of the realm. I mean, the root of it is we have more demand from customers, we have more ideas internally that we want to experiment with, we have more things that we want to do than, frankly, the industry can keep up with. When you get to the bottom of all this, it's certainly about silicon, but it's also about power. If you have a CPU that can draw less power - it could be just as performant, but use less power - it means you have more left over for everything else that you want to do. That means more inference and more compute. That means more intelligence, and if there's one thing that I've learned in my couple of years now at OpenAI, it's that more intelligence leads us to be able to build better products for all of you.

He adds:

The thing that I keep coming back to, that I try and remind myself of at all times, is that as amazing as the models are today, the model that you use today is the worst AI model that you will ever use for the rest of your life. A year from now, you couldn't imagine coming back to the AI models of today because they're getting better at such a rapid pace, which just means there's basically infinite demand for intelligence. So we are not stopping from here. ARM in the data center just works.

As a statement of intent, it doesn't get much clearer than that.
[4]
Arm creates history by building its first-ever chip, the Arm AGI CPU
TL;DR: Arm announced its first production silicon chip, the Arm AGI CPU, designed for data center AI applications with up to 136 Neoverse V3 cores on TSMC's 3nm process. Partnering with Meta and over 50 companies, Arm aims to challenge x86 systems by delivering higher performance per rack for agentic AI infrastructure.

Arm products have been ubiquitous in consumer electronics for quite some time now, but the company never actually produced the silicon itself. Over its 35-year history, the business model has been based on IP licensing rather than production. However, on March 24th, 2026, during the Arm Everywhere keynote, Arm made history by announcing its first-ever production silicon chip, the Arm AGI CPU.

There have been rumblings in the industry over the past year or so about Arm finally entering the merchant silicon market, but nothing really came of it until now. Arm has collaborated with Meta on this project, and the partnership aims to optimize Arm's AGI infrastructure for Meta's extensive family of apps. Moreover, the two will collaborate on future generations of the AGI CPU, according to Arm.

The Arm AGI processor is the first product of a new data center-focused silicon lineup. It can have up to 136 Neoverse V3 cores running at up to 3.7GHz, with a dedicated 2MB L2 cache per core. The CPU is manufactured on TSMC's 3nm process and has a 300-watt TDP. The cores use a dual-chiplet design with 96 lanes of PCIe Gen6 as well. This is serious hardware, hyper-focused on AI applications.

"AI has fundamentally redefined how computing is built and deployed. Agentic computing is accelerating that change. Today marks the next phase of the Arm compute platform and a defining moment for our company. With the expansion into delivering production silicon with our Arm AGI CPU, we are giving partners more choices, all built on Arm's foundation of high-performance, power-efficient computing, to support agentic AI infrastructure at global scale."
- Arm CEO, Rene Haas

It is interesting to note that Arm has opted to produce a data center-focused agentic AI CPU when the entire industry is crowding around GPUs for this purpose. Arm has placed its chips on the expectation that the CPU-to-GPU ratio is about to change in agentic AI applications, which is quite a bold bet. According to Arm's keynote, data centers are expected to require more than four times the current CPU capacity per gigawatt to support agent-driven applications.

Moreover, Arm's foray into silicon production should ring alarm bells for the current x86 manufacturers, AMD and Intel. According to Arm, its system can deliver 2x the performance per rack compared to the latest x86 systems, though, of course, this claim will need to be verified.

Arm's partner Meta is deploying the AGI CPU with its custom MTIA (Meta Training and Inference Accelerator) silicon, and additional deployments have been confirmed. Other partners include Cerebras, Cloudflare, F5, OpenAI, Positron, Rebellions, SAP, and SK Telecom. Arm says "more than 50 companies" have lined up for deployment. It will be interesting to see how Arm manages to compete with Intel, AMD, and NVIDIA for a piece of the AI pie.
[5]
ARM's CEO Rene Haas Says the 'AGI CPU' Will Bite Into the x86 Dominance, Brutally Referring to Intel as "Historic"
ARM's CEO appears confident in the company's entry into the server CPU segment, claiming that the AGI CPU is designed to counter x86's dominance in the industry.

ARM made a rather unusual announcement at its recent keynote, where CEO Rene Haas announced a transition from a mere IP company to a compute provider and unveiled its first-ever server CPU, the AGI CPU. ARM's decision has been met with skepticism, but Haas's revenue projections have drawn plenty of attention, since the AGI CPU is said to generate roughly $15 billion in annual revenue by 2031, driven by its rack-scale configuration.

Interestingly, ARM's CEO sat down with WIRED to talk about the company's new venture, and here are the fundamentals behind it:

So why would we build a chip? When you're a compute platform company, there are times when the ecosystem benefits from you physically building something. We've seen this in the past, whether it's Microsoft building a Surface laptop that helps the Windows ecosystem, while HP and Dell and Lenovo are still building laptops; or whether it's Google building a Pixel phone, but meanwhile, Samsung still builds Android phones. - ARM's Rene Haas

ARM's CEO claims that their entry as a compute provider is ultimately a way to expand the influence of the firm's IP offerings to a wider market, thereby increasing the company's customer TAM. Haas cites Microsoft's Windows and Surface laptop shipments, claiming that the latter has increased adoption of the company's OS. He says that with the AGI CPU, the ARM ecosystem becomes much stronger, and it also serves the agentic AI sector, where the need for processing units has significantly risen in the past few months due to agent orchestration and management workloads.

Drawing parallels with Microsoft is a sensible move from ARM's CEO, but it's important to note that ARM has helped its competitors develop capable solutions for different markets.
Specifically, when we talk about datacenters, ARM's IP dominates CPU offerings from NVIDIA and Amazon, which means that the AGI CPU has emerged in a market where ARM is both the 'foe and the fellow'. Given how closely ARM works with its customers, there may be skepticism among them about sharing chip designs and architectures now that ARM is a direct competitor. The ARM-Qualcomm fiasco in the mobile segment is a clear indicator of how hard it is to compete in and serve similar markets at once, and this could become a problem for the IP provider in the future.

Interestingly, ARM's CEO was asked about the conflict of interest with partners like NVIDIA, to which he had this to say:

Question: You're referring to the fact that it's Intel's x86 architecture versus Arm architecture. So you don't think you'll piss off your pal, Jensen, but that AMD and Intel may have some response to this.

ARM's CEO: I use "piss off" as tongue-in-cheek. It's beneficial to the Arm ecosystem and it's beneficial to Jensen that we build a chip. If you've got [Nvidia's] Vera chip, which is a great product, and you've got Arm AGI CPU, which is a great product, it's not great for Intel and AMD, that's all I know.

ARM is confident in its ability to compete with x86 in the server CPU segment with its AGI CPU, yet questions remain about its adoption prospects. Right now, the lead customer for the solution is Meta, which will integrate it into its rack offerings, likely with the MTIA ASICs. Haas does mention customers like SK hynix, Cisco, SAP, and Cloudflare, but, yet again, the adoption concern isn't only driven by the clients ARM can bring; it also depends on whether the firm can sustain production. The AGI CPU is being fabbed at TSMC on the 3nm process, and we know how difficult it has been to secure capacity recently.
The AGI CPU is a great step by ARM and the company's manufacturing team, yet the current focus is on whether the firm can capture market share, and there are several caveats to doing so, especially when competing with players that have been in the industry for decades.
Arm has launched its first in-house production silicon, the AGI CPU, marking a historic shift from IP licensor to compute provider. The 136-core processor targets agentic AI workloads in data centers, with Meta as lead partner and OpenAI among early adopters. Intel's leadership questions whether specialized chips are necessary, pointing to its own Clearwater Forest as a competitor.
After 35 years as an IP licensor, Arm has unveiled its first production silicon chip, the Arm AGI CPU, marking a defining moment for the company and the data center AI landscape [1][4]. The first server CPU from Arm features up to 136 Neoverse V3 cores running at 3.7GHz, manufactured on TSMC's 3nm process with a 300-watt TDP [4]. CEO Rene Haas announced the chip during the Arm Everywhere keynote on March 24, 2026, positioning it as a direct challenge to x86 dominance in the server market [5]. The processor promises more than double the performance per rack compared to traditional x86 systems, a claim that has drawn both interest and skepticism from industry observers [2].
Source: Wccftech
The Arm AGI CPU addresses what Haas describes as a fundamental shift in computing demands driven by agentic AI workloads [3]. Unlike simple query-response interactions, agents perform continuous reasoning, planning, and acting tasks that generate 15 times more tokens per human [3]. These workflows involve agent orchestration and management, payroll processing, scheduling, and asynchronous tasks that create a CPU-bound bottleneck in data centers [3]. Arm's calculations suggest data centers will require four times the current CPU capacity per gigawatt, approximately 120 million CPU cores, to support agent-driven applications [3]. Mohamed Awad, Arm EVP of Cloud AI, argued at the San Francisco event that existing x86 processors carry legacy features like boost modes and simultaneous multithreading (SMT) that consume die area and drive up power consumption without benefiting agentic workloads [1].
Source: diginomica
Kevork Kechichian, who runs Intel's Data Center Group and previously served as an executive at Arm, questions whether this new CPU architecture truly represents what hyperscalers need [1]. While acknowledging that heavy SIMD engines may be unnecessary for orchestration workloads, Kechichian argues that Intel accelerators like QuickAssist remain relevant for compression and cryptographic tasks [1]. He disputes Arm's case against SMT, noting that Nvidia's Vera CPUs include spatial multithreading, and suggests Arm lacks the option rather than making a strategic choice [1]. Intel's Clearwater Forest processor shares many qualities with the Arm AGI CPU, featuring 288 stripped-down cores with minimal SIMD extensions, a high core count, no SMT, and 12 channels of fast DDR5 memory [1]. The competition highlights a broader industry debate about whether large-scale AI data center workloads require purpose-built silicon or can run efficiently on existing architectures.

Meta serves as the lead partner and co-developer of the Arm AGI CPU, integrating it with its Meta Training and Inference Accelerator (MTIA) to optimize data center performance [2]. Early commercial adoption includes OpenAI, Cerebras, Cloudflare, Positron, Rebellions, SAP, and SK Telecom [2]. More than 50 industry leaders across hyperscale, cloud, semiconductor, memory, networking, and software sectors support the rollout [2]. Arm is collaborating with OEMs and ODMs including Lenovo, Supermicro, Quanta Computer, and ASRock Rack to deliver early systems, with broader availability expected in the second half of 2026 [2]. Haas projects the in-house AI chip will generate roughly $15 billion in annual revenue by 2031, driven by rack-scale configurations [5].
The processor delivers 6GB/s memory bandwidth per core with sub-100ns latency, enabling higher workload density and improved system efficiency [2]. Each core receives a dedicated 2MB L2 cache, and the dual-chiplet design includes 96 lanes of PCIe Gen6 [4]. Air-cooled 1U server chassis can accommodate up to 8,160 cores per rack, while liquid-cooled deployments reach 45,000 cores per rack [2]. The chip features only a pair of 128-bit wide vector units, compared to the 512-bit wide vectors supported on most Intel and AMD server chips [1]. Awad emphasized that the AGI CPU provides deterministic performance under sustained load, avoiding the power spikes associated with boost modes on x86 processors [1].
Source: TechRadar
Haas compared Arm's strategic shift to Microsoft building Surface laptops while partners like HP and Dell continue making Windows devices, arguing that physically building products strengthens the ecosystem [5]. However, Arm now competes directly in markets where its IP already dominates through partners like Nvidia and Amazon [5]. The merchant silicon market entry creates potential conflicts of interest, particularly with customers who previously collaborated on chip designs without facing Arm as a direct competitor [5]. When asked about competing with Nvidia, Haas suggested both the Vera CPUs and the Arm AGI CPU benefit the ecosystem at the expense of Intel and AMD [5]. Adoption prospects depend not only on customer acquisition but also on Arm's ability to sustain production at TSMC, where securing 3nm capacity has proven challenging [5]. Over 350 billion Arm chips have shipped throughout the company's history, three times the total number of humans who have ever existed, but success in the merchant silicon market requires different capabilities than IP licensing.