Curated by THEOUTPOST
On Thu, 27 Feb, 12:09 AM UTC
2 Sources
[1]
Arm to support more intelligent applications at the edge with Armv9 Edge AI Platform - SiliconANGLE
Chip designer Arm Holdings Plc is looking to strengthen its grip on artificial intelligence at the network edge with the debut of a powerful new lightweight processor designed to sit at the heart of intelligent internet of things applications. The company unveiled the Arm Cortex-A320 central processing unit today, saying it's the centerpiece of its all-new Armv9 Edge AI Platform, which provides all of the hardware needed to run lightweight AI workloads independently of the cloud.

In its pitch, Arm says our increasingly connected world means the cloud can no longer be the only place AI workloads are processed. Use cases such as smart cities and industrial automation demand that AI applications live at the edge and process data locally to eliminate latency, and doing so requires the right infrastructure to run them, the company says. That's what it's providing with the Armv9 Edge AI Platform, which combines the Cortex-A320 CPU with an AI accelerator, the Arm Ethos-U85 neural processing unit, to run powerful AI models with up to 1 billion parameters locally on any device.

Arm said the edge platform is equipped to handle workloads such as autonomous vehicles that navigate busy factory floors, smart cameras that must process what they're seeing, drones that carry out autonomous operations, and human-machine interfaces that drive natural, conversational interactions.

The Cortex-A320 is based on the company's most advanced CPU architecture, Armv9, and delivers key features such as SVE2 for machine learning performance of up to 10 times that of its predecessor edge CPU, the Cortex-A35. It also benefits from improved security, with new capabilities such as Pointer Authentication, Branch Target Identification and Memory Tagging Extension, which enable edge devices to handle sensitive data in the most exposed locations, the company said. At the same time, Armv9 provides greater efficiency, meaning lower running costs for edge AI workloads.

It's one thing to provide the infrastructure for edge AI applications and another to build them, but Arm has this covered too. Alongside the Armv9 Edge AI Platform, it's also extending its Arm KleidiAI software development platform to the edge. KleidiAI provides a powerful set of compute libraries that help AI frameworks optimize AI and machine learning workloads to run on the new Armv9 Edge AI Platform, the company said. It has already been widely integrated into IoT AI software frameworks such as Llama.cpp and ExecuTorch to accelerate the performance of lightweight large language models such as Meta Platforms Inc.'s Llama 3 and Microsoft Corp.'s Phi-3; a minimal sketch of what running such a model on-device can look like follows this article. According to Arm, KleidiAI can help boost the performance of the new Cortex-A320 by up to 70% in some scenarios. By using KleidiAI, developers can also accelerate time-to-market for new edge AI applications, quickly building solutions that grow and adapt as their requirements evolve.

The launch of the Armv9 Edge AI Platform has been warmly welcomed by customers including Amazon Web Services Inc. and the edge server manufacturer Eurotech S.p.A. AWS, for instance, has already integrated the hardware into the nucleus lite runtime environment within its AWS IoT Greengrass platform for edge devices.
"This seamless integration between the two technologies provides an optimized solution for developers to build modern edge AI applications, like anomaly detection in precision agriculture, smart manufacturing and autonomous vehicles," said AWS Vice President of IoT Yasser Alsaied. Meanwhile, Eurotech has been quick to install Arm's new hardware at the foundation of its latest edge computing hardware. "Arm's new edge AI platform provides us with the foundation to build the next generation of rich IoT devices, with Armv9 giving us access to new levels of secure performance, energy-efficiency and software flexibility," said Eurotech Chief Technology Officer Marco Carrer.
[2]
First 64-bit Armv9 CPU for AI in edge workloads launches
Arm rolls out the Cortex-A320 for small embedded gear that needs the oomph for big-model inference

Arm predicts AI inferencing will soon be ubiquitous. In order to give devices the oomph they need for all that neural-network processing, it is beefing up its embedded platform with the first 64-bit Armv9 CPU core aimed at edge workloads.

The Softbank-owned Brit chip design biz says AI development is moving quickly, claiming that network-edge machine-learning workloads were much simpler just a few years ago, focused on basic noise reduction or anomaly detection. "Take the humble doorbell as an example," says Paul Williamson, senior veep and general manager for Arm's internet-of-things line of business. It evolved from a simple buzzer to a basic camera viewer and now on to a smarter AI-driven device capable of determining whether it is detecting people, or even identifying specific individuals, he added.

To address this, the processor design house is introducing the Cortex-A320 CPU core, which is intended to be paired in edge AI system-on-chip (SoC) designs with the Ethos-U85, Arm's embedded neural processing unit (NPU) accelerator. It can be configured in clusters of four cores to scale and accommodate a range of performance needs. The A320 is said to be the "smallest Armv9 implementation," provides an AArch64 instruction set, and is a relatively simple single-issue, in-order, eight-stage core with up to 64KB of L1 cache and up to 512KB of L2. Good to see RISC-V keeping Arm on its toes, there.

As an indication of how fast things have moved, it is less than a year since Arm rolled out a reference platform for edge AI that paired the Ethos-U85 with the Cortex-M85, a microcontroller-grade CPU core design. In contrast, the Cortex-A320 is part of Arm's full-fat application processor family, albeit an "ultra-efficient" one, based on the newer Armv9 architecture with the various enhancements this brings. The new pairing delivers more than eight times the machine-learning performance of last year's platform, Williamson claims, and is capable of handling large AI models of over a billion parameters.

"The continued demand for hardware to efficiently execute larger networks is pushing memory size requirements, so systems with better memory access performance are becoming really necessary to perform these more complex use cases," Williamson said. "Cortex-A processors address this challenge as they've got intrinsic support for more addressable memory than Cortex-M based platforms and they're more flexible at handling multiple tiers of memory access latency."

Within the family of Armv9 processors, the Cortex-A320 is now said to be the most energy-efficient to date, as it's claimed to use half the power of the Cortex-A520, the high-efficiency core used in some reference designs. The move to Armv9 brings with it the security features introduced in this architecture, such as memory tagging extensions for catching memory errors, while for AI processing it also features the Scalable Vector Extensions (SVE2) and support for the BFloat16 data type.

Software development is also vital, and here Arm is offering support for the new edge hardware in its Arm Kleidi libraries. This includes KleidiAI, a set of compute kernels for building AI frameworks, and KleidiCV for computer vision applications. Kleidi supports Arm instruction set extensions such as Neon and SVE2, and is integrated into popular AI frameworks such as llama.cpp, ExecuTorch, and LiteRT, according to Williamson.
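Since ExecuTorch integration is called out here, the following is a rough sketch of how a small PyTorch model might be exported to an ExecuTorch program for on-device deployment. The module, input shape and output file name are placeholders, and the export API can differ between ExecuTorch releases, so treat this as an outline rather than a verified recipe.

```python
# Rough sketch: exporting a small PyTorch model to an ExecuTorch .pte file
# for edge deployment. Module, shapes and file name are placeholders; the
# ExecuTorch export API may differ slightly between releases.
import torch
from executorch.exir import to_edge

class TinyClassifier(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Conv2d(3, 8, kernel_size=3, stride=2),
            torch.nn.ReLU(),
            torch.nn.AdaptiveAvgPool2d(1),
            torch.nn.Flatten(),
            torch.nn.Linear(8, 4),  # e.g. four classes for a smart-camera task
        )

    def forward(self, x):
        return self.net(x)

model = TinyClassifier().eval()
example_inputs = (torch.randn(1, 3, 96, 96),)

exported = torch.export.export(model, example_inputs)  # capture the graph
edge_program = to_edge(exported)                        # lower to the edge dialect
et_program = edge_program.to_executorch()               # produce the runnable program

with open("tiny_classifier.pte", "wb") as f:
    f.write(et_program.buffer)                          # loaded later by the ExecuTorch runtime
```

The resulting .pte file is what the on-device ExecuTorch runtime would load and execute on the CPU (or, with the appropriate delegate, an attached NPU).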
Cortex-A320 also has the ability to run applications using real-time operating systems such as FreeRTOS and Zephyr, plus support for Linux. As with other Arm offerings, licensees will be responsible for building chips around the new Cortex-A320 and Ethos-U85. The firm said it is expecting to see it in silicon next year, but wouldn't name any specific partners or products that will be using it. Beyond network-edge applications, its low-power design makes it suitable for various uses, including smartwatches and wearables. Cortex-A320 is also potentially "the ideal CPU for baseboard management controllers in servers and infrastructure," according to Williamson. ®
Arm introduces the Cortex-A320 CPU as part of its new Armv9 Edge AI Platform, designed to support more powerful AI applications at the network edge with improved performance and energy efficiency.
Arm Holdings Plc has unveiled the Cortex-A320 central processing unit (CPU), the cornerstone of its new Armv9 Edge AI Platform. This development aims to strengthen Arm's position in artificial intelligence at the network edge, addressing the growing need for local AI processing in various applications 1.
The platform combines the Cortex-A320 CPU with the Arm Ethos-U85 neural processing unit (NPU), enabling the execution of powerful AI models with up to 1 billion parameters locally on devices. This integration supports a wide range of edge AI applications, including autonomous vehicles, smart cameras, drones, and human-machine interfaces 1.
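To put the 1-billion-parameter figure in perspective, the following back-of-envelope arithmetic (an illustrative calculation, not an Arm figure) shows roughly how much memory the weights of a model that size occupy at different numeric precisions, which is why low-bit quantized formats matter on edge-class boards with modest RAM.

```python
# Back-of-envelope sketch (illustrative arithmetic, not an Arm figure):
# approximate weight-memory footprint of a ~1-billion-parameter model at
# different precisions, before activations, KV cache and runtime overhead.
PARAMS = 1_000_000_000

for name, bits in [("float32", 32), ("bfloat16", 16), ("int8", 8), ("int4", 4)]:
    gib = PARAMS * bits / 8 / (1024 ** 3)
    print(f"{name:>8}: ~{gib:.2f} GiB of weights")

# Prints roughly 3.73, 1.86, 0.93 and 0.47 GiB respectively: quantization is
# what brings billion-parameter models within reach of edge devices.
```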
The Cortex-A320, based on the advanced Armv9 architecture, offers significant enhancements: SVE2 for machine learning performance up to 10 times that of its predecessor, the Cortex-A35; new security capabilities such as Pointer Authentication, Branch Target Identification and Memory Tagging Extension; and greater energy efficiency, which lowers running costs for edge AI workloads 1.
Arm is extending its KleidiAI software development platform to the edge, providing compute libraries to optimize AI and machine learning workloads for the new Armv9 Edge AI Platform. KleidiAI has been integrated into popular IoT AI software frameworks like Llama.cpp and ExecuTorch, potentially boosting Cortex-A320 CPU performance by up to 70% in some scenarios 1 2.
The launch has been well-received by major players in the tech industry: AWS has already integrated support into the nucleus lite runtime within its AWS IoT Greengrass platform for edge devices, while edge computing specialist Eurotech is building its next generation of IoT hardware on the new platform 1.
The Cortex-A320 is designed as an "ultra-efficient" application processor: billed as the smallest Armv9 implementation to date, it is a single-issue, in-order, eight-stage AArch64 core with up to 64KB of L1 cache and up to 512KB of L2, can be configured in clusters of four cores, is claimed to use half the power of the Cortex-A520, supports SVE2 and BFloat16, and runs real-time operating systems such as FreeRTOS and Zephyr as well as Linux 2.
The CPU's low-power design makes it suitable for various applications beyond edge AI, including smartwatches, wearables, and potentially as a CPU for baseboard management controllers in servers and infrastructure 2.
References
[1] Arm to support more intelligent applications at the edge with Armv9 Edge AI Platform - SiliconANGLE
[2] First 64-bit Armv9 CPU for AI in edge workloads launches