9 Sources
[1]
The Top Android Phones of 2026 Could Get Better AI With Arm's New CPUs
On the same day Apple revealed its new iPhone 17 lineup, chip designer Arm is introducing its next generation of processors, bundled into its new Arm Lumex platform. These processors will likely make their way into premium Android phones in 2026 -- and improve their AI capabilities without draining device batteries faster, Arm says. "It's pretty amazing how there's been kind of this insatiable amount of performance being asked for [by our customers], and a lot of it's around AI, as well as some graphics workloads," said Chris Bergey, senior vice president and general manager of Arm's client line of business. Arm's processors have traditionally been the centerpieces of the systems-on-a-chip that power smartphones. For instance, Arm's previous top-end central processing unit, the Cortex X925, released last year, is featured in Samsung's Exynos 2500 chipset, which powered the Galaxy Z Flip 7, and in MediaTek's Dimensity 9400 chipset, found in the Oppo Find X8 Pro. It's likely that Arm's new chips, either on their own or bundled into the Arm Lumex platform, will power premium Android phones and other devices next year. But Arm faces more competition: Qualcomm (which used Arm CPUs in older chips like the Snapdragon 8 Gen 3) has shifted to its internally designed Oryon CPUs in its latest silicon, a move that fueled a tech licensing clash between the two chip companies over the last few years. Arm's new CPU range also brings a new naming scheme. The successors to the X925 are two different chips: the highest-end C1-Ultra, which boasts 25% greater performance than its predecessor, and the next-most-powerful C1-Premium (for which no performance figure was given). The successor to the A725 chip is the C1-Pro, which is 12% more efficient.
These CPUs also benefit from an evolution of their chip architecture called Scalable Matrix Extension version 2, which enables better AI performance. The new Arm Lumex platform combines these chips with the new Mali G1-Ultra GPU (which Arm says has 20% better performance and twice the ray-tracing performance of its predecessor) for a system that can be plugged into larger chipsets. The end result: up to a 5x improvement in AI performance, 4.7x lower latency for speech-based workloads (think live translation) and 2.8x faster audio generation, Arm says. The goal is faster performance without increased battery drain when running AI tasks that tax processing power. For example, a yoga tutor demo app running Arm's new chips saw a 2.4x boost in text-to-speech, giving faster feedback to users, the company said in a press release. As a supplier of chip designs and technology, Arm ultimately leaves it to the phonemakers using its silicon to decide how much (or how little) of these advancements they integrate into their devices. But as demand for generative AI like ChatGPT increases, so too does the drive to get that functionality working on phones as efficiently as possible, rather than relying on slower responses going to and from the cloud. So what will consumers see with devices running C1-Ultra chips and Arm's new technology? "I think what they're gonna see is the ability to run amazing on-device AI and to do so with significant power savings, significant performance increases, and also third-party support," Bergey said. "Not just first-party devices, but also third-party devices."
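Arm's headline figures are relative multipliers, so a quick sketch helps translate them into absolute numbers. The arithmetic below is hypothetical: the 940 ms baseline is invented for illustration and is not a figure from Arm.

```python
# Hypothetical arithmetic only: translating "Nx lower latency" claims
# into absolute figures. The 940 ms baseline is invented, not from Arm.
def improved_latency(baseline_ms: float, factor: float) -> float:
    """An 'Nx lower latency' claim divides the baseline by N."""
    return baseline_ms / factor

baseline = 940.0  # invented baseline for a speech-based workload
print(improved_latency(baseline, 4.7))  # ~200 ms under the claimed 4.7x
```

The same division applies to the 2.8x audio-generation and 5x AI-performance claims; only the baselines differ per workload.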
[2]
Arm bets on CPU-based AI with Lumex chips for smartphones
Arm has lifted the lid on its latest mobile platform, comprising new CPU and GPU designs plus rearchitected interconnect and memory management logic, all optimized with a coming wave of AI-enabled smartphones in mind. The UK-based chip designer has been moving towards more integrated solutions rather than just offering cores for several years, and this year's Lumex compute subsystem (CSS) is the latest evolution of that philosophy. As with each generation, Arm manages to squeeze more performance and power efficiency out of its designs, claiming an average 15 percent step up in CPU performance and 20 percent in GPU performance, while saving 15 percent on power. The key focus with Lumex is on Arm's SME2 Scalable Matrix Extensions in the CPU cluster, which the firm is pushing as the preferred route for AI acceleration, and overall system-level optimizations to boost scalability for devices capable of running AI models. Stefan Rosinger, Senior Director for CPUs, said that SME2 makes AI acceleration "an order of magnitude better" than before, and its advantage for a mobile device is that it uses less power and finishes calculations quicker. According to Arm, we can expect to see Lumex implemented in smartphones and other devices later this year or early next year. It has been crafted with 3nm manufacturing in mind, and the firm says it expects chips produced by its licensees will run at upwards of 4 GHz clock speeds. In a rare outbreak of common sense, all the cores in the Lumex CPU cluster are designated C1, with the maximum performance core design labelled the C1-Ultra. Most smartphone chips now feature a mix of core types, with one or two performance cores for the demanding work, coupled with more power-optimized cores to handle other tasks. This started out as big.LITTLE many years ago.
With Lumex, Arm has given chip designers no fewer than four core types to choose from, adding C1-Premium as the next step down, followed by C1-Pro and finally C1-Nano as the smallest footprint core with the greatest power efficiency. Chips aimed at flagship phones will likely see two C1-Ultra cores combined with six C1-Pro cores, Arm believes, while "sub-flagship" silicon might use a mix of two C1-Premium and six C1-Pro, and the mainstream could be served by a mix of four Pro and four Nano. In the GPU department, the Lumex platform has the Mali G1, and like the CPU cores, these are graded from Mali G1-Ultra to Mali G1-Premium and Mali G1-Pro. The differences between these three tiers are in the number of shader cores, with Pro having one to five, Premium six to nine, and Ultra 10 cores or more. Mali G1-Ultra is also the only tier with Arm's redesigned Ray Tracing Unit (RTU), claimed to offer 40 percent higher performance than last year's Immortalis-G925, plus higher quality for games. The new GPU design is also claimed to accelerate in-game AI with support for half-precision (FP16) matrix multiplication, which Arm claims improves tensor processing while reducing memory bandwidth and lowering power consumption. Arm has already given us a taste of the neural accelerator hardware it is bringing to its phone GPUs next year, but that is not part of the Mali G1-Ultra in Lumex this year. What is included in this complete compute subsystem are a new purpose-built System Interconnect (SI) and System Memory Management Unit (SMMU), designed to cope with the demands that running AI models on smartphones is likely to bring. The System Interconnect has been redesigned with what Arm calls a channelized architecture that can give quality of service (QoS) priority to different traffic, while the System MMU has been optimized to reduce latency by up to 75 percent, the firm says.
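The suggested core mixes above can be sketched as data. The tier names are Arm's, but the structure and helper below are purely illustrative, not any real configuration format.

```python
# Illustrative sketch of the Lumex core mixes Arm suggests per market tier.
# Tier names and counts come from the article; the dict format is invented.
CLUSTER_CONFIGS = {
    "flagship":     {"C1-Ultra": 2, "C1-Pro": 6},
    "sub-flagship": {"C1-Premium": 2, "C1-Pro": 6},
    "mainstream":   {"C1-Pro": 4, "C1-Nano": 4},
}

def total_cores(tier: str) -> int:
    """Total core count for a tier's suggested mix."""
    return sum(CLUSTER_CONFIGS[tier].values())

for tier, mix in CLUSTER_CONFIGS.items():
    print(f"{tier}: {mix} -> {total_cores(tier)} cores")  # each mix is 8 cores
```

Notably, all three suggested mixes land on eight cores; the tiers differ in how the area and power budget is split between performance and efficiency cores, not in total count.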
It isn't just about hardware, of course, and Arm says it has been working behind the scenes to ensure that the various developer frameworks support the optimizations coming in its latest platforms. Its KleidiAI libraries have been integrated with frameworks such as PyTorch, Llama, LiteRT and ONNX to enable support for SME2 acceleration when running AI workloads. Arm believes that AI processing should be kept on the CPU, since "it's the only compute unit in the mobile market you can rely on to be present in every mobile phone," according to Geraint North, Arm Fellow in AI and Developer Platforms. "As you start to move to GPUs and NPUs (neural processing units), you end up doing different work for different handsets," he explained, as vendors may have opted for different GPUs and NPUs in their smartphone silicon. This is a logical viewpoint, but it's not one that everyone necessarily agrees with. Analyst firm Gartner explicitly defines a GenAI smartphone as a device equipped with a built-in neural engine or neural processing unit (NPU) capable of running small language models. It proclaims that premium smartphones and basic smartphones (under $350) will fit this description, with only "utility smartphones" not expected to have NPU capabilities. This will suit Qualcomm, no doubt, which features an integrated NPU in its smartphone chips and showed off a 7 billion parameter large language model running on an Android phone at last year's MWC show. Bob O'Donnell, President and Chief Analyst at TECHnalysis Research, said that Arm's approach makes sense given where the market is right now. "First, because of the wide range of different NPU architectures available and the lack of standardization, few software developers are actually using the NPUs for their AI applications. Instead, they're defaulting to the CPU and GPU, and with the latest SME2 instructions and logic from Arm, they're going to help accelerate those functions," he told The Register.
"Second, many of Arm's partners are choosing to differentiate with their own NPU designs and yet another option from Arm could actually make that NPU confusion worse. Hopefully we'll start to see some standardized means for leveraging different NPU architectures soon so that NPUs can start being used more frequently, but I'm concerned that could still be several years away." Arm's silicon licensees will have to place their bets now - will Arm's approach with SME2 in the CPU prove more popular, or will smartphone makers and the buying public want a built-in NPU?
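O'Donnell's point about developers defaulting to the CPU can be sketched as a dispatch decision. This is an invented illustration, not Arm's or any vendor's API; the backend names are made up.

```python
# Invented sketch (no real API): why the CPU path is the safe default.
# Each NPU requires per-vendor porting work; the CPU is always present.
SUPPORTED_NPUS = {"vendor-a-npu"}  # hypothetical: only one NPU ported so far

def pick_backend(device_npu):
    """Return the compute backend for a device's (possibly absent) NPU."""
    if device_npu in SUPPORTED_NPUS:
        return f"npu:{device_npu}"   # use the ported, vendor-specific path
    return "cpu-sme2"                # universal fallback on the CPU

print(pick_backend("vendor-a-npu"))  # npu:vendor-a-npu
print(pick_backend("vendor-b-npu"))  # cpu-sme2 (unported NPU)
print(pick_backend(None))            # cpu-sme2 (no NPU at all)
```

The trade-off the article describes is visible here: every new NPU architecture adds another branch a developer must port and test, while the CPU path works unchanged on every handset.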
[3]
Arm launches new generation of mobile chip designs geared for AI
SAN FRANCISCO, Sept 9 (Reuters) - Arm Holdings said on Tuesday it was launching its next-generation set of chip designs called Lumex that it has optimized for artificial intelligence to run on mobile devices such as smartphones and watches without accessing the internet. Called Lumex, the new generation of Arm mobile designs come in four types, ranging from less powerful but more energy efficient ones designed for watches and other smart wearable devices to a version designed to maximize the computing horsepower available. The peak performance design aims to run software that harnesses the power of large AI models without accessing cloud computing on devices such as high-end smartphones. "AI is becoming pretty fundamental to kind of what's happening, whether it's kind of real-time interactions or some killer use cases like AI translation," said Chris Bergey, a senior vice president and general manager at Arm. "We're just seeing (AI) become kind of this expectation." The Lumex designs announced on Tuesday are part of the company's Compute Subsystems (CSS) business, which aims to offer handset makers and chip designers a more ready-made piece of technology that enables their speedy use in new products. The more complete designs Arm is now making for data centers and mobile phones are part of the company's longer-term plans to grow its smartphone and other revenue through a variety of means. Arm has also said it plans to invest more money into examining making its own chips and has hired several key people in order to do so. The Lumex designs are optimized for so-called 3-nanometer manufacturing nodes, such as the process offered by TSMC. The series of iPhone chips Apple announced on Tuesday use a TSMC 3-nanometer process.
The company is holding a launch event in China on Wednesday to unveil the new designs because outside of Apple and Samsung the leading handset makers are located there, Bergey said. Reporting by Max A. Cherney in San Francisco; Editing by Jamie Freed.
[4]
Arm's bid for smarter, AI-powered phones and PCs begins with Lumex
It's likely that Lumex gives us a hint at what Arm's Niva PC line will eventually offer. With any luck, Arm's new Lumex CPU platform may give us a hint at what to expect from upcoming Windows on Arm PCs: four tiers of CPU power, plus an improved ray-tracing engine and graphics upscaling. Arm says that its new Lumex C1-series chips will deliver 25 percent more performance than the Cortex X925 series of processors it launched in May 2024. Like the X925, the latest Lumex C1 cores are being optimized with 3nm process technologies in mind, with physical implementations and foundry collaborations to speed customers to market. (While Arm has hinted at building its own physical cores for years, company representatives would say only that the platform includes "near production-ready physical implementations for partners" and is "not a chip.") The Lumex platform is Arm's brand for smartphones. PC-specific Arm chips will be branded as "Niva" under Arm's new naming scheme, though they will share some common features with the Lumex cores. Qualcomm, which builds its own chips on the Arm architecture, uses the "Snapdragon" brand, which will almost certainly continue. Arm chips power the vast majority of the world's smartphones, as Arm licenses its designs to customers, who can choose to bring the Arm cores to market or take an architectural license and design compatible but otherwise brand-new cores of their own making. That's the approach Apple and Qualcomm have taken; in addition to designing smartphone chips, they have brought Arm into the macOS space as well as Windows on Arm PCs. Though Qualcomm and Arm have had their legal differences over licensing Arm's cores -- which have since been settled -- an Arm representative declined to comment when asked if any ongoing legal issues would prevent Qualcomm from taking a license. And you will, apparently, see the Lumex in more than just PCs.
"So the Lumex platform is going to power flagship smartphones through to PCs and tablets," said James McNiven, the vice president of product management for Arm, in a press briefing. Smartphones, though, have their own demands: low power, which Arm's RISC architecture was designed for, and maximizing local AI. The Cortex C1 series includes what Arm calls Scalable Matrix Extension 2 (SME2), using the Arm v9.3 instruction set. Arm doesn't use a dedicated NPU. Instead, it offers a technology called KleidiAI that essentially uses software libraries to address AI-specific functions inside the CPU, no matter which version of the Arm architecture is present. In the C1 CPU cluster, Arm says, you'll see a 5X uplift in AI performance. Arm says that will increase performance in apps that always have some form of AI technology running, such as audio generation, camera inferencing, or computer vision. Specifically, Arm is claiming over a 4.7X improvement in latency for speech recognition and classical large language model tasks, and about 2.8 times faster audio generation. The other thing Arm has been known for is that it basically pioneered the concept of performance and efficiency cores, a strategy known as big.LITTLE. But with Lumex, that's been taken to a new level with the addition of a new "premium" core. Now, there are four different tiers of CPU cores: the C1-Ultra, the C1-Premium, the C1-Pro, and the C1-Nano. Nothing's really changed all that much: The Ultra and Premium cores are simply two tiers of "performance" cores, while the Pro and Nano deliver different levels of efficiency. Arm executives said that the Premium core could stand in for the Ultra cores on non-flagship, cheaper devices, as it offers similar performance to the Ultra, but in a 35 percent smaller area.
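The SME2 extensions described above accelerate matrix multiplication, the core primitive behind most neural-network layers. The plain-Python kernel below only shows the computation being accelerated, not how SME2 itself is programmed (that happens in hand-tuned assembly inside libraries like KleidiAI).

```python
# The matrix multiply below is the operation SME2 speeds up in hardware.
# This naive Python version is for illustration only; real kernels tile
# the matrices and stream them through the SME2 matrix registers.
def matmul(a, b):
    """Multiply matrix a (m x k) by matrix b (k x n), returning m x n."""
    rows, inner, cols = len(a), len(b), len(b[0])
    assert len(a[0]) == inner, "inner dimensions must match"
    return [[sum(a[i][k] * b[k][j] for k in range(inner))
             for j in range(cols)]
            for i in range(rows)]

# A tiny 2x2 example; attention and convolution layers reduce to many
# such tiles, which is why a faster matmul lifts AI performance broadly.
print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```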
The C1-Pro "improves" upon the Cortex A725 in terms of performance and efficiency, McNiven said, while the Nano has the "smallest footprint" and will play a role in both flagship and entry-level devices. Overall, Arm executives said that on-device AI is three times more power efficient than in previous implementations, and the Pro is 12 percent more power efficient at the same frequency. Arm is also introducing a new GPU, the Mali G1-Ultra, which promises 20 percent better graphics performance, twice the ray-tracing performance, and 20 percent faster inferencing for AI processing than the earlier Immortalis-G925. The Mali G1-Ultra will also consume less power, as the block is on its own power island with less leakage when idle. Specifically, Arm is claiming that frame rates in ray-traced games will be 40 percent higher than on its predecessor, part of moving to a "single-ray" model for improved efficiency and more realistic lighting, McNiven said. The new Mali core also includes upscaling -- quickly rendering at a lower resolution for improved frame rate, then upscaling for better visual quality -- but it does not use the AI-generated frames of some desktop GPUs. "One of the examples that we have been seeing recently was some of the new ray tracing benchmarks like [UL's 3DMark] Solar Bay Extreme, and I think that we see there up to a doubling in performance, because it is so ray tracing heavy. So it really does depend on just the amount of ray-tracing content," McNiven said. One idea behind the Lumex platform, executives said, was to move certain cloud-based AI functions onto the device. Specifically, a large language model in the world of Krafton's Inzoi (a spiritual successor to The Sims) was run on-device at the GDC conference, they said, as well as a "coach" that watched you play in Tencent's Honor of Kings and offered advice.
A major online payment provider is also working to put agentic AI on device to handle payment processing during peak times, instead of committing to expensive, back-end cloud servers, said Chris Bergey, the senior vice president of the client line of business at Arm. "If your device is capable of running a large language model, you have an extra means of interacting with the game that augments your experience," McNiven said. "But if you don't have it, the game is still playable."
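The render-low-then-upscale idea described for the Mali G1-Ultra can be sketched with the simplest possible filter. Nearest-neighbour doubling is an assumption chosen for clarity; Arm does not specify which upscaling filter the GPU uses.

```python
# Illustrative nearest-neighbour 2x upscale: render a small frame, then
# blow it up for display. Arm's actual filter is unspecified; this only
# shows why rendering at low resolution saves work (1/4 of the pixels).
def upscale_2x(frame):
    """Double each pixel horizontally and vertically."""
    out = []
    for row in frame:
        wide = [px for px in row for _ in range(2)]  # duplicate columns
        out.append(wide)
        out.append(list(wide))                       # duplicate the row
    return out

low_res = [[1, 2],
           [3, 4]]      # a 2x2 "frame" of pixel values
print(upscale_2x(low_res))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

Rendering at half resolution touches a quarter of the pixels, which is where the frame-rate gain comes from; smarter filters trade some of that saving for better visual quality.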
[5]
Arm unveils new Lumex AI focused smartphone CPUs with some impressive stats
Arm has lifted the wraps off its next-generation Lumex chip designs, optimized to run some local AI workloads on mobile devices. Its architecture allows for four different design types, ranging from energy-efficient cores for wearables to high-performance cores for flagship smartphones. Citing accelerated product cycles, which result in tighter timescales and reduced margin for error, Arm says its integrated platforms combine CPU, GPU and software stacks to speed up time-to-market. Arm described Lumex as its "new purpose-built compute subsystem (CSS) platform to meet the growing demands of on-device AI experiences." The Armv9.3 C1 CPU cluster includes built-in SME2 units for accelerated AI, promising 5x better AI performance and 3x more efficiency compared with the previous generation. Standard benchmarks see performance rise by 30%, with a 15% speed-up in apps and 12% lower power use in daily workloads compared with the prior generation. The four CPUs on offer are C1-Ultra for large-model inferencing, C1-Premium for multitasking, C1-Pro for video playback and C1-Nano for wearables. The Mali G1-Ultra GPU also enables 20% faster AI/ML inferencing than the Immortalis-G925, as well as improvements across gaming like 2x better ray-tracing performance. Lumex also offers G1-Premium and G1-Pro options - but no G1-Nano. Interestingly, Arm positions CPUs as the universal AI engine given the lack of standardization in NPUs, even though NPUs are starting to earn their place in PC chips. Launching with Lumex is a complete Android 16-ready software stack, SME2-enabled KleidiAI libraries and telemetry to analyze performance and identify bottlenecks, allowing developers to tailor Lumex to each model. "Mobile computing is entering a new era that is defined by how intelligence is built, scaled, and delivered," Senior Director Kinjal Dave explained.
Looking ahead, Arm notes that many popular Google apps are already SME2-enabled, meaning that they're prepared to benefit from improved on-device AI features when next-generation hardware becomes available.
[6]
Lumex, Arm's new mobile chip designs purpose-built for AI
Chip-designer Arm has announced its next-generation set of chip designs, called Lumex, purpose-built for AI on mobile devices like smartphones and watches. UK semiconductor company Arm designs processor core architectures and instruction sets, which are licensed by chip giants such as Apple and Nvidia. Yesterday it announced Arm Lumex, its "most advanced compute subsystem (CSS) platform purpose-built to accelerate AI experiences on flagship smartphones and next-gen PCs". "AI is no longer a feature, it's the foundation of next-generation mobile and consumer technology," said Arm SVP Chris Bergey, who leads the chip designer's client line of business. "Users now expect real-time assistance, seamless communication, or personalised content that is instant, private, and available on device, without compromise. Meeting these expectations requires more than incremental upgrades, it demands a step change that brings performance, privacy and efficiency together in a scalable way." "Partners can choose exactly how they build Lumex into their SoC (system on a chip) - they can take the platform as delivered and leverage cutting-edge physical implementations tailored to their needs, reaping time to market and time to performance benefits," said Bergey. "Alternatively, partners can configure the platform RTL for their targeted tiers and harden the cores themselves."
The new Lumex name was mooted back in May, when Arm said it was introducing a simplified product naming architecture "to better communicate these platforms to the outside world", with each compute platform having a clear identity for its respective end market:
- Arm Neoverse for infrastructure
- Arm Niva for PC
- Arm Lumex for mobile
- Arm Zena for automotive
- Arm Orbis for IoT
- The Mali brand to continue for GPUs
The developments come at a time when the majority SoftBank-owned Arm is believed to be planning to build its own chips, with Amazon's artificial intelligence (AI) chip director Rami Sinno the latest recruit to boost this ambition. Sinno, in his role as director of engineering at Amazon-owned Annapurna Labs, played a large part in the development of Amazon's AI chips Trainium and Inferentia. Prior to joining Amazon's homegrown chip operation, Sinno had held leadership positions in Arm's engineering team for more than five years until 2019. The company has had its sights set on its own in-house manufacturing for a while now, with reports indicating as much as far back as January. In February, sources told Reuters that the company had begun hiring from its own customers and competing with them for deals as part of its plan to sell its own chips.
[7]
Arm's New Lumex Platform is Geared Towards AI Functionality - Phandroid
Arm recently announced the arrival of its Lumex CSS platform, which it says is designed to support next-generation AI features on consumer devices. With that in mind, the new platform combines powerful CPUs, GPUs, and system IP with an optimized software stack. One of Lumex's core components is the Scalable Matrix Extension version 2 (SME2), which Arm says enhances AI performance by enabling new SME2-enabled CPUs to achieve up to a 5x increase in AI performance. On paper, this essentially translates to faster real-time tasks like voice translation and audio generation. Chris Bergey, SVP and GM of Arm's Client Line of Business, states: AI is no longer a feature, it's the foundation of next-generation mobile and consumer technology. Users now expect real-time assistance, seamless communication, or personalized content that is instant, private, and available on device, without compromise. Meeting these expectations requires more than incremental upgrades, it demands a step change that brings performance, privacy and efficiency together in a scalable way. By moving AI processing from the cloud to the device, Arm says that Lumex is able to provide users a faster, more private, and more reliable experience. It also supports graphics enhancement and gaming with the new Mali G1-Ultra GPU, with as much as a 2x uplift in ray-tracing performance and up to a 20% increase in AI inference performance. For developers, the KleidiAI software library is integrated into major mobile operating systems and AI frameworks, allowing them to easily access SME2 acceleration. Arm says that this technology will add over 10 billion TOPS of compute across more than 3 billion devices over the next half-decade.
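The closing projection can be sanity-checked with back-of-envelope arithmetic; the per-device average below is this article's own division, not a figure from Arm.

```python
# Back-of-envelope check of Arm's projection: 10 billion cumulative TOPS
# spread across more than 3 billion devices over five years. The average
# computed here is our arithmetic, not a number Arm has published.
total_tops = 10_000_000_000
devices = 3_000_000_000
avg_tops_per_device = total_tops / devices
print(round(avg_tops_per_device, 2))  # roughly 3.33 TOPS per device
```

Since "more than 3 billion devices" is a lower bound, the true average would be at most this figure.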
[8]
Arm launches new generation of mobile chip designs geared for AI - The Economic Times
Arm Holdings said on Tuesday it was launching its next-generation set of chip designs called Lumex that it has optimized for artificial intelligence to run on mobile devices such as smartphones and watches without accessing the internet. Called Lumex, the new generation of Arm mobile designs come in four types, ranging from less powerful but more energy efficient ones designed for watches and other smart wearable devices to a version designed to maximize the computing horsepower available. The peak performance design aims to run software that harnesses the power of large AI models without accessing cloud computing on devices such as high-end smartphones. "AI is becoming pretty fundamental to kind of what's happening, whether it's kind of real-time interactions or some killer use cases like AI translation," said Chris Bergey, a senior vice president and general manager at Arm. "We're just seeing (AI) become kind of this expectation." The Lumex designs announced on Tuesday are part of the company's Compute Subsystems (CSS) business, which aims to offer handset makers and chip designers a more ready-made piece of technology that enables their speedy use in new products. The more complete designs Arm is now making for data centers and mobile phones are part of the company's longer-term plans to grow its smartphone and other revenue through a variety of means. Arm has also said it plans to invest more money into examining making its own chips and has hired several key people in order to do so. The Lumex designs are optimized for so-called 3-nanometer manufacturing nodes, such as the process offered by TSMC. The series of iPhone chips Apple announced on Tuesday use a TSMC 3-nanometer process. The company is holding a launch event in China on Wednesday to unveil the new designs because outside of Apple and Samsung the leading handset makers are located there, Bergey said.
[9]
Arm launches new generation of mobile chip designs geared for AI
SAN FRANCISCO (Reuters) -Arm Holdings said on Tuesday it was launching its next-generation set of chip designs called Lumex that it has optimized for artificial intelligence to run on mobile devices such as smartphones and watches without accessing the internet. Called Lumex, the new generation of Arm mobile designs come in four types, ranging from less powerful but more energy efficient ones designed for watches and other smart wearable devices to a version designed to maximize the computing horsepower available. The peak performance design aims to run software that harnesses the power of large AI models without accessing cloud computing on devices such as high-end smartphones. "AI is becoming pretty fundamental to kind of what's happening, whether it's kind of real-time interactions or some killer use cases like AI translation," said Chris Bergey, a senior vice president and general manager at Arm. "We're just seeing (AI) become kind of this expectation." The Lumex designs announced on Tuesday are part of the company's Compute Subsystems (CSS) business, which aims to offer handset makers and chip designers a more ready-made piece of technology that enables their speedy use in new products. The more complete designs Arm is now making for data centers and mobile phones are part of the company's longer-term plans to grow its smartphone and other revenue through a variety of means. Arm has also said it plans to invest more money into examining making its own chips and has hired several key people in order to do so. The Lumex designs are optimized for so-called 3-nanometer manufacturing nodes, such as the process offered by TSMC. The series of iPhone chips Apple announced on Tuesday use a TSMC 3-nanometer process. The company is holding a launch event in China on Wednesday to unveil the new designs because outside of Apple and Samsung the leading handset makers are located there, Bergey said. (Reporting by Max A. Cherney in San Francisco; Editing by Jamie Freed)
Arm introduces its new Lumex platform, featuring advanced CPU and GPU designs optimized for AI processing on mobile devices. The platform promises significant performance improvements and energy efficiency for smartphones and other mobile devices.

Arm, the UK-based chip designer, has unveiled its latest mobile platform, Lumex, marking a significant leap in on-device AI capabilities for smartphones and other mobile devices [1][2][3]. This new generation of chip designs aims to redefine how intelligence is built, scaled, and delivered in mobile computing [5].

The Lumex platform introduces a new CPU range with four distinct tiers [1][4]:
- C1-Ultra: the peak-performance core, claiming 25% greater performance than its predecessor, aimed at large-model inferencing on flagship phones
- C1-Premium: similar performance to the Ultra in a roughly 35% smaller area, aimed at sub-flagship devices
- C1-Pro: an efficiency-focused core, 12% more power efficient at the same frequency
- C1-Nano: the smallest-footprint, most power-efficient core, suited to wearables

This tiered approach allows for flexible configurations to suit various device types, from high-end smartphones to wearables [4].

At the heart of Lumex is the Armv9.3 C1 CPU cluster, featuring built-in Scalable Matrix Extension version 2 (SME2) units. This architecture enables [3][5]:
- up to 5x better AI performance and 3x greater power efficiency than the previous generation
- 4.7x lower latency for speech-based workloads such as live translation
- 2.8x faster audio generation

The Mali G1-Ultra GPU complements the CPU advancements with [1][4]:
- 20% better graphics performance than the Immortalis-G925
- 2x the ray-tracing performance, via a redesigned Ray Tracing Unit
- 20% faster AI/ML inferencing

Arm is positioning CPUs as the universal AI engine, citing the lack of standardization in neural processing units (NPUs). The Lumex platform aims to run AI workloads efficiently on-device, reducing reliance on cloud processing [2][3].

To support developers, Arm is launching a complete Android 16-ready software stack alongside Lumex. This includes [2][5]:
- SME2-enabled KleidiAI libraries, integrated with frameworks such as PyTorch, LiteRT and ONNX
- telemetry tools to analyze performance and identify bottlenecks

The Lumex platform is expected to power a wide range of devices, from flagship smartphones to PCs and tablets. With many popular Google apps already SME2-enabled, the stage is set for improved on-device AI features when next-generation hardware becomes available [5].

As the demand for generative AI like ChatGPT increases, Arm's focus on efficient on-device processing could significantly impact the mobile industry. The company expects Lumex to be implemented in smartphones and other devices later this year or early next year, with chips potentially running at clock speeds upwards of 4 GHz [1][2].

This advancement in mobile AI processing sets the stage for a new era of smart devices, promising faster performance, improved power efficiency, and enhanced user experiences across a wide range of applications.
Summarized by Navi