Curated by THEOUTPOST
On Thu, 26 Sept, 12:05 AM UTC
3 Sources
[1]
Arm: AI will turn smartphones into 'proactive assistants'
The British chip giant supported the development of Meta's latest AI models.

Arm wants to upgrade the brains inside our mobile devices. The chip designer, whose architectures power 99% of smartphones, envisions AI bringing a new wave of breakthroughs to our handsets. The company outlined this plan after the release of Llama 3.2, Meta's first open-source models that process both images and text.

Arm said the models run "seamlessly" on its compute platform. The smaller, text-based LLMs, Llama 3.2 1B and 3B, are optimised for Arm-based mobile chips, which lets them deliver faster user experiences on smartphones. Processing more AI at the edge can also bring energy and cost savings.

These enhancements offer new opportunities to scale. By increasing the efficiency of LLMs, Arm can run more AI directly on smartphones. For developers, that could lead to faster innovation, and Arm expects endless new mobile apps to emerge as a result. LLMs will perform tasks on your behalf by understanding your location, schedule, and preferences; routine tasks will be automated and recommendations personalised on-device.

Your phone will evolve from a command-and-control tool into a "proactive assistant," and Arm aims to accelerate this evolution. The UK-based business wants its CPUs to provide "the foundation for AI everywhere," and it has set an ambitious timetable: by 2025, the chip giant wants more than 100 billion Arm-based devices to be "AI ready."
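Arm's announcement doesn't include sample code, but the on-device claim is easy to make concrete. Below is a minimal sketch of running a small Llama 3.2-class model locally with the open-source llama-cpp-python bindings, which execute on Arm CPUs; the quantized model filename, context size and thread count are illustrative assumptions rather than anything Arm or Meta specifies.

```python
# Minimal sketch: on-device text generation with a small Llama model via
# llama-cpp-python. The GGUF filename below is a hypothetical local file.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-3.2-1b-instruct-q4_0.gguf",  # assumed quantized weights
    n_ctx=2048,    # modest context window to fit mobile-class memory
    n_threads=4,   # match the handset's performance cores
)

out = llm.create_chat_completion(
    messages=[{"role": "user",
               "content": "Summarise today's schedule in two sentences."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

Because everything above runs locally, no prompt text leaves the device, which is where the energy, cost and privacy arguments for edge inference come from.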
[2]
Meta and Arm Want to Bring More AI to Phones And Beyond
In the future, large language models could make it so that you just have a conversation with your phone to take a photo instead of pressing camera buttons. And conversational interfaces like these could one day power much more than phones, perhaps even watches and security cameras.

That's according to product managers at Meta and Arm, which partnered on a pair of compact AI models, unveiled at today's Meta Connect event, that are built to run on phones. Both companies are joining the increasingly competitive effort to bring generative AI to phones, where it has become the must-have feature: Galaxy AI on the Samsung Galaxy S24 series, Gemini AI on the Google Pixel 9 Pro and Apple Intelligence set to launch on the new iPhone 16 series.

Meta's new AI models are smaller than most other LLMs, at 1 billion and 3 billion parameters (labeled Llama 3.2 1B and 3B, respectively). They're suitable for use on phones, and potentially other small devices too. They're built for the "edge" -- in other words, computation happens on-device rather than in the cloud. "We think this is a really good opportunity for us to move a lot of the inference to on-device and edge use cases," said Ragavan Srinivasan, vice president of product management for generative AI at Meta.

Smartphones and other devices will be able to use these smaller models for things like text summarization (summing up a bunch of emails, for example) and creating calendar invites -- tasks deeply integrated into mobile workflows. The 1B and 3B models are purposely small so they can work on phones, and they only understand text. Two larger models released in the Llama 3.2 generation, 11B and 90B, are too big to run on phones but are multimodal, meaning you can submit text and images to get complex answers. They replace the previous generation's 8B and 70B models, which could only understand text.

Meta worked closely with Arm, which designs the architecture for CPUs and other silicon used in chips from companies such as Qualcomm, Apple, Samsung and Google. With over 300 billion Arm-based devices in the world, there's a big footprint of computers and phones that can use these models. Through their partnership, Meta and Arm are invested in helping the roughly 15 million developers who build apps for Arm devices create software that supports the Llama 3.2 models. "What Meta is doing here is truly changing the kind of access to these leading edge models and what the developer community is going to be able to do with it," said Chris Bergey, general manager of the client line of business at Arm.

The partnership aims to help developers adopt the smaller Llama 3.2 models and rapidly integrate them into their apps. Developers could harness the LLMs to create new user interfaces and new ways to interact with devices, Bergey theorizes. Instead of pressing a button to open the camera app, for example, you could have a conversation with your device and explain what you want it to do. Given the number of devices out there, and the speed at which a smaller model like the 1B or 3B can be deployed, Bergey says developers could start supporting the models in their apps soon. "I think early next year, if not even late this year," he said.

Conventional LLM logic goes that the more parameters, the more powerful the language model.
The 1B and 3B, with only 1 billion and 3 billion parameters respectively, have far fewer than most other LLMs. Though parameter count is a proxy for intelligence, as Srinivasan says, it's not necessarily the same thing. The Llama 3.2 models build on Meta's Llama 3 series released earlier this year, including the most powerful model the company has produced, the 405B-parameter Llama 3.1, which Meta said at the time was the largest publicly available LLM -- and which the company used as a teacher of sorts for the 1B and 3B models.

Developers want to use smaller models for the vast majority of their routine, on-device tasks, Srinivasan said. They want to choose which tasks are complex enough to be sent to the higher-parameter 8B and 70B models (of the Llama 3 generation announced in April), which require computation on larger devices and in the cloud -- but from a user's perspective, the switch between models should be seamless. "What it should result in is really snappy responses to prompts that require a quick response, and then a graceful blending of capabilities that then go to the cloud for some of the more higher-capacity models," Srinivasan said.

The benefit of relatively small-parameter models like the 1B and 3B is their comparatively better efficiency -- providing answers with 1 watt of power or within 8 milliseconds, Bergey suggested, compared with the power drain and longer compute times of larger models. That could make them suitable for platforms that aren't as powerful, like smartwatches, headphones or other accessories, though there are still challenges in providing enough power and memory to run LLMs. For now, smartphones are suitable because they have both. But in the future, smaller-parameter models could be well suited to devices that don't have traditional user interfaces or that rely on external devices to be controlled, like security cameras. "I think this definitely goes well beyond smartphones relative to the applicability, especially as you get into smaller models," Bergey said.
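Meta hasn't published the routing logic behind this edge-to-cloud hand-off, but the idea Srinivasan describes can be sketched in a few lines: try the small on-device model first for quick jobs and reserve the cloud for heavyweight prompts. The length-based heuristic and both run_* helpers below are illustrative stand-ins, not Meta's actual implementation.

```python
# A hedged sketch of the edge/cloud "graceful blending" Srinivasan describes:
# simple prompts go to a small on-device model for snappy responses, while
# heavier requests fall back to a larger cloud-hosted model.

def looks_simple(prompt: str) -> bool:
    # Toy routing rule: short prompts stay on-device.
    return len(prompt) < 280

def run_on_device(prompt: str) -> str:
    # Stand-in for a local Llama 3.2 1B/3B call (see the earlier sketch).
    return f"[on-device 3B] {prompt}"

def run_in_cloud(prompt: str) -> str:
    # Stand-in for a request to a higher-parameter hosted model.
    return f"[cloud 70B] {prompt}"

def answer(prompt: str) -> str:
    """Route each prompt to the cheapest model likely to handle it."""
    if looks_simple(prompt):
        return run_on_device(prompt)
    return run_in_cloud(prompt)

print(answer("Create a calendar invite for lunch on Friday."))
```

In a real app the routing rule would also weigh task type, device load and battery, but the shape of the hand-off stays the same.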
[3]
Meta and Arm: AI on Smartphones
Making AI more accessible and efficient with enhanced small language models (SLMs). Meta is further expanding its AI vision with a major new partnership with chip designer Arm. The two companies are teaming up to develop AI models specifically tailored for smartphones and other devices. The collaboration was announced during Meta Connect 2024 by Meta CEO Mark Zuckerberg, who said the company's focus would be on making AI more efficient by working on smaller language models. The new models are meant to enable fast, on-device processing and edge computing, reducing the delay in AI inference and making interaction with AI assistants much more seamless.
Meta and Arm are collaborating to optimize AI models for mobile devices, aiming to enhance on-device AI capabilities and improve user experiences across various applications.
Meta, the parent company of Facebook, and Arm, a leading chip design firm, have announced a strategic partnership aimed at bringing advanced artificial intelligence (AI) capabilities to smartphones and other mobile devices [1]. This collaboration marks a significant step towards enhancing on-device AI processing, potentially revolutionizing how users interact with their mobile devices.
The primary objective of this partnership is to optimize Meta's large language models (LLMs), particularly the Llama series, for Arm-based chips commonly found in smartphones [2]. By doing so, they aim to enable more efficient and powerful AI processing directly on mobile devices, reducing the need for cloud-based computing and potentially improving user privacy and response times.
Meta and Arm are working on developing software that can run AI models more efficiently on Arm-designed chips. This involves creating specialized AI accelerators and optimizing neural processing units (NPUs) to handle complex AI tasks [2]. The challenge lies in adapting large AI models, which typically require significant computing power, to function effectively on the more limited resources of mobile devices.
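The coverage doesn't name the specific techniques the partnership uses, but quantization is the standard first step for fitting a large model into a device's limited memory. Here is a minimal PyTorch sketch of post-training dynamic quantization under that assumption; the tiny stand-in network is purely illustrative, and this is not Meta's or Arm's actual pipeline.

```python
# Minimal sketch: shrink a model for resource-constrained devices with
# post-training dynamic quantization, storing Linear-layer weights as int8.
import torch
import torch.nn as nn

# Tiny stand-in network; a real LLM would be orders of magnitude larger.
model = nn.Sequential(
    nn.Linear(512, 2048),
    nn.ReLU(),
    nn.Linear(2048, 512),
)

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
print(quantized(x).shape)  # same interface; weights roughly 4x smaller
```

Int8 weights cut memory and bandwidth at a small accuracy cost, which is the basic trade-off behind running billion-parameter models on phone-class hardware.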
The integration of advanced AI capabilities into smartphones could lead to a wide range of improved features and applications, significantly enhancing user experiences across various mobile apps [3].
This collaboration between Meta and Arm is part of a broader trend in the tech industry to bring AI capabilities closer to end-users. Other major players, including Google and Qualcomm, are also working on similar initiatives to integrate AI into mobile devices [2]. This competition is likely to accelerate innovation in the field of mobile AI, potentially leading to rapid advancements in smartphone capabilities.
While the full impact of this partnership remains to be seen, it has the potential to significantly alter the landscape of mobile computing. As AI becomes more integrated into our daily lives, the ability to process complex AI tasks directly on our smartphones could lead to more personalized, efficient, and privacy-conscious mobile experiences. However, challenges related to power consumption, heat generation, and model optimization will need to be addressed as this technology evolves.