3 Sources
[1]
Google to give app devs access to Gemini Nano for on-device AI
The rapid expansion of generative AI has changed the way Google and other tech giants design products, but most of the AI features you've used are running on remote servers with a ton of processing power. Your phone has a lot less power, but Google appears poised to give developers some important new mobile AI tools. At I/O next week, Google will likely announce a new set of APIs to let developers leverage the capabilities of Gemini Nano for on-device AI.

Google has quietly published documentation on big new AI features for developers. According to Android Authority, an update to the ML Kit SDK will add API support for on-device generative AI features via Gemini Nano. It's built on AICore, similar to the experimental AI Edge SDK, but it plugs into an existing model with a set of predefined features that should be easy for developers to implement.

Google says ML Kit's GenAI APIs will enable apps to do summarization, proofreading, rewriting, and image description without sending data to the cloud. However, Gemini Nano doesn't have as much power as the cloud-based version, so expect some limitations. For example, Google notes that summaries can only have a maximum of three bullet points, and image descriptions will only be available in English. The quality of outputs could also vary based on the version of Gemini Nano on a phone. The standard version (Gemini Nano XS) is about 100MB in size, but Gemini Nano XXS, as seen on the Pixel 9a, is a quarter of the size. It's text-only and has a much smaller context window.

This move is good for Android in general because ML Kit works on devices outside Google's Pixel line. While Pixel devices use Gemini Nano extensively, several other phones are already designed to run this model, including the OnePlus 13, Samsung Galaxy S25, and Xiaomi 15. As more phones add support for Google's AI model, developers will be able to target those devices with generative AI features.
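To make the shape of these APIs concrete, here is a Kotlin sketch of the summarization flow adapted from Google's published ML Kit GenAI documentation. This is Android-only code (it needs the ML Kit GenAI dependency, AICore, and a Gemini Nano-capable device) and the APIs are pre-release, so treat the class and method names as approximate rather than final.

```kotlin
import android.content.Context
import com.google.mlkit.genai.common.FeatureStatus
import com.google.mlkit.genai.summarization.Summarization
import com.google.mlkit.genai.summarization.SummarizationRequest
import com.google.mlkit.genai.summarization.SummarizerOptions
import kotlinx.coroutines.tasks.await

// Android-only sketch: requires the ML Kit GenAI summarization artifact and a
// device with AICore + Gemini Nano. Beta API; names may change before release.
suspend fun summarizeArticle(context: Context, article: String): String? {
    // Ask for an article-style summary. Per Google's documentation, output
    // is capped at a maximum of three bullet points.
    val options = SummarizerOptions.builder(context)
        .setInputType(SummarizerOptions.InputType.ARTICLE)
        .setOutputType(SummarizerOptions.OutputType.THREE_BULLETS)
        .setLanguage(SummarizerOptions.Language.ENGLISH)
        .build()
    val summarizer = Summarization.getClient(options)

    // Gemini Nano may not be present yet: apps are expected to check
    // availability first and trigger a download if needed.
    when (summarizer.checkFeatureStatus().await()) {
        FeatureStatus.UNAVAILABLE -> return null  // device cannot run Nano
        FeatureStatus.DOWNLOADABLE -> { /* call downloadFeature(...) here */ }
        else -> { /* AVAILABLE or DOWNLOADING: safe to request inference */ }
    }

    // Run inference on-device; a streaming overload that emits partial
    // output also appears in the documentation.
    val request = SummarizationRequest.builder(article).build()
    return summarizer.runInference(request).await().summary
}
```

The availability check matters because, as noted above, which Nano build a phone carries (XS vs. the text-only XXS) determines what the device can actually serve.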
The documentation is available for developers to peruse now, but we expect Google to fling the API doors open at I/O. The company has already confirmed an I/O session called "Gemini Nano on Android: Building with on-device gen AI." The description promises new APIs to "summarize, proofread, and rewrite text, as well as to generate image descriptions," which sounds exactly like what the new ML Kit APIs can do.

An important piece of the AI puzzle

App developers interested in adding on-device generative AI features on Android are currently in a tough spot. Google offers the AI Edge SDK, which can provide access to the NPU hardware for running models, but these tools are experimental, currently work only on the Pixel 9 series, and are limited to text. Both Qualcomm and MediaTek offer APIs for running AI workloads, but features and functionality vary by device, which makes it risky to rely on them for a long-term project. And running your own model requires intimate knowledge of generative AI systems. The new APIs should make implementing local AI comparatively quick and easy.

Despite the limited functionality of an on-device model, this is an important part of how AI could become more helpful. Most people would probably prefer not to send all their personal data to a remote server for AI processing, but an on-device model can parse that information in a more secure way. For example, Google's Pixel Screenshots sees all your screenshots, but all the processing happens on your phone. Similarly, Motorola summarizes notifications locally on the new Razr Ultra foldable, while the less capable base-model Razr sends notifications to a server for processing.

The release of APIs that plug into Gemini Nano could provide some much-needed consistency to mobile AI. However, it does rely on Google and OEMs to collaborate on support for Gemini Nano.
Some companies might decide to go their own way, and there will be plenty of phones that don't have enough power to run AI locally.
[2]
Google is about to unleash Gemini Nano's power for third-party Android apps
Unlike the experimental AI Edge SDK, ML Kit's GenAI APIs will be in beta, support image input, and be available on a wider range of Android devices beyond the Pixel 9 series.

Generative AI technology is changing how we communicate and create content online. Many people ask AI chatbots like Google Gemini to perform tasks such as summarizing an article, proofreading an email, or rewriting a message. However, some people are wary of using these AI chatbots, especially when these tasks involve highly personal or sensitive information. To address these privacy concerns, Google offers Gemini Nano, a smaller, more optimized version of its AI model that runs directly on the device instead of on a cloud server. While access to Gemini Nano has so far been limited to a single device line and text-only input, Google will soon significantly expand its availability and introduce image input support.
[3]
Android Developers Can Now Build Apps With Gemini Nano With New API
Google is also hosting a session on building with Gemini Nano at I/O 2025

Google quietly released the ML Kit GenAI application programming interface (API) last week, allowing Android developers to build apps that leverage the capabilities of Gemini Nano. As per a document added to its developer forum, the Mountain View-based tech giant is now letting developers access the image description feature of the artificial intelligence (AI) model as well. Earlier, the model was only available as experimental access, and developers could not publish the apps made using the large language model (LLM).

First spotted by Android Authority, the new support document on Gemini Nano's Android developer page mentions an API dubbed ML Kit GenAI that will allow developers to "harness the power of Gemini Nano to deliver out-of-the-box performance for common tasks through a simplified, high-level interface." The page also highlights that the API is built on AICore, an Android system service, and that it enables on-device execution of Gemini Nano-like models, even if developers do not understand how the models function. Apps built using the AI model will also run locally, powered by the device's system-on-a-chip (SoC).

With the ML Kit GenAI API, developers will be able to access new features such as text summarisation, message proofreading, rewriting messages, and adding short descriptions to images. Notably, Google has also scheduled a session at I/O 2025 dubbed "Gemini Nano on Android: Building with on-device gen AI." The company will likely explain the capabilities of the model and how developers can integrate these features into the apps they're building.

Google first released Gemini Nano to developers in October 2024 as part of the AI Edge software development kit (SDK). However, this was only available as experimental access, which means developers could not publish the apps they made using the AI model.
Additionally, the SDK only supported developing apps for the Google Pixel 9 series, while the new API allows building apps for all compatible Android devices. The SDK was also limited to text-based features; image description was not available.
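The newly documented image description feature appears to follow the same pattern as the other GenAI APIs. Below is a hedged Kotlin sketch with names taken from the published ML Kit GenAI pages; again this is Android-only, beta-quality surface area, so the exact identifiers may shift.

```kotlin
import android.content.Context
import android.graphics.Bitmap
import com.google.mlkit.genai.imagedescription.ImageDescriberOptions
import com.google.mlkit.genai.imagedescription.ImageDescription
import com.google.mlkit.genai.imagedescription.ImageDescriptionRequest
import kotlinx.coroutines.tasks.await

// Android-only sketch of the image description API. Descriptions are
// English-only at launch, and availability depends on the device's Gemini
// Nano build: the text-only Nano XXS cannot serve this feature, so apps
// should gate this call behind the same feature-status check the other
// GenAI APIs use.
suspend fun describeImage(context: Context, bitmap: Bitmap): String {
    val describer =
        ImageDescription.getClient(ImageDescriberOptions.builder(context).build())
    val request = ImageDescriptionRequest.builder(bitmap).build()
    // Returns a short caption generated entirely on-device via AICore.
    return describer.runInference(request).await().description
}
```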
Google is set to release new APIs that will allow Android developers to leverage Gemini Nano for on-device AI features, including text summarization, proofreading, rewriting, and image description.
Google is poised to make a significant announcement at its upcoming I/O event, introducing new APIs that will allow Android developers to harness the power of Gemini Nano for on-device AI features. This move marks a substantial shift in making advanced AI capabilities more accessible and privacy-focused for mobile applications [1][2][3].
Gemini Nano is a smaller, optimized version of Google's AI model designed to run directly on devices rather than on cloud servers. This on-device processing addresses privacy concerns associated with sending sensitive information to remote servers for AI processing [2]. The model, while less powerful than its cloud-based counterpart, offers a range of functionalities that can significantly enhance mobile applications.
The new ML Kit GenAI API will enable developers to implement several AI-powered features without requiring in-depth knowledge of AI systems:

- Text summarization (up to three bullet points)
- Proofreading
- Rewriting text
- Image description
These features will operate entirely on-device, ensuring user data privacy and reducing reliance on internet connectivity [1][3].
While Gemini Nano offers impressive capabilities, it does have some limitations:

- Summaries can have a maximum of three bullet points
- Image descriptions will initially be available only in English
- Output quality may vary depending on the version of Gemini Nano on a given phone
Two versions of Gemini Nano exist:

- Gemini Nano XS: the standard version, about 100MB in size
- Gemini Nano XXS: as seen on the Pixel 9a, roughly a quarter of that size, text-only, and with a much smaller context window
Unlike the experimental AI Edge SDK, which was limited to the Pixel 9 series, the new ML Kit GenAI API will be compatible with a broader range of Android devices. This includes phones from manufacturers such as OnePlus, Samsung, and Xiaomi, which are already designed to run Gemini Nano [1][2][3].
This release is expected to provide much-needed consistency in mobile AI development. It offers a more stable alternative to experimental tools like the AI Edge SDK and manufacturer-specific APIs from Qualcomm and MediaTek. The new APIs should make implementing local AI features comparatively quick and easy for developers [1].
Google has scheduled a session titled "Gemini Nano on Android: Building with on-device gen AI" at I/O 2025. This session is expected to provide detailed insights into the capabilities of Gemini Nano and guide developers on integrating these features into their applications [1][3].
As AI continues to reshape the mobile landscape, Google's expansion of Gemini Nano access represents a significant step towards more private, efficient, and capable on-device AI features for Android users and developers alike.