2 Sources
[1]
New details on Apple-Google AI deal revealed, including Gemini changes: report - 9to5Mac
Today in The Information's latest 'AI Agenda' column, a variety of details of Apple and Google's AI partnership have been outlined. The report says that Apple's partnership is "deeper than previously known." As part of the agreement, Apple reportedly gets "a lot more freedom with Google's tech" than was expected. From the report:

Apple has complete access to the Gemini model in its own data center facilities. Apple can use that access to produce smaller models that power specific tasks or are small enough to run directly on Apple devices so they can run the tasks faster, said a person who has direct knowledge of the arrangement.

As explained by The Information, this process of producing new, offshoot models is called "distillation." It's a way in which one LLM can transfer knowledge to smaller, more streamlined models. What's the benefit of this? Smaller models can be more efficient, requiring less computing power and delivering results faster. In other words, they make a lot more sense for running on-device than larger models.

Since Apple has full access to Gemini, its student model can also learn to imitate the internal computations that Gemini uses to arrive at its answers, which can be more effective than just imitating the answers it spits out. This results in smaller models that roughly approximate the performance of their state-of-the-art teachers but require significantly less computing power to run. The Information's source said that this can be a tricky process, since Apple's goals for Siri don't always align with Gemini's specialties.

They also reiterate what we've heard before about Apple's Foundation Models team not giving up on in-house models. However, it sounds very unclear what the AFM team's current goals are. How are you feeling about Apple's AI deal with Google? Let us know in the comments.
[2]
Apple Can Create Smaller On-Device AI Models From Google's Gemini
Apple has full access to Gemini to customize the model for Siri and other AI features, reports The Information. Google gave Apple "complete access" to the Gemini model in its own data centers, and Apple can use the access for distillation, or creating smaller models for specific tasks. Apple is able to design models that are built to run on Apple devices without the need to connect to the internet.

The Information explains that Apple can ask the main Gemini model to perform a series of tasks that provide high-quality results, with a rundown of the reasoning process. Apple can feed the answers and reasoning information that it gets from Gemini to train smaller, cheaper models. With this process, the smaller models are able to learn the internal computations used by Gemini, producing efficient models that have Gemini-like performance but require less computing power.

Apple is also able to edit Gemini as needed to make sure that it responds to queries in a way that Apple wants, but Apple has been running into some issues because Gemini has been tuned for chatbot and coding applications, which doesn't always meet Apple's needs.

Apple is relying on Google's Gemini models for the smarter, chatbot version of Siri that's planned for iOS 27, but the Apple Foundation Models team is still working on Apple AI models that are distinct from the Gemini models. Siri will be able to do many of the same things that Gemini and other chatbots are able to do, such as answering questions, summarizing information, scanning and understanding uploaded documents, telling stories, providing emotional support, and completing tasks like booking travel.
Apple's partnership with Google runs deeper than expected. The company has complete access to Google's Gemini model in its own data centers, allowing it to create smaller, distilled AI models that run locally on devices. This process enables Apple to build efficient models requiring less computing power while maintaining Gemini-like performance for Siri improvements.
The Apple-Google AI deal extends far beyond a simple licensing arrangement, according to new reporting from The Information. Apple has complete access to Google's Gemini model within its own data center facilities, granting the iPhone maker significantly more freedom with Google's technology than previously understood. That access allows Apple to manipulate and customize the large language model in ways that serve its specific product needs, particularly for transforming Siri into a more capable assistant.

Source: 9to5Mac
The Apple and Google AI partnership reveals a sophisticated technical arrangement in which Apple can leverage Gemini's capabilities while maintaining control over the user experience. According to sources with direct knowledge of the deal, Apple can use this access to produce smaller models designed for specific tasks or compact enough to run locally on its devices [2]. This approach addresses one of the fundamental challenges in mobile AI: delivering powerful performance without requiring constant cloud connectivity or excessive computing power.

The process Apple employs is called distillation, a technique in which knowledge from a large language model is transferred to smaller, more streamlined versions. Apple can ask the main Gemini model to perform a series of tasks that provide high-quality results along with a detailed rundown of the reasoning process [2]. The company then feeds these answers and reasoning information into training smaller, cheaper models that can run locally on its devices without internet connectivity.
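The reporting doesn't disclose Apple's actual training setup, but the output-imitation side of distillation has a standard textbook form: the small "student" model is trained to match the large "teacher" model's softened output distribution. The following is a minimal, generic sketch of that objective; the function names and the temperature value are illustrative, not anything attributed to Apple or Google.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert raw logits into a probability distribution.
    Higher temperature produces softer, more informative targets."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the temperature-softened teacher and student
    distributions -- the core objective of output-level distillation.
    The T^2 factor keeps gradient scale comparable across temperatures."""
    p = softmax(teacher_logits, temperature)  # soft targets from the big model
    q = softmax(student_logits, temperature)  # student's current prediction
    return float(np.sum(p * (np.log(p) - np.log(q)))) * temperature ** 2
```

The loss is zero when the student exactly reproduces the teacher's distribution and grows as the two diverge; training the student to minimize it transfers the teacher's "dark knowledge" about relative answer likelihoods, not just its top answer.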
Source: MacRumors
What makes this arrangement particularly valuable is that Apple's student models can learn to imitate the internal computations that Gemini uses to arrive at its answers, rather than simply copying the outputs. This results in distilled AI models that roughly approximate the performance of their state-of-the-art teachers while requiring significantly less computing power to operate [1]. For users, this translates to faster responses and the ability to use AI features even when offline.

Despite the technical capabilities this partnership provides, Apple faces obstacles in adapting Google Gemini for its purposes. Sources indicate that Gemini has been primarily tuned for chatbot and coding applications, which doesn't always align with Apple's needs for Siri and other AI features [2]. Apple can edit Gemini as needed to ensure it responds to queries in ways the company prefers, but this customization process has proven tricky.

The distillation process itself presents complications, as Apple's goals for a smarter Siri don't always match Gemini's specialties [1]. This misalignment suggests Apple must invest significant engineering resources to reshape Gemini's capabilities for consumer-focused tasks rather than the developer and enterprise applications Google has prioritized.
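The "imitating internal computations" idea mentioned earlier is commonly implemented as hidden-state matching: the student's intermediate activations are mapped into the teacher's representation space and penalized for deviating from the teacher's activations at the corresponding layer. This is a generic sketch of that technique, not Apple's actual method; the dimensions, the learned projection matrix, and the function name are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer widths: a wide teacher layer, a narrow student layer.
D_TEACHER, D_STUDENT = 16, 4

def hidden_state_loss(teacher_h, student_h, projection):
    """Mean-squared error between the teacher's hidden activations and the
    student's activations mapped into the teacher's space. Minimizing this,
    alongside the output loss, pushes the student to mimic the teacher's
    intermediate computations, not just its final answers."""
    mapped = student_h @ projection  # shape (D_STUDENT,) -> (D_TEACHER,)
    return float(np.mean((teacher_h - mapped) ** 2))

# Stand-in activations; in practice these come from matched layers of the
# teacher and student during a forward pass, and `projection` is learned.
teacher_h = rng.normal(size=D_TEACHER)
student_h = rng.normal(size=D_STUDENT)
projection = rng.normal(size=(D_STUDENT, D_TEACHER))
loss = hidden_state_loss(teacher_h, student_h, projection)
```

Because this objective needs the teacher's internal activations, it is only possible with the kind of white-box model access the report describes, which is what distinguishes this arrangement from an ordinary API licensing deal.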
While Apple is relying on Google's Gemini models for the chatbot version of Siri planned for iOS 27, Apple's Foundation Models team hasn't abandoned work on proprietary AI systems. The team continues developing Apple AI models that are distinct from the Gemini models, though its current objectives remain unclear [1]. This dual-track approach suggests Apple views the Google partnership as a bridge solution while building long-term AI capabilities internally.

The enhanced Siri coming in iOS 27 will perform many functions similar to Gemini and other chatbots, including answering questions, summarizing information, scanning and understanding uploaded documents, telling stories, providing emotional support, and completing tasks like booking travel [2]. Whether Apple eventually replaces Gemini with its own Foundation Models or maintains this partnership will likely depend on how quickly the company's internal AI development progresses and whether users embrace the Gemini-powered features.

Summarized by Navi
23 Aug 2025 • Technology
