6 Sources
[1]
Apple lets developers tap into its offline AI models | TechCrunch
Apple is launching what it calls the Foundation Models framework, which the company says will let developers tap into its AI models in an offline, on-device fashion. Onstage at WWDC 2025 on Monday, Apple VP of software engineering Craig Federighi said that the Foundation Models framework will let apps use on-device AI models created by Apple to drive experiences. These models ship as a part of Apple Intelligence, Apple's family of models that power a number of iOS features and capabilities. "For example, if you're getting ready for an exam, an app like Kahoot can create a personalized quiz from your notes to make studying more engaging," Federighi said. "And because it happens using on-device models, this happens without cloud API costs [...] We couldn't be more excited about how developers can build on Apple intelligence to bring you new experiences that are smart, available when you're offline, and that protect your privacy." In a blog post, Apple says that the Foundation Models framework has native support for Swift, Apple's programming language for building apps for its various platforms. The company claims developers can access Apple Intelligence models with as few as three lines of code. Guided generation, tool calling, and more are all built into the Foundation Models framework, according to Apple.
[2]
Apple opens its foundational AI models to developers
It's safe to say Apple Intelligence hasn't landed in the way Apple likely hoped it would. However, that's not stopping the company from continuing to iterate on its suite of AI features. During its WWDC 2025 conference on Monday, Apple announced a collection of new features for Apple Intelligence, starting with the decision to bring its foundational models to developers. According to Craig Federighi, the company's senior vice president of software engineering, Apple's new Foundation Models framework will allow third-party developers to tap into the large language models that power Apple Intelligence.
[3]
Here's how Apple's new local AI models perform against Google's - 9to5Mac
One of the very first announcements at this year's WWDC was that for the first time, third‑party developers will get to tap directly into Apple's on‑device AI with the new Foundation Models framework. But how do these models actually compare against what's already out there? With the new Foundation Models framework, third-party developers can now build on the same on-device AI stack used by Apple's native apps. In other words, this means that developers will now be able to integrate AI features like summarizing documents, pulling key info from user text, or even generating structured content, entirely offline, with zero API cost. But how good are Apple's models, really? Based on Apple's own human evaluations, the answer is: pretty solid, especially when you consider the balance (which some might call 'tradeoff') between size, speed, and efficiency. In Apple's testing, its ~3B parameter on-device model outperformed similar lightweight vision-language models like InternVL-2.5 and Qwen-2.5-VL-3B in image tasks, winning over 46% and 50% of prompts, respectively. And in text, it held its ground against larger models like Gemma-3-4B, even edging ahead in some international English locales and multilingual evaluations (Portuguese, French, Japanese, etc.). In other words, Apple's new local models seem set to deliver consistent results for many real-world uses without resorting to the cloud or requiring data to leave the device. When it comes to Apple's server model (which won't be accessible by third-party developers like the local models), it compared favorably to LLaMA-4-Scout and even outperformed Qwen-2.5-VL-32B in image understanding. That said, GPT-4o still comfortably leads the pack overall. The real story here isn't just that Apple's new models are better. It's that they're built in. With the Foundation Models framework, developers no longer need to bundle heavy language models in their apps for offline processing.
That means leaner app sizes and no need to fall back on the cloud for most tasks. The result? A more private experience for users, and no API costs for developers, savings that can ultimately benefit everyone. Apple says the models are optimized for structured outputs using a Swift-native "guided generation" system, which allows developers to constrain model responses directly into app logic. For apps in education, productivity, and communication, this could be a game-changer, offering the benefits of LLMs without the latency, cost, or privacy tradeoffs. Ultimately, Apple's models aren't the most powerful in the world, but they don't need to be. They're good, they're fast, and now they're available to every developer for free, on-device, and offline. That might not make for the same headlines as more powerful models will, but in practice, it could lead to a wave of genuinely useful AI features in third-party iOS apps that don't require the cloud. And for Apple, that may very well be the point.
[4]
Apple Intelligence opened up to all developers with Foundation Models Framework
As rumored, Apple has announced that developers will soon be able to access the on-device large language models that power Apple Intelligence in their own apps through the Foundation Models framework. Through the newly unveiled Foundation Models framework, Apple is giving developers the chance to use native AI capabilities in their own apps. Third-party apps will be able to use the features for image creation, text generation, and more. Like Apple Intelligence itself, the on-device processing will allow for AI features that are fast, powerful, focused on privacy, and available without an internet connection. Rumors that Apple would be opening up its Apple Intelligence platform first circulated earlier this year. In May, Bloomberg reported that Apple would take the first steps toward making its intelligence systems accessible to third-party apps, though it noted that apps wouldn't be able to access the models themselves -- just AI-powered features. Along with opening up Apple Intelligence to other apps, the company also announced that it is expanding the number of languages that its AI platform supports, and making the generative models that power it "more capable and more efficient."
[5]
Apple Opens Its On-Device AI Model to Developers
It's unclear whether Apple is using its older 3B model or an improved AI model for on-device inference. Today, at WWDC 2025, Apple announced the Foundation Models framework to allow developers to leverage the power of Apple's on-device AI models. Developers can use the new API to integrate AI-powered features into their apps. This new framework utilizes Apple's in-house AI models locally while preserving data privacy. During the announcement, Apple's senior VP of software engineering, Craig Federighi, said: "We're also taking the huge step of giving developers direct access to the on-device foundation model powering Apple Intelligence, allowing them to tap into intelligence that is powerful, fast, built with privacy, and available even when users are offline." With the new Foundation Models API, developers don't have to rely on third-party vendors like OpenAI and Google to power AI features in their apps. The best part is that AI features will work even in offline mode, since Apple's AI models run locally on the device and there is no AI inference cost for developers. That said, Apple has not demonstrated the capabilities of its in-house models. Last year, Apple showcased its on-device AI model with 3 billion parameters. The older model was closer to Google's Gemma-1.1-2B and Microsoft's Phi-3-mini models in terms of performance. It's unclear whether Apple is using the same AI model or if the company has trained an improved model for the local AI stack. Apart from that, Apple opened up about Siri, saying, "We're continuing our work to deliver the features that make Siri even more personal. This work needed more time to reach our high-quality bar, and we look forward to sharing more about it in the coming year." So it looks like the upgraded, AI-powered Siri is coming next year. Meanwhile, Apple Intelligence features are coming to more languages, including French, German, Italian, Spanish, and more.
[6]
Apple's Foundation Models Framework Empowers Third-Party Developers With Direct Access To On-Device Apple Intelligence, Enabling Seamless Integration Of Fast, Private, And Powerful AI Features In Their Apps
WWDC 2025 revolved around the new design language Apple adopted across its systems for a more unified experience, and the Apple Intelligence features integrated to make that experience even better, with more personalization. While the keynote kicked off with AI-powered capabilities and how they remain at the core, Apple also introduced a new framework called Foundation Models, meant to help developers access AI features more seamlessly without compromising on performance or privacy. The new Foundation Models framework amplifies the company's broader AI strategy by opening up access to Apple Intelligence systems to third-party developers without the need for them to develop or host their own models. The framework places a greater emphasis on local execution, as apps would be running on-device AI models powered by Apple Intelligence. This allows third-party apps to integrate AI-driven features without relying on cloud infrastructure, and gives developers access to capabilities such as image generation, summarization, and other key features without internet connectivity. Since the framework follows an on-device architecture, Apple's core values centered on user privacy remain intact: user data stays private and does not leave the device. As the tech giant moves further into its AI integrations, this step is huge in terms of the developer tools it offers and the groundwork it lays for a secure ecosystem. The step is pivotal, as it marks a new direction for Apple's approach to AI. This is the first time developers will have access to the on-device foundation models, and Apple Intelligence capabilities are being extended to third-party applications. It also sets the direction for future app development, since the company is treating on-device AI as a foundational layer.
This framework will debut with iOS 26 and other platform updates, highlighting Apple's commitment to bringing deeply integrated AI across its ecosystem.
Apple introduces the Foundation Models framework at WWDC 2025, allowing developers to access its on-device AI models for creating powerful, privacy-focused, and offline-capable AI features in third-party apps.
At WWDC 2025, Apple unveiled its groundbreaking Foundation Models framework, a significant leap in on-device AI technology. This new framework allows third-party developers to access Apple's AI models, enabling them to integrate powerful AI features into their apps while maintaining user privacy and offline functionality [1].
Source: 9to5Mac
Craig Federighi, Apple's senior VP of software engineering, emphasized the framework's potential: "We're also taking the huge step of giving developers direct access to the on-device foundation model powering Apple Intelligence, allowing them to tap into intelligence that is powerful, fast, built with privacy, and available even when users are offline" [5].
The Foundation Models framework offers several advantages for developers and users alike:
On-Device Processing: AI features run locally, ensuring fast performance and preserving user privacy [2].
Offline Functionality: Apps can use AI capabilities without an internet connection [1].
Cost-Effective: Developers can implement AI features without incurring cloud API costs [3].
Swift Integration: The framework offers native support for Swift, Apple's programming language, letting developers access AI models with as few as three lines of code [1].
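Based on the Swift API Apple showed at WWDC, the basic flow looks roughly like the following. This is a hedged sketch: the `LanguageModelSession` and `respond(to:)` names come from Apple's announced framework, but the code is not verified against a shipping SDK, and it requires a device running Apple Intelligence:

```swift
import FoundationModels

// Create a session backed by the on-device Apple Intelligence model.
let session = LanguageModelSession()

// Send a prompt; inference runs locally, so this works offline
// and incurs no cloud API cost.
let response = try await session.respond(
    to: "Summarize my study notes into three key points."
)
print(response.content)
```

This matches Apple's "as few as three lines of code" claim: one import, one session, one prompt.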
Source: Wccftech
Apple's on-device AI models have shown competitive performance in various tasks:
In image-related tasks, Apple's ~3B-parameter model outperformed similar lightweight models like InternVL-2.5 and Qwen-2.5-VL-3B [3].
For text processing, it held its ground against larger models like Gemma-3-4B, particularly in international English locales and multilingual evaluations [3].
The framework supports various AI capabilities, including image creation, text generation, and structured output generation [4].
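The "guided generation" behind that structured output constrains the model's response to a developer-defined Swift type, so the app receives typed data rather than free text to parse. A minimal sketch, assuming the `@Generable` and `@Guide` macros and the `respond(to:generating:)` call demonstrated at WWDC (names not verified against a shipping SDK):

```swift
import FoundationModels

// A Swift type the model's output is constrained to match.
@Generable
struct Quiz {
    @Guide(description: "A short quiz title")
    var title: String

    @Guide(description: "Three multiple-choice questions")
    var questions: [String]
}

let session = LanguageModelSession()

// The framework constrains generation so the result decodes
// directly into `Quiz`; no manual JSON parsing or validation.
let response = try await session.respond(
    to: "Create a quiz from my biology notes.",
    generating: Quiz.self
)
print(response.content.title)
```

This is what lets the Kahoot-style quiz example from the keynote feed model output straight into app logic.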
The introduction of the Foundation Models framework has significant implications:
Streamlined App Development: Developers can integrate AI features without bundling heavy language models, resulting in leaner app sizes [3].
Enhanced Privacy: On-device processing ensures user data remains on the device, addressing privacy concerns [4].
Expanded Language Support: Apple is increasing the number of languages supported by its AI platform [4].
Source: TechCrunch
While Apple has made significant strides with the Foundation Models framework, there are hints of further advancements:
Apple mentioned ongoing work to enhance Siri's personalization, with more details expected in the coming year [5].
The company is continuously improving its generative models, making them "more capable and more efficient" [4].
As developers begin to explore the possibilities offered by the Foundation Models framework, we can expect to see a wave of innovative AI-powered features in iOS apps, potentially reshaping the landscape of mobile applications.
Summarized by Navi