3 Sources
[1]
Apple Intelligence: Everything you need to know about Apple's AI model and services | TechCrunch
If you've upgraded to a newer iPhone model recently, you've probably noticed that Apple Intelligence is showing up in some of your most-used apps, like Messages, Mail, and Notes. Apple Intelligence (yes, also abbreviated to AI) showed up in Apple's ecosystem in October 2024, and it's here to stay as Apple competes with Google, OpenAI, Anthropic, and others to build the best AI tools. Cupertino marketing executives have branded Apple Intelligence "AI for the rest of us." The platform is designed to leverage the things that generative AI already does well, like text and image generation, to improve upon existing features.

Like other platforms, including ChatGPT and Google Gemini, Apple Intelligence was trained on large information models. These systems use deep learning to form connections, whether it be text, images, video, or music.

The text offering, powered by an LLM, presents itself as Writing Tools. The feature is available across various Apple apps, including Mail, Messages, Pages, and Notifications. It can be used to provide summaries of long text, proofread, and even write messages for you, using content and tone prompts.

Image generation has been integrated as well, in similar fashion -- albeit a bit less seamlessly. Users can prompt Apple Intelligence to generate custom emojis (Genmojis) in an Apple house style. Image Playground, meanwhile, is a standalone image generation app that uses prompts to create visual content that can be used in Messages, Keynote, or shared via social media.

Apple Intelligence also marks a long-awaited face-lift for Siri. The smart assistant was early to the game but has mostly been neglected for the past several years. Siri is integrated much more deeply into Apple's operating systems; for instance, instead of the familiar icon, users will see a glowing light around the edge of their iPhone screen when it's doing its thing. More importantly, the new Siri works across apps. That means, for example, that you can ask Siri to edit a photo and then insert it directly into a text message. It's a frictionless experience the assistant had previously lacked. Onscreen awareness means Siri uses the context of the content you're currently engaged with to provide an appropriate answer.

Leading up to WWDC 2025, many expected that Apple would introduce us to an even more souped-up version of Siri, but we're going to have to wait a bit longer. "As we've shared, we're continuing our work to deliver the features that make Siri even more personal," said Apple SVP of Software Engineering Craig Federighi at WWDC 2025. "This work needed more time to reach our high-quality bar, and we look forward to sharing more about it in the coming year." This yet-to-be-released, more personalized version of Siri is supposed to be able to understand "personal context," like your relationships, communications routine, and more. But according to a Bloomberg report, the in-development version of this new Siri is too error-ridden to ship, hence its delay.

At WWDC 2025, Apple also unveiled a new AI feature called Visual Intelligence, which helps you do an image search for things you see as you browse. Apple also unveiled a Live Translation feature that can translate conversations in real time in the Messages, FaceTime, and Phone apps. Visual Intelligence and Live Translation are expected to be available later in 2025, when iOS 26 launches to the public.

After months of speculation, Apple Intelligence took center stage at WWDC 2024.
The platform was announced in the wake of a torrent of generative AI news from companies like Google and OpenAI, causing concern that the famously tight-lipped tech giant had missed the boat on the latest tech craze. Contrary to such speculation, however, Apple had a team in place, working on what proved to be a very Apple approach to artificial intelligence. There was still pizzazz amid the demos -- Apple always loves to put on a show -- but Apple Intelligence is ultimately a very pragmatic take on the category.

Apple Intelligence isn't a standalone feature. Rather, it's about integrating into existing offerings. While it is a branding exercise in a very real sense, the large language model (LLM) driven technology will operate behind the scenes. As far as the consumer is concerned, the technology will mostly present itself in the form of new features for existing apps.

We learned more during Apple's iPhone 16 event in September 2024. During the event, Apple touted a number of AI-powered features coming to its devices, from translation on the Apple Watch Series 10 to visual search on iPhones and a number of tweaks to Siri's capabilities.

The first wave of Apple Intelligence arrived at the end of October 2024, as part of the iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1 updates. These updates included integrated writing tools, image cleanup, article summaries, and a typing input for the redesigned Siri experience. The features launched first in U.S. English. Apple later added Australian, Canadian, New Zealand, South African, and U.K. English localizations. Support for Chinese, English (India), English (Singapore), French, German, Italian, Japanese, Korean, Portuguese, Spanish, and Vietnamese will arrive in 2025.

A second wave of features became available as part of iOS 18.2, iPadOS 18.2, and macOS Sequoia 15.2. That list includes Genmoji, Image Playground, Visual Intelligence, Image Wand, and ChatGPT integration.

These offerings are free to use, so long as you have supported hardware. Notably, only the Pro versions of the iPhone 15 are getting access, owing to shortcomings on the standard model's chipset. Presumably, however, the whole iPhone 16 line will be able to run Apple Intelligence when it arrives.

When you ask GPT or Gemini a question, your query is being sent to external servers to generate a response, which requires an internet connection. But Apple has taken a small-model, bespoke approach to training. The biggest benefit of this approach is that many of these tasks become far less resource intensive and can be performed on-device. This is because, rather than relying on the kind of kitchen sink approach that fuels platforms like GPT and Gemini, the company has compiled datasets in-house for specific tasks like, say, composing an email.

That doesn't apply to everything, however. More complex queries will utilize the new Private Cloud Compute offering. The company now operates remote servers running on Apple Silicon, which it claims allows it to offer the same level of privacy as its consumer devices. Whether an action is being performed locally or via the cloud will be invisible to the user, unless their device is offline, at which point remote queries will toss up an error.

A lot of noise was made about Apple's pending partnership with OpenAI ahead of the launch of Apple Intelligence.
Ultimately, however, it turned out that the deal was less about powering Apple Intelligence and more about offering an alternative platform for those things it's not really built for. It's a tacit acknowledgement that building a small-model system has its limitations. Apple Intelligence is free. So, too, is access to ChatGPT. However, those with paid accounts to the latter will have access to premium features free users don't, including unlimited queries.

ChatGPT integration, which debuted with iOS 18.2, iPadOS 18.2, and macOS Sequoia 15.2, has two primary roles: supplementing Siri's knowledge base and adding to the existing Writing Tools options. With the service enabled, certain questions will prompt the new Siri to ask the user to approve its accessing ChatGPT. Recipes and travel planning are examples of questions that may surface the option. Users can also directly prompt Siri to "ask ChatGPT."

Compose is the other primary ChatGPT feature available through Apple Intelligence. Users can access it in any app that supports the new Writing Tools feature. Compose adds the ability to write content based on a prompt. That joins existing writing tools like Style and Summary.

We know for sure that Apple plans to partner with additional generative AI services. The company all but said that Google Gemini is next on that list.

At WWDC 2025, Apple announced what it calls the Foundation Models framework, which will let developers tap into its AI models while offline. This makes it easier for developers to build AI features into their third-party apps that leverage Apple's existing systems. "For example, if you're getting ready for an exam, an app like Kahoot can create a personalized quiz from your notes to make studying more engaging," Federighi said at WWDC. "And because it happens using on-device models, this happens without cloud API costs [...] We couldn't be more excited about how developers can build on Apple Intelligence to bring you new experiences that are smart, available when you're offline, and that protect your privacy."
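To make Federighi's quiz example concrete, here is a minimal sketch of what a notes-to-quiz request against the on-device model might look like. It uses the Foundation Models framework names Apple showed at WWDC 2025 (LanguageModelSession, the @Generable and @Guide macros, and respond(to:generating:)); treat the exact signatures, and the Quiz types themselves, as illustrative assumptions rather than a definitive implementation.

```swift
import FoundationModels

// Illustrative output types: @Generable asks the on-device model to return
// structured data that decodes directly into these structs.
@Generable
struct QuizQuestion {
    @Guide(description: "A short question drawn from the study notes")
    var prompt: String
    @Guide(description: "The correct answer, in one sentence")
    var answer: String
}

@Generable
struct Quiz {
    @Guide(description: "Around five questions covering the notes")
    var questions: [QuizQuestion]
}

// Runs entirely on-device: the notes never leave the phone, and there are
// no per-request cloud API costs.
func makeQuiz(from notes: String) async throws -> Quiz {
    let session = LanguageModelSession(
        instructions: "You turn a student's notes into short study quizzes."
    )
    let response = try await session.respond(
        to: "Write a quiz based on these notes:\n\(notes)",
        generating: Quiz.self
    )
    return response.content
}
```

In a Kahoot-style flow, an app would simply feed the user's notes into a helper like makeQuiz (a hypothetical function here) and render the returned questions; nothing in that loop requires an account with an AI vendor.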
[2]
The 7 best AI features announced at Apple's WWDC that I can't wait to use
These features can do everything from mixing music for you to translating audio for you in real time. Apple's Worldwide Developers Conference (WWDC) was expected to have little AI news -- but Apple proved everyone wrong. Even though Apple has not yet launched the highly anticipated Siri upgrade -- the company said we will hear more about it in the coming year -- when it came to event time, Apple unveiled a slew of AI features across its devices and operating systems, including iOS, MacOS, WatchOS, and iPadOS.

Also: Apple's secret sauce is exactly what AI is missing

While these are not the flashiest features, many of them address issues that Apple users already had with their devices or in their everyday workflows. I gathered the AI features announced and ranked them from most to least helpful.

Apple introduced Visual Intelligence last year with the iPhone 16 launch. At the time, Apple's Visual Intelligence allowed users to take a photo of objects around them and then use the iPhone's AI capability to search for them and find more information. On Monday, Apple upgraded the experience by adding Visual Intelligence to your iPhone screen. To use it, you just have to take a screenshot. Visual Intelligence can use Apple Intelligence to grab the details from your screenshot and suggest actions, such as adding an event to your calendar. You can also use the "ask button" to ask ChatGPT for help with a particular image. This is useful for tasks in which ChatGPT could provide assistance, such as solving a puzzle. You can also tap on "Search" to look on the web.

Also: Your entire iPhone screen is now searchable with new Visual Intelligence features

Although Google already offered the same capability years ago with Circle to Search, this is a big win for Apple users, as it is functional and was executed well. It leverages ChatGPT's already capable models rather than trying to build an inferior one itself.

Since generative AI exploded in popularity, a useful application that has emerged is real-time translation. Because LLMs have a deep understanding of language and how people speak, they are able to translate speech not just literally but also accurately, using additional context. Apple will roll out this powerful capability across its own devices with a new real-time translation feature.

Also: Apple Intelligence is getting more languages - and AI-powered translation

The feature can translate text in Messages and audio on FaceTime and phone calls. If you are using it for verbal conversations, you just tap a button on your call, which alerts the other person that the live translation is about to take place. After a speaker says something, there is a brief pause, and then you get audio feedback with a conversational version of what was said in your language of choice, along with a transcript you can follow. This feature is valuable because it can help people who don't share a language communicate with each other. It is also easy to access because it is being baked into communication platforms people already rely on every day.

Apple made its on-device model available to developers for the first time, and although that may seem like it would only benefit developers, it's a major win for all users. Apple has a robust community of developers who build applications for Apple's operating systems.
Tapping into that talent by letting developers build on Apple Intelligence nearly guarantees that more innovative and useful applications using Apple Intelligence will emerge, which benefits all users, since they will be able to take advantage of them.

The Shortcuts update was easy to miss during the keynote, but it is one of the best use cases for AI. If you are like me, you typically avoid programming Shortcuts because they seem too complicated to create. This is where Apple Intelligence can help.

Also: Shortcuts is the best Apple app you're not using - and iOS 26 makes it even more powerful

With the new intelligent actions feature, you can tap into Apple Intelligence models either on-device or in Private Cloud Compute within your Shortcut, unlocking a new, much more advanced set of capabilities. For example, you could set up a Shortcut that takes all the files you add to your homepage and then sorts them into folders for you using Apple Intelligence. There is also a gallery feature available to try out some of the features and get inspiration for building.

The Hold Assist feature is a prime example of a feature that is not over the top but has the potential to save you a lot of time in your everyday life. The way it works is simple: if you're placed on hold and your phone detects hold music, it will ask if you want it to hold your spot in line and notify you when it's your turn to speak with someone, alerting the person on the other end of the call that you will be right there.

Also: These 3 Apple CarPlay upgrades stole WWDC 2025 for me

Imagine how much time you will get back from calls with customer service. If the feature seems familiar, it's because Google has a similar "Hold for me" feature, which uses Call Assist to wait on hold for you and notify you when the agent is back.

The Apple Vision Pro introduced the idea of enjoying your precious memories in an immersive experience that places you in the scene. However, to take advantage of this feature, you had to take spatial photos and record spatial videos. Now, a similar feature is coming to iOS, allowing users to transform any picture they have into a 3D-like image that separates the foreground and background for a spatial effect.

Also: iOS 26 will bring any photo on your iPhone to life with 3D spatial effects

The best part is that you can add these photos to your lock screen, and as you move your phone, the 3D element looks like it moves with it. It may seem like there is no AI involved, but according to Craig Federighi, Apple's SVP of software engineering, it can transform your 2D photo into a 3D effect by using "advanced computer vision techniques running on the Neural Engine."

Using AI for workout insights isn't new, as most fitness wearable companies, including Whoop and Oura, have implemented a feature of that sort before. However, Workout Buddy is a unique feature and an application of AI I haven't seen before. Essentially, it uses your fitness data, such as history, paces, Activity Rings, and Training Load, to give you unique feedback as you are working out.

Also: Your Apple Watch is getting a major upgrade. Here are the best features in WatchOS 26

Even though this feature is a part of the WatchOS upgrade -- and I don't happen to own one -- it does seem like a fun and original way to use AI. As someone who lacks all desire to work out, I can see that having a motivational reminder can have a positive impact on the longevity of my workout.
The list above is already pretty extensive, and yet Apple unveiled plenty more AI features beyond these.
[3]
How Apple just changed the developer world with this one AI announcement
This is big. Really, really big. It's subtle. It's probably not what you think. And it's going to take a few minutes to explain.

Before I deconstruct the strategic importance of this move, let's discuss what "it" is. Briefly, Apple is providing access to its on-device AI large language model (LLM) to developers.

I can hear you all saying, "That's it? That's this big thing? Developers have had access to AI LLMs since there were AI LLMs. Are you saying it's big because it's from Apple? Fan boy! Nyah-nyah." No. That's not it. I'm not an Apple fan boy. And I certainly don't bleed in six colors.

Also: The best AI for coding in 2025 (including a new winner - and what not to use)

Another group of you is probably thinking, "Wait. What? AI from Apple? The last we looked, on the number line between barf and insanely great, Apple Intelligence was about two-thirds of the way toward barf." Yeah, I have to agree. Apple Intelligence has been a big nothingburger. I even wrote an entire article about how uninteresting and yawn-inducing Apple Intelligence has been. I still think that. But the fact that Apple's branding team oversold a feature set doesn't detract from the seismic change that Apple has just announced.

I know. In this context, bringing Steve Ballmer's famous "developers, developers, developers" rant into an Apple story is like telling someone, "Live long and may the Force be with you." But this is a developer story. Let's be clear: everything about the modern Apple ecosystem is really a developer story. In fact, everything about the modern world is, fundamentally, a developer story.

Also: Everything announced at Apple's WWDC 2025 keynote: Liquid Glass, MacOS Tahoe, and more

It's hard to deny the fact that code rules the world. Nearly everything we do, and certainly all of our communications, supply chain, and daily-life ecosystem, revolves around software. We became a highly connected, mobile-computing-centric society when the smartphone became a permanent appendage to the human body in 2008 or so. But it wasn't the generic smartphone. It wasn't even the iPhone that changed everything. It was the App Store.

Prior to the App Store, you needed some level of geek skills to install software. That meant there was friction between having an idea for software and installing it. Developers had to find users, manage distribution channels, and eventually sell their goods. When I started my first software company, I faced a number of barriers to entry: printing packaging cost tens of thousands of dollars per title; I had to convince a distributor and retailer to carry it; and then there was warehousing, shipping, assembly, and a variety of other physical supply-chain issues. Most developers only got to keep 30-40% of the eventual retail price of the product; distributors and retailers got the rest.

Also: The 7 best AI features announced at Apple's WWDC that I can't wait to use

Then came the App Store. First, we could sell software for as little as a buck, which could still be profitable. There were no production costs, no cost to print disks or disk labels, no labor to put labels on the disks or prepare them for shipping, and no shipping costs. Users didn't have to find some "computer kid" to install the software -- they just pushed a button and it installed. Developers who sold through the channel got to keep 70% of the revenue instead of just 30 or 40%.

Back when the App Store launched, I created 40 pinpoint iPhone apps. I didn't make enough to give up my day job, but I did make a few thousand bucks in profit.
Before the App Store, it would have been impossible to create 40 little apps -- impossible to get shelf space, afford production, price them at a buck, or make a profit. The App Store removed all that friction, and the number of available apps ballooned into the millions. Anybody, anywhere, with a computer and a little programming skill, could -- and still can -- create an app, get distribution, sell some, and make a profit. Anyone.

Also: Is ChatGPT Plus still worth $20 when the free version packs so many premium features?

Keep in mind that the power of the iPhone and of Android is the developer long tail. Sure, we all have the Facebook and Instagram apps on our phones. We probably all have a few of the big apps like Uber and Instacart. But it's not billion-dollar apps that make the platform; it's the tons and tons of little specialty apps, some of which broke out and became big apps. It's the fact that anyone can make an app, can afford to make an app, and can afford to get that app into distribution. It's not that the App Store lowered the barrier to entry. It's that the App Store effectively removed any financial barrier to entry at all.

Well, technically, AI has been with us for fifty years or more. The big change is generative AI. ChatGPT, and its desperate competitor clones, changed things once again. I don't need to go into the mega-changes we've been seeing due to the emergence of generative AI. We cover that every day here at ZDNET. Just about every other publication on the planet is also covering AI in depth.

Also: Your favorite AI chatbot is lying to you all the time

The thing is, AI is bonkers expensive. As cited in a really interesting ZDNET article on AI energy use, Boston Consulting Group estimates that AI data centers will use about 7.5% of America's energy supply within four years. AI data centers are huge and enormously expensive to either build out or rent. Statista cites OpenAI's Sam Altman as saying that GPT-4, the LLM inside ChatGPT, cost more than $100 million. (Disclosure: Ziff Davis, ZDNET's parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

While most chatbots based on LLMs have free tiers, those tiers are often fairly limited in what they can do and how often they do it. They're loss leaders designed to get consumers used to the idea of AI so they eventually become customers. The real business is in licensing. You can oversimplify the AI business by breaking it into two categories: those who create the LLMs, and those who license the LLMs for use in their apps.

Also: Your iPhone will translate calls and texts in real time, thanks to AI

AI companies (those who make the LLMs) base their business models on the premise that other developers will want the benefits of generative AI for their software products. Few developers want to take on the expense of developing an AI from scratch, so they license API calls from the AI companies, effectively paying based on usage. This makes adding AI to an app absurdly easy. The bulk of the effort is in authenticating the app's right to access the AI. Then the app just sends a prompt as an API parameter, and the AI returns either plain text or structured text as a response. There are two main gotchas. First, whatever your app sends to the AI is being sent to the AI -- there's a privacy issue there.
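To make that licensing pattern concrete, here is a rough sketch of the kind of usage-metered call a developer would wire up today. Everything in it -- the endpoint, the key, and the payload shape -- is hypothetical rather than any specific vendor's API; the point is simply that every prompt is a billed network round trip that also carries the user's text off the device.

```swift
import Foundation

// Hypothetical cloud LLM endpoint and key -- illustrative only, not a real vendor's API.
let endpoint = URL(string: "https://api.example-llm.com/v1/complete")!
let apiKey = ProcessInfo.processInfo.environment["EXAMPLE_LLM_KEY"] ?? ""

// Sends a prompt to a remote model and returns its text reply.
// Each call is usage-billed, and the prompt leaves the device.
func complete(prompt: String) async throws -> String {
    var request = URLRequest(url: endpoint)
    request.httpMethod = "POST"
    request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONEncoder().encode(["prompt": prompt])

    let (data, _) = try await URLSession.shared.data(for: request)
    let reply = try JSONDecoder().decode([String: String].self, from: data)
    return reply["text"] ?? ""
}
```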
But more to the point, developers have had only four business-model options for incorporating AI via API calls in their products. In all of those cases, the AI becomes a transactional expense. The AI features presented to customers have to provide big enough value (or be spun as having big enough value) to convince customers to spend for them. If developers eat the AI API fees themselves, the app has to be profitable enough for the developer to include those fees in their cost of goods. And, again, all of this has privacy concerns on top of the expense barrier to entry.

If you think about it, the App Store removed barriers to entry. It removed friction. It removed the friction consumers felt in having to actually do software installation. And it removed tons of developer friction in bringing a product to market. Removing friction changed the software world as we know it.

Now, with its iOS 26, iPadOS 26, MacOS 26, VisionOS 26, WatchOS 26, and TVOS 26 announcements, Apple is removing the friction involved in adding AI to apps. It's not that coding the AI into apps has been hard. No, it's that the business model has had a fairly high coefficient of friction. In fact, if you wanted to add AI, you had to deal with business-model issues. No longer, at least in the Apple ecosystem.

Apple has announced that its Foundation Models framework (essentially an LLM) is available on-device (solving the privacy issue) and at no charge (solving the business-model issue). It's the no-charge part of this that has me saying this is a revolutionary change. Up until now, if you wanted to add AI to your app, you really had to justify it. You had to have something big enough that you thought you could get an ROI from that investment. But now, developers can add AI to any of their apps like any other feature they include. You wouldn't expect a developer to have to do a business-model ROI analysis to add a drop-down menu or a pop-up calendar to an app. But with AI, developers have had to give it that extra thought, incur that extra friction. Now, any time a developer is coding along and thinking, "Ya know, an AI prompt would make this work better," the developer can add that prompt. Boom. Just part of the coding process.

Also: Cisco rolls out AI agents to automate network tasks at 'machine speed' - with IT still in control

For the big developers, this change won't mean all that much. But for the small and independent developers, this is huge. It means we'll start to see little bits of AI all through our apps, just helping out wherever a developer thinks it might help. Want to have some smart tags assigned to that note? Just feed a prompt to the Foundation Models API (there's a sketch of that below). Want to know if there are two shoes or a shoe and a handbag in that picture? Just feed the bitmap to the model. Want to generate a quick thumbnail for something? Just feed a prompt to the model. Want to have better dialog from your NPCs in your little casual game? Just ask the AI model. There's zero monetary investment required to get the AI services back out.

Now, sure, the elephant in the room is that Apple's AI models are fairly meh. But the company is always improving. Those models will get better, year after year. So developers get quick, free AI code now. In a year or two, they get quick, free, really good AI code.

Also: My new favorite iOS 26 feature is a supercharged version of Google Lens - and it's easy to use

Let's also not forget the privacy benefits. All this is done on-device.
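For a sense of how small that lift is, here's a minimal sketch of the smart-tags idea mentioned above, using the Foundation Models framework names Apple showed at WWDC 2025 (LanguageModelSession and respond(to:)). The exact signatures are an assumption on my part, but the shape is the point: no account, no key, no metered bill.

```swift
import Foundation
import FoundationModels

// Suggests a handful of tags for a note using the on-device model.
// No API key, no per-call charge, and the note's text never leaves the device.
func suggestTags(for note: String) async throws -> [String] {
    let session = LanguageModelSession()
    let response = try await session.respond(
        to: "Suggest up to five short tags, comma-separated, for this note:\n\(note)"
    )
    return response.content
        .split(separator: ",")
        .map { $0.trimmingCharacters(in: .whitespacesAndNewlines) }
}
```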
Running on-device means the knowledge base won't be as extensive as ChatGPT's, but it also means your musings about whether you ate too many pizzas this week, your crush on someone, or your worries about a possible health scare remain private. They won't make it into some giant bouillabaisse of knowledge shared by the big AI companies.

For some developers, this can be huge. For example, Automattic (the WordPress company) has an unrelated app called Day One, which is a journaling tool. You definitely don't want your private journaling thoughts shared with some giant AI in the cloud. "The Foundation Model framework has helped us rethink what's possible with journaling," said Paul Mayne, head of Day One at Automattic. "Now we can bring intelligence and privacy together in ways that deeply respect our users."

Next year at this time, I'll bet we see AI embedded in tons of ways we've never even thought of before now. That's why I think Apple's new developer AI tools could be the biggest thing for apps since apps.

Before we wrap this article, I want to mention that at its Platforms State of the Union, Apple announced some improvements to Xcode, the company's development environment. The company has integrated the now-typical AI coding tools into Xcode 26, allowing developers to ask an AI for help in coding, ask it to write code chunks, and more.

Also: How AI coding agents could destroy open source software

One feature I thought was interesting is that Apple has made Xcode 26 AI-agnostic. You can use whatever LLM you want in the chat section of Xcode. If you're using ChatGPT, you can use the free version or, if you have a paid tier, one of those. The company said you can use other models as well, but it only discussed the Anthropic models in the Platforms State of the Union session. In keeping with our previous AI discussion, Apple also said you can run models locally on your Mac, so your code doesn't have to be sent to a cloud-based AI. That could be very important if you're working under an NDA or other code-sharing restriction.

Look, Apple Intelligence is still a disappointment. While Apple announced more Apple Intelligence features, there was a reason Apple focused on its liquid glass and mirrors and its shiny new user-interface elements: that's something it does well. Face it. Nobody was asking Apple when it'd make glowing, liquid-like UI puddles. Everyone was wondering when Apple would catch up with Google, Microsoft, and especially OpenAI.

Also: The 7 best AI features announced at Apple's WWDC that I can't wait to use

It's definitely not there this year. But I do think that a fairly competent AI model for apps -- which is what the Foundation Models framework offers -- will transform the types of features developers add to their code. And that is game-changing, even if it's not as flashy as what Apple usually puts out.

What do you think about Apple's move to offer on-device AI tools for free? Will it change how developers approach app design? Are you more likely to add AI features to your own projects now that the business and privacy barriers are lower? Do you see this as a meaningful shift in the mobile-app ecosystem, or is it just Apple playing catch-up? Let us know in the comments below.
Apple unveils its AI platform, Apple Intelligence, integrating advanced features across its devices and operating systems, marking a significant step in the AI race against competitors like Google and OpenAI.
Apple has made a significant leap into the AI arena with the introduction of Apple Intelligence, a comprehensive AI platform designed to seamlessly integrate into the company's existing ecosystem. Unveiled at WWDC 2024 and further expanded in subsequent events, Apple Intelligence marks the tech giant's strategic move to compete with industry leaders like Google, OpenAI, and Anthropic in the rapidly evolving AI landscape [1].
Apple Intelligence is not a standalone product but rather a suite of AI-powered features integrated into various Apple applications and services. Some of the notable features include:
Writing Tools: Powered by a large language model (LLM), this feature is available across apps like Mail, Messages, and Pages, offering text summarization, proofreading, and message composition capabilities [1].
Visual Intelligence: This feature allows users to perform image searches based on visual content they encounter while browsing. It has been expanded to work with screenshots, enabling users to interact with on-screen content more effectively [2].
Live Translation: A real-time translation feature that works across Messages, FaceTime, and Phone apps, facilitating seamless communication across language barriers [2].
Siri Enhancements: While a major Siri upgrade is still in development, the current version has been more deeply integrated into Apple's operating systems, offering improved context awareness and cross-app functionality [1].
Image Generation: Users can create custom emojis (Genmojis) and use the Image Playground app for AI-powered image creation [1].
In a move that could significantly impact the app development landscape, Apple has made its on-device AI model available to developers. This decision opens up new possibilities for innovative applications leveraging Apple Intelligence, potentially transforming the iOS app ecosystem [3].
Apple Intelligence features are available on newer Apple devices, including iPhone 15 Pro models, iPad Pro (4th generation and later), and Mac computers with Apple silicon. The initial rollout began in October 2024 with the iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1 updates, starting with English language support and gradually expanding to other languages [1].
Apple's approach to AI integration reflects its characteristic strategy of enhancing existing products rather than creating standalone AI applications. This pragmatic approach aims to improve user experience across the Apple ecosystem while maintaining the company's focus on privacy and on-device processing [1].
Apple has hinted at more personalized AI features in the pipeline, including an advanced version of Siri capable of understanding personal context. However, the release of these features has been delayed to ensure they meet Apple's quality standards [1].
The introduction of Apple Intelligence, particularly the decision to open the on-device AI model to developers, represents a significant shift in the app development landscape. This move could lead to a new wave of AI-powered applications, reminiscent of the transformative impact the App Store had on software distribution and accessibility [3].
As Apple continues to refine and expand its AI offerings, users can expect more intuitive and powerful features across their devices. The integration of AI into everyday tasks, from communication to content creation, signals a new era of user interaction within the Apple ecosystem.
Summarized by Navi