6 Sources
[1]
Apple Intelligence: Everything you need to know about Apple's AI model and services | TechCrunch
If you've upgraded to a newer iPhone model recently, you've probably noticed that Apple Intelligence is showing up in some of your most-used apps, like Messages, Mail, and Notes. Apple Intelligence (yes, also abbreviated to AI) showed up in Apple's ecosystem in October 2024, and it's here to stay as Apple competes with Google, OpenAI, Anthropic, and others to build the best AI tools. Cupertino marketing executives have branded Apple Intelligence: "AI for the rest of us." The platform is designed to leverage the things that generative AI already does well, like text and image generation, to improve upon existing features. Like other platforms including ChatGPT and Google Gemini, Apple Intelligence was trained on large information models. These systems use deep learning to form connections, whether it be text, images, video or music.

The text offering, powered by an LLM, presents itself as Writing Tools. The feature is available across various Apple apps, including Mail, Messages, Pages and Notifications. It can be used to provide summaries of long text, proofread and even write messages for you, using content and tone prompts. Image generation has been integrated as well, in similar fashion -- albeit a bit less seamlessly. Users can prompt Apple Intelligence to generate custom emojis (Genmojis) in an Apple house style. Image Playground, meanwhile, is a standalone image generation app that uses prompts to create visual content that can be used in Messages or Keynote, or shared via social media.

Apple Intelligence also marks a long-awaited face-lift for Siri. The smart assistant was early to the game, but has mostly been neglected for the past several years. Siri is now integrated much more deeply into Apple's operating systems; for instance, instead of the familiar icon, users will see a glowing light around the edge of their iPhone screen when it's doing its thing. More important, the new Siri works across apps. That means, for example, that you can ask Siri to edit a photo and then insert it directly into a text message. It's a frictionless experience the assistant had previously lacked. Onscreen awareness means Siri uses the context of the content you're currently engaged with to provide an appropriate answer.

Leading up to WWDC 2025, many expected that Apple would introduce us to an even more souped-up version of Siri, but we're going to have to wait a bit longer. "As we've shared, we're continuing our work to deliver the features that make Siri even more personal," said Apple SVP of Software Engineering Craig Federighi at WWDC 2025. "This work needed more time to reach our high-quality bar, and we look forward to sharing more about it in the coming year." This yet-to-be-released, more personalized version of Siri is supposed to be able to understand "personal context," like your relationships, communication routines, and more. But according to a Bloomberg report, the in-development version of this new Siri is too error-ridden to ship, hence its delay.

At WWDC 2025, Apple also unveiled a new AI feature called Visual Intelligence, which helps you do an image search for things you see as you browse. Apple also unveiled a Live Translation feature that can translate conversations in real time in the Messages, FaceTime, and Phone apps. Visual Intelligence and Live Translation are expected to be available later in 2025, when iOS 26 launches to the public. After months of speculation, Apple Intelligence took center stage at WWDC 2024.
The platform was announced in the wake of a torrent of generative AI news from companies like Google and OpenAI, causing concern that the famously tight-lipped tech giant had missed the boat on the latest tech craze. Contrary to such speculation, however, Apple had a team in place, working on what proved to be a very Apple approach to artificial intelligence. There was still pizzazz amid the demos -- Apple always loves to put on a show -- but Apple Intelligence is ultimately a very pragmatic take on the category.

Apple Intelligence isn't a standalone feature. Rather, it's about integrating into existing offerings. While it is a branding exercise in a very real sense, the large language model (LLM) driven technology will operate behind the scenes. As far as the consumer is concerned, the technology will mostly present itself in the form of new features for existing apps. We learned more during Apple's iPhone 16 event in September 2024, when the company touted a number of AI-powered features coming to its devices, from translation on the Apple Watch Series 10 and visual search on iPhones to a number of tweaks to Siri's capabilities.

The first wave of Apple Intelligence arrived in October 2024 via the iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1 updates. These updates included integrated writing tools, image cleanup, article summaries, and a typing input for the redesigned Siri experience. The features launched first in U.S. English; Apple later added Australian, Canadian, New Zealand, South African, and U.K. English localizations. Support for Chinese, English (India), English (Singapore), French, German, Italian, Japanese, Korean, Portuguese, Spanish, and Vietnamese will arrive in 2025. A second wave of features became available as part of iOS 18.2, iPadOS 18.2 and macOS Sequoia 15.2. That list includes Genmoji, Image Playground, Visual Intelligence, Image Wand, and ChatGPT integration. These offerings are free to use, so long as you have supported hardware. Notably, only the Pro versions of the iPhone 15 are getting access, owing to shortcomings on the standard model's chipset. Presumably, however, the whole iPhone 16 line will be able to run Apple Intelligence when it arrives.

When you ask GPT or Gemini a question, your query is being sent to external servers to generate a response, which requires an internet connection. But Apple has taken a small-model, bespoke approach to training. The biggest benefit of this approach is that many of these tasks become far less resource intensive and can be performed on-device. This is because, rather than relying on the kind of kitchen-sink approach that fuels platforms like GPT and Gemini, the company has compiled datasets in-house for specific tasks like, say, composing an email. That doesn't apply to everything, however. More complex queries will utilize the new Private Cloud Compute offering. The company now operates remote servers running on Apple Silicon, which it claims allows it to offer the same level of privacy as its consumer devices. Whether an action is being performed locally or via the cloud will be invisible to the user, unless their device is offline, at which point remote queries will toss up an error.

A lot of noise was made about Apple's pending partnership with OpenAI ahead of the launch of Apple Intelligence.
Ultimately, however, it turned out that the deal was less about powering Apple Intelligence and more about offering an alternative platform for those things it's not really built for. It's a tacit acknowledgement that building a small-model system has its limitations. Apple Intelligence is free. So, too, is access to ChatGPT. However, those with paid accounts to the latter will have access to premium features free users don't, including unlimited queries.

ChatGPT integration, which debuted in iOS 18.2, iPadOS 18.2, and macOS Sequoia 15.2, has two primary roles: supplementing Siri's knowledge base and adding to the existing Writing Tools options. With the service enabled, certain questions will prompt the new Siri to ask the user to approve its accessing ChatGPT. Recipes and travel planning are examples of questions that may surface the option. Users can also directly prompt Siri to "ask ChatGPT." Compose is the other primary ChatGPT feature available through Apple Intelligence. Users can access it in any app that supports the new Writing Tools feature. Compose adds the ability to write content based on a prompt. That joins existing writing tools like Style and Summary.

We know for sure that Apple plans to partner with additional generative AI services. The company all but said that Google Gemini is next on that list. At WWDC 2025, Apple announced what it calls the Foundation Models framework, which will let developers tap into its AI models while offline. This makes it easier for developers to build AI features into their third-party apps that leverage Apple's existing systems. "For example, if you're getting ready for an exam, an app like Kahoot can create a personalized quiz from your notes to make studying more engaging," Federighi said at WWDC. "And because it happens using on-device models, this happens without cloud API costs [...] We couldn't be more excited about how developers can build on Apple Intelligence to bring you new experiences that are smart, available when you're offline, and that protect your privacy."
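Federighi's quiz example maps onto a surprisingly small amount of code. The sketch below is illustrative only: it assumes the Swift API surface Apple showed at WWDC 2025 (a `LanguageModelSession` from the FoundationModels framework with a `respond(to:)` call); exact names and signatures may differ in shipping SDKs, and the prompt text is invented.

```swift
import FoundationModels

// Rough sketch of Federighi's study-quiz example: prompt the on-device
// model directly, with no cloud API fee and no network round trip.
// API names follow Apple's WWDC 2025 materials; treat as approximate.
func makeQuiz(from notes: String) async throws -> String {
    // A session holds one conversation with the on-device model.
    let session = LanguageModelSession(
        instructions: "You write short, clear study quizzes."
    )
    let response = try await session.respond(
        to: "Write a five-question quiz from these notes:\n\(notes)"
    )
    return response.content  // Plain text generated entirely on-device.
}
```

Because the model runs locally, the notes never leave the device -- which is the privacy argument Apple pairs with the no-cost argument throughout these announcements.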
[2]
Forget iOS 26, Jump on These 6 Apple Intelligence Features Right Now
Apple didn't have a lot to say about Apple Intelligence at last week's Worldwide Developers Conference, focusing instead on iOS 26 and the new Liquid Glass interface that will extend to the iPhone and all of its devices. But even if it had, we'd still be waiting for the new operating systems to be released in the fall to take advantage of them (unless you want to live on the edge and install the first developer betas now). I sat down to figure out just which of the current Apple Intelligence features I actually use. They aren't necessarily the showy ones, like Image Playground, but ones that help in small, significant ways.

Admittedly, Apple Intelligence has gotten off to a rocky start, from misleading message summaries to delayed Siri improvements, but the AI tech is far from being a bust. If you have a compatible iPhone -- an iPhone 15 Pro, iPhone 16E, iPhone 16 or iPhone 16 Pro (or their Plus and Max variants) -- I want to share six features that I'm turning to nearly every day. More features will be added as time goes on -- and keep in mind that Apple Intelligence is still officially beta software -- but this is where Apple is starting its AI age. On the other hand, maybe you're not impressed with Apple Intelligence, or want to wait until the tools evolve more before using them? You can easily turn off Apple Intelligence entirely or use a smaller subset of features.

This feature arrived only recently, but it's become one of my favorites. When a notification arrives that seems like it could be more important than others, Prioritize Notifications pops it to the top of the notification list on the lock screen (with a colorful Apple Intelligence shimmer, of course). In my experience so far, those include weather alerts, texts from people I regularly communicate with and email messages that contain calls to action or impending deadlines. To enable it, go to Settings > Notifications > Prioritize Notifications and then turn the option on. You can also enable or disable priority alerts for individual apps from the same screen. You're relying on the AI algorithms to decide what gets elevated to a priority -- but it seems to be off to a good start.

In an era with so many demands on our attention and seemingly less time to dig into longer topics ... Sorry, what was I saying? Oh, right: How often have you wanted a "too long; didn't read" version of not just long emails but the fire hose of communication that blasts your way? The ability to summarize notifications, Mail messages and web pages is perhaps the most pervasive and least intrusive feature of Apple Intelligence so far. When a notification arrives, such as a text from a friend or group in Messages, the iPhone creates a short, single-sentence summary. Sometimes summaries are vague and sometimes they're unintentionally funny, but so far I've found them to be more helpful than not. Summaries can also be generated from alerts by third-party apps like news or social media apps -- although I suspect that my outdoor security camera is picking up multiple passersby over time and not telling me that 10 people are stacked by the door. That said, Apple Intelligence definitely doesn't understand sarcasm or colloquialisms -- you can turn summaries off if you prefer. You can also generate a longer summary of emails in the Mail app: Tap the Summarize button at the top of a message to view a rundown of the contents in a few dozen words.
In Safari, when viewing a page where the Reader feature is available, tap the Page Menu button in the address bar, tap Show Reader and then tap the Summary button at the top of the page.

I was amused during the iOS 18 and iPhone 16 releases that the main visual indicator of Apple Intelligence -- the full-screen, color-at-the-edges Siri animation -- was noticeably missing. Apple even lit up the edges of the massive glass cube of its Apple Fifth Avenue Store in New York City like a Siri search. Instead, iOS 18 used the same-old Siri sphere. Now, the modern Siri look has arrived as of iOS 18.1, but only on devices that support Apple Intelligence. If you're wondering why you're still seeing the old interface, I can recommend some steps to turn on the new experience. With the new look come a few Siri interaction improvements: It's more forgiving if you stumble through a query, like saying the wrong word or interrupting yourself mid-thought. It's also better about listening after delivering results, so you can ask related follow-up questions. However, the ability to personalize answers based on what Apple Intelligence knows about you is still down the road. What did appear, as of iOS 18.2, was integration of ChatGPT, which you can now use as an alternate source of information. For some queries, if Siri doesn't have the answer right away, you're asked if you'd like to use ChatGPT instead. You don't need a ChatGPT account to take advantage of this (but if you do have one, you can sign in).

Perhaps my favorite new Siri feature is the ability to bring up the assistant without saying the words "Hey Siri" out loud. In my house, where I have HomePods and my family members use their own iPhones and iPads, I never know which device is going to answer my call (even though they're supposed to be smart enough to work it out). Plus, honestly, even after all this time I'm not always comfortable talking to my phone -- especially in public. It's annoying enough when people carry on phone conversations on speaker; I don't want to add to the hubbub by making Siri requests. Instead, I turn to a new feature called Type to Siri. Double-tap the bottom edge of the screen on the iPhone or iPad to bring up the Siri search bar and the onscreen keyboard. On a Mac, go to System Settings > Apple Intelligence & Siri and choose a key combination under Keyboard shortcut, such as Press Either Command Key Twice. Yes, this involves more typing work than just speaking conversationally, but I can enter more specific queries and not wonder if my robot friend is understanding what I'm saying.

Until iOS 18.1, the Photos app on the iPhone and iPad lacked a simple retouch feature. Dust on the camera lens? Litter on the ground? Sorry, you needed to deal with those and other distractions in the Photos app on MacOS or using a third-party app. Now Apple Intelligence includes Clean Up, an AI-enhanced removal tool, in the Photos app. When you edit an image and tap the Clean Up button, the iPhone analyzes the photo and suggests potential items to remove by highlighting them. Tap one or draw a circle around an area -- the app erases those areas and uses generative AI to fill in plausible pixels. In this first incarnation, Clean Up isn't perfect and you'll often get better results in other dedicated image editors. But for quickly removing annoyances from photos, it's fine.

Focus modes on the iPhone can be enormously helpful, such as turning on Do Not Disturb to insulate yourself from outside distractions.
You can also create personalized Focus modes. For example, my Podcast Recording mode blocks outside notifications except from a handful of people during scheduled recording times. With Apple Intelligence enabled, a new Reduce Interruptions Focus mode is available. When active, it becomes a smarter filter for what gets past the wall holding back superfluous notifications. Even notifications that aren't specified in your criteria for allowed notifications, such as ones from specific people, might pop up if the AI judges them important. On my iPhone, for instance, that can include weather alerts or texts from my bank when a large purchase or funds transfer has occurred.
[3]
How Apple just changed the developer world with this one AI announcement
This is big. Really, really big. It's subtle. It's probably not what you think. And it's going to take a few minutes to explain. Before I deconstruct the strategic importance of this move, let's discuss what "it" is. Briefly, Apple is providing access to its on-device AI large language model (LLM) to developers. I can hear you all saying, "That's it? That's this big thing? Developers have had access to AI LLMs since there were AI LLMs. Are you saying it's big because it's from Apple? Fan boy! Nyah-nyah." No. That's not it. I'm not an Apple fan boy. And I certainly don't bleed in six colors.

Another group of you is probably thinking, "Wait. What? AI from Apple? The last we looked, on the number line between barf and insanely great, Apple Intelligence was about two-thirds of the way toward barf." Yeah, I have to agree. Apple Intelligence has been a big nothingburger. I even wrote an entire article about how uninteresting and yawn-inducing Apple Intelligence has been. I still think that. But the fact that Apple's branding team oversold a feature set doesn't detract from the seismic change that Apple has just announced.

Developers, developers, developers. I know. In this context, bringing Steve Ballmer's famous rant into an Apple story is like telling someone, "Live long and may the Force be with you." But this is a developer story. Let's be clear: everything about the modern Apple ecosystem is really a developer story. In fact, everything about the modern world is, fundamentally, a developer story. It's hard to deny the fact that code rules the world. Nearly everything we do, and certainly all of our communications, supply chain, and daily-life ecosystem, revolves around software.

We became a highly connected, mobile-computing-centric society when the smartphone became a permanent appendage to the human body in 2008 or so. But it wasn't the generic smartphone. It wasn't even the iPhone that changed everything. It was the App Store. Prior to the App Store, you needed some level of geek skills to install software. That meant there was friction between having an idea for software and installing it. Developers had to find users, manage distribution channels, and eventually sell their goods. When I started my first software company, I faced a number of barriers to entry: printing packaging cost tens of thousands of dollars per title; I had to convince distributors and retailers to carry it; and then there was warehousing, shipping, assembly, and a variety of other physical supply-chain issues. Most developers only got to keep 30-40% of the eventual retail price of the product; distributors and retailers got the rest.

Then came the App Store. First, we could sell software for as little as a buck, which could still be profitable. There were no production costs, no cost to print disks or disk labels, no labor to put labels on the disks or prepare them for shipping, and no shipping costs. Users didn't have to find some "computer kid" to install the software -- they just pushed a button and it installed. Developers who sold through the channel got to keep 70% of the revenue instead of just 30 or 40%. Back when the App Store launched, I created 40 pinpoint iPhone apps. I didn't make enough to give up my day job, but I did make a few thousand bucks in profit.
Before the App Store, it would have been impossible to create 40 little apps -- impossible to get shelf space, afford production, price them at a buck, or make a profit. The App Store removed all that friction, and the number of available apps ballooned into the millions. Anybody, anywhere, with a computer and a little programming skill, could -- and still can -- create an app, get distribution, sell some, and make a profit. Anyone.

Keep in mind that the power of the iPhone and of Android is the developer long tail. Sure, we all have the Facebook and Instagram apps on our phones. We probably all have a few of the big apps like Uber and Instacart. But it's not billion-dollar apps that make the platform; it's the tons and tons of little specialty apps, some of which broke out and became big apps. It's the fact that anyone can make an app, can afford to make an app, and can afford to get that app into distribution. It's not that the App Store lowered the barrier to entry. It's that the App Store effectively removed any financial barrier to entry at all.

Well, technically, AI has been with us for fifty years or more. The big change is generative AI. ChatGPT, and its desperate competitor clones, changed things once again. I don't need to go into the mega-changes we've been seeing due to the emergence of generative AI. We cover that every day here at ZDNET. Just about every other publication on the planet is also covering AI in depth.

The thing is, AI is bonkers expensive. Cited in a really interesting ZDNET article on AI energy use, Boston Consulting Group estimates that AI data centers will use about 7.5% of America's energy supply within four years. AI data centers are huge and enormously expensive to either build out or rent. Statista cites OpenAI's Sam Altman as saying that GPT-4, the LLM inside ChatGPT, cost more than $100 million. (Disclosure: Ziff Davis, ZDNET's parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

While most chatbots based on LLMs have free tiers, those tiers are often fairly limited in what they can do and how often they do it. They're loss leaders designed to get consumers used to the idea of AI so they eventually become customers. The real business is in licensing. You can oversimplify the AI business by breaking it into two categories: those who create the LLMs, and those who license the LLMs for use in their apps. AI companies (those who make the LLMs) base their business models on the premise that other developers will want the benefits of generative AI for their software products. Few developers want to take on the expense of developing an AI from scratch, so they license API calls from the AI companies, effectively paying based on usage. This makes adding AI to an app absurdly easy. The bulk of the effort is in authenticating the app's right to access the AI. Then the app just sends a prompt as an API parameter, and the AI returns either plain text or structured text as a response. There are two main gotchas. First, whatever your app sends to the AI is being sent to the AI -- there's a privacy issue there.
But more to the point, developers have had only four business-model options for incorporating AI via API calls in their products. In all of these cases, the AI becomes a transactional expense. The AI features presented to customers have to provide big enough value (or be spun as having big enough value) to convince customers to spend for them. If developers eat the AI API fees themselves, the app has to be profitable enough for the developer to include those fees in their cost of goods. And, again, all of this has privacy concerns on top of the expense barrier to entry.

If you think about it, the App Store removed barriers to entry. It removed friction. It removed the friction consumers felt in having to actually do software installation. And it removed tons of developer friction in bringing a product to market. Removing friction changed the software world as we know it. Now, with its iOS 26, iPadOS 26, MacOS 26, VisionOS 26, WatchOS 26, and TVOS 26 announcements, Apple is removing the friction involved in adding AI to apps. It's not that coding the AI into apps has been hard. No, it's that the business model has had a fairly high coefficient of friction. In fact, if you wanted to add AI, you had to deal with business-model issues. No longer, at least in the Apple ecosystem.

Apple has announced that its Foundation Models framework (essentially an on-device LLM) is available on-device (solving the privacy issue) and at no charge (solving the business-model issue). It's the no-charge part of this that has me saying this is a revolutionary change. Up until now, if you wanted to add AI to your app, you really had to justify it. You had to have something big enough that you thought you could get an ROI from that investment. But now, developers can add AI to any of their apps like any other feature they include. You wouldn't expect a developer to have to do a business-model ROI analysis to add a drop-down menu or a pop-up calendar to an app. But with AI, developers have had to give it that extra thought, incur that extra friction. Now, any time a developer is coding along and thinking, "Ya know, an AI prompt would make this work better," the developer can add that prompt. Boom. Just part of the coding process.

For the big developers, this change won't mean all that much. But for the small and independent developers, this is huge. It means we'll start to see little bits of AI all through our apps, just helping out wherever a developer thinks it might help. Want to have some smart tags assigned to that note? Just feed a prompt to the Foundation Models API. Want to know if there are two shoes or a shoe and a handbag in that picture? Just feed the bitmap to the model. Want to generate a quick thumbnail for something? Just feed a prompt to the model. Want better dialogue from the NPCs in your little casual game? Just ask the AI model. There's zero monetary investment required to get the AI services back out.

Now, sure, the elephant in the room is that Apple's AI models are fairly meh. But the company is always improving. Those models will get better, year after year. So developers get quick, free AI code now. In a year or two, they get quick, free, really good AI code. Let's also not forget the privacy benefits. All this is done on-device.
That means the knowledge base won't be as extensive as ChatGPT's, but it also means your musings about whether you ate too many pizzas this week, your crush on someone, or your worries about a possible health scare remain private. They won't make it into some giant bouillabaisse of knowledge shared by the big AI companies. For some developers, this can be huge. For example, Automattic (the WordPress company) has an unrelated app called Day One, which is a journaling tool. You definitely don't want your private journaling thoughts shared with some giant AI in the cloud. "The Foundation Models framework has helped us rethink what's possible with journaling," said Paul Mayne, head of Day One at Automattic. "Now we can bring intelligence and privacy together in ways that deeply respect our users." Next year at this time, I'll bet we see AI embedded in tons of ways we've never even thought of before now. That's why I think Apple's new developer AI tools could be the biggest thing for apps since apps.

Before we wrap this article, I want to mention that at its Platforms State of the Union, Apple announced some improvements to Xcode, the company's development environment. The company has integrated the now-typical AI coding tools into Xcode 26, allowing developers to ask an AI for help in coding, ask it to write code chunks, and more. One feature I thought was interesting is that Apple has made Xcode 26 AI-agnostic. You can use whatever LLM you want in the chat section of Xcode. If you're using ChatGPT, you can use the free version or, if you have a paid account, a premium model. The company said you can use other models as well, but it only discussed the Anthropic models in the Platforms State of the Union session. In keeping with our previous AI discussion, Apple also said you can run models locally on your Mac, so your code doesn't have to be sent to a cloud-based AI. That could be very important if you're working under an NDA or other code-sharing restriction.

Look, Apple Intelligence is still a disappointment. While Apple announced more Apple Intelligence features, there was a reason Apple focused on its Liquid Glass smoke and mirrors and shiny new user-interface elements. It's something it does well. Face it. Nobody was asking Apple when it'd make glowing, liquid-like UI puddles. Everyone was wondering when Apple would catch up with Google, Microsoft, and especially OpenAI. It's definitely not there this year. But I do think that a fairly competent AI model for apps -- which is what the Foundation Models framework offers -- will transform the types of features developers add to their code. And that is game-changing, even if it's not as flashy as what Apple usually puts out.

What do you think about Apple's move to offer on-device AI tools for free? Will it change how developers approach app design? Are you more likely to add AI features to your own projects now that the business and privacy barriers are lower? Do you see this as a meaningful shift in the mobile-app ecosystem, or is it just Apple playing catch-up? Let us know in the comments below.
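To make the article's "just feed a prompt to the model" refrain concrete, here is a hedged sketch of the smart-tags example mentioned above. It assumes the FoundationModels names Apple showed at WWDC (`SystemLanguageModel`, `LanguageModelSession`), so treat it as illustrative rather than definitive. The availability check is the detail worth noting: the free on-device model exists only on Apple Intelligence-capable hardware, so an app still needs a non-AI fallback.

```swift
import Foundation
import FoundationModels

// Sketch of "want smart tags assigned to that note? Just feed a prompt."
// API names follow Apple's WWDC materials; treat them as approximate.
func suggestTags(for note: String) async throws -> [String] {
    // The on-device model is absent on unsupported hardware or when
    // Apple Intelligence is switched off, so check before prompting.
    guard case .available = SystemLanguageModel.default.availability else {
        return []  // Fall back to untagged notes rather than failing.
    }
    let session = LanguageModelSession()
    let response = try await session.respond(
        to: "Suggest three short topic tags, comma-separated, for:\n\(note)"
    )
    return response.content
        .split(separator: ",")
        .map { $0.trimmingCharacters(in: .whitespaces) }
}
```

The point of the sketch is the business model, not the code: there is no API key, no usage meter, and no per-call cost anywhere in it.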
[4]
The 7 best AI features announced at Apple's WWDC that I can't wait to use
These features can do everything from mixing music for you to translating audio in real time. Apple's Worldwide Developers Conference (WWDC) was expected to have little AI news -- but Apple proved everyone wrong. Even though Apple has not yet launched the highly anticipated Siri upgrade -- the company said we will hear more about it in the coming year -- when it came to event time, Apple unveiled a slew of AI features across its devices and operating systems, including iOS, MacOS, WatchOS, and iPadOS. While not the flashiest features, many of them address issues that Apple users already had with their devices or in their everyday workflows. I gathered the AI features announced and ranked them from most to least helpful.

Apple introduced Visual Intelligence last year with the iPhone 16 launch. At the time, Apple's Visual Intelligence allowed users to take a photo of objects around them and then use the iPhone's AI capability to search for them and find more information. On Monday, Apple upgraded the experience by adding Visual Intelligence to your iPhone screen. To use it, you just have to take a screenshot. Visual Intelligence can use Apple Intelligence to grab the details from your screenshot and suggest actions, such as adding an event to your calendar. You can also use the Ask button to ask ChatGPT for help with a particular image. This is useful for tasks in which ChatGPT could provide assistance, such as solving a puzzle. You can also tap on Search to look on the web. Although Google already offered the same capability years ago with Circle to Search, this is a big win for Apple users, as it is functional and was executed well. It leverages ChatGPT's already capable models rather than trying to build an inferior one itself.

Since generative AI exploded in popularity, a useful application that has emerged is real-time translation. Because LLMs have a deep understanding of language and how people speak, they are able to translate speech not just literally but also accurately, using additional context. Apple will roll out this powerful capability across its own devices with a new real-time translation feature. The feature can translate text in Messages and audio on FaceTime and phone calls. If you are using it for verbal conversations, you just click a button on your call, which alerts the other person that the live translation is about to take place. After a speaker says something, there is a brief pause, and then you get audio feedback with a conversational version of what was said in your language of choice, with a transcript you can follow along with. This feature is valuable because it can help people communicate with each other. It is also easy to access because it is being baked into communication platforms people already rely on every day.

Apple made its on-device model available to developers for the first time, and although that may seem like it would only benefit developers, it's a major win for all users. Apple has a robust community of developers who build applications for Apple's operating systems.
Tapping into that talent by allowing them to build on Apple Intelligence nearly guarantees that more innovative and useful applications using Apple Intelligence will emerge, which is beneficial to all users since they will be able to take advantage of them.

The Shortcuts update was easy to miss during the keynote, but it is one of the best use cases for AI. If you are like me, you typically avoid programming Shortcuts because they seem too complicated to create. This is where Apple Intelligence can help. With the new intelligent actions features, you can tap into Apple Intelligence models either on-device or in Private Cloud Compute within your Shortcut, unlocking a new, much more advanced set of capabilities. For example, you could set up a Shortcut that takes all the files you add to your homepage and then sorts them into folders for you using Apple Intelligence. There is also a gallery feature available to try out some of the features and get inspiration for building.

The Hold Assist feature is a prime example of a feature that is not over the top but has the potential to save you a lot of time in your everyday life. The way it works is simple: if you're placed on hold and your phone detects hold music, it will ask if you want it to hold your spot in line, then notify you when it's your turn to speak with someone, alerting the person on the other end of the call that you will be right there. Imagine how much time you will get back from calls with customer service. If the feature seems familiar, it's because Google has a similar "Hold for Me" feature, which waits on hold for you and notifies you when an agent is back on the line.

The Apple Vision Pro introduced the idea of enjoying your precious memories in an immersive experience that places you in the scene. However, to take advantage of this feature, you had to take spatial photos and record spatial videos. Now, a similar feature is coming to iOS, allowing users to transform any picture they have into a 3D-like image that separates the foreground and background for a spatial effect. The best part is that you can add these photos to your lock screen, and as you move your phone, the 3D element looks like it moves with it. It may seem like there is no AI involved, but according to Craig Federighi, Apple's SVP of software engineering, it can transform your 2D photo into a 3D effect by using "advanced computer vision techniques running on the Neural Engine."

Using AI for workout insights isn't new, as most fitness wearable companies, including Whoop and Oura, have implemented a feature of that sort before. However, Workout Buddy is a unique feature and an application of AI I haven't seen before. Essentially, it uses your fitness data, such as history, paces, Activity Rings, and Training Load, to give you unique feedback as you are working out. Even though this feature is a part of the WatchOS upgrade -- and I don't happen to own an Apple Watch -- it does seem like a fun and original way to use AI. As someone who lacks all desire to work out, I can see that having a motivational reminder can have a positive impact on the longevity of my workout.
The list above is already pretty extensive, and yet, Apple unveiled a lot more AI features.
[5]
I Thought Apple Was Falling Behind in AI -- Then WWDC Changed My Mind
I recently wrote about how Apple's lagging AI technology might impact its device and OS market share. But as I sat at the company's headquarters in Cupertino and listened to the WWDC keynote, I came to the opposite conclusion. Apple falling behind Google and Microsoft might not matter much after all. In fact, I question whether the latest versions of iOS, iPadOS, and macOS will even meaningfully trail competitors once they fully launch in the fall. Yes, people are suing Apple for not delivering on its AI promises with new iPhones. I think this is a spurious, very first-world problem. And the tech giant is at least trying to be more honest now: At WWDC, Apple's senior vice president for software engineering, Craig Federighi, admitted that Apple Intelligence was not where the company wanted it to be yet. That's fine. Apple shouldn't release anything that doesn't have the polish and quality that its users expect. Importantly, the development, hardware, and privacy components are all still in place for Apple Intelligence to succeed. The company can still build a seamless AI experience that people genuinely want to use.

Apple Picked the Right Time to Fall Behind

Apple's more measured development of AI coincides with a societal change in attitude about the technology. Pundits on Bluesky talk about how no one really needs to use AI, and that it's just another money grab by tech bros and big tech. Instagram memes bemoan its excessive energy demands and water use. That's a big shift from the initial widespread euphoria over ChatGPT, and these are all legitimate issues. I primarily see AI as a beneficial tool for professional work (such as coding and medical research) and for handling otherwise tedious tasks. Apple seems to be taking a wholly pragmatic approach, with features such as screenshot information extraction and spam call detection. If Apple continues to make AI something people find helpful in daily interactions with their devices, then I think it can combat the shifting, more cynical views on the technology.

On-Device AI Has Its Advantages

Apple is also prioritizing on-device AI features, thus reducing the need for power-hungry data centers. AI models across the industry continue to get smaller and less resource-heavy, too, as evidenced by DeepSeek, Google's Gemma, and Microsoft's Phi. If Apple keeps applying development resources to local processing and optimization, it could win over people who otherwise consider the energy usage of AI models egregious. An efficient set of AI tools that arrives eventually is better than a rushed one that sucks up energy.

AI Development Is Fast, Market Share Changes Might Not Be

At its I/O conference, Google announced features for Gemini that resemble those Apple announced for Apple Intelligence in 2024. It said, for example, that Gemini could use information in your calendar, contacts, and email to take various actions, in combination with public data about traffic conditions and the like. Copilot+ PCs can perform actions for you via the Click to Do feature, but not to the same personalized extent as Google's Gemini. Both Copilot and Gemini can generate images and suggest writing improvements using local processing, but Apple has already delivered on those parts of its AI promises, so it's not really behind there. These are the kinds of features with which Apple can bring around naysayers.
Apple currently lets you tap into ChatGPT for more advanced generative capabilities, and at WWDC, the company announced that you would soon be able to use the chatbot for advanced image generation within Image Playground. This means you will be able to get more compelling visuals than the cartoon-like art it heretofore offered; think oil-painting-style or photorealistic images. However, you will need to connect to ChatGPT's servers to do so. As for Apple losing market share, dramatic changes are usually pretty slow. MacOS still takes a consistent and significant percentage away from Windows in the desktop OS market, and iOS still beats Android. Could things start to change if people don't buy into Apple's AI strategy? Certainly. Are Apple users likely to jump ship to Android and Windows over AI features? Maybe, but probably not. Apple can't be stagnant forever, but practical and socially positive AI features can help it stay ahead of any negative trends that emerge.

Apple Shouldn't Change Its Thoughtful AI Approach

What's the upshot of all this? It's just fine for Apple not to debut drastic AI features for a while (or maybe ever). If the best form of AI ends up being one that works seamlessly in the background, then Apple should just continue to introduce features only where it sees an actual need for them. In the meantime, users can acclimate to the new Liquid Glass interface that extends across platforms. Although I didn't find the OSes in particular need of a design update, I can appreciate the improvements. It's what people interact with all day, after all -- at least until AI takes things over for good.
[6]
Hold on. Did Apple Intelligence just become... good?
This year's WWDC may not have been an AI-centric keynote, but it has meaningfully boosted Apple Intelligence. Last month, I revealed that Google I/O's AI announcements made me question my loyalty towards the iPhone. Android users are getting all sorts of next-level, sci-fi-like features, while Apple Intelligence on iOS has generally been underwhelming. That kind of changed this week. WWDC25's main keynote, as expected, didn't really focus on AI advancements; I can't remember Siri being mentioned once outside of the opening mea culpa. Nevertheless, Apple previewed over a dozen AI features -- some subtle, others less so -- coming to iOS, iPadOS, macOS, and watchOS 26. What stood out to me is that, while not as jaw-dropping as those we've seen elsewhere, these handy tools will fit into most users' everyday digital lives. They're useful rather than showy. I had an epiphany: Apple isn't ahead or behind in the mainstream AI race; it's running on a separate track. Here are the AI upgrades from WWDC that you need to know about.

When Google previewed real-time translation in Meet during I/O, I was confident it would take Apple years to replicate it. I was so very wrong. As of the launch of iOS 26 and macOS 26, users will get access to Live Translation in calls and messages. While the language pairs are currently limited, the feature works fully offline, making it faster and more private than rivals' cloud-based approaches. Whether you're touring a foreign country or helping out a visitor, this addition will make communication easier and more intuitive.

Another useful addition launching in September is Call Screening. This AI-powered tool automatically answers calls from unknown numbers, asks callers to identify themselves and provide the reason they're calling, then neatly displays the information on your Lock Screen. You can decide whether it's a call you want to pick up or ignore accordingly. Similarly, the Messages app can now place texts from unknown numbers in a separate inbox -- unless it detects time-sensitive content. So, for example, authentication codes and reservation confirmations should come through, but not random, irrelevant texts.

Hold Assist is yet another AI-powered communication feature launching with iOS 26. Thanks to this handy tool, you can set your iPhone aside when put on hold, and it'll automatically alert you once you're connected to an agent. This spares you from wasting your time waiting in a digital queue.

This year Apple is introducing Cleanup Recommendations for iCloud Mail. Similar to iOS 18's email categorization feature, the tool scans your inbox and suggests ways to minimize the noise. The tips include deleting old promotions, unsubscribing from mailing lists, and more. Instead of manually going through endless emails to delete unwanted content, the system can spotlight the likely culprits so you can take quick action.

One thing I disliked about Visual Intelligence when it first launched was being limited to camera input. I often need to identify content on my screen, and the alternative was manually asking Siri to send my queries to ChatGPT. That was neither reliable nor intuitive. With iOS 26, Apple is bringing Visual Intelligence to screenshots. As with Google Lens, you'll be able to quickly inquire about on-screen matters without jumping through unnecessary hoops.

Apple is also bringing the AI goods to the Reminders app. With this year's releases, Apple Intelligence can recommend tasks you may want to add based on your emails and other indicators.
It can also separate relevant items in a list based on their category. Given Apple's acquisition of Mayday, it's safe to assume that Calendar and Reminders will only get more powerful in future updates.

Image Playground's launch last year was disastrous. From the questionable app icon to the nightmarish human animations, it just didn't feel like a finished product. With iOS, iPadOS, and macOS 26, Apple has switched to a more presentable icon. More importantly, it has acknowledged its shortcomings by baking ChatGPT into the app. So you can now create more polished cartoons using OpenAI's servers when Apple's models disappoint. Similarly, you can now tweak the appearance and facial expressions of humans in Genmoji stickers. This helps you get the exact look you're aiming for, without needing to get too specific with the text prompt. Speaking of AI images, iMessage's new background feature lets you opt for artificially generated graphics. This lets you create unique wallpapers that match the vibe of a conversation. Hopefully Apple extends this useful tool to the system's wallpaper in a future update.

Another underrated yet extremely powerful new feature is support for AI actions in the Shortcuts app. You can use Apple's Private Cloud Compute, on-device models, or ChatGPT to tweak text, get answers, generate images, and much more when building shortcuts and automations. While it may not mean much to casual users, the possibilities are truly endless for tech-savvier folk.

Spotify has long offered an AI DJ feature, and Apple Music appears to be heading down a similar road. iOS 26 introduces a new AutoMix feature that optionally replaces the traditional cross-fade lasting a preset number of seconds. Instead, the new tool analyzes songs in your queue and "crafts unique transitions between songs with time stretching and beat matching to deliver continuous playback and an even more seamless listening experience." Apple Music on iOS 26 similarly gets a taste of Live Translation, letting you view real-time English lyrics for foreign songs.

While the Apple Watch doesn't generally get a lot of AI love, watchOS 26 does bring one exclusive Apple Intelligence feature to your wrist. Workout Buddy delivers motivational messages based on your previous exercises, health data, and achievements. This could push interested users to commit to their active lifestyles and aim higher during workouts.

Lastly, Apple is integrating ChatGPT into Xcode with macOS Tahoe. The feature could become invaluable for developers who need assistance with building and debugging. That's not to mention that devs can now integrate Apple Intelligence into their own apps.

Until a few days ago, I was quite skeptical about Apple's AI efforts and overall direction. The company seemed clueless compared to the competition -- especially after the Siri 2.0 delay. Now, however, I'm starting to see the bigger picture. Apple is choosing to build AI features that make sense within users' daily routines; they're not wacky gimmicks that show off the power of an artificial brain. While Google and Samsung rush to overload their phones with AI, Apple is testing the waters with reliable solutions that cater to customers' needs. Siri in its current state is unfit for purpose. But AI is much more than just a chatbot -- and a future release is bound to clean up its mess. All the company needs to do is release a dozen foolproof AI features per year, and Apple Intelligence on iOS will naturally mature and feel more comprehensive down the road.
Apple's approach to AI, branded as Apple Intelligence, is evolving with new features and integrations across its devices and operating systems, balancing innovation with privacy and user experience.
Apple has been steadily developing its AI capabilities, branded as Apple Intelligence, to compete with tech giants like Google, OpenAI, and Anthropic. Introduced in October 2024, Apple Intelligence aims to leverage generative AI to enhance existing features across Apple's ecosystem [1]. The company's approach focuses on integrating AI seamlessly into its operating systems and applications, rather than creating standalone AI products.
Apple Intelligence introduces several noteworthy features:
Writing Tools: Powered by a large language model (LLM), this feature is available across various Apple apps, including Mail, Messages, and Pages. It offers text summarization, proofreading, and message composition based on content and tone prompts [1].
Visual Intelligence: This feature allows users to perform image searches for objects they see while browsing. It has been upgraded to work directly on the iPhone screen, enabling users to interact with screenshots and ask ChatGPT for assistance with image-related queries [3][4].
Live Translation: A new real-time translation feature has been introduced for text in Messages and audio on FaceTime and phone calls, facilitating seamless communication across language barriers [4].
Siri Improvements: The smart assistant has received a significant upgrade, with a new visual interface and improved contextual awareness. It now works across apps and can perform more complex tasks, such as editing photos and inserting them into text messages [1].
Apple's approach to AI emphasizes privacy and user experience. The company prioritizes on-device AI features, reducing the need for power-hungry data centers and addressing concerns about data privacy [5]. This strategy aligns with Apple's longstanding commitment to user privacy and could give the company an edge in the AI race.
At WWDC 2025, Apple announced that developers would have access to its on-device AI large language model (LLM) [3]. This move is significant as it opens up new possibilities for third-party app developers to create innovative AI-powered features within the Apple ecosystem.
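To illustrate what that access can look like, Apple's sessions also showed a "guided generation" pattern in which the model fills in a typed Swift value rather than returning free text. The sketch below is hedged: the `@Generable` and `@Guide` macros and the `respond(to:generating:)` method follow Apple's WWDC 2025 materials, but exact spellings may differ in shipping SDKs, and the `TripIdea` type is invented for this example.

```swift
import FoundationModels

// Hypothetical guided-generation sketch: the on-device model returns
// a structured Swift value instead of prose, so no parsing is needed.
@Generable
struct TripIdea {
    @Guide(description: "A short, catchy title for the day trip")
    var title: String
    @Guide(description: "Three stops, in visiting order")
    var stops: [String]
}

func planTrip(near city: String) async throws -> TripIdea {
    let session = LanguageModelSession()
    let response = try await session.respond(
        to: "Suggest a one-day trip near \(city).",
        generating: TripIdea.self
    )
    return response.content  // Already a typed TripIdea, built on-device.
}
```

For third-party apps, this is the significant part: structured output from a free, local model turns "add AI" from a product decision into an ordinary feature decision.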
Despite the advancements, Apple Intelligence has faced some criticism. The platform got off to a rocky start, with issues such as misleading message summaries and delayed Siri improvements [2]. Additionally, some highly anticipated features, like a more personalized version of Siri, have been delayed due to quality concerns [1].
While Apple may have seemed to lag behind competitors in the AI race initially, the company's measured approach and focus on practical, privacy-conscious AI features could prove beneficial in the long run. The integration of AI across Apple's devices and operating systems, combined with the company's strong market position, suggests that Apple Intelligence could have a significant impact on the AI landscape [5].
[1] https://techcrunch.com/2025/06/10/apple-intelligence-everything-you-need-to-know-about-apples-ai-model-and-services/
[2] https://www.cnet.com/tech/services-and-software/forget-ios-26-jump-on-these-6-apple-intelligence-features-right-now/
[3] https://www.zdnet.com/article/how-apple-just-changed-the-developer-world-with-this-one-ai-announcement/
[4] https://www.zdnet.com/article/the-7-best-ai-features-announced-at-apples-wwdc-that-i-cant-wait-to-use/
[5] https://www.pcmag.com/news/wwdc-2025-changed-my-mind-apple-intelligence-ai
Summarized by Navi