3 Sources
[1]
7 AI features the iPhone 17 needs to keep up with Google, OpenAI, and others
The new AI camera features in the Pixel 10 could be the biggest differentiator between it and the iPhone 17. While the iPhone has virtually all of the smartest AI apps from the latest AI trailblazers, it lacks the deep integration of AI features that is only possible at the intersection of the operating system and the latest hardware. That's what we've seen with the rollout of Google's Pixel 10 lineup. Here are seven features from AI leaders that would make a huge impact if they were seamlessly embedded at the system level in the iPhone 17.

OpenAI's Voice Mode in ChatGPT essentially works the way I've always wanted Siri to work on the iPhone. You just fire it up and start talking in natural language, and it can answer questions, pull up information, and even carry out a few actions. ZDNET's Sabrina Ortiz has explained how to assign Voice Mode to the iPhone's Action Button to use it as a Siri replacement. But Voice Mode -- which is being renamed ChatGPT Voice and is soon rolling out to free users -- is still limited in the commands it can carry out on your iPhone. An Apple version of this feature, or a partnership with OpenAI, could allow much deeper integration across calendar, email, text messages, notes, settings, and other operating system tasks (with Apple privacy protections in place). Google already has Gemini Live and Microsoft offers Copilot Voice, so Apple needs to move decisively to help the iPhone keep up.

I've written about how much I love zoom photography and how it's the one area where phone cameras still fall down, forcing me to regularly turn to my Sony mirrorless camera and 70-200mm zoom lens. However, Google has recently taken a big step toward filling that gap with the Pixel 10 Pro.
With its new Pro Res Zoom feature, the Pixel 10 Pro fills in missing data and automatically processes a digital zoom image up to 100x to make it more usable. This raises questions about what makes a photo, and I still need to try it on the Pixel 10 Pro to report back on how well it works, but it feels like a worthy use of computational photography. And the only smartphone maker that's going to compete with Google on computational photography is Apple.

Last year at WWDC 2024, Apple made a big deal about its Personal Intelligence feature, which could understand your questions and requests because it had information about you from your calendar, mail, text messages, and other data stored privately in the Apple ecosystem. In the WWDC keynote, Apple used examples like "Pull up the files Joz shared with me last week" and a real-time alert that a meeting you're about to reschedule could conflict with giving your kid a ride to a regular activity. Apple has never shipped this feature -- but now Google has. In the Pixel 10, Google launched Magic Cue, which can save you from jumping between apps by knowing enough about you to cue you with the right info. In one example Google provided, someone texts to ask what time dinner reservations are, and Magic Cue uses info from a Gmail confirmation message to surface the answer right in the messaging app; the user simply taps it to send a response. Google says this kind of action can now happen locally on the device thanks to the Tensor G5 chip in the Pixel 10. Still, I think more people would trust Apple with their privacy on a feature like this, because Apple doesn't make money off of using your data in opportunistic ways.

One of the biggest ways generative AI saves me time is as a research assistant.
Several AI apps now offer a Deep Research feature: you ask an important question about a complex topic and give the AI extra time (usually 5-30 minutes) to scour available sources and come back with an answer that includes clearly marked links to where the info came from. I prefer to use Deep Research in Anthropic's Claude app because of its focus on accuracy. There have been many reports that Apple has been in talks with Anthropic about various collaboration opportunities. Integrating Claude's Deep Research into Siri so that you could trigger it quickly from a voice or text prompt would be a powerful option.

Google first launched its Best Take feature on the Pixel 8 in 2023 and recently gave it another big upgrade on the Pixel 10. The feature came out of a collaboration among the Google Pixel, Google Photos, and Google Research teams to solve the "group shot dilemma." It uses multiple photos of a group taken back-to-back -- where not everyone has their eyes open, is looking at the camera, or is making a natural expression -- and combines everyone's best take into a more usable photo. The new Auto Best Take on the Pixel 10 does this in the background and produces the Best Take photo for you. Similarly, there's the Add Me feature (launched on the Pixel 9), which uses AR and AI in clever ways to let the photographer be added to the group shot by essentially combining two photos, guided by the camera app. It's reasonable to expect that Apple has the computational photography chops to pull this off, or the relationship with Google to license the technology, especially since it's built into the Google Photos app that's already available on iOS.
One of the most advanced capabilities of large language models is translating between languages, and we've seen not only smartphones take advantage of this but smart glasses as well -- including Meta Ray-Bans, Solos AirGo 3, Even Realities G1, and the Frame from Brilliant Labs. Some of these smart glasses, along with several phone apps, can now translate dozens of languages (Google Translate supports over 100). Apple still lags behind, supporting only 20 languages in Apple Translate. By tapping into the power of LLMs, Apple could boost the number of supported languages considerably and integrate them into Siri and other AI features, such as Live Translation in phone calls and text messages, and Visual Intelligence.

Perhaps the biggest surprise in the new Pixel 10 is its Conversational Editing feature in Google Photos. You describe the changes you'd like to make to a photo, and the AI goes in and makes them. For example, you could have it move the subject in a scene, remove glare or reflections, re-center an object, replace the background, add clouds to a blue sky, increase or decrease background blur, and more. Of course, altering photos can be sensitive. On LinkedIn, Google's product lead for computational photography noted, "We have tuned our models to be hypersensitive to small details in the photo so that it reflects the context you want to keep with the changes you want to make." I suspect this is going to be a very popular feature, since it is super easy to access and doesn't require the advanced technical skills these kinds of photo edits used to demand.
Apple has a lot of work to do to catch up with the features that the leading AI companies are bringing to their iPhone apps -- let alone the deep AI integration that Google is now bringing to key features on its Pixel phones. While the delay in rolling out Apple Intelligence features may not have seemed to hurt the iPhone during the past year, Apple will need to close the gap to avoid the iPhone 17 feeling like a device that's a step behind. As of right now, Google can make a pretty strong case that it's now got the smartest phone in the industry.
[2]
7 AI features I'd like to see the iPhone 17 embrace from Google, OpenAI, and others
[3]
Time's Ticking! This Will Make or Break the iPhone 17
As it stands right now -- even with the release of the iOS 26 public beta, and now the latest iOS 26 developer beta -- Apple Intelligence features are starting to feel dated against the competition. And I'm not just saying that: I've tested the new set of Galaxy AI features that launched with the Galaxy S25 Ultra earlier this year, and the slew of new Google AI features on the Pixel 10 is looking very promising. If Apple wants us to buy the next generation of its hardware, it can't afford a weak Apple Intelligence showing with the iPhone 17. Here's why.

When Apple Intelligence features first rolled out last year alongside iOS 18.1, Apple was already playing catch-up to its rivals. Don't get me wrong, I was just as thrilled as everyone else when they finally launched, but once I broadened my horizons with other platforms, Apple Intelligence suddenly didn't seem so groundbreaking. That's indicative of how I feel about Apple Intelligence as a whole: it's constantly playing catch-up.

One thing I've come to enjoy with the best Android phones is how I can use the Gemini app for more multimodal AI experiences. Take the Motorola Razr Ultra (2025) and Samsung Galaxy Z Flip 7, two of the best foldable phones around right now, which perfectly showcase the power of multimodal AI. Not only can I have a conversation with Gemini Live, but this AI tool goes to the next level by tapping into the camera to see what I see. Visual Intelligence is somewhat similar, but it's much more limited in what it can do. Yes, I can use Visual Intelligence to learn more about a restaurant I want to dine at, and iOS 26 extends those capabilities to onscreen searches on my iPhone, but it lacks Gemini's native multimodality and advanced reasoning.
When the power went out in my home, Gemini Live inspected my circuit breaker to see if anything was wrong -- and when it noticed a breaker was tripped, it walked me through resetting it. That is a practical, real-world application of the power of AI. I don't want Apple Intelligence to match that; I want it to exceed it.

I capture a lot of photos for work, often so that I can pit the best camera phones against one another in our photo face-offs. While the iPhone 16 Pro Max has performed very well, Apple Intelligence could make the iPhone 17 cameras even better. Samsung has leaned on Galaxy AI to add new features to its phones, like using generative AI to convert standard videos into slow motion. Likewise, the recent Pixel 10 reveal showed me how AI is having more of an effect on how people capture content -- and on how they look, too. Take the Pixel 10's new Camera Coach feature, which uses Gemini to guide users on how to capture a scene with on-screen instructions. It's like having a professional photographer right there giving you advice on how to frame the shot and adjust the exposure. There's also generative AI in Pro Res Zoom, a feature exclusive to the Pixel 10 Pro and Pixel 10 Pro XL, which enhances zoom photos with a little help from AI. Apple currently doesn't have any Apple Intelligence features tied specifically to the in-camera experience; it simply relies on hardware and image-processing algorithms to get the best results, and those won't be enough to save the iPhone 17.

Whatever happens at the rumored September iPhone event, Apple Intelligence can't afford a weak showing. Apple is already behind Google and Samsung in the number of AI features it offers, but it can pull ahead if it finally brings us new and innovative ideas around Apple Intelligence.
As Apple prepares for the iPhone 17, it faces pressure to integrate advanced AI features to compete with Google's Pixel and Samsung's Galaxy devices. The article explores potential AI enhancements for the iPhone 17, drawing inspiration from competitors and AI leaders.
As the tech world eagerly anticipates the release of the iPhone 17, Apple faces a significant challenge in keeping pace with its competitors in the realm of artificial intelligence (AI). While the iPhone has access to many third-party AI applications, it lacks the deep integration of AI features at the system level that rivals like Google and Samsung are now offering [1][2].
One area where Apple needs to make strides is in voice assistance. OpenAI's ChatGPT Voice Mode has set a new standard for natural language interaction, functioning in ways that many users have long desired from Siri [1]. An Apple version of this feature, or a partnership with OpenAI, could allow for deeper integration across the iPhone's core functions, including calendar, email, and messaging, while maintaining Apple's commitment to privacy [2].
Google's Pixel 10 Pro has made significant progress in computational photography, particularly with its Pro Res Zoom feature. This AI-powered capability can enhance digital zoom images up to 100x, making them more usable [1]. Apple, known for its prowess in smartphone photography, will need to respond with its own AI-driven enhancements to maintain its competitive edge in this crucial area [3].
Source: Tom's Guide
Apple previewed a Personal Intelligence feature at WWDC 2024, promising to understand user queries based on personal data stored within the Apple ecosystem. However, Google has already implemented a similar feature called Magic Cue in the Pixel 10 [1]. Apple's challenge will be to deliver on its promise while emphasizing its superior privacy protections [2].
Source: ZDNet
Integrating an AI research assistant, similar to Anthropic's Claude app with its Deep Research feature, could significantly enhance the iPhone's utility. Such a feature could allow users to quickly access in-depth information on complex topics, complete with source citations [1][2].
Google's Best Take and Add Me features use AI to solve common photography problems, such as group shots with closed eyes or missing photographers [1]. Apple could potentially develop similar features or partner with Google to bring these capabilities to iOS, given that Google Photos is already available on the platform [2].
Android devices are leveraging multimodal AI experiences through apps like Gemini, which can interact with users through text, voice, and camera input. Apple's Visual Intelligence feature currently offers more limited functionality in comparison [3]. To stay competitive, Apple may need to develop more advanced multimodal AI capabilities for the iPhone 17.
While Apple has traditionally relied on hardware and image processing algorithms for its camera performance, competitors are increasingly integrating AI directly into the camera experience. Features like Camera Coach on the Pixel 10, which provides real-time photography guidance, showcase the potential for AI to enhance the user experience beyond mere image quality improvements [3].
As the smartphone market becomes increasingly AI-driven, the success of the iPhone 17 may hinge on Apple's ability to innovate in this space. With competitors already showcasing advanced AI features, Apple faces pressure to not just match but exceed these capabilities while maintaining its commitment to user privacy and seamless integration within its ecosystem [1][2][3].