5 Sources
[1]
Apple Intelligence hasn't lived up to my expectations, but these 3 upgrades could win me back
With WWDC 2025 around the corner, Apple has an opportunity to stage an AI comeback. Here's what it needs to show us.

Apple finally entered the AI race at last year's Worldwide Developer Conference when it revealed Apple Intelligence. However, some of the biggest updates announced at WWDC 2024 -- such as a new and improved Siri and an AI that's aware of your personal context from your daily phone use -- have yet to deploy, leaving users frustrated.

Still, I think there is hope. Apple has done a lot well with the limited features it has shipped -- and offered a promising glimpse of what's to come. For example, many of the new features -- including Genmoji, voice memo transcriptions, and photo clean-up -- are useful and easy to access, while also not being forcefully pushed to iOS users.

Also: Forget Siri: Apple Intelligence's true potential on iPad and Mac lies in third-party apps

Most importantly, Apple's A18 chip provides the iPhone 16 models with the infrastructure and compute necessary for Apple to support more compute-heavy AI features, while keeping Apple's promise of on-device processing that can preserve your information's privacy and security. Although the full Apple Intelligence suite of tools has yet to be revealed, the foundation is there. Here are three features that would make me a believer in Apple Intelligence.

While Siri has a new look, with the screen glow showing up every time it's activated and a new way to interact with the AI via text, it's still trailing behind most AI voice assistants. The biggest perk of conversational chat with a voice assistant is having it provide you -- almost instantly -- with feedback on anything you may be thinking of, from simple tasks such as the weather and notifications to more complex ones such as advice and math problems. Siri doesn't yet have the knowledge or intelligence to support this breadth of assistance.
The actions it can perform for you are still pretty limited, and the more advanced conversational prompts require a reliance on ChatGPT. Because ChatGPT is a third-party application, there's a time lag between sending the message to the chatbot and Siri answering. There's room for Apple to remove some reliance on ChatGPT and give its own assistant some TLC so that it can do more, both in terms of knowledge and actions, which leads me to the next point.

Agentic AI, which takes AI assistance one step further by taking actions on your behalf to carry out a task with minimal intervention, has been the biggest AI trend since last year's WWDC. Interestingly enough, at WWDC 2024, before AI agents were all the buzz, Apple said that Apple Intelligence would be able to carry out tasks on your behalf, such as "Pull the files that my coworker shared with me last week."

Also: What are AI agents? How to access a team of personalized assistants

Last year, agentic AI was a cutting-edge feature. This year, rolling it out as quickly as possible is the least Apple can do to keep up and provide that extra something users are looking for. Microsoft, Google, and Anthropic all held their annual developer conferences last week, and all of them unveiled agentic AI products. Apple Intelligence was originally positioned as a "personal assistant," and I think adding this agentic functionality can bring it a step closer to that reality.

The concept of Apple Intelligence being grounded in your personal information and context, retrieving data from across your apps, and referencing the content on your screen was a unique and helpful approach to implementing AI into an ecosystem of phones. While it would be amazing if Apple could ship this feature promptly, it is a relatively complex undertaking. I foresee further delays.
In the meantime, Apple could announce another unique feature that is less of an undertaking, such as the rumored revamped Health app featuring an AI agent meant to replicate the insights a doctor can give patients based on their biometric data. Whatever Apple announces, the feature needs to be ready to ship; after this extended waiting period, delivering on day one is key to restoring lost user confidence.

Also: Apple's AI doctor will be ready to see you next spring

A combination of the features above would also give users who purchased a new A18-powered phone specifically for Apple Intelligence a return on their investment beyond the phone's basic functionalities.
[2]
After Google IO's big AI reveals, my iPhone has never felt dumber
As Apple breaks its Siri promises, Google boldly builds the future of AI.

I can't believe I'm about to state this, but I'm considering switching from iOS to Android. Not right now, but what I once considered an absurd notion is rapidly becoming a realistic possibility. While Apple may have an insurmountable lead in hardware, iPhones and Android phones are no longer on par with each other when it comes to AI and assistants, and the gap is only growing wider. At its annual I/O conference on Tuesday, Google didn't just preview some niche AI gimmicks that look good in a demo; it initiated a computing revolution that Apple simply won't be able to replicate anytime soon, if ever.

The first thing I noticed during the main I/O keynote was how confident the speakers were. Unlike Apple's canned Apple Intelligence demo at last year's WWDC, Google opted for live demos and presentations that reflect its strong belief that everything just works. Many of the announced features were made available on the same day, while others will follow as soon as this summer. Google didn't (primarily, at least) display nonexistent concepts and mockups or pre-record the event. It likely didn't make promises it can't keep, either.

If you have high AI hopes for WWDC25, I'd like to remind you that the latest rumors suggest Apple will ignore the elephant in the room, possibly focusing on the revolutionary new UI and other non-AI goodies instead. I understand Apple's tough position -- given how last year's AI vision crumbled before its eyes -- but I'd like to think a corporation of that size could've acquired its way into building a functional product over the past 12 months. For the first time in as long as I can remember, Google is selling confidence and accountability while Apple is hiding behind glitzy smoke and mirrors.

A few months ago, Apple added ChatGPT to Siri's toolbox, letting users rely on OpenAI's models for complex queries. While a welcome addition, it's unintuitive to use.
In many cases, you need to explicitly ask Apple's virtual assistant to use ChatGPT, and any accidental taps on the screen will dismiss the entire conversation. Without ChatGPT, Siri is just a bare-bones voice command receiver that can set timers and, at best, fetch basic information from the web.

Conversely, Google has built an in-house AI system that integrates fully into newer versions of Android. Gemini is evolving from a basic chatbot into an integral part of Google's ecosystem. It can research and generate proper reports, video chat with you, and pull personal information from your Gmail, Drive, and other Google apps. Google also previewed Project Astra, which will let Gemini fully control your Android phone, thanks to its agentic capabilities. It's similar to the revamped Siri with on-screen context awareness (which Apple is reportedly rebuilding from scratch), but much more powerful. While, yes, it's still just a prototype, Google has seemingly delivered on last year's promises. Despite Google's infamous habit of killing and rebranding projects, I actually believe its AI plans will materialize, because it has been constantly shipping finished products to users.

Unlike Apple, Google is also bringing some of its AI features to other platforms. For example, the Gemini app for iPhone now supports the live video chat feature for free. There are rumors that Apple will open up some of its on-device AI models to third-party app developers, but those will likely be limited to Writing Tools and Image Playground. So even if Google is willing to develop more advanced functionalities for iOS, Apple's system restrictions would throttle them. Third-party developers can't control the OS, so Google will never be able to build the same comprehensive tools for iPhones.

Google's AI plan doesn't strictly revolve around its Gemini chatbot delivering information. It's creating a new computing experience powered by artificial intelligence.
Google's AI is coming to Search and Chrome to assist with web browsing in real time. For example, Gemini will help users shop for unique products based on their personal preferences and even virtually try clothes on. Similarly, other Google AI tools can code interfaces based on text prompts, generate video clips from scratch, create music, translate live Meet conferences, and so on. Now, I see how dystopian this all can be, but used responsibly, it will be an invaluable resource for students and professionals.

Meanwhile, what can Apple Intelligence do? Generate cartoons and proofread articles? While I appreciate Apple's private, primarily on-device approach, most users care about the results, not the underlying infrastructure.

During I/O, Google shared its long-term vision for AI, which adds robotics and mixed-reality headsets to the equation. Down the road, the company plans to power machines using the knowledge its AI is gaining each day. It also demoed its upcoming smart glasses, which can mirror Android phone alerts, send texts, translate conversations in real time, scan surrounding objects, and much, much more. While Apple prioritized the Vision Pro headset no one asked for, Google has been focusing its efforts on creating the sleek, practical device users actually need -- a more powerful Ray-Ban Meta rival. Before long, Android users will be rocking stylish eyewear and barely using their smartphones in public. Meanwhile, iPhone users will likely be locked out of this futuristic experience, because third-party accessories can't read iOS notifications and interact with the system in the same way.

iOS and Android launched as two contrasting platforms. At first, Apple touted its stability, security, and privacy-first approach, while Google's vision revolved around customization, ease of modding, and openness. Throughout the years, Apple and Google have been learning from each other's strengths and applying the needed changes to appease their respective user bases.
Recently, it seemed like the two operating systems were finally converging: iOS had become more personalizable, while Android deployed stricter guardrails and privacy measures. However, the perceived overlap only lasted for a moment -- until the AI boom changed everything.

The smartphone as we know it today seems to be fading away. AI companies are actively building integrations with other services, and that's changing how we interact with technology. Mobile apps could become less relevant in the near future, as a universal chatbot would perform the needed tasks based on users' text and voice prompts. Google is slowly setting this new standard with Android, and if Apple can't keep up with the times, the iPhone will face the same fate as so many Nokia and BlackBerry phones. And if Apple doesn't act fast, Siri will be a distant memory.
[3]
Apple has gotten in the way of its own AI rollout -- here's how it can get Apple Intelligence back on track
During the Android Show: I/O Edition, Google revealed plans to bring Gemini to even more devices in the coming months. This is great news for Gemini fans, and it means a wealth of new features are coming to Wear OS, Google TV, and more. I have no doubt this will be a major boost for fans, but it really hits home just how much Apple got in its own way regarding the release of Apple Intelligence.

The simple fact is that, due to a single hardware choice made years ago, Apple has made releasing its AI ten times more difficult than it needed to be. However, that doesn't mean that Apple is completely out of the game, as the company is doing more than a few things that might hold a solution. The first thing we need to do, however, is look at why Apple is having so many issues and what exactly was the choice that made life so difficult.

The issue that Apple has is relatively simple: for the longest time, iPhones were released with around 6GB of RAM. This allowed Apple to keep the cost of the phone down while relying on other factors to keep pace with some of the best Android phones. Several factors allow the best iPhones to remain leagues above other devices in terms of performance. The first is that iOS compiles its apps into native code, which means they run directly on the device without needing to be interpreted; the apps are much more efficient and, as such, use less RAM. On top of that, iOS uses a much more aggressive memory management system that is more likely to free up RAM from apps that are no longer in use. Finally, Apple has a lot more control over the hardware and software used in its devices, which makes it much easier to optimize memory performance. All of this comes together to make Apple devices capable of doing much more with less. However, as we have seen with the release of Apple Intelligence, this approach has had some major downsides too.
While Apple's design means iOS functions well with less RAM, Apple Intelligence resoundingly doesn't. Here's the thing: most of Apple Intelligence's features, especially those built on a large language model (LLM), require a lot more RAM than most iPhones have. Now, this might have been less of an issue, but Apple decided to run its AI on-device rather than in the cloud. This has some great benefits, primarily enhanced security due to data being kept local, as well as improved overall performance with much lower latency. However, it also means that devices with less than 8GB of RAM simply cannot run Apple Intelligence, which leaves a lot of customers out in the cold. Google Gemini, meanwhile, is a cloud-based system, meaning it can run on the majority of devices with a minimum of 2GB of RAM.

To be fair to Apple, cloud-based processing has its downsides, namely that it's usually slower and less secure, as the data can be intercepted. However, what is painfully evident is that cloud-based processing was a much better starting point than on-device. The issue is that Apple can't go backwards to make older devices compatible with Apple Intelligence. Meanwhile, apps like Google Gemini and ChatGPT can, and do, work on older iPhones. As such, you have a horde of customers whose only experience with AI has nothing to do with Apple.

Apple might be in a bit of a tough spot, but there are some solutions. One of the most interesting we've heard about is that Apple is releasing a software development kit that gives app makers access to many of its AI models. This will help developers better integrate Apple's AI into their apps, as well as allow them to create new ones. We're also seeing that Apple is working to increase the RAM across its range to future-proof its devices against this kind of issue happening again. For instance, the iPhone 17 series is rumored to feature up to 12GB of RAM.
Meanwhile, Apple just released the iPad Air M3, which features 8GB of RAM. Finally, there's something to be said for allowing users to install Gemini on their devices, even having it as an option to replace Siri. Here's the thing: Apple is relatively new to the AI game, so having one of the more successful AIs available in a place where Apple can track what people are interested in and which features they use the most could likely be a big help.

As it stands, we're going to have to wait until iOS 19 to see if Apple can solve some of its issues, especially with Siri. However, it's clear that Apple is on the back foot at the moment, and it's going to take something special for it to keep up.
[4]
Google I/O showed off AI done right -- hopefully, Apple was paying attention
As Google CEO Sundar Pichai wrapped up the company's Google I/O 2025 keynote this past week, you could be forgiven if, amid the sound of all those AI announcements made during the show, you thought you could hear the faint sound of Apple crying out "uncle."

Apple, which has AI-fueled ambitions of its own, isn't about to throw in the towel on its push to get more Apple Intelligence features into its products. In fact, when Apple convenes its own developers conference next month, AI figures to take up a lot of the focus at WWDC 2025. But as I sat in the Shoreline Amphitheatre last Tuesday, listening to Google outline one new AI initiative after another, I couldn't help but think of Apple -- specifically, how far behind Google the Cupertino company is with its own AI efforts.

Consider this: the Google I/O keynote lasted two hours, and nearly every moment of it focused on something Google was doing involving AI. And not just pie-in-the-sky proclamations, either -- Google showed off real features that it's either making available now or rolling out very shortly across a wide swath of its products. Google's AI focus at I/O was so relentless, it had to move Android announcements to an entirely separate live stream. And it's not like Android is some quaint side business at Google. Apple is not going to have that dilemma at WWDC.

I'm not a believer that technology is a zero-sum game -- that just because Google has been rather successful at making AI its primary focus, Apple might as well give up the game with Apple Intelligence. But I don't think it's unkind to Apple to describe the initial launch of the company's suite of AI tools as uneven, especially when compared to the more polished offerings that Google has on display. And it's worthwhile to examine why that is. Part of the issue stems from the fact that Google's been at this a while now, whereas Apple got a fairly late start in building out the large language models that power many AI features.
It'd be like comparing racers, where one is just getting set at the starting line while the other is about to complete its first lap. It's also quite clear that while Google has many different parts of its business, from search to hardware to productivity tools, AI is now part of most of those efforts. During I/O alone, Google announced a new AI-specific tab for its search engine, unveiled an AI-powered real-time translation feature for its Google Meet video chat software, introduced personalized email replies to Gmail that draw on past messages, and made it possible to link up Gemini with all your Google app activity for a more personalized experience using Google's chatbot. Make no mistake about it -- Google is an AI company now, while Apple offers devices and services with some AI features.

I'm also struck by how practical a lot of the advances Google announced this past week are, and I say that as someone who's ambivalent about the tech industry's push to incorporate AI into everything. For example, Google announced a new version of its Veo video generation tool that can now use your text prompt to add sound to the video it's spun into existence. I have reservations about how people might use that capability -- or even whether they should -- but I can certainly see the appeal to someone in the business of mass-producing creative content.

Apple Intelligence features are more geared toward individuals, to be fair, so they're not going to have the impact of something like Google's Veo or the Imagen image generation tool, which also saw a new version. Still, you look at things like Image Playground or Genmoji on an iPhone, and apart from whipping up something to amuse your friends in text messages, there doesn't seem to be the larger practical use that Google is focusing on. But I think the starkest AI difference between Apple and Google boils down to the state of their assistants.
Efforts to bolster Siri for Apple Intelligence have stalled, while the Gemini assistant seems to be at the heart of everything Google does -- including products that haven't been released yet. I got a chance to try on a prototype of Google's latest smart glasses built on the Android XR platform. Right now, the primary way you interact with those glasses is by talking to Gemini. The assistant can see the same things you're looking at and even take action on those things -- recommending nearby sushi restaurants after I looked at a photo of sushi in a book on Japanese cuisine, for example. Similarly, Google shared an update on Project Astra, featuring a video of a man repairing a bike with the help of a digital assistant that could look up online repair manuals, identify specific parts in a workshop, and call around to find replacement parts at nearby bike shops. I'm not sure how close the scenario in that video is to being a reality, but it seems a lot further along than the features Apple touted for Siri a year ago, which likely won't arrive until after the iOS 19 launch later this year.

This isn't to suggest that Apple's AI efforts are doomed before they get started. Rather, instead of feeling deflated by all the AI progress Google showed off at I/O, I hope Apple draws some lessons from it. Obviously, Apple needs to get its Siri house in order. Reportedly, the company is moving away from the hybrid version of its assistant to a more LLM-driven approach -- in other words, more chatbot-like. But that's going to take some time. I would hope that Apple uses WWDC 2025 to be honest about when we can expect a Siri reboot, but a Bloomberg report suggests Apple is going to play down that element of its AI efforts. If true, that's a bummer.

More encouraging are some of the rumored Apple Intelligence features that have leaked out. I'm talking about an AI-based health coach with recommendations on improving health, or using AI to optimize iPhone battery performance.
Those aren't the flashiest implementations of AI, but they could mean everyday improvements in how we use our Apple devices. But Siri will be the key to Apple's AI revival, just as Gemini is driving things for Google right now. Until Apple can make its own assistant just as reliable, it's always going to be the opening act to Google's more polished AI headliner.
[5]
At I/O, Google Just Shipped Apple's AI Promises
It's a familiar idea: Apple promised something similar a year ago when it rolled out Apple Intelligence. And by all accounts, Apple should be leading in AI -- but it has struggled to improve its Siri assistant, even setting new AI features aside. Meanwhile, Google seems closer than ever to delivering on that promise in a way that could actually matter to everyday users.

The most notable part of Google's keynote wasn't a flashy hardware announcement or a dramatic reveal. It was a subtle shift in how AI fits into the tools people already use. "With your permission" (a phrase the company repeated a number of times), Google says Gemini can now respond with more nuance by using context pulled from your inbox, calendar, and files. So, for example, when replying to an email, it can automatically adjust tone based on the recipient, reference relevant documents, and even draft a summary that sounds more like something you'd write.

Another example was Gemini Live, a real-time conversational assistant designed to replace traditional voice assistants like Siri and Alexa. Instead of issuing one-off voice commands, users can now speak naturally with Gemini, interrupt it mid-sentence, ask follow-up questions, and even reference things on screen.
Google's recent AI announcements at I/O 2025 have highlighted the growing gap between Google's AI capabilities and Apple's delayed Apple Intelligence rollout, prompting concerns about Apple's AI strategy and its ability to compete in the evolving tech landscape.
Google's I/O 2025 conference has set a new benchmark in artificial intelligence (AI) development, showcasing a range of practical and innovative features across its product ecosystem. The event highlighted Google's commitment to AI integration, with nearly every moment of the two-hour keynote focused on AI initiatives [1]. This comprehensive approach stands in stark contrast to Apple's delayed and limited rollout of its Apple Intelligence features.
Source: Tom's Guide
Apple's entry into the AI race with Apple Intelligence at WWDC 2024 has been met with disappointment, as many promised features, including an improved Siri and context-aware AI, have yet to materialize [2]. The company's decision to prioritize on-device processing for privacy and security has inadvertently created obstacles for widespread AI implementation, particularly on older devices with limited RAM [3].
The disparity between Apple's Siri and Google's Gemini assistant is becoming increasingly apparent. While Siri struggles with basic tasks and relies on third-party applications like ChatGPT for more complex queries, Gemini is evolving into an integral part of Google's ecosystem [4]. Google's Project Astra demonstrates advanced agentic capabilities, allowing Gemini to control Android devices and perform complex tasks with minimal user intervention [1][4].
Source: Macworld
Google's approach to AI integration focuses on cloud-based processing, enabling compatibility with a wide range of devices and facilitating rapid feature deployment [3]. In contrast, Apple's commitment to on-device processing for enhanced privacy has limited the scope and speed of its AI rollout [2]. This fundamental difference in strategy has allowed Google to push forward with more ambitious AI projects while Apple grapples with hardware limitations.
The advancements showcased at Google I/O 2025 promise to revolutionize user interactions with technology. Features like real-time translation in Google Meet, personalized email replies in Gmail, and the integration of Gemini with Google app activity demonstrate practical applications of AI that could significantly enhance productivity and user experience [5]. Apple's more cautious approach, while prioritizing privacy, may risk leaving users feeling that their devices are becoming comparatively less capable [4].
Source: ZDNet
As Google continues to expand its AI capabilities, Apple faces mounting pressure to accelerate its AI development. Rumors suggest that Apple may be working on increasing RAM in future iPhone models and developing a software development kit for app makers to integrate its AI models [3]. However, the company's ability to catch up with Google's AI advancements remains uncertain, especially given the head start and momentum Google has gained.
The contrasting approaches of Google and Apple in AI development are likely to shape the future of the tech industry. Google's aggressive push into AI across its product line may set new consumer expectations for AI-powered features and capabilities. Apple, known for its hardware prowess, may need to reevaluate its AI strategy to maintain its competitive edge in the rapidly evolving tech landscape [4][5].
As the AI race intensifies, both companies will face challenges in balancing innovation with user privacy and ethical considerations. The outcome of this competition could have far-reaching implications for how AI is integrated into our daily lives and the future direction of personal computing technology.