The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved
Curated by THEOUTPOST
On Fri, 22 Nov, 12:02 AM UTC
5 Sources
[1]
The first question I asked Apple Intelligence was how to disable it
After lagging behind the competition for most of 2023 and 2024, Apple finally unveiled its version of generative AI, dubbed Apple Intelligence. Setting aside the narcissistic rebranding of the abbreviation, Apple Intelligence promised some nifty utilities for iOS and macOS users. From tools to help you write better to a revamped version of Siri (finally), Apple made some tall claims during the iPhone 16 announcement.

Eager to try out the features myself, I hit the update button on my Mac as soon as the latest build of macOS Sequoia rolled out. I then went through the hassle of changing my Mac's region to the United States and the system language to English (US), since Apple Intelligence isn't available in my country yet. What followed was a series of disappointments and underwhelming results. So much so that I've now disabled Apple Intelligence so I can put my Mac's resources to better use. Wondering what went so wrong? Let me explain.

Limited availability to start
Wait, wait, and more wait

Apple first showed off its version of AI back in June 2024 during WWDC. It set the tone for a grand launch three months later during the annual iPhone event. After all, the iPhone 16 series was marketed with a heavy emphasis on Apple Intelligence. However, consumers were disappointed to learn that Apple Intelligence wasn't available to use at launch. It wasn't on the new iPhones, nor was it in the latest iteration of macOS at the time.

This was a bummer, and much of the interest in Apple Intelligence fizzled out. Why announce something that's not yet available to use? Consumers had to wait too long to experience Apple Intelligence on their iPhone, iPad, or Mac. Moreover, even after the feature rolled out, it was limited to a select few abilities. According to Apple, the full suite of generative AI features is still in the works and will supposedly launch in early 2025. At the time of writing, Writing Tools is the only headlining feature of Apple Intelligence on both iOS and macOS.

Writing Tools add little value
Just ask ChatGPT instead

Alright, let's leave the logistics aside for a bit and focus on what's available. Writing Tools, as the name suggests, includes features that use generative AI to read pieces of text and help you improve them. It can proofread a block of text to find errors, paraphrase it, make it sound more friendly or professional, and shorten it. Additionally, you can get a quick summary of an article or webpage, break it down into key points, format it as a list, or even convert numerical data into tables.

While all of this sounds good in theory, the tools are either seldom useful or don't work as intended. For example, I've never felt the need to paraphrase an email or message to make it more friendly or professional. That could be because I'm fluent in English. So it might be useful if I'm learning a different language like German, right? Well, no. Writing Tools, just like every other Apple Intelligence feature so far, is only available in English.

Furthermore, the Proofread option fails to work at times. Even when I've typed a sentence that doesn't sound right, Apple Intelligence fails to recognize and correct it.
Take the text in the screenshot below, for example, where the wording of the sentence is clearly wrong. Despite running it through the Proofread feature, Apple Intelligence recommends no changes. When I ran the same text through ChatGPT with a prompt to proofread it, I got the expected response, with suggestions to correct the sentence. It's surprising to me that Apple decided to ship a half-baked feature like this.

To Apple's credit, though, the Summarize and List tools work well. The ability to create a table is slightly finicky: it works with certain data sets but throws an error the moment you introduce a lot of values.

New Siri is still... Siri
Miss you, Bixby

There was a fair bit of hype surrounding the new improvements to Siri during Apple's announcement. It was long overdue, to be honest, since Siri has been the least helpful voice assistant, lagging even behind Bixby, an assistant that was always redundant given how solid Google Assistant has been. While Siri now understands context better, the responses are still terrible. It searches the web for the most basic queries, whereas Google Assistant or Gemini can provide immediate responses that are much more helpful. I asked Siri if I could make chocolate chip cookies without eggs. Instead of giving me a definitive answer, it gave me a list of search results. What is this, 2014?

All the other Siri improvements that were announced are yet to roll out, so we'll have to keep waiting.

The other bits and pieces
Better notification management, finally?

Both iOS and macOS have been infamous for poor notification management. Apple aimed to fix that with the Summarize Previews feature, which uses AI to determine the contents of your notifications and give you a short summary. It essentially saves space on the lock screen or in the notification panel. While I like the idea, the implementation is poor. There are plenty of memes about Apple Intelligence summarizing notifications in unintentionally funny ways. In fact, I've had instances where the AI misinterpreted the contents of my notifications and showed me summaries that said the complete opposite of the message.

A friend of mine messaged a group chat asking whether a bunch of us were interested in attending a concert, saying he would book tickets based on who wanted to join him. I was in a meeting, so I couldn't respond immediately. Later, I glanced at my phone to read the Apple Intelligence summary. It said "Tickets to concert booked, questions about mode of transport." I was surprised and wondered who had booked tickets for me without my consent. I then read the entire message in the group and realized the summary didn't match what was being said. Below I've included another nonsense AI notification summary a friend shared with me.

Then there's the Clean Up feature in the Photos app on both iOS and macOS. While it does a fairly good job with a plain background, it struggles when you throw something challenging at it. I defaulted to using the Magic Eraser in Google Photos since it's included with my Google One subscription. It works much better, in my opinion.

Too soon to judge? Too late to launch

Just like most things that are new, Apple Intelligence has some teething issues.
But what annoys me most is that, after making us wait this long, Apple should have ironed out basic issues like these. There's no excuse for the Proofread tool not to work when its only job is to proofread and fix your text. Thanks to these jarring issues, I decided to disable Apple Intelligence on my Mac. I would rather allocate the extra RAM to other tasks than waste resources on a feature that just doesn't work. On the other hand, Microsoft has done an excellent job of integrating Copilot into Windows. It's usable, reliable (on most occasions), and delivers what it promises.

We all knew Apple was late to the AI game, just the way it generally is with cutting-edge tech. However, when Apple adopts something, it usually perfects it. That's exactly what I expected with Apple Intelligence, too. Alas, the real-world experience is far from that. Hopefully, things improve by early 2025 and we get a usable version of Siri. I'll also watch out for the other promised features of Apple Intelligence, like Image Playground, Visual Intelligence, Genmoji, ChatGPT integration, and so on. I sincerely hope Apple steps on the gas with them, since other brands are running away with AI integration on their devices like it's nobody's business.
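For readers who want to reproduce the ChatGPT proofreading check described above programmatically rather than in the chat app, the snippet below is a minimal sketch using the official openai Python client. The model name, prompt wording, and sample sentence are illustrative assumptions, not the exact setup the author used.

```python
# Minimal sketch: asking ChatGPT to proofread a sentence via the API.
# Assumes the official `openai` Python package (v1+) is installed and an
# OPENAI_API_KEY environment variable is set. The model name, prompt, and
# sample sentence are illustrative, not taken from the article.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

text = "Me and him goes to the store yesterday."  # deliberately flawed sample

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": "Proofread the user's text. Return the corrected text and briefly list the errors you fixed.",
        },
        {"role": "user", "content": text},
    ],
)

print(response.choices[0].message.content)
```

Running the same flawed sentence through Apple's Proofread tool and through a prompt like this makes the gap the author describes easy to see for yourself.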
[2]
5 reasons Google's AI is leagues ahead of Apple Intelligence
Apple Intelligence made its public debut amid much hype from the company. While I appreciate the direction Apple took with its AI implementation, Apple bit off more than it could chew. The first letdown came when the new iPhone 16 series launched without its marquee feature. Even when Apple Intelligence arrived with iOS 18.1, it felt underwhelming compared to what Google has done with AI on Pixel phones.

Moments like these make me realize how far ahead Google Gemini and Google's AI efforts are for real-world use. After testing Apple Intelligence on an iPhone 16 Pro Max and stacking it up against Gemini, here's what I discovered (spoiler: Google has the edge).

5. Text creation is no small feat
Turns out, not every AI can generate text

When ChatGPT came into our lives about two years ago, it was pitched as a tool that creates text from scratch with simple prompts. Since then, every mainstream generative AI service, including Google Gemini, has offered text generation. Apple Intelligence includes a suite of writing tools, but it can't write for you. All it does is fix grammar and tone, something Grammarly has been doing for as long as I can remember. Still, it isn't good at it. When I gave the same email text to Gemini and Apple Intelligence for editing, Gemini returned a richer draft, while Apple's AI made basic edits and called it a day.

[Image: Apple Intelligence's writing tool (left) vs. Google Gemini]

When you ask Apple for a friendlier tone, it often dials up the spirit too much, with no way to fine-tune the results with prompts. Gemini, being chat-based, handles that without breaking a sweat. While Apple aims to bring similar options to its AI writing tools with iOS 18.2, it won't add a full-fledged writing mode that generates text from scratch.

4. Google's Magic Editor > Apple's Clean Up
Wiping the floor with Clean Up

[Image: Original, Apple's Clean Up edit, and Google's Magic Editor edit (in that order)]

Magic Eraser was the tool Google used to show off its AI capabilities in 2021. It could erase distractions from your photos, often without leaving a trace, and it got better over time. Last year, Magic Eraser evolved into Magic Editor, like a Pokémon with enhanced generative capabilities. It now fills in the areas you want to edit with generated content that blends in. Apple introduced a similar feature in its Photos app, but it's still early days. It struggles when you edit complex areas of a photo with many objects. I tested Google Photos and Apple Photos on several images, and Google consistently did a better job, while Apple left smudges as evidence of its shabby editing skills.

3. Transcription is Google's strong suit
Apple completely fails at the job

Apple seemed pumped about its new call recording and AI-based transcription feature during the launch event. The problem? It fails to work most of the time. The main issue is that Apple Intelligence supports only US English, limiting its functionality. When I recorded a call with a store's support team to keep a record of the conversation, the iPhone failed to transcribe it, even though we spoke in English with an Indian accent. Audio quality couldn't have been the issue, because I played the same recording for Google Recorder on a Pixel phone, and it had no trouble transcribing it.
I've long relied on Google Recorder for recording meetings and taking notes. Even with the thickest accents or a mix of languages, Google consistently delivers results with 80% to 90% accuracy on its worst days. It is simply a better tool.

2. Google's AI works everywhere
But Apple wants you to buy a new iPhone

Language support isn't the only limitation of Apple Intelligence. Hardware support is another major factor. Even last year's iPhone 15 doesn't support the new AI tools, showing how short-sighted Apple's AI efforts are. You must own an iPhone 15 Pro or the latest iPhone 16 series to access Apple Intelligence. Even then, you don't get these AI features in the EU due to local laws. Meanwhile, Google is more generous with its device support. The photo editing tools are available across brands and devices, working as well on the three-year-old Pixel 6 as on the latest Samsung phones. The same goes for Google's voice assistant. Gemini works on any Android phone and offers the advanced Gemini Live mode, which allows two-way conversations. Google even made Gemini available on the iPhone. Google: 1, Apple: 0.

1. Gemini looks like the future
While Siri is stuck in the past

Siri got a sleek new animation, with the screen's edges lighting up to signal the voice assistant's omnipresence on the iPhone. I prefer it over the old animation. However, as soon as you ask Siri a question, it becomes clear that nothing has changed under the hood. Siri still pulls up Google Search results and Wikipedia snippets, even for simple queries. It's so underwhelming that the Wall Street Journal's Joanna Stern recently excluded Siri from her comparison of AI chatbots.

Google Assistant has always outshone Siri, and Gemini takes things to a new level. Its conversational abilities make it a better assistant, and its deep integration with Google apps is a bonus. You can ask Gemini to create a custom YouTube Music playlist based on your mood, and it generates one on the spot. But that isn't all. Gemini Live allows free-flowing conversations, the kind of intuitive voice assistant we imagined, akin to Jarvis. You can use it to brainstorm ideas, explore complex topics, or chat to get a fresh perspective on your thoughts. While Gemini's features outpace Siri's, it recently gave unsolicited and potentially harmful advice. Double-check responses containing critical information or personal advice, and exercise caution when acting on them.

Apple Intelligence still gets some things right

Apple Intelligence doesn't live up to the hype, but it gets a few things right. For example, the writing tools are built into the system, and I can access them anywhere. It's a smoother experience than using a separate AI app to proofread text. Also, after using the iPhone for a month, I find AI notification summaries useful (in the few apps where they work, given the language limitations). Still, some people have had funny moments with them. The direction Apple is going with AI is solid, but the execution needs more work. Since Apple is late to the party, it may take a while to catch up in the areas where Google Gemini has a solid lead.
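For anyone curious to run the same kind of email-editing comparison described above against Gemini directly, here is a minimal sketch using Google's google-generativeai Python package. The model name, prompt, and sample draft are illustrative assumptions rather than the exact setup used for the article.

```python
# Minimal sketch: asking Gemini to rewrite an email draft in a friendlier tone.
# Assumes the `google-generativeai` package is installed and a GOOGLE_API_KEY
# environment variable is set. The model name, prompt, and draft are illustrative.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")

draft = "hey team, meeting moved to 3pm tmrw, pls confirm you can make it"
prompt = f"Rewrite this email in a friendlier, more polished tone:\n\n{draft}"

response = model.generate_content(prompt)
print(response.text)
```

Feeding the same draft to Apple's Writing Tools and to a prompt like this makes the difference in output richness easy to compare side by side.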
[3]
Apple Intelligence: Is It Worth the Hype?
Apple's Worldwide Developers Conference (WWDC) 2024 marked a significant milestone in the company's AI journey with the introduction of Apple Intelligence. This suite of AI-driven features aims to transform user interaction across Apple's ecosystem, encompassing productivity, creativity, and convenience. From generative text editing to advanced photo manipulation tools, Apple Intelligence promises to redefine the way users engage with their devices. Marques Brownlee gives us a detailed look at the current Apple Intelligence features, and the ones coming in the near future, in his latest video.

However, as with any new technology, it is essential to critically evaluate the actual impact and potential of these features. While some tools show promise, others reveal limitations that temper the initial excitement surrounding Apple Intelligence. Let's take a closer look at the key components of this AI suite and assess their strengths and weaknesses.

One of the most prominent features of Apple Intelligence is its generative AI for text editing. Designed to refine the tone, clarity, and conciseness of your writing, this tool seamlessly integrates across iPhone, iPad, and Mac devices. Its offline functionality ensures privacy, setting it apart from cloud-reliant competitors. However, the tool's limited customization options and repetitive interface may frustrate advanced users seeking more granular control over their editing process. While the generative AI for text editing is a helpful addition for casual edits, it struggles to differentiate itself in a crowded market of similar tools. Its impact, although notable, may be more incremental than innovative.

Apple's AI-driven notification summarization aims to simplify the often overwhelming task of managing notifications by condensing alerts into concise summaries. The goal is to help users focus on what matters most, reducing the cognitive load associated with constant interruptions. However, the feature's execution falls short of its promising concept. The notification summarization feature frequently misinterprets context or prioritizes irrelevant information, diminishing its practical value. While the idea behind it is commendable, the current implementation leaves much to be desired, making it more of a work in progress than an innovative solution.

GenMoji allows users to create personalized emojis by describing their desired design. Whether you're looking for a quirky character or a unique expression, the AI generates custom emojis tailored to your input. This feature showcases Apple's commitment to injecting creativity and fun into its ecosystem. While undeniably entertaining and creative, GenMoji caters to a niche audience. Its appeal is likely limited to younger users or social media enthusiasts, making it more of a novelty than a must-have tool for the broader Apple user base. Although it adds a layer of personalization to the Apple experience, its impact on the overall ecosystem remains relatively minor.

Apple's Image Playground enables users to generate cartoon-style images based on prompts, offering customization options for themes, props, and backgrounds. This playful tool is ideal for casual creative projects, allowing users to bring their ideas to life in a visually engaging manner. However, Image Playground falls short for professional use due to its non-photorealistic outputs. While entertaining and suitable for lightweight creative endeavors, its limited versatility restricts its appeal to a broader audience.
It serves as a fun addition to Apple's ecosystem but may not be transformative in terms of practical application.

Apple Intelligence introduces AI-powered notification prioritization within Focus modes, surfacing critical alerts and emails during work or personal time. Integrated with the Mail app, this feature ensures that users never miss important updates, enhancing productivity and reducing distractions. While the priority notifications feature adds convenience for users already invested in Apple's ecosystem, its functionality mirrors existing tools on competing platforms like Gmail. It offers little in the way of groundbreaking innovation and instead builds upon familiar territory. Nonetheless, the integration with Focus modes and the Mail app makes it a useful addition for Apple users seeking a more streamlined notification management experience.

The Photos app introduces a background object removal tool that rivals Google's Magic Eraser in terms of precision and effectiveness. Apple's implementation excels with accurate object detection and generative fill, often delivering superior results compared to its competitors. Whether you're editing vacation photos or removing distractions from a portrait, this feature stands out as one of the most polished and practical additions to Apple Intelligence. It demonstrates Apple's potential to deliver high-quality, user-friendly AI solutions that enhance the creative capabilities of its users. The background object removal tool is a clear highlight of the Apple Intelligence suite, showcasing the company's commitment to pushing the boundaries of AI-driven photo editing.

Apple's new audio tools enable users to transcribe and summarize conversations or meetings with impressive accuracy. The inclusion of speaker identification adds clarity to multi-person recordings, making it a valuable tool for professionals and students alike. However, the lack of integration with the default Voice Memos app limits the accessibility of these features. While the audio transcription and summarization tools offer clear utility, their fragmented implementation diminishes their overall impact. To truly capitalize on the potential of these features, Apple would need to ensure seamless integration across its ecosystem, making them more readily available to users.

Available on iPhone 16 models, Apple's visual intelligence tools include object recognition and reverse image search. These features allow users to identify objects, retrieve related information, or perform contextual searches directly from the camera app. While functional and undoubtedly useful in certain scenarios, these visual intelligence features feel more like incremental updates than groundbreaking innovations. Compared to established tools like Google Lens, Apple's offerings don't quite push the envelope in terms of capabilities or user experience. They serve as a solid addition to the iPhone's feature set but may not be the defining factor that sets Apple Intelligence apart from its competitors.

One of the most significant upgrades in Apple's AI strategy is the integration of ChatGPT with Siri. By escalating complex queries to ChatGPT, Siri can now provide more detailed responses and perform advanced tasks, enhancing its overall capabilities. The integration is offered for free, with optional account linking available for users who want even more advanced functionality.
This move demonstrates Apple's commitment to improving its virtual assistant and keeping pace with the rapidly evolving AI landscape. However, despite this significant improvement, Siri's core capabilities remain largely unchanged, leaving room for further development and refinement.

Apple Intelligence showcases several strengths, particularly in photo editing and audio transcription. The background object removal tool in the Photos app and the accuracy of the transcription features highlight Apple's ability to deliver polished, user-friendly AI solutions that enhance productivity and creativity. However, many of the other tools, such as notification summarization, GenMoji, and Image Playground, feel underdeveloped or lack practical use cases. These features, while promising in concept, struggle to justify their inclusion in a premium ecosystem like Apple's. The result is a suite of AI-driven features that, while showing glimpses of innovation, fails to deliver a cohesive and consistently impressive user experience. Apple Intelligence, in its current state, offers a mixed picture of strengths and weaknesses, leaving room for improvement and refinement.

Despite the limitations and inconsistencies in the current iteration of Apple Intelligence, the company has outlined plans to refine these features and expand Siri's capabilities. A full rollout of these updates is expected by March 2025, indicating Apple's commitment to continuously improving its AI offerings. If executed effectively, these future updates could significantly enhance the utility and appeal of Apple Intelligence. By addressing the shortcomings of the current features and introducing new, innovative tools, Apple has the potential to solidify its position as a leader in the AI space. However, for now, Apple Intelligence remains a work in progress. The current suite of features offers glimpses of potential rather than fully realized innovation. As Apple continues to iterate and refine these tools, the true impact of Apple Intelligence may become more apparent.

Apple Intelligence introduces a range of AI-driven features designed to enhance the user experience across Apple's ecosystem. While standout tools like the background object removal in the Photos app and the accurate transcription features demonstrate clear value, others fall short of expectations, feeling underdeveloped or lacking practical use cases. The current state of Apple Intelligence reflects a blend of innovation and redundancy, with some features pushing the boundaries of what's possible with AI while others struggle to differentiate themselves in a crowded market. As a result, the overall impact of Apple Intelligence remains limited, offering more promise than immediate payoff.

However, with Apple's commitment to refining these features and expanding Siri's capabilities, the future potential of Apple Intelligence cannot be ignored. As the company continues to iterate and improve upon its AI offerings, the true impact of these tools may become more apparent, potentially redefining the way users interact with their devices. For now, Apple Intelligence serves as a glimpse into the future of AI-driven features within Apple's ecosystem. While it doesn't reach that bar yet, it lays the foundation for what could become a transformative suite of tools that enhance productivity, creativity, and convenience for Apple users worldwide.
[4]
In the era of smartphone AI, everyone's a beta tester
Although I don't get a chance to use every smartphone released throughout the year, my role as Phones Editor at AP means I usually have first-hand knowledge of the trends taking over the mobile industry. That said, you don't need to try out every device to recognize AI as the buzzword du jour. From Samsung to Google to, yes, even Apple, we've spent the last 12 months -- and in Google's case, even longer -- hearing about how AI is rebuilding the core concept of what a smartphone can be.

Except, is it? With an entire AI-based product cycle under our collective belts, it feels like it's as good a time as any to look at how the earliest days of the so-called AI era have been going. And while there's an argument to be made that some of the features are pretty exciting (or, at the very least, have the potential to evolve into something exciting), I can't say I feel the same way. With every "new" smartphone now feeling like a testing ground for tools users haven't been asking for, I'm wondering if our devices will ever feel like complete products again.

Apple Intelligence is a worst-in-class example
Unfinished, unpolished, and frankly embarrassing

The iPhone 16 lineup is, in my eyes, the most egregious, offensive example here. Practically every OEM has been shipping unfinished features under early access labels throughout 2024, but no company has pushed unavailable apps and tools at the same clip as Apple. The iPhone 16's "Genius" marketing campaign continues to persuade buyers to upgrade to phones whose marquee features are still months away from launch, despite the entire lineup having hit store shelves nearly nine weeks ago.

Not that Apple Intelligence is much to write home about anyway. In the month since iOS 18.1 dropped, I've found myself entirely unimpressed with the first lineup of AI-based tools. Notification summaries were, unsurprisingly, the big talking point among enthusiasts, but for every successful combination of words, you'll get five more that alternate between contextual misunderstandings, complete gibberish, and occasionally hilarious mishaps. That latter category might make for an entertaining change, but it doesn't make for a useful tool. And with AI hallucinations seemingly locked in as a permanent problem to overcome, I can't imagine a world where these end up as game-changers.

Even then, notification summaries aren't a groundbreaking approach to managing incoming alerts and messages -- they're a solution to a problem of Apple's own making, where too many apps flood you with too many pings. It's a Band-Aid over a broken system, and considering how much easier it is to manage notifications on Android, I hope Google doesn't follow in Apple's footsteps.

Outside of those changes, however, practically everything in iOS 18.1 -- automated photo movies, suggested replies in Messages, object removal in Photos -- is something we've seen done before on other platforms, and done better at that. The marquee features, the ones being shown off in the onslaught of iPhone 16 ads leading up to the holidays, aren't scheduled to launch until sometime next year. Hope your parents aren't hyped for Apple's upgraded Siri experience, because it won't be waiting for them under the tree.
Google Pixel-exclusive AI apps are all in early states
But it's far and away the best of the bunch

Okay, we've all had fun dunking on Apple, but Android OEMs aren't immune to this, either. While I'd say apps like Pixel Screenshots and Pixel Studio feel more finished than anything we've seen from Apple, they're definitely not finished. Pixel Studio, in particular, came under intense scrutiny -- and deservedly so -- when the Pixel 9 launched, thanks to its lack of a safety feature that made it feel more like a product of modern-day X (formerly Twitter, sigh) than of Google. And right at the top of that app is, you guessed it, a "Preview" label, letting you know that your phone's new AI image generator might not work as expected.

While Pixel Screenshots doesn't seem to use the "Preview" label (or the word "beta," for that matter), digging through the app's settings reveals its version number is well under 1.0, suggesting we're a ways off from a fully finished product. Considering how mixed I've found that app's utility to be in the months since the Pixel 9 launched, though, I can't say I'm surprised -- its descriptions of screenshots are pretty unreliable at the moment.

Even beyond those two new spots in your app drawer, everything feels a little untested. Add Me is a great idea on paper that is pretty finicky in real life. You might get some impressive photos out of it, but your friends might get frustrated with you in the process (and, you guessed it, it's rocking that "Preview" label). Reimagine isn't labeled as an early access tool, but considering how buried in Photos it is, I'd bet most Pixel owners haven't even found it yet. And despite being on its second generation, Video Boost continues to stand in for meaningful hardware changes, in exchange for a cloud-enhanced video that drops all notions of shadows and contrast.

I really like the Pixel 9 Pro -- and its foldable cousin -- but in many ways, that's in spite of its AI selling points, not because of them. And considering the amount of space its Gemini Nano LLM takes up on your phone (about 5GB), I'm reluctant to give up that storage just to keep testing Google's AI tools. If I want to be an unpaid beta tester for Google, I'll download Android 16's developer preview, thank you very much.

It's not just Google and Apple
It's an industry epidemic of selling hardware with untested software

It's not just Apple and Google. I don't need to tell you about Samsung's ongoing Galaxy AI experiment, which has all the markings of another Bixby-level failure. At least with Apple and Google, these utilities are fairly easy to discover on your smartphone. Nearly all of Samsung's tools are buried in settings, with names like Note Assist, Photo Assist, Browsing Assist, and -- stay with me here -- Transcript Assist. I get it, guys. The phone's going to assist me, somehow. But after nearly a year, I haven't seen any of Samsung's Galaxy AI suite stick around in the public consciousness (though I continue to think about Taylor Kerns' AI-generated Poké-bong, of course). And now, the latest rumor on One UI 7 suggests we'll actually be waiting for One UI 7.1 for any meaningful changes, completely bypassing the Galaxy S25 launch. Those rumored tools include notification summaries and AI-based emoji, both of which are right out of Apple's playbook.
OnePlus and Motorola have their own respective plans for AI tools as well, with OxygenOS 15 chock-full of assistant tools for notes, photos, and even Google Wallet. Still, Razr fans can breathe easy -- since Motorola's changes would require pushing a timely software update to any of its phones, I'm not sure we need to worry about those features coming along any time soon.

Until it works as intended, you shouldn't buy a smartphone for AI
And frankly, who knows when that'll be

At this point, no matter what smartphone you're upgrading to, you're bound to stumble on your fair share of unfinished, untested, rough-around-the-edges AI tools. These OEMs are going to promise you it's the future of mobile, the future of the entire industry, and all you need to do is use their apps and wait for things to get better. A revolution is right around the corner, just as soon as the last few bugs are completely ironed out.

I think that's a fundamental misunderstanding of what people want from their phones in 2024. If Google, Samsung, or Apple wants to employ me to test their apps, then by all means, I'll drop my PayPal at a moment's notice. But what I'm looking for in a smartphone right now isn't necessarily some far-away glimpse at an AI-powered future -- it's stability. I want a device that I can rely on to navigate me around an unfamiliar city, capture crisp, colorful photos of my friends and family, and keep my entire digital life always within arm's reach.

Right now, our phones still basically do all of those things, but I can feel the attention slipping. Whether it's a constant influx of bad directions from Google Maps -- which continues to try to get me to drive down a one-way street not too far from my house -- or the constant discussion surrounding what a photo even is, the era of AI just feels sloppy and unfinished. We all deserve a stable mobile experience, even if it comes at the cost of these companies' bottom lines.
[5]
I love AI, but it's absolutely not worth paying extra for your phone
I remember sitting at the Samsung Galaxy S24 FE reveal, disappointed that Samsung was highlighting AI as the phone's biggest selling point. The company touted all the Galaxy AI functions now available at a lower price point, but it didn't land with me as a reason to spend more on a device.

Artificial intelligence is a fantastic tool, but it will take more than Circle to Search to convince me it's worth paying a premium. If companies want AI enhancements to replace camera improvements as the marquee feature, a lot of work has to be done.

AI needs to move away from novelty
I don't know how many cat pictures I need

I don't know how many generative cat images I need, whether I'm using Sketch to Image from Samsung or Google's Pixel Studio. It's a cute idea, and I've always wondered what my black cat Xavi would look like wearing a sailor cap, but it's not a reason to pay $150 more for a phone or pick one device over another. AI feels like a solution searching for a problem, which is unfortunate because I believe there are legitimate uses.

If manufacturers want AI to add value to phones, they need to get serious by releasing killer apps and features that feel more productive and less like distractions. I think Gemini Live is a step in the right direction, as I've used it more than any other AI chatbot. However, Google must keep adding features to make it a full-fledged Google Assistant replacement. Gemini Live brings me to the next problem with AI, though.

It can't be the same across the board
There is no reason to buy a Galaxy over a Pixel for AI

It's heartwarming to see the collaborations between Samsung and Google on AI; features like Circle to Search and Gemini Live were released quickly on my Galaxy phones, and I didn't feel like having a Pixel was an advantage in getting the latest and greatest.

However, there's a flip side to that; it falls flat when Samsung tries to use features like Circle to Search as a selling point for the Galaxy S24 FE. Why would I consider AI as a reason to buy the Galaxy S24 FE when I can get the same features on a much less expensive Google Pixel 8a?

I need to see manufacturer-specific AI enhancements, features I can't get on several other devices at multiple price points. I had a similar problem when Google advertised Pixel phones by highlighting the Magic Eraser, only to find it on other Android phones within weeks. I understand Google is in an awkward position as both a phone manufacturer and the keeper of Android, but I'm not convinced there isn't a way for companies to leverage features as a competitive advantage for their phones.

Maybe it doesn't have to be obvious
UI experiences can change for the better

If Google won't hold back features from Samsung and others, maybe there's another way to create AI differences worth paying for. We've already seen how Google and Samsung handle computational photography, with different areas of emphasis creating varying looks for photos. Side-by-side imagery shows how a Galaxy S24 Ultra processes the same image differently than a Google Pixel 9 Pro XL. I want that to bleed over into the user experience.
I want to see One UI and Material You embrace AI for more than just wallpapers. The underlying tools could be the same, but the presentation could be different enough to justify picking one manufacturer over another. A future version of One UI, rebuilt from the ground up, could better integrate productivity features into the user experience.

Beyond that, I'd really love to see something like an intelligent version of Microsoft's old Metro UI, bringing us up to speed on family and friends through social media in real time while also presenting the information we go to most often on a fluid home screen. I think there are plenty of possibilities for anyone bold enough to try.

Something needs to be done

It's hard to get excited about smartphones with more powerful chipsets and better cameras when we barely use the capabilities we have now. AI can change much of that, but how companies utilize it currently feels more novel than revolutionary. Galaxy AI and Google Gemini have potential but seem plagued by a lack of vision; manufacturers default to whatever feels safest for fear of alienating customers. Still, there is a huge shift out there waiting for Samsung, Google, Apple, or whoever can mobilize first and bring us AI in a way that creates value -- not just more cat pictures.
Apple's foray into AI with Apple Intelligence has been met with disappointment, as users find the features limited, buggy, and less capable compared to competitors like Google's AI offerings.
Apple's much-anticipated entry into the AI arena with Apple Intelligence has left many users underwhelmed. Launched with the iPhone 16 series, the feature set has been plagued by limited availability, delayed rollouts, and functionality that falls short of competitors' offerings [1].
Apple Intelligence's initial release focused primarily on Writing Tools, which offer basic proofreading and text manipulation features. However, users have reported that these tools often fail to catch obvious errors or make meaningful improvements to text [1]. The feature is currently limited to US English, severely restricting its global utility [2].
Despite promises of significant improvements, the new Siri powered by Apple Intelligence has shown little advancement. Users report that it still relies heavily on web searches for basic queries, falling behind competitors like Google Assistant and even Samsung's Bixby [1][2].
The AI-powered notification summarization feature, intended to streamline the user experience, has been criticized for misinterpreting message contents and providing inaccurate summaries [1][3].
Google's AI implementations, particularly on Pixel phones, have been noted to outperform Apple Intelligence in several areas: text generation, photo editing (Magic Editor versus Clean Up), call and voice transcription, device availability, and the overall assistant experience with Gemini versus Siri [2].
Many of Apple Intelligence's features feel incomplete or are still unavailable, despite being heavily marketed. This has led to criticism that Apple is essentially using paying customers as beta testers for unfinished technology [4][5].
The tech industry's focus on AI as a primary selling point for new devices has come under scrutiny. Critics argue that many AI features feel more like novelties than essential tools, questioning whether they justify price premiums or device upgrades [5].
For AI to become a compelling reason to choose one device over another, manufacturers need to develop unique, practical applications that go beyond basic text and image manipulation. The challenge lies in creating AI features that genuinely enhance user experience and productivity, rather than serving as mere technological showcases [5].
As the AI race continues, Apple will need to significantly improve Apple Intelligence to compete with more mature offerings from Google and others. The coming months will be crucial in determining whether Apple can deliver on its AI promises and make up ground in this rapidly evolving technological landscape.
References
[1] The first question I asked Apple Intelligence was how to disable it
[2] 5 reasons Google's AI is leagues ahead of Apple Intelligence
[3] Apple Intelligence: Is It Worth the Hype?
[4] In the era of smartphone AI, everyone's a beta tester
[5] I love AI, but it's absolutely not worth paying extra for your phone
Apple's delayed entry into AI with Apple Intelligence shows promise but faces criticism for its staggered rollout and mixed user reception. The tech giant aims to expand its AI offerings in 2025, balancing innovation with privacy concerns.
7 Sources
Apple's voice assistant Siri lags behind competitors, causing delays in product launches and raising questions about the company's AI strategy. This struggle reflects broader challenges in the consumer tech industry's push for AI integration.
3 Sources
Apple rolls out its AI features, Apple Intelligence, with a focus on privacy and security. The update brings new capabilities but faces criticism for inconsistent performance and battery drain issues.
4 Sources
Apple's new AI features, Apple Intelligence, are rolling out with iOS 18 updates. While promising, analysts doubt their immediate impact on iPhone 16 sales, citing production cuts and delayed feature releases.
8 Sources
Apple's latest iOS 18.1 developer beta introduces 'Apple Intelligence', a suite of AI-powered features set to transform user experience on iPhones and other Apple devices. This update showcases Siri's enhanced capabilities and various AI integrations across the operating system.
10 Sources