Curated by THEOUTPOST
On Tue, 31 Dec, 4:02 PM UTC
7 Sources
[1]
2025 Analyst Outlook: Mark Hibben On Chip Competition, What's Next For Tech Giants
Tech - again - stole the market spotlight in 2024 thanks to AI and investor focus on the Magnificent 7. Can the tech giants repeat the feat in 2025? Mark Hibben, Investing Group leader behind the Rethink Technology service at Seeking Alpha, suggests key companies in the sector will continue their 2024 successes this year.

Nvidia (NVDA) stock ended 2024 with a 180% gain. Hibben says the chip company may continue to dominate its data center and consumer markets, and he expects more gains for shares in 2025. Not so clear: how Intel (INTC) competes in the chip market and whether it really follows through with its foundry strategy. Investors also will closely follow Google (GOOG) (GOOGL) and its antitrust case, which Hibben thinks could end with a resolution that's friendly for the search giant. And the markets will watch how Apple (AAPL) traverses tricky political and trade issues. Hibben expects more clarity on Apple Intelligence, which he says could evolve into a bigger opportunity for the tech giant.

Seeking Alpha: What else can be said about Nvidia? Obviously, the company remains the tech darling for investors. It looks as if the momentum will continue into 2025?

Mark Hibben: Nvidia investors have much to look forward to in 2025. With Nvidia's flagship Blackwell AI accelerator in full production, the Data Center segment will likely continue its impressive growth. In addition, Nvidia is likely to see continued, albeit more modest, growth in consumer markets through the release of its new RTX 50 series discrete GPUs and through a new series of ARM processors for Windows Copilot+ PCs.

On Data Center growth: There's natural anxiety among Nvidia investors about whether the explosive revenue growth of the Data Center segment can continue into calendar 2025, especially since Nvidia's guidance for its fiscal Q4 suggested a downturn in the rate of growth. Considering that guidance, revenue growth in the Data Center segment for fiscal 2025 (ending January 2025) will be a mere 137%. The rate of growth may well moderate next year, and I'm currently modeling a little over 50% revenue growth in the Data Center for fiscal 2026. Demand is still very strong for generative AI platforms in the cloud and enterprise data centers.

Next year, Nvidia's competition, at least in the first half, will still be relatively weak. Advanced Micro Devices (AMD) will offer the AMD Instinct MI325X, and Intel will offer the Gaudi 3. Betraying its heritage as an accelerator for supercomputing, the MI325X excels at high-precision floating point calculations. But those number formats, 32-bit and 64-bit floating point numbers (FP32 and FP64), are rarely used for AI. AI models have been moving to progressively lower-precision numbers, such as FP16 and FP8, and here, Blackwell's performance towers above its competition.

These TOPS (tera, or 10^12, operations per second) ratings are provided by the manufacturers and represent theoretical maxima. For operational AI performance, I prefer to rely on MLCommons benchmarks. However, few companies besides Nvidia post their results to MLCommons. Google posts results for its custom Tensor Processing Units (TPUs), and there are a few inference results for the AMD Instinct MI300X and Intel Xeon CPUs. The lack of postings for AMD Instinct, Gaudi, or Intel Ponte Vecchio probably sums up the competitive landscape better than the raw TOPS ratings.
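To make the precision trade-off concrete, here is a minimal NumPy sketch (my illustration, not from the interview) of why AI workloads favor lower precision: each halving of precision halves memory per parameter and roughly doubles attainable throughput through the same silicon, at the cost of significant digits. (FP8 is a hardware format on accelerators like Blackwell and has no native NumPy type.)

```python
import numpy as np

# Memory cost per parameter at each precision: lower precision means
# more model weights fit in the same accelerator memory.
for dtype in (np.float64, np.float32, np.float16):
    bits = np.dtype(dtype).itemsize * 8
    print(f"FP{bits}: {np.dtype(dtype).itemsize} bytes/parameter")

# The trade-off is precision. FP16 carries only ~3-4 significant
# decimal digits, so small increments can be rounded away entirely:
x = np.float16(1.0)
print(x + np.float16(0.0001))   # still 1.0 -- the increment is lost
print(np.float64(1.0) + 1e-4)   # 1.0001, as expected in FP64
```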
A dark horse competitor to Nvidia which also has never posted results to MLCommons is Cerebras (CBRS). Cerebras has created a "wafer scale" AI accelerator which it claims has a huge size advantage over Nvidia's chips. The Cerebras chip is made by stitching together, during the lithography process, 84 zones that would normally be separate devices. It's a difficult process that hadn't been accomplished before. Cerebras also claims a huge performance advantage over Blackwell in a blog post. The most comparable Nvidia system is the rack system consisting of 36 Grace-Blackwell superchips (72 Blackwell B200), the GB200-NVL72. The previous generation of Cerebras processors, the CS-2, also boasted impressive performance compared to Nvidia's H100 "Hopper."

So why hasn't Cerebras cornered the market? Probably, it's due to the cost of the systems. Building conventional GPUs on a wafer is probably much less expensive than combining all those GPUs so that they work together as a single chip on a wafer. Cerebras filed for an IPO on Oct. 1. In the filing, it revealed that it had only $136.4 million in revenue in the first six months of 2024 and lost $66.6 million. So it's probably going to be a few years at least before Cerebras can make its wafer scale chips profitably. Until then, it can't afford to make many.

At Computex this year, AMD revealed that the MI350 series would come out in 2025, but gave no details about exactly when, or what the performance of the new series would be. And that's it for the competitive landscape going into 2025. Is it any wonder that Nvidia stated back in October that Blackwell is "sold out" for the next 12 months?

On growth in consumer markets: Given the explosion in Data Center revenue the past couple of years, it's easy to overlook the fact that Nvidia's other market segments have been growing at a healthy rate. The Gaming segment, where Nvidia posts the revenue for its consumer GPU add-in boards popular with gamers, grew by 15% in fiscal 2024 and will likely grow by about the same in fiscal 2025. Nvidia is about to announce a new series of cards, dubbed the RTX 50 series, with the RTX 5090 replacing the now venerable RTX 4090 as the flagship. There are, of course, the usual complaints in advance from reviewers about how expensive the cards will be. But Nvidia is simply doing what any well-run business should do: charging what the market will bear.

Nvidia has become so dominant in PC gaming that AMD's SVP and General Manager of the Computing and Graphics Business Group, Jack Huynh, indicated in an interview with Paul Alcorn of Tom's Hardware that AMD was bailing out of the high-end GPU competition with Nvidia. Alcorn asked Huynh: "There's been a lot of anxiety in the PC enthusiast community that, with this massive amount of focus on the data center that AMD has created and your success, there will be less of a focus on gaming. There have even been repeated rumors from multiple different sources that AMD may not be as committed to the high end of the enthusiast GPU market, that it may come down more to the mid-range, and maybe not even have flagship SKUs to take on Nvidia's top-of-stack model. Are you guys committed to competing at the top of the stack with Nvidia?"

Huynh replied: "I'm looking at scale, and AMD is in a different place right now. We have this debate quite a bit at AMD, right? So the question I ask is, the PlayStation 5, do you think that's hurting us? It's $499. So, I ask, is it fun to go King of the Hill? Again, I'm looking for scale. Because when we get scale, then I bring developers with us. So, my No.
1 priority right now is to build scale, to get us to 40 to 50 percent of the market faster. Do I want to go after 10% of the TAM (Total Addressable Market) or 80%? I'm an 80% kind of guy because I don't want AMD to be the company that only people who can afford Porsches and Ferraris can buy. We want to build gaming systems for millions of users."

I think Huynh's argument is a little specious. I own an RTX 4090 and enjoy playing Cyberpunk 2077 at 8K, but I don't own a Porsche or Ferrari. The market share consideration is not unreasonable, though. As of 2024 Q1, Nvidia's share of the add-in board market was 88%, according to Jon Peddie Research, via Tom's Hardware. And according to Steam's Hardware Survey as of August, 76.5% of Steam users had Nvidia GPUs. In fact, I think AMD has diverted resources to the Data Center effort and minimized investment in gaming GPUs.

Once again, the RTX 50 series will see little competition from AMD. Or from Intel, for that matter. Intel's latest "Battlemage" GPUs, released on Dec. 3, have been applauded by reviewers for their value, but no one is claiming that they will compete at the high end. In consumer GPUs, Nvidia once again stands alone at the high end. The RTX refresh will undoubtedly spur sales and growth next year.

And Nvidia is thought to be preparing to enter the market for Microsoft Copilot+ PCs. Nvidia has long had a line of SOCs (Systems on Chip) that feature ARM architecture CPU cores and Nvidia's own GPU architecture. These have mainly been targeted at robotics and at automotive driver assistance and self-driving. Given their strong GPU and AI capability, they would seem to be ideal for the new generation of AI PCs. Rumors to that effect first appeared about a year ago, and a more recent report from Oct. 31 confirms that Nvidia plans to release a consumer ARM-based SOC by September 2025 for Windows PCs. This could greatly expand the sales volume for its ARM SOCs, but Nvidia will not have this market to itself. It will have vigorous competition from Qualcomm (QCOM), AMD, and Intel, though Nvidia will have a powerful advantage in its on-board GPUs as well as AI capability.

Overall, I expect continued revenue and earnings growth in both the Data Center and in consumer-driven markets such as Gaming, PCs, and automotive. I continue to be long Nvidia and rate it a Buy.

Seeking Alpha: The other side of the spectrum is Intel. With CEO Pat Gelsinger out, what's next for this beleaguered company?

Mark Hibben: I've seen it suggested that firing Gelsinger was a mistake and that he might even be reinstated. Whatever befalls Intel in the future, I'm quite certain that Gelsinger will not be returning to the company. Intel investors should assume that the board did not act capriciously in ousting Gelsinger, even though investors have been left in the dark regarding its reasons. This lack of transparency is an ongoing problem with Intel's corporate culture, one the next CEO needs to correct.

Investors and analysts are left to sift through the available data in order to arrive at a viable hypothesis for Gelsinger's removal and Intel's future prospects. I summarized much of this data in my article Intel: The Problems Gelsinger Leaves Behind. Much of this data is incontrovertible: Nvidia's disruption of the data center and its huge Data Center segment revenue growth compared to Intel's relative stagnation. Nvidia's fiscal Q3 Data Center segment revenue of $30.77 billion dwarfed Intel's total Q3 revenue of $13.284 billion.
These facts speak plainly to the failure of Intel's own data center GPU accelerator, Ponte Vecchio, now called the Data Center GPU Max 1550. Released in 2022, it should have been perfectly timed to capture a major share of the data center AI market. But Intel doesn't even list it in its processor data archives any more, which is odd considering that the processor would normally have had a lifespan of several years. And the latest Gaudi 3 AI accelerator isn't going to help either. As I reviewed above, its specs indicate that it's completely inadequate to compete with Blackwell or even the MI325X.

This leaves Intel with nothing to counteract Nvidia, and to a lesser extent AMD, in data center GPUs until its next generation of GPUs, dubbed Falcon Shores, is released sometime in 2025. In March 2023, Intel updated its data center GPU roadmap, indicating that Falcon Shores would be delayed from 2024 into 2025. Intel made big promises for Falcon Shores, but it made similar promises for Ponte Vecchio and came up short, and late. I would not bet on Falcon Shores to stanch the hemorrhaging in the Data Center.

In advanced semiconductor process development and Intel Foundry, the facts are less clear cut, but no less damning. By all appearances, Intel is staying the Gelsinger course, and by implication maintaining that "5 nodes in 4 years" is "on track." This at least was what Intel seemed to want to convey at the UBS Global Technology and AI Conference in December, at which David Zinsner, interim co-CEO, and Naga Chandrasekaran, Chief Global Operations Officer and GM of Foundry Manufacturing, gave an interview.

Zinsner began by saying: "... the Board was pretty clear that the core strategy remains intact. We still want to be a world-class foundry. We want to be the western provider of leading edge silicon to customers and that remains our goal. But we also understand that it's important for the No. 1 customer of foundry to be successful in order for foundry to be successful. And so the board wants to also put emphasis on execution around the product side of the business to make sure that the foundry business remains successful."

I think this should be qualified as "the core strategy remains intact, for the time being." Or until Intel finds a new CEO. I thought it was interesting that Zinsner seemed to put the burden on the Products group to generate more sales. Yet, in Q3, almost all the segment operating loss was in Foundry. Intel's Foundry strategy has a real problem, which is being cost competitive with mature foundries such as Taiwan Semiconductor Manufacturing Company (TSM). And it doesn't help that Intel is still playing catch-up in advanced processes. As I discussed in detail, the cancellation of the Intel 20A process left it with no recourse but to use TSMC's "3 nm" N3 process for its latest Lunar Lake Copilot+ PC processors as well as its Arrow Lake desktop processors.

On process node development, Chandrasekaran was far less sanguine than Gelsinger had been. Tim Arcuri of UBS asked: "But can you talk a little bit, A, about where 18A is vs. where you think it needs to be to sort of intersect the second half of '25 ramp. And B, the thing that I hear from some of the customers, or some of the prospective foundry customers, is that 18A is still a bit more geared toward HPC. And as a broad foundry node, the customers that I talk to are sort of like, 18A is great if you have an HPC application; 14A might be the node that's more broadly applicable to external foundry customers.
Can you talk about that as well?"

Chandrasekaran replied to the first part: "So when Pat announced the defect density D0 less than 0.4, it was a point in time and it was to give the indication that we are progressing as expected. If I look at it today, we are progressing. There are several milestones that we have met and there are still many milestones ahead for the technology development. And if I wear my technology development hat for a minute, there's always challenges when you're introducing new technology and there's ups and downs. But what I would say is there's nothing fundamentally challenging on this node. Now it is about going through the remaining yield challenges, defect density challenges, continuing to improve it, improving process margin and getting it ramped. Will there be challenges? There will be, but I think we are progressing. And next year, as I look at it, primarily the first half will be getting the node into engineering samples into our customers' hands and getting the feedback and starting a ramp in Oregon. And the second half of 2025, our milestone is certifying the node, getting it ramped in Arizona and getting the product on the shelves so that customers can buy it. So that's the milestones and we are working towards meeting all those milestones over the next year. It's very critical for us."

What's notable in the reply is that Chandrasekaran never uses the word "expect" with regard to 18A readiness. Instead, he states that they have goals for production: sampling in the first half of 2025, then a production ramp in the second half. At the same time, he acknowledges remaining yield and defect challenges. So does Intel have an 18A node that can yield sufficiently at production volumes to be a viable manufacturing process? At this juncture, I think the obvious answer is no.
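For context on why that defect density figure matters so much, here's a standard first-order yield model (the textbook Poisson model, with die sizes I've assumed for illustration; this is not Intel's internal model or any disclosed figure), showing how a D0 around 0.4 defects/cm² squeezes large dies:

```python
import math

def poisson_yield(die_area_cm2: float, d0: float) -> float:
    """Poisson yield model: fraction of defect-free dies.

    Y = exp(-A * D0), where A is die area in cm^2 and D0 is defect
    density in defects/cm^2. Real fabs use more elaborate models
    (e.g., negative binomial), but the intuition is the same:
    yield falls off exponentially with die area.
    """
    return math.exp(-die_area_cm2 * d0)

# Hypothetical die sizes: a small mobile die vs. a large HPC die.
for name, area in [("1.0 cm^2 die", 1.0), ("3.0 cm^2 die", 3.0)]:
    print(f"{name} at D0=0.4: ~{poisson_yield(area, d0=0.4):.0%} yield")
# Roughly 67% for the small die vs. roughly 30% for the large one --
# hence the repeated references to "yield challenges" on big dies.
```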
With regard to the second part of the question, it appears that Arcuri is aware that prospective customers are already dissatisfied with 18A and have put off any commitments until 14A is ready. Chandrasekaran continued: "It [18A] can benefit mobile depending on how the designs are done, but because the customer engagement is more later, it doesn't address the full TAM. And 18A, our biggest customer for the next two, three years is still Intel products, which goes back to what Dave was saying. The Intel products, we know the demand, we know what needs to happen and our focus is to ramp it and continue to get more customers on 18A. But all this learning is getting implemented into 14A. So as 14A comes in, there will be a broader market that 14A will address, including compute and mobile and other applications, and also how the PDKs are done, so that it's not just with an Intel focus, but it's also focused on the broader ecosystem taking 14A and applying it to their designs."

Chandrasekaran acknowledges that Intel doesn't have many customers for 18A but expects more interest in 14A. When might 14A be ready? He doesn't say. As I keep saying, what's important in process node development is what a manufacturer can deliver in terms of high volume production, not what it can show in a marketing presentation. Chandrasekaran was refreshingly honest in acknowledging the challenges that Intel still faces in bringing 18A to full production. But he's also hemmed in by the expectations of his management, which prevented him from acknowledging the obvious: 18A probably won't be ready for high volume production next year.

Here, I'll go out on a limb and make some inferences and predictions. My inference is that the board has given Chandrasekaran until the end of 2025 to deliver high volume production. This was done not because the board was satisfied with the state of progress in process nodes, but because Intel had already invested so much money in advanced process and manufacturing. I predict that if 18A mass production doesn't arrive next year, the board will pull the plug on the Foundry strategy, simply because it doesn't see a way to become competitive. One could argue that this would be premature and short-sighted, but Intel's bottom line probably can't sustain more than another year of the Foundry strategy without some sign of a payoff.

This doesn't necessarily mean that Intel would give up on advanced manufacturing. I've argued that Intel's efforts to become a foundry actually made its manufacturing less efficient in the near term. However, there will certainly be a very strong monetary temptation to offload Foundry and go fabless. How this plays out, time will tell. I continue to rate Intel a Sell based on its poor financial performance and uncertain future.

Seeking Alpha: Google's antitrust case was a big development for the company. How does this impact the search giant in 2025 and beyond?

Mark Hibben: Predicting how the incoming Trump Administration will handle an antitrust case initiated by the Biden administration is difficult. Normally, conservatives are hostile to business regulation. However, Trump supporters in the media have expressed hostility toward so-called "woke" companies, and, regardless of how "woke" is defined, such commentators would likely place Google in this category. As such, Google may not garner much sympathy from the new administration.

Subsequent to my article on Google's loss in the antitrust case, the DOJ requested rather draconian remedies, including the spin-off of the Android operating system and the Chrome browser, as well as elimination of "exclusive dealing" contracts with companies such as Apple (AAPL). In my article, I argued that a breakup of Google was unlikely to be granted: "While a possible breakup of Google is appealing to its rivals, I'm not convinced that it will be implemented. The problem here is that it will be difficult to show that a breakup along reasonable organizational lines will be effective in reducing Google's dominance in search and search text advertising."

As free offerings, Chrome and Android depend entirely on Google's search revenue. I pointed out: "The fundamental problem here is that separating the search business from other Alphabet businesses simply leaves the search business free of the financial burden of supporting the other Alphabet businesses. The search business would be free to apply its enormous revenue to maintain its dominance. Any spin-off scenario one can concoct ends up in the same predicament."

While search user tracking is enabled within Chrome, it doesn't require Chrome, since tracking can be done through any browser that supports cookies. Also, Google Analytics probably provides most of the user tracking that Google needs. I'm not surprised the DOJ requested a breakup, but I don't think it will be granted when the remedies phase begins next August.
Under the Trump Administration, the DOJ is likely to back off on the breakup remedies but still pursue the behavioral remedies that I thought likely to be implemented: "I believe that rather than a breakup, which arguably harms consumers, the court will mainly focus on behavioral remedies, such as the abolition of RSAs (Revenue Sharing Agreements). There will likely also be restrictions on auction pricing, since this was clearly abusive. And the proposal that Google not prefer its own services in search results will also likely be adopted."

The most impactful remedy, financially, is the abolition of RSAs, so let's look at that. The cost of the RSAs, including the Apple deal, is reported as Traffic Acquisition Costs (TAC). In fiscal 2023, TAC was $50.866 billion, according to Alphabet's 2023 annual report, or 29% of search advertising revenue. Abolition of the RSAs would actually save Google about $50 billion per year in costs. Instead of default placement in browsers and on the Android home screen, users would need to select from a menu of options at the initial setup of the device or browser. Google would likely lose some search share in this process, but would it lose roughly 30%? That's a hard question to answer, but I think that it would not, at least at the beginning, when competitors are still relatively weak.

Over time, competitors may gain market share. And Apple, deprived of the incentive to do nothing, may pursue development of its own search engine. Also, in the context of future AI-enabled operating systems, search and generative AI are inextricably linked. Apple would likely pursue search as part of its broader AI strategy.

So, in the short term, I doubt that Google is harmed financially by the lack of RSAs. It loses some percentage of search revenue but makes up for it by recovery of the TAC expense. In the near term, I think Google comes out ahead, although the top line will see a year-over-year decline.
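A quick back-of-the-envelope check of that argument (my sketch; only the TAC figure and the 29%-of-search-revenue ratio come from the passage above, via Alphabet's 2023 annual report, and everything else is illustrative):

```python
# Breakeven arithmetic for the RSA-abolition scenario described above.
tac = 50.866e9                  # fiscal 2023 Traffic Acquisition Costs, USD
search_revenue = tac / 0.29     # implied search ad revenue, ~$175B

def net_effect(share_lost: float) -> float:
    """Net change in Google's search economics if RSAs are abolished:
    it saves the TAC expense but loses `share_lost` of search revenue."""
    return tac - share_lost * search_revenue

print(f"Breakeven share loss: {tac / search_revenue:.0%}")           # ~29%
print(f"Net effect of a 10% share loss: {net_effect(0.10)/1e9:+.1f}B USD")
```

On these illustrative numbers, abolition leaves Google ahead (about +$33 billion for a 10% share loss) unless the share loss approaches the full 29%, which matches the near-term reading above.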
Given that the RSAs are viewed by some as illegal "exclusive dealing" contracts under the Sherman Act, I can't see the judge not granting this remedy at the very least. Unless, of course, the new DOJ simply moves to dismiss the case. I continue to rate Google a Hold, but I may upgrade it to Buy depending on the disposition of the incoming administration.

Seeking Alpha: You're also well known for your coverage of Apple. How does the company navigate tricky political and trade issues in 2025 and beyond? And thoughts on Apple and its AI efforts?

Mark Hibben: On tariffs and trade: President-elect Trump has indicated that he will impose tariffs of 60% or higher on Chinese-made products. Since most of Apple's (AAPL) main products, including Mac, iPhone and iPad, are still assembled in China, that could have a major impact. When Trump first imposed tariffs on Chinese goods, consumer electronics, including Apple's products, were excluded. The tariff burden mostly fell on Chinese-made components used by manufacturers in this country. It's not clear that such an exclusion will be made this time around. Probably not.

Apple has been moving to diversify its manufacturing to places such as Vietnam and India, and it will likely accelerate this process with or without tariffs. However, Apple's contract manufacturers such as Foxconn (Hon Hai Precision) have made huge infrastructure investments in mainland China. It could take years to get all of that manufacturing moved out of China. How Apple will respond to the immediate tariff impact is uncertain. Apple's margins are not so large as to absorb the entire cost of a 60% tariff, so most of it would have to be passed on to consumers. I think it likely that the Trump Administration will once again exempt consumer electronics rather than suffer the political fallout of huge price increases being borne by consumers. Trump has vowed to roll back the inflationary price increases of the Biden years, and a large price increase in consumer electronics would run contrary to that goal.

On Apple's AI efforts: I recently posted an article for my Rethink Technology investing group subscribers on my personal experiences with Apple Intelligence. As an Apple developer, I've invested in high-end versions of the latest M4 Max MacBook Pro, the M4 iPad Pro, and the iPhone 16 Pro Max, so I was able to explore Apple Intelligence (let's call it AI for short) on the best available Apple devices. Most AI features are implemented on-device and don't require a connection to the internet. This is in keeping with Apple's emphasis on privacy and security, but it limits what AI can do.

Apple has delivered all the AI features it promised for the end of the year back at WWDC. The main effort still outstanding is a cloud-based version of Siri that uses generative artificial intelligence. Everything I tried out worked, but often not impressively. There's only so much smart you can squeeze into a smartphone, even an iPhone. Users looking for the functionality of a Microsoft (MSFT) Copilot or Google Gemini will likely be disappointed. But those are huge cloud-based generative AI models, and it's unreasonable to compare them to what Apple has done on-device, although consumers may do so in any case. And that's a potential problem for Apple. Consumers may not care about the distinction.

Apple has made a long-term bet that it can weave on-device and cloud-based AI into a seamless whole that "just works." Microsoft is also trying to blend on-device AI with cloud-based AI in its Copilot+ PCs. Apple's main advantage here is Apple Silicon, which continues to make enormous strides compared to competitors, whether they use ARM or x86 architecture. According to Geekbench results, Apple's M4 Max CPU outperforms the latest Intel Lunar Lake and Arrow Lake processors in the multicore CPU benchmark, and Apple's integrated graphics also test out far superior to competitors in the Geekbench OpenCL benchmark. I've personally confirmed these results on my own 16-inch MacBook Pro with the M4 Max processor. The graphics results are particularly relevant since the GPU section can be used for AI calculations.

The biggest problem with Apple Intelligence right now is that Apple has mandated that it be backward compatible to the M1 series, which means that it can't take advantage of the processing power available with the M4 series Macs. Fortunately, Mac users aren't limited to Apple's on-device AI but can take advantage of open source AI models downloadable through an MIT-licensed AI platform called Ollama. I've used Ollama and found that I could run even very large 405 billion parameter Llama 3.1 models on the MacBook Pro. My conclusion is that the latest Apple Silicon Macs are an excellent platform for on-device AI, even if Apple Intelligence doesn't fully exploit them.
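For readers who want to try this themselves, here's a minimal sketch using Ollama's Python client (my addition, not from the interview; the package is installed with `pip install ollama`, and the model tag and response shape shown here are assumptions worth verifying against ollama.com's current documentation):

```python
# Minimal local-inference sketch via the Ollama Python client.
# Assumes the Ollama app is installed and running, and that the model
# tag below has already been pulled (e.g., `ollama pull llama3.1`).
# Larger variants such as the 405B model need vastly more unified memory.
import ollama

response = ollama.chat(
    model="llama3.1",  # an 8B-class tag; swap in a larger tag if your Mac can hold it
    messages=[{"role": "user", "content": "In one sentence, what is on-device AI?"}],
)
print(response["message"]["content"])
```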
As Apple Intelligence software matures and progresses to more capable platforms in the future, users will find it ever more capable and useful. The power of Apple Silicon also bodes well for the server-based version of intelligent Siri to come next year. This new Siri will run on Apple Silicon-based cloud servers. These servers will likely be Apple's secret weapon in the competition with cloud-based AIs from Google and Microsoft.

I continue to expect that Apple's ultimate destination for its Intelligence is a new AI-based user interface in which virtually all computer interactions are mediated by the on-device AI. These are what Microsoft and Google have referred to as "agency" functions, where the AI is allowed to take actions on the device on behalf of the user. But both companies have been very tentative in their approach to agency because of the obvious security implications of having a cloud-based AI control the user's local device. These security concerns mostly go away if the AI is hosted on-device.

Siri already has more agency capability than Microsoft or Google contemplate. Users can turn on WiFi or launch an app just by asking Siri. Voice response is very reliable, and it's all on-device. The more intelligent Siri will use Apple's secure server approach, which allows it to process user queries in the cloud when needed. User data is always sent encrypted and never stored once the query is processed.

Ultimately, I expect Siri to become a fully functional user interface capable of handling almost any function the user might perform on the device by conventional means. Apple is once again pioneering a new computer interface, something that the cloud-based AIs can't do without putting the user and local device at risk of a privacy breach or, worse, a malware attack. While Apple Intelligence may be off to a rocky start, I think it has a bright future. I remain long Apple and rate it a Buy.
[2]
Apple's AI Ambitions Will Finally Take Flight in 2025
The year 2024 turned out to be a big year for Apple. Until last year, everyone mocked the Cupertino tech giant for failing to ship AI-driven features. Interestingly, 2024 turned the tables, as Apple finally stepped into AI. First previewed at WWDC 2024, Apple Intelligence is Apple's name for AI, which now sits at the core of iOS, iPadOS, and macOS. It brings a suite of AI features to compatible devices, which include the latest iPhones, A17 Pro and M-series iPads, and Apple Silicon Macs. However, rather than releasing its AI venture in one go, Apple launched its AI features in waves over 2024, and more will arrive in 2025.

In October, the tech titan launched the first wave of Apple Intelligence features with iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1, which brought Writing Tools, a Clean Up tool, Notification Summaries, a redesigned Siri UI, Call Recording, and more. The next wave arrived in December with iOS 18.2, iPadOS 18.2, and macOS Sequoia 15.2, bringing Genmoji, Image Playground, ChatGPT Integration, Visual Intelligence, and more. While 2024 has been an amazing year, Apple's AI ambitions will finally take flight in 2025. Let's see what more is in store with Apple Intelligence.

Earlier, Apple was criticized for the delayed rollout of its Apple Intelligence system. The giant rolled out the first version of iOS 18 without any AI features. In fact, the iPhone 16 models, the first-ever iPhones built for AI, arrived without any Apple Intelligence features right out of the box. Despite the criticism and mockery of the delayed and staggered rollout, Apple Intelligence maintained its hype and impressed users. Fans were overjoyed when Apple launched its first set of AI features with iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1. Since then, the giant has earned praise for its impressive and practical AI features that can actually make your life easier.

Beyond its features, Apple has made headlines for its smart approach to AI. The giant learned from the mistakes of competitors and launched its AI system with a unique strategy. What differentiates Apple from other players in the AI league is the fact that Apple lets you use AI in your everyday life, with a BIG focus on privacy. Yes, you heard that right! Apple's Intelligence system will maintain your privacy while you use AI features on iPhone, iPad, and Mac. Also, unlike other AI players such as Google and Microsoft, Apple has no plans to charge for its AI features. In short, Apple Intelligence marks a big and bold move that could change how we interact with our devices.

Apple released its first serving of AI features in iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1, and expanded them with the iOS 18.2, iPadOS 18.2, and macOS Sequoia 15.2 updates. Here are all the major Apple Intelligence features available right now:

With Apple Intelligence, Siri has become more powerful and functional, allowing for better conversation. While Apple has rolled out the redesigned Siri with a new UI and ChatGPT Integration, some Siri features are set to arrive in 2025. Since its preview, Siri has been praised for its ability to understand the user's personal context across Notes, Mail, Messages, and more. Therefore, you could ask Siri things like the flight details saved in the Mail app or the lunch plan you discussed with your friend. However, this personal context feature will come sometime in 2025. That's not all.
Apple is also working on an "onscreen awareness" feature, which will allow Siri to understand and respond to what's currently displayed on your screen. For instance, if you've opened a document on your iPhone, you can ask Siri to send the contents to a supported app. Also, Siri will offer more in-app actions, for both first- and third-party apps. For instance, Siri could edit a photo for you and then email it to someone, or fetch a PDF from your email and save it to the Files app. These new capabilities, expected with iOS 18.3 or iOS 18.4 in 2025, would definitely make Siri more capable and interesting.

In 2025, Apple will also roll out the Priority Notifications feature, which will show you the most important or urgent notifications first. With this Apple Intelligence feature, more urgent notifications like a meeting reminder or booking confirmation will be pushed to the top of the notification stack. This way, you know what to pay attention to at a quick glance. This upcoming Apple Intelligence feature will be a savior for users who have a loaded notification center, which often leads to missing the important ones.

Debuting with iOS 18.2, iPadOS 18.2, and macOS Sequoia 15.2, Image Playground brought the first set of image generation capabilities for Apple users. At the moment, Image Playground lets you try out the Animation and Illustration image styles. There's a third style called Sketch, which Apple will add in 2025. Apple describes the Sketch style as "an academic and highly detailed style that uses a vibrant color palette combined with technical lines to produce realistic drawings." Sketch would be different from the Animation style, which provides a 3D cartoon look, and the Illustration style, which creates a flatter, 2D image.

With macOS 15.2, Macs got support for Image Playground, but Genmoji integration is currently only available in the macOS 15.3 beta. As of now, you can create Genmoji on iPhone and iPad. If you wish to try custom AI-powered emojis on Mac right now, you'll have to install the macOS 15.3 beta on your machine. Honestly, installing the beta version on your primary device isn't a practical option, as it might have unexpected bugs, glitches, and performance issues. To use Genmoji on a Mac with a stable macOS version, you'll have to wait until 2025.

With iOS 18.1 and iPadOS 18.1, Apple added the brand-new Memory Movie feature, which creates a story you wish to see. All you have to do is enter a description, and Apple Intelligence will find the best photos and videos that match. It then crafts a storyline, chooses a perfect song, and arranges your photos into a beautiful memory movie with a narrative arc. Currently, Memory Movies are only available on iPhones and iPads. Interestingly, 2025 will bring support for Memory Movies on the Mac.

Currently, Apple Intelligence isn't available in all countries and regions. With iOS 18.1, Apple first launched AI features in English (United States). The tech titan then expanded to English (Australia), English (Canada), English (Ireland), English (UK), English (New Zealand), and English (South Africa) with the recent iOS 18.2 update. In 2025, Apple will roll out support for more languages, including Chinese, English (India), English (Singapore), Italian, French, German, Portuguese, Spanish, Japanese, Korean, and Vietnamese. Support for the first set of new languages will arrive in April 2025 (probably via iOS 18.4), and more languages will continue to arrive at a later date.
With these features, Apple Intelligence will feel complete and offer a more connected experience across the Apple ecosystem. To sum up, 2024 was a landmark year for Apple, marking the advent of its AI initiatives, and 2025 is set to be even more significant, as it should mark the completion of Apple Intelligence.
[3]
Apple Has Lost Its Way with Apple Intelligence
Apple has long been a company that embraces the latest trends with a slight pause, almost as if it's waiting for a thing to be well built out before incorporating it into its products. This has been the case with most advancements in smartphones, for example. Dual cameras, OLED screens, high refresh rates, and almost everything else that Apple hailed as innovation was only brought to its products after other brands had implemented it in theirs. Sure, this strategy makes sense -- let the others do the market research, take the risks of bringing a nascent technology to consumers, and spend millions figuring it out. And once it's matured enough, add it to the next generation of iPhones and call it a day, albeit a massively successful one if you're counting by the amount of dollars it rakes in.

Of course, that meant that it was never going to be Apple that first introduced full-fledged AI features into its smartphones or its operating systems. And it wasn't; not by a long shot. Other smartphone makers kept incorporating more and more AI features into their smartphones over the years. We got Magic Eraser, which wasn't always perfect, but was quite impressive nonetheless and only got better over time. We got the weird photo features in the Pixel that let us take group selfies without leaving one person out of the frame. We got Best Take. There are just a lot of AI features that were implemented in smartphones before Apple made its move.

All that, as I said, was expected. However, it also meant that I, at least, expected Apple's implementation of AI to be something that makes me use the thing, or at least want to use it. So far, it does neither.

When WWDC 2024 arrived, I was excited by the prospect of new Apple software updates and features. This, in itself, is nothing new. I am perpetually caught in a cycle of telling myself I won't install the Developer Betas this year and then caving in and installing them the moment they release. It's a side effect of both my love for technology and the field of work in which I am involved, and it's not a side effect I dislike even a little bit. Taking my interest even further was the anticipation of hearing about Apple Intelligence -- a much-awaited foray into AI for Apple.

To be fair, the company did a stellar job presenting the thing. It was well set up, and the examples shared by various Apple executives seemed interesting and impressive, if not entirely mind-blowing. The examples were all about how Apple Intelligence knows your personal context and works accordingly, blending into the background and doing the work for you so you don't have to. It was AI the way I have always wanted it to be -- invisible, potent, and available for handling the mundane tasks and the things you would usually forget.

Now, I know that Apple Intelligence isn't completely here yet, and there are things that will be coming sometime in 2025. And that's my first gripe with it -- Apple Intelligence was clearly not ready for prime time when Apple announced it. Sure, they started rolling out some features in the betas, and by now we have a lot available in the stable build as well. However, it didn't feel like an Apple rollout at all this time around. Usually, the new iPhones are launched, and the stable iOS update is released in time for the iPhone series. And usually the new iOS update comes with all its features. Usually. This time around, iOS 18 came out with absolutely no AI features.
Which meant the iPhone 16 series was a brand new phone without its most talked-about feature. Sure, the Camera Control button was a new thing as well, but that's a whole other rant for some other day. The truth is, Apple Intelligence was the reason for most people to upgrade to the iPhone 16 series, and it was just. too. late. It's still late, by the way, and we don't know exactly when it will be fully available. And what's more, this is not even the worst of it.

Okay, so that analogy might not work too well, since a lot of old wines would actually be amazing if I could get my hands on them. However, as the saying goes, it does explain what Apple Intelligence feels like to me, at least so far as it has been made available to us. First, there are the features. Broadly speaking, I would classify the AI features in iOS 18 into five categories. Let me explain.

There are two main features that come under this category so far -- Writing Tools and Image Playground. Writing Tools is something that has the potential to be useful; I can see it every time I try to use Writing Tools to rephrase something or get the key pointers out of a block of sample text. However, it's nothing exceptional, and it's definitely nothing new. There are countless online and on-device AI tools that do exactly this. ChatGPT does it. In fact, if you want to use Apple's Writing Tools to do anything more than a very basic rephrasing of your words, Apple usually ends up asking your permission to send your text over to ChatGPT to get the desired results.

Image Playground is basically DALL-E, or Stable Diffusion, or any of the other AI image generation apps and tools that we've seen over the last couple of years. If anything, Image Playground is more limited. You can't create realistic images, for one thing. No matter how much you try, Apple Intelligence will simply not let you create realistic images. One argument for this is to prevent the generation of deepfakes, which I can understand, especially since I have previously written about how easy it is to generate realistic-looking AI images these days. However, Image Playground won't let you create a realistic-looking image of a car, a landscape, or even a tree. Nothing can be realistic. It's either cartoonish, a sketch, or something else entirely -- anything but realistic.

Image Playground also lends itself to the Image Wand feature on the iPad and iPhone, and while it works, it doesn't really work well. For one thing, anything you draw in the Notes app and then use the Image Wand on will undoubtedly come out looking like a sketch. Usually, it's at least a good-looking sketch, but it's always a sketch. Sure. Whatever. However, Image Wand also entirely screws up at times. That's not something I expected from Apple Intelligence, because Apple, being Apple, is usually good at implementing features in a way that they just work. Simply, and often intuitively. Not Image Playground, though.

Sure, Writing Tools and Image Playground were always going to be features that you'd use once in a while when really required and otherwise not really bother with. I get that. And iOS 18 actually does bring some really useful AI features. Or at least, features that had the potential to be really useful. First up, Notification Summaries. Look, I get a lot of notifications throughout my day, and while a lot of them are potentially useless ramblings from certain friends and coworkers (looking at you, Sagnik, and Upanishad), some messages, even from them, are actually useful.
What's more, since most of these ramblings are on Slack, they can sometimes bury actually important messages from our Editor (now you know why I sometimes miss your messages, Anmol, it's these two guys. Do with that information what you will). Notification Summaries, therefore, felt like the savior I was looking for. Unfortunately, Apple Intelligence is very stupid when it comes to summarising notifications. It does hilarious things sometimes, and a lot of the time, it ends up completely messing up the context.

Then, there are the Smart Replies. These show up just above the keyboard based on your conversation, but they aren't very... intelligent. Okay, to be fair, the options it shows usually make sense, but they aren't the kinds of things you'd normally say. Even weirder is that the Smart Replies feature works quite well in the Mail app, where it even shows follow-up questions to further fill out the response, but it doesn't do so in iMessage.

Third, and last, is the Clean Up tool. This is another new addition to the Photos app, and it's basically Magic Eraser, but gone wrong. It does work on the easy stuff, but it's nowhere near what you'd get with Google's Magic Eraser. That makes it a feature I'd not use very often, especially since Magic Eraser is available on the iPhone through the Google Photos app anyway.

One thing I have noticed with smartphone brands bringing AI features to their devices is that they always incorporate some AI features that look really cool and are fun to play around with, but aren't the kind of thing you would actually use. Novelties, in other words. Apple Intelligence has these as well. Particularly, I'm talking about the Genmoji feature. Credit where it's due, Genmoji works really well. And usually, you can get the emoji you want by describing it in simple terms. That means you get the ability to send custom emojis to your friends, and since these are added as "stickers" in your keyboard, you can actually use them outside of iMessage too, including on WhatsApp. That's great. However, after the initial few days of trying out Genmoji and creating a few, I have neither created new ones nor found myself reusing the ones I created.

Features like these are great for getting people talking about the cool new thing. Not everyone keeps up to date with all the features in a new software update, and having people talk about "did you see this cool new emoji I made with AI" is a clever way of ensuring more people find out about the feature and are pulled into wanting to buy the new iPhone that supports Apple Intelligence. At the end of the day, however, whether you buy the iPhone 16 for Apple Intelligence or not, the truth is you'll not really use Genmoji beyond a couple of days. But for Apple, Genmoji will have done its job -- getting more people excited about Apple Intelligence, and making more people want AI on their iPhone.

Having Apple Intelligence on your iPhone isn't going to fundamentally change the way you use your device. That much is obvious, and I would actively ask people not to upgrade to the latest iPhone if they're doing it solely for the AI features. However, Apple Intelligence does have some good things as well. The call recording feature is really good. It's something that Android phones have been doing for years, and something for which iPhone users have had to rely on third-party iPhone call recording apps. Now, you can just tap the call recording icon on your call screen and start recording any call.
That's not exactly "AI", and in fact, the feature is available on phones that don't support Apple Intelligence as well. What the iPhones compatible with Apple Intelligence do bring to the table, in relation to call recording, is call summaries. These show up in your Notes app, where you can quickly view a summary of your entire call. I have yet to find a situation where I wanted a summary of a recorded call, but that's just me. I'm sure loads of people regularly have conversations that they not only want to record, but also want summarised for a quick overview later.

Speaking of "later", there are still more features that aren't even here yet. As I mentioned earlier, Apple Intelligence was clearly not ready for prime time when Apple announced it, and nothing makes this more evident than the fact that we're six months on from iOS 18 being announced and almost three months on from the release of the stable public build, and a lot of Apple Intelligence is still "coming soon". Case in point: photo editing with Siri. With Apple Intelligence, you can, apparently, open up a photo, launch Siri, and ask it to make edits, such as "make this photo pop!". Right now, this doesn't work. It simply shows a search result for an image editing app that has the word "pop" in it, or generates a reply from ChatGPT informing you how the photo can be made to pop. There's a lot more that's still coming soon, and as per Apple, it will roll out with software updates "in the coming months". So there's really no fixed timeline for it either.

We also have ChatGPT integration, which in itself is actually fairly useful and can make Siri do better at certain things than it normally would. However, there's a limit to the free ChatGPT requests you can make, though you can subscribe to ChatGPT Plus for $20 every month. Again, credit where it's due, Siri does tell you when it wants to share information with ChatGPT to provide you with better answers, and you can choose to share or not share information at that point. That is at least better than simply not knowing whether your data is being processed on-device, on Apple's own cloud (with Private Cloud Compute), or on OpenAI's servers.

So far, Siri will utilise ChatGPT's prowess if you use the "Describe Changes" feature in Writing Tools, or if you ask Siri to describe a photo. This, by the way, still isn't available in the stable public builds of iOS 18, but it should be making its way to users soon; it is, at least, available in the beta. And ChatGPT is actually good at doing most of these tasks. I'm not entirely sure I am comfortable having my photos shared with ChatGPT in order for them to be described, but I also don't know if that's something the A18 Pro's NPU is capable of doing -- probably not.

Look, I don't hate Apple Intelligence. If anything, I'm still cautiously optimistic about the thing. I am just surprised by the rather messy rollout it has had so far. Some features are here in the Developer and Public betas, some features are here in the stable iOS 18 builds, and some features are simply "coming soon." I expected Apple Intelligence to be here faster, if not exactly in time for the first public release of iOS 18. I expected Apple's implementation of AI to work better than what I have already seen from other smartphone makers. And I definitely expected the features that are here to work better than they do. Especially Notification Summaries... god, I had such high hopes for Notification Summaries.
[4]
2025 will be the year of Apple Intelligence (again)
As a sports fan, I'm besieged with ads for gambling these days. Sports media is full of experts who are happy to claim they know who's going to win and who's going to lose, but of course, if they really had all the answers they'd be rich and not flogging their predictions. What I'm saying is, nobody knows anything. And while I've been covering Apple since time immemorial (okay, the 1990s) and predicting in this space for a decade, let's just say that nobody's perfect. Still, it's fun to think about the blank canvas that 2025 offers us. Here are my predictions for what's to come in the next year. As always, no wagering.

The simplest prediction one could make about Apple in 2025 is this: just as 2024 was the year of Apple Intelligence, so too will 2025 be the year of Apple Intelligence. Apple's crash project to add AI models to all aspects of its software got a brand name in 2024, but the work is far from over. Apple will spend the first half of 2025 making good on its remaining unfulfilled promises from WWDC 2024, and then in June it'll make a whole new year's worth of promises. That's as close as anyone can come to a stone-cold mortal lock of a prediction. It will take years for Apple to take its foot off the gas when it comes to Apple Intelligence, because it's at least a few years behind some of its competition.

It's also not much of a prediction to say that Apple Intelligence will continue to be the same mishmash of useful and useless features it has been up to now. I'm not sure anyone in the tech industry really knows which AI features will blossom into game changers and which will be duds. So for now, everyone just keeps throwing spaghetti against the wall. Apple's got several pots full of spaghetti still on the boil. And yet, after all of this, by the end of 2025 Siri still won't be as good as it should be.

All of Apple's OS updates in 2025 will be primarily focused on, you guessed it: Apple Intelligence. The company will need to ship its promised features that leverage its on-device index of your personal information and take advantage of App Intents to control other apps. My guess is that those releases will be extremely limited in terms of scope and functionality. But they're something to build on, and I'd be surprised if next June there aren't major announcements extending the ability of Apple Intelligence to learn about your personal data and control your apps.

Ever since June, Apple has been making noises about supporting third-party AI tools other than ChatGPT, but there haven't been any announcements. In 2025, I expect the company to sign up at least one partner beyond OpenAI, and maybe more than that. Support for additional third-party chatbot providers will probably debut in the fall with an early version of iOS 19 and macOS 16.

None of Apple's ancillary products got Apple Intelligence this year, but 2025 might be when we first see some signs of that. visionOS 3 will probably add support for Apple Intelligence, and I think there's a decent chance that a new HomePod Mini will include explicit support for Apple Intelligence. The real surprise debut of the year will be a new home product, which (as has been reported by Mark Gurman at Bloomberg) will be a small, iPad-like display running a custom Apple-built OS designed to be a home controller and ambient display. I like the idea that it'll be modular, with an optional speaker dock or a mount to hang it on your wall. And it'll obviously also bring Apple Intelligence to the party.
What will it be called? I have no idea, but I'll put down 20 quatloos on... "the new HomePod." (Sorry, old HomePod!) Those searching for additional hardware will be disappointed. There won't be a new visionOS device in 2025, nor will there be an Apple answer to Meta's Ray-Ban glasses (call 'em AirPods Specs), even though there should be.

It's a good thing that Apple Intelligence is all-consuming, because it's shaping up to be a fairly quiet year for Apple's core hardware product lines. Yes, in the spring we'll get an M4 MacBook Air, but it's unlikely to be different in any appreciable way beyond getting an improved webcam. We'll also likely see M4-powered updates to the Mac Pro and Mac Studio, though I'm anticipating that they won't be particularly exciting ones. Similarly, M5 MacBook Pros will debut in the fall because that's what's required, but they're unlikely to offer improvements over this year's models beyond the chip itself.

The iPad, too, seems primed for a dull year after a year full of excitement. A new low-end model will likely be introduced, but that'll hardly move the needle. On the iPhone, things will similarly be static for two of Apple's three phone models. Maybe the iPhone 17 will pick up a higher refresh rate screen, and the iPhone 17 Pro and Pro Max will see slight camera improvements (always a safe bet). The big news will be the replacement for the iPhone Plus, in the form of the iPhone 17 Air. The iPhone 17 Air will cost less than $1,000 and will be the thinnest iPhone ever, but with enough technical compromises to infuriate a whole bunch of tech nerds. (It'll still sell better than the iPhone Plus or Mini ever did.)

At some point in 2025, Apple's total profit from its services will surpass its profit from products. While it's not fair to say that Apple's going to lose its soul at that moment (the fact is, most of Apple's services revenue is directly tied to its success selling hardware), it's worth pondering just how important the services budget line has become to Apple. Apple will continue to invest in its services in 2025, of course. But I think those investments will be incremental in nature: a big film here, a bunch of prestigious TV shows there, maybe some new Fitness or News content. But while Apple will be rumored to be in the running for a major chunk of sports rights or even a movie studio, in the end I don't think it will buy anything that large. (If Apple buys Disney or Warner Bros. in 2025, well, we're all going to look back on this column and laugh.)

Speaking of services revenue, two of Apple's biggest revenue drivers in the category are browser referral money from Google and revenue from the App Store. I think it's safe to assume that the Google deal and Apple's App Store policies will continue to be under assault from regulators in 2025. I'll also predict that the company will continue its policy of fighting attempts to change its business model tooth and nail. Will a record fine be levied against Apple in 2025? I wouldn't bet against it, but as I warned you earlier, it's probably not smart to bet on this stuff. There's no such thing as a sure thing.
[5]
The good and bad of Apple Intelligence after using it on my iPhone for months
Whether you love or hate it, AI doesn't appear to be going away anytime soon. In fact, AI is evolving quite rapidly, and it's now in the palms of our hands with our smartphones, as Google, Samsung, and even Apple have now fully embraced our AI future. Though Apple was late to the game with Apple Intelligence, the company majorly hyped it up for the iPhone 16 launch in September, even though, amazingly, it did not roll out until October with the iOS 18.1 update. The staggered release schedule for Apple Intelligence confused many consumers as to why they did not have Apple Intelligence immediately with their iPhone 16 purchases, and it felt like a big misstep from Apple.

But now that we've all had access to Apple Intelligence for the last few months of 2024, I have to say that it hasn't made as big of an impact on my iPhone usage as I originally thought it would.

The AI that Apple got right

There are a lot of features that Apple packed into Apple Intelligence, but so far, I've only found a few of them actually useful in my daily usage. For one, the Clean Up tool has been very helpful when I need it. I've always been annoyed that prior to iOS 18, iOS users would have to download some kind of third-party photo editing app in order to get an object-removing tool, which is usually locked away behind a paywall, too. Meanwhile, Google has had the Magic Eraser tool since the Pixel 6 series, and Samsung has its own Object Eraser. But until iOS 18, Apple users were left in the dust.

I don't necessarily need to use Clean Up every time I want to share a photo, but it has been very useful to have when an image needs a touchup. Removing pieces of trash on the ground, power lines from a beautiful sky background, small scuffs and other imperfections, and various strangers passing by -- Clean Up does a great job with these things. Previously, if I needed to edit a photo to remove something, I'd have to do it in Google Photos on my iPhone 16 Pro or even use my Pixel 9 Pro. But now that Clean Up is available, I no longer have to juggle various apps or phones to get the job done.

Another Apple Intelligence tool that I like is Visual Intelligence. This feature is exclusive to the iPhone 16 line, as it requires the Camera Control button, and for me, it has made the button worth using. This isn't a feature I use dozens of times every day, but I have encountered some situations where it is convenient -- for example, identifying plants or animals and translating text. I'm surprised it took Apple this long to integrate such a feature, as it's just like Google Lens.

What Apple got wrong

I was excited to check out more Apple Intelligence features when I got the iOS 18.2 update on my iPhone 16 Pro. But aside from what I've already mentioned, the rest isn't as exciting. I already hate AI art in general, so I wasn't too thrilled about Image Playground. However, since it's a new feature, I had to try it at least once. I tried to get Apple Intelligence to generate an AI image of me, in various scenarios, to perhaps share on social media. But every result I got did not look good to me, and I felt it had no actual resemblance to me. It kept giving me odd-looking teeth in my smiles, hair that looked nothing like what I had, and other imperfections.
I wasn't expecting a perfect picture, but I was hoping I would get something decent enough to share online -- dozens of tries, and I wasn't happy with any of them. I suppose my appearance doesn't work with Apple's AI art style? Whatever the reason, my experience with it hasn't been positive.

Genmoji, on the other hand, is pretty fun to use. I often send emojis in my chats, so creating some unique ones that I can't get with the regular emojis is fun to mess with. And the fact that they show up in your "recently used" emoji can mean fast access in the future.

I feel similarly about the AI tools for text, though summarization is nifty even if I don't use it much. As a writer myself and someone who enjoys writing in general, I'm not a big fan of any AI writing tool. Plus, if you have your own writing style, the AI-generated text will look out of place anyway, as it usually tries too hard, especially the professional tone.

And while Siri got a little smarter with iOS 18, it's still not good. It still doesn't seem able to handle multi-modal requests, so hopefully that comes sooner rather than later. But even with some basic things, Siri gets confused easily. Compared to the competition, there is still a way to go. Adding ChatGPT support was a good idea, though.

Much ado about nothing

In the end, I think Apple's staggered rollout of Apple Intelligence did more harm than good. A lot of people bought the new iPhone 16 devices because they wanted these AI features, which Apple marketed heavily in stores, but the features didn't even launch with the devices. So everyone, myself included, continued to use the iPhone 16 and iPhone 16 Pro like their predecessors. A month after the launch of the iPhone 16, Apple finally started to roll out Apple Intelligence, but not all of the features -- just a few of them. We only got Clean Up, Writing Tools, Summarization, priority messages in Mail, and a slightly improved Siri in iOS 18.1 in October. With iOS 18.2 in December, we finally got Image Playground, Genmoji, Visual Intelligence, and ChatGPT integration.

This is a slow rollout of AI features that Apple's biggest competitors have already offered for months. And at this point, aside from a few cool tools, it just feels like Apple Intelligence is already losing its luster. Apple Intelligence hasn't affected my overall use of the iPhone 16 Pro, as I'm still primarily using it like my iPhone 15 Pro from a year ago. That's not a bad thing for me, but it's also not a great look for Apple Intelligence's future.
[6]
The best (and worst) AI phone features in 2024
Here are the AI-powered features that impressed -- and the ones that fell short

There's no guesswork required to figure out the biggest trend in smartphones this past year. Artificial intelligence led off and finished just about any conversation focused on the new phones that launched in 2024 -- especially when it came to the latest models from the biggest smartphone makers. The year began with Samsung announcing new Galaxy AI features as part of the Galaxy S24 launch and then quickly rolling out those AI-powered capabilities to other recent Samsung flagship phones. (Even the midrange Galaxy A35 features Circle to Search support, making it one of the few sub-$400 phones to offer at least some AI capabilities.) By the end of the year, Apple had gotten into the act, finally rolling out a suite of AI tools of its own. And we're not done with the Apple Intelligence launch, either, as future iOS 18 updates figure to bring other promised improvements to the latest iPhones. Then there are the Google Pixel phones, long established as the leaders when it comes to AI features. That lead was only cemented by the August release of the Pixel 9 lineup, as the new Tensor G4 silicon powering those phones introduced additional AI capabilities to the mix.

So yes, smartphones gained plenty of AI superpowers in 2024. But how many of those powers were actually super? I'm a bit of an AI skeptic when it comes to new features, more inclined to the "Nobody asked for this" stance than to believe that an AI feature is a big step forward. I see the value in AI on the phone -- anything that takes care of repetitive tasks or fits into my current workflow gets a thumbs-up. Anything that comes across as a glorified parlor trick with more sizzle than steak, I can do without. Among the AI features introduced to phones this past year, a fair amount wound up impressing me -- more than I would have guessed when I started jotting down the AI improvements I liked. But there are certainly a few that need to go back to the drawing board... or maybe not even be on my phone at all. Here's one person's take on the AI features that made the grade in 2024 and the ones that failed to impress.

I find myself on a lot of email chains involving friends, school parents and different organizations, and I'll be honest -- a lot of times I don't have the time to read each message as it comes in. Or even worse, I'll have to refer back to a long string of emails and track down the exact one that has the pertinent information, which can be like finding a needle in a haystack when there's a lot of back and forth to sort through. That's the way things were before Apple Intelligence arrived, though. Now, the built-in Mail app has a summary tool at the top of each message. Tap it, and you can get a recap of the key points in any email or string of messages. Even better, the summaries are pretty accurate, so I can be sure that I'm getting the gist of what I need to know.

This search aid developed by Samsung and Google made its debut on the Galaxy S24, but it's since fanned out to other Android phones. All you have to do is long press the home button and then circle or tap the thing you want to search (usually an image, but you can select text, too). You'll then get results in a pop-up window -- handy, because you can stay in the app you're already in, without having to retrace your steps when your search is complete.
When Circle to Search debuted, I worried that it would be a glorified way to push you to e-commerce sites -- and indeed, the tool is very helpful if you see something you'd like to buy and want to track down where you can buy it. But over the year, Circle to Search has grown into a great tool for looking up things on the fly without having to stop and jump to another app. Online search may be degraded these days, but Circle to Search really helps cut through the cruft to find the information you want.

If you're like me and you take a lot of screenshots on your phone to remember things, the Screenshots app that arrived with the Pixel 9 has been a welcome addition to Google's phones. Yes, the app is a one-stop repository for all the screen-grabs you capture on your phone, but it wouldn't be that impressive if that's all Screenshots did. You can also search for text within the screenshot, and the AI on board the Pixel is smart enough to find exactly what you're looking for. Screenshots has other management tools that make it a wonderful addition to Google's phones. You can annotate screenshots, sort them into collections and easily share screenshots with other people. But the best feature is the ability to set a reminder about a particular screenshot so that it surfaces at a particular time. When I'm going to an event, for example, I can set a reminder for a screenshot with the registration details, rather than have to search through my inbox for the exact email that has that same information. Screenshots can be a real time-saver, as it fits exactly into how I store information for later reference.

Lots of mobile phones offer translation features, but Samsung's Galaxy AI has arguably the best implementation of it with live translation through the Interpreter app. If you do a lot of traveling, this is an essential tool to have at hand, especially since you can download different languages onto your Galaxy phone so that the feature works without requiring a network connection. Once you've selected your languages and started a conversation, your Galaxy device will listen to what you're saying, then repeat an audible translation that it also displays on your phone's screen. If you want, you can make the transcript of the translation appear upside down on the phone so that both you and the person you're speaking with can see real-time translations without having to hand the phone back and forth. Galaxy AI also supports a Live Translate feature for phone calls that provides on-the-fly translations when you're talking on the phone with someone who speaks another language. This tool doesn't work quite as seamlessly -- there are a lot of pauses as you wait to hear the translated speech, and it seems to work best if you speak less casually. But it's still an impressive display of on-device AI working to extend what you can do with your phone.

Call recording is now a feature on the Phone app of every major flagship device. (At least it will be on Samsung's Galaxy flagships once One UI 7 gets released to more phones.) And where there's call recording, there are usually AI-generated transcripts of phone calls to go with those audio files. If that's a feature that's important to you, you won't find a better implementation than Call Notes on Google's latest Pixel phones. What I appreciate about Call Notes is its summary feature. (iPhones have a summary tool for phone call transcripts, too, though only on devices that support Apple Intelligence.)
With the tap of a button, the AI on board your Pixel can highlight the key points of a phone call, saving you the trouble of having to dig through the transcript yourself. When I tested Call Notes for a Pixel 9 Pro review, I found the transcript prone to misheard words, which is not an uncommon problem with transcription features on mobile devices these days. But the summary tool is accurate enough that I'm confident the quality of Call Notes transcripts will improve over time.

Perhaps the most welcome addition brought by AI to any phone is the beefed-up photo editing tools that take some of the more challenging touch-up work out of your hands. I don't know about you, but I'm not a trained photo editor, and I don't want to have to learn the finer points of an image-editing app just to take care of some glare in a shot or to make the colors pop a little bit more. You'll find cool image-editing capabilities in any of the best camera phones that came out in 2024. (You'll also find a couple of duds, which we'll talk about in a bit.) All iPhone 16 models now offer a Clean Up tool in the Photos app that does a solid job of removing unwanted objects or people from a photo. Yes, that's a feature Android phones have long had, but it's nice to finally see it on at least some iPhones. I also appreciate the edit suggestion feature that the Galaxy S24 introduced. Again, I'm not an expert at image editing, so when I see something like a distracting shadow, it's nice to turn to an AI tool to remove it or add a background blur.

Google has been adding tools like this to its Pixel phones for several generations now in the form of things like Magic Editor and Best Take. And because those features have been such a hit, it's harder and harder for newcomers to make a similar splash. Still, the addition of Reimagine to Magic Editor expands that feature's toolkit in a helpful way, giving you the ability to use descriptive text to direct AI to make tweaks and improvements to your images. AI is still doing the heavy lifting, but ultimately, control over the look of your images is still in your hands.

While we're talking about image-editing tools, let's carve out a special space for my favorite addition -- the Instant Slow-Mo capability that debuted with the Galaxy S24 series. Part of the dilemma I face as an amateur photographer is knowing when to plan for an effect ahead of time -- in this case, knowing when I should be shooting slow-motion video instead of capturing video at its standard frame rate. The Instant Slow-Mo feature takes that guesswork out of my hands. Now I can go into the Gallery app on a Samsung flagship phone and tap and hold on the part of a selected video that I want to slow down, lifting my finger when I want to speed things back up. Adjusting the speed of the effect is easy, and I can even fine-tune when the slow motion kicks in. More importantly, I can capture footage spontaneously without having to think about what I need to do in post, as the AI powers of Instant Slow-Mo fill in the missing frames to complete the slow-motion effect.

Samsung clearly had the best year in terms of AI features, and it didn't stop with the Galaxy S24's arrival way back in January. The Galaxy Z Fold 6 and Galaxy Z Flip 6 introduced us to Sketch to Image, which takes drawings you make in different apps and uses generative AI to convert them into realistic illustrations.
Sketch to Image works best when you've got a modicum of talent and an S Pen-compatible phone, but the feature also supports primitive sketches you make with your finger. Best of all, Samsung extended this capability to the Galaxy S24, Galaxy S23 and Galaxy S22 as well as some older foldables, as Samsung has been pretty generous when it comes to delivering new AI capabilities to existing models.

I saw a wag on Bluesky dismiss smartphone AI features as glorified spellcheck, and while I think that's a bit harsh in some cases, it's uncomfortably close to the mark for the Writing Tools introduced via Apple Intelligence. Writing Tools are supposed to improve your writing with shortcuts for changing tone as well as commands that check... well... spelling and grammar. And they fill that brief in the broadest possible sense, though not always in a way that flatters your writing. Let's give some credit where credit is due. If you're composing a business letter, the Professional preset in Writing Tools is familiar enough with the formatting and style of this very structured form of writing to convert your draft into a competently crafted text. The Describe Your Change feature added in the iOS 18.2 update also gives you more control over the changes in tone that Writing Tools will impose on your writing -- not a bad development in the greater scheme of things, and a sign of how Apple can improve tools even after they're released. But other Writing Tools presets are content to replace a few words with synonyms and strip out any type of personal voice from your writing. The end effect is usually text that doesn't read like a human being composed it -- and that's the opposite of the impression you should want to make in an email or a report.

While we're throwing shade Apple's way, let's save some raspberries for Genmoji, the tool that creates customized emoji based on text descriptions that you offer. The feature does what it says in the description: you can indeed tell Apple Intelligence to whip you up an emoji of a fox dressed as an astronaut or a friend of yours clutching a dollar bill. But -- and I think this is a question we need to ask of any AI feature -- to what purpose? I say this as someone who has little use for emoji: a picture may be worth a thousand words, but words are still pretty useful when it comes to communicating with other people. I have a hard enough time deciphering the teeny-tiny emoji that are commonly used in text messages; now I've got to figure out what someone messaging me means when they send me a mushroom wearing a top hat? Let's file this one under AI party tricks and spend our time on more edifying AI capabilities, please.

While we're on the subject of tools that show off the power of generative AI but serve little practical purpose, let's talk generative image creation -- features like Image Playground on the iPhone or Pixel Studio on the Pixel 9. With these capabilities, you can use text prompts to create an image that matches your description. On paper, that's an intriguing capability, and the results from either Image Playground or Pixel Studio impress up to a point. But the limitations are so severe (Apple's Image Playground only supports two different styles at the moment, for example) and the final products so similar that, apart from texting your output to friends, you likely won't have much use for the images. I used Pixel Studio a lot leading up to my Pixel 9 Pro review, and since then, I haven't even touched the feature.
That's not really an argument in favor of how essential generative image creation tools can be on the phone.

It's only fair that we spend some time on Galaxy AI's lesser capabilities, and for me, that's Chat Assist. Like Writing Tools on the iPhone, Chat Assist looks to fine-tune the tone of any text messages you send -- making texts to co-workers sound a little more professional and texts to friends and family a little less stiff. The end result is usually texts so unnatural, your friends and family will wonder if they're being texted by your captor. I took Chat Assist out for a spin earlier this year to see if messages tweaked by Chat Assist came across as ones that I would have composed on my own. A few AI-crafted messages slipped by my test subjects, but they were mostly formal ones or ones sent to people who knew me casually. Close friends and family will spot Chat Assist-tweaked texts the moment you press send, so don't even bother using this feature for the most common form of messaging. And that makes me wonder why it's even included with Galaxy AI at all.

As adept as Google is at adding AI-powered tools to its phones, it's due a few turkeys of its own, and this year's Add Me addition really didn't measure up to past Google updates. If you've forgotten, Add Me allows you to join the group shots you've taken by letting you hand off the camera to someone else and using AI to dictate where you should stand. Add Me then takes the two shots -- the one with you and the one without -- and combines them into one seamless photo. It's a solid idea, but the output isn't always as seamless as it should be, at least not when I've tried to use Add Me. The trouble lies with handing off the camera to someone else. Unless they angle the phone in the exact same way and at the exact same height, I've found my results tend to be off -- in one photo with some family members, I ended up looking like an oversized giant dropping in on a collection of hobbits. Maybe Add Me works more seamlessly the more familiar you get with the feature and the more you're able to account for any quirks in how different people frame the same shot. But the results I've seen are too erratic for me to trust Add Me.

Some AI additions to phones are neither good nor bad -- they're just incomplete. That's how I feel about the AI enhancements to the digital assistants on board most major flagships, as there have been some strong improvements in the past year with the more significant additions yet to come. Google's digital assistant, now replaced by Gemini Live, is probably the furthest along, as it's incorporated Google's AI chatbot to strong overall effect. The change has made the Pixel's on-board assistant easier to talk to and more capable when it comes to understanding the context of your questions. And if you've bought one of the three Pixel 9 Pro models, you also get a year of Gemini Advanced and its more complex features at your disposal. Siri is getting an Apple Intelligence revamp, too, and so far that's meant a new design -- the face of your iPhone flashes when Siri is listening -- plus integration with ChatGPT, if you opt in to that. Siri supports more natural language, but the real improvement will come when the assistant understands and acts upon whatever happens to be on your screen -- a feature that we're expecting to arrive in the coming months. Samsung is promising much the same thing with its One UI 7 update, which is in beta now ahead of its launch alongside the Galaxy S25. That's likely to occur later this month.
A context-aware assistant is the key for many smartphones, so we'll be better able to assess who's got the smartest assistant once that crucial piece is in place.
[7]
We Asked If "AI Was Really Useful on Smartphones in 2024" and Here's What You Chose
If I were to ask you which tech trend dominated 2024, I am confident you would say AI. The term remained the buzz throughout the year, with all the tech titans taking their best shots at making their AI products stand out. Smartphone makers also boarded the hype train, plastering it all over their marketing. But have AI features proven useful on smartphones in 2024, or are they just gimmicks that should've been avoided? Let us discuss.

This year, almost every smartphone brand jumped on the AI bandwagon before it left the station. New devices came out with sets of "smart" features, each more outlandish than the last. And somehow, generative AI got roped in as the new marketable highlight. It started with Google's Pixel 8 series in 2023 and carried forward to this year with Samsung's S24 lineup. The company's Galaxy AI features came out of the box in 2024 with all of its flagship smartphones. It was the series' highlight, with every promotional piece saying "Galaxy AI is here". It included features like voice recorder transcripts and summaries, a writing assistant, real-time language translation on calls, Google's Circle to Search, an AI image editor, and generative wallpapers.

In the following months, other brands were quick on their feet to integrate generative AI features into their devices. London-based fashion/tech startup Nothing introduced some AI features for the Nothing Phone (2a), like ChatGPT shortcuts and AI wallpapers. Meanwhile, OnePlus and Xiaomi pushed out AI features of their own, bringing them to an affordable tier of smartphones. Apple usually waits for the dust to settle before jumping in. But they surprised everyone this year by announcing Apple Intelligence (hmm, clever?) at the WWDC event in June. This brought an AI-powered Siri, new Writing Tools, Genmoji, ChatGPT integration, Image Playground, notification summaries, and more to the iPhone 15 Pro and the 16 series. Google quickly followed with the Pixel 9 series, focusing heavily on AI once again. It was a display of the company's achievements so far, introducing new AI breakthroughs such as a Pixel Screenshots app that can generate summaries from captured images, Pixel Studio to generate images, Best Take, Circle to Search, Add Me, call transcriptions, and more.

The fact is, none of these AI features address any big-picture issues. Instead, they are mere solutions looking for a problem. A good example would be this Apple ad showing an office lackey sending a professional email using their iPhone's Writing Tools. Or this Google advert where someone asks for an image of a "Flamingo wearing a hat" because why not? These are just some of the many outlandish situations shown in promotional materials to sell the value of AI wizardry.

But outside the marketing material, are people finding these features useful? We conducted a poll on X and Instagram to get a better idea, asking, "How often do you use AI on your phone & what do you use it for?" From the votes we received on X, 33% sided with "Occasionally", while 30% opted for "Rarely". The result was nearly identical on Instagram. It makes clear that people are somewhat enthusiastic about these additions to their devices. However, like us, they don't find the current feature set that helpful in everyday life. The most frequently used feature was Google's Circle to Search, followed by ChatGPT for understanding complex topics and ideation, while a few of you mentioned image editing as one of the use cases.
Besides these, options like AI image generation, transcription, and summarization were not mentioned even once. For now, the current set of AI tools on smartphones seems like nothing more than gimmicks. It's a similar story to how Samsung advertised air gestures back in the TouchWiz days. And that's not just my opinion, but something voiced by many across the internet. There are several threads, like this one on Reddit, discussing how AI has yet to prove its merits on smartphones. As per this Sellcell report, 73% of iPhone users and 87% of Samsung users say AI features add little to no value. Only a few features, like Writing Tools, notification summaries, and image cleanup, are ones the majority of people find any use for, as is evident from this graph. The notion was similar for Galaxy AI features as well.

As someone who lingers around tech most of the day, I have the most exposure to the latest smartphones that come with all the fancy AI advantages. I get to daily drive these phones, but after the review period, I don't find any need to revisit them. I tested out the latest Pixel 9 (review) upon its launch. While I was excited about all the fun stuff Google had packed in, it didn't prove to be very useful after the honeymoon phase was over. Personally, I only use a select few options, like the aforementioned Circle to Search and Gemini Live. Circle to Search is useful for finding links to cool items that I discover while scrolling through Instagram. However, it isn't something that I would consider to be AI. In its current iteration, it feels like a glorified shortcut to Google Lens and its reverse image search capabilities. Then there's Gemini Live, which I initiate conversations with only when I'm bored at home on weekends. It does a pretty good job of mimicking a human, down to the little nuances. But the conversation doesn't last that long, since the AI doesn't have an interesting personality. It only tries to mimic one while making sure it adheres to company policies, leading to many awkward silences from my end. But there is more hope for the future as this technology improves.

I asked around the Beebom office, and the consensus was the same. At the moment, AI features on smartphones are flashy, but not functional. No brand, I feel, has caught lightning in a bottle with this new tech, which is why it remained nothing more than a novelty this year. While 2024 didn't turn out to be as fruitful a year for AI implementation on smartphones as expected, I am hopeful for the future. The technology could be limitless, and this year was just the start. And it's not like we didn't see any good results from it. OxygenOS 15's Reflection Eraser and Nothing OS' ChatGPT shortcuts are some of my favorite features from this year.

People are now aware that their phone has AI capabilities, and it's just a matter of time before Google, Samsung, and Apple understand what people are looking for. Maybe they'll finally look past removing people from the background and think about filling the empty spaces (not just literally!) with something more creative. Next year, even more phones in all price categories are set to feature AI functionality in one way or another. And with more players, I hope that someone will crack the code of making these features into useful essentials that become a part of our daily routine -- something like Google search, social media apps, or the Reels we just can't stop scrolling.
Apple's delayed entry into AI with Apple Intelligence shows promise but faces criticism for its staggered rollout and mixed user reception. The tech giant aims to expand its AI offerings in 2025, balancing innovation with privacy concerns.
Apple, known for its cautious approach to new technologies, finally entered the AI race in 2024 with the introduction of Apple Intelligence. This move came after years of criticism for lagging behind competitors in AI-driven features [1]. The company's strategy involved a staggered rollout of AI features throughout 2024, with more planned for 2025 [2].
Apple Intelligence brought several new capabilities to iOS, iPadOS, and macOS devices:
Writing Tools and Clean Up: These features have been well-received, offering practical AI-powered text and image editing capabilities [5].
Image Playground and Genmoji: While Genmoji has been fun for users, Image Playground has faced criticism for its limitations in generating realistic images [3].
Visual Intelligence: Exclusive to the iPhone 16 series, this feature has been praised for its convenience in identifying objects and translating text [5].
Siri Improvements: Despite some enhancements, Siri still lags behind competitors in handling complex queries [5].
The delayed and staggered rollout of Apple Intelligence features has been a point of contention. Many users who purchased iPhone 16 devices for AI capabilities were disappointed by the initial lack of features at launch [3]. Some critics argue that Apple's AI offerings, while promising, are not groundbreaking compared to existing technologies from competitors [5].
Apple is expected to continue focusing heavily on AI in 2025:
Expanded Siri Capabilities: Plans include improved personal context understanding and "onscreen awareness" [1].
New Hardware: Rumors suggest a new home product with AI integration, possibly an iPad-like display for home control [2].
OS Updates: iOS 19 and macOS 16 are likely to feature significant AI enhancements [2].
Third-Party AI Integration: Apple may expand partnerships beyond OpenAI for broader AI tool support [2].
Apple's strategy differs from its competitors' by emphasizing privacy and on-device processing for AI features. The company aims to provide AI capabilities without compromising user data security [1]. This approach, while potentially limiting some functionalities, aligns with Apple's long-standing commitment to user privacy.
As Apple continues to develop its AI offerings, the tech industry watches closely to see how the company will balance innovation with its core values and user expectations in 2025 and beyond.